zlacker

[parent] [thread] 10 comments
1. fdgsdf+(OP)[view] [source] 2023-02-08 21:50:14
How could they have left such a massive gap in their product? They literally have the model and resources to revolutionize search. We all know LLMs will hurt their ad revenue, but regardless they had to have known this was coming. This is so similar to FB getting caught off guard by TikTok. There was a gap in the utility of their product (TikTok enabled grassroots content creation), and they just left it wide open.

It's some combination of:

1. ChatGPT is so much better than previous versions that Google themselves were stunned by the utility.

2. Incompetence/Gross negligence across Google

3. No way for them to reconcile the lost ad revenue, so they released nothing. This case is hard to argue for, as they would know they're a sitting duck.

Regardless, I am hoping for a massive Google failure. They're the ones responsible for the SEO content wasteland that is the modern internet. We have all suffered at the feet of their ad machine.

replies(1): >>hgsgm+Y
2. hgsgm+Y[view] [source] 2023-02-08 21:53:43
>>fdgsdf+(OP)
4. Google Search already has lots of useful AI already in it, but Google didn't want to integrate a racist, confabulating chatbot, forgetting that modern users have no preference for truth over lies.

Why are you blaming Google for not being perfect while making the best free search engine, after you spent your whole life refusing to pay for a non-free one?

replies(2): >>fdgsdf+r1 >>deevia+Cf1
3. fdgsdf+r1[view] [source] [discussion] 2023-02-08 21:55:33
>>hgsgm+Y
If OpenAI is willing to release it and Microsoft invested 10B, I have a very hard time believing that censoring the model is impossible. Microsoft 100% did their due diligence on the model.

Google is a monopoly; there is nothing anyone can do. Their search engine and business model have structured the internet and thus society. This thing needs to die.

replies(1): >>ESMirr+yb
4. ESMirr+yb[view] [source] [discussion] 2023-02-08 22:35:53
>>fdgsdf+r1
This is the same Microsoft that had to close down their Twitter AI “Tay” after a single day because it immediately became a “racist asshole” (as per The Verge) in 2016?

The same OpenAI run by Sam Altman, who just last year was part of a crypto biometric scam called “Worldcoin” that attempted to collect biometric data from some of the world's poorest in exchange for a shitcoin?

I’m sure they’ve done their due diligence and aren’t just pushing out a broken product as quickly as possible after it went viral because they saw dollar signs…

replies(2): >>peyton+cf >>juve19+KR
5. peyton+cf[view] [source] [discussion] 2023-02-08 22:52:25
>>ESMirr+yb
Everyone on the planet can already talk to racists by typing in 4chan.org. In the meantime, I’ve found ChatGPT useful for learning zsh commands.
replies(1): >>ESMirr+vg
6. ESMirr+vg[view] [source] [discussion] 2023-02-08 22:58:17
>>peyton+cf
I’m pleased you have the privilege to just ignore the potential negative outcomes of this technology, which, per the marketing hype, is set to become the new way the world interacts with information, primarily owned by two unsavoury entities with a history of failing to protect the most vulnerable.
7. juve19+KR[view] [source] [discussion] 2023-02-09 02:46:23
>>ESMirr+yb
2016 was 7 years ago. And I completely forgot about that incident. And so did everyone else.

> I’m sure they’ve done their due diligence and aren’t just pushing out a broken product as quickly as possible after it went viral because they saw dollar signs…

Why wouldn't they? If they bet and win, they significantly disrupt the search market and many others. If they lose, people still don't use Bing. The rest of their business will continue on as is.

It's a no brainer.

replies(1): >>Kbelic+wm1
8. deevia+Cf1[view] [source] [discussion] 2023-02-09 07:02:34
>>hgsgm+Y
Let's be clear, the reason Google didn't want to jump into conversation search is because it invalidates large portions of their business model.

At best they want to avoid the bad publicity of their tech being bad enough to throw out racist remarks. But the company that dropped "don't be evil" from its mission statement, and that fired AI ethics researchers for research on AI bias that Google did not like, is not a moral authority on the matter.

Also, Google does not provide an ad-free paid version of Google; your statement is a non sequitur.

replies(1): >>silisi+3q1
9. Kbelic+wm1[view] [source] [discussion] 2023-02-09 08:08:19
>>juve19+KR
> 2016 was 7 years ago. And I completely forgot about that incident. And so did everyone else.

Microsoft also?

> Why wouldn't they? If they bet and win, they significantly disrupt the search market and many others. If they don't, people still don't use Bing. The rest of their business will continue on as is.

The GP was responding to a post that said Microsoft 100% did their due diligence... I'm unsure what you are responding to.

10. silisi+3q1[view] [source] [discussion] 2023-02-09 08:42:47
>>deevia+Cf1
> who fired AI ethics researchers for their research in AI bias that Google did not like

I'm no fan of Google, but this seriously glosses over a rather complex situation. A paper may have been the catalyst, but I'd argue definitely not the reason for the firings. You can't just demand things with the threat of quitting, then act surprised when you're terminated.

replies(1): >>ESMirr+cr1
11. ESMirr+cr1[view] [source] [discussion] 2023-02-09 08:54:20
>>silisi+3q1
I don’t agree with that; it wasn’t even a complex situation. Google just didn’t like that a paper attributed to them could point out all the obvious unethical implications of this technology, and shut it down.

The drama afterwards is largely irrelevant and if anything Gebru’s refusal to just meekly accept the company line demonstrates just how valuable she is in this field. The dismissal isn’t the real story, the retraction of the paper is the real problem the OP is referring to when discussing Google’s moral issues.
