zlacker

[return to "Ilya Sutskever to leave OpenAI"]
1. zoogen+Ix[view] [source] 2024-05-15 04:50:43
>>wavela+(OP)
Interesting, both Karpathy and Sutskever are gone from OpenAI now. Looks like it is now the Sam Altman and Greg Brockman show.

I have to admit, of the four, Karpathy and Sutskever were the two I was most impressed with. I hope Sutskever goes on to do something great.

2. nabla9+pH[view] [source] 2024-05-15 06:45:38
>>zoogen+Ix
The top 6 science guys are long gone. OpenAI is now run by marketing, business, software, and productization people.

When the next wave of deep learning innovations sweeps the world, Microsoft will eat what's left of them. They make lots of money, but they don't have a future unless they replace what they lost.

3. kinnth+r81[view] [source] 2024-05-15 11:32:06
>>nabla9+pH
AI has now evolved beyond just the science, and its biggest issue is productization. Finding use cases for what's already available, alongside building new models, will be where success lies.

ChatGPT is the number 1 brand in AI, and as such it needs to learn what it's selling, not how its technology works. It always sucks when mission and vision don't align with the nerds' ideas, but I think it's probably the best move for both parties.

4. CooCoo+Ow1[view] [source] 2024-05-15 13:56:04
>>kinnth+r81
“Its biggest issue is in the productization.”

That’s not true at all. The biggest issue is that it doesn’t work. You can’t actually trust AI systems, and that’s not a product issue.

5. anonym+lv2[view] [source] 2024-05-15 18:43:13
>>CooCoo+Ow1
> You can’t actually trust ai systems

For a lot of (very profitable) use cases, occasional hallucinations and an 80/20 success rate are actually more than good enough. Especially when they are replacing solutions that are even worse.

6. player+oR3[view] [source] 2024-05-16 07:19:27
>>anonym+lv2
What use cases? This kind of claim gets made all the time, but never with any examples.
7. roguas+9x4[view] [source] 2024-05-16 13:58:58
>>player+oR3
All the use cases we see. Take a look at Perplexity optimising short internet research. If it gets things mostly right, that's fine enough; it saved me 30 minutes of mindless clicking and reading, even if some errors are there.
8. CooCoo+TC4[view] [source] 2024-05-16 14:28:37
>>roguas+9x4
You make it sound like LLMs just make a few small mistakes, when in reality they can hallucinate at scale.