zlacker

[parent] [thread] 32 comments
1. kinnth+(OP)[view] [source] 2024-05-15 11:32:06
AI has now evolved beyond just the science, and its biggest issue is in the productization. Finding use cases for what's already available ALONG with new models will be where success lies.

ChatGPT is the number 1 brand in AI and as such needs to learn what it's selling, not how its technology works. It always sucks when mission and vision don't align with the nerds' ideas, but I think it's probably the best move for both parties.

replies(7): >>itsokt+N >>CooCoo+no >>Action+rq >>ChildO+Ou >>apppli+Ry >>ronald+t11 >>watt+O51
2. itsokt+N[view] [source] 2024-05-15 11:39:03
>>kinnth+(OP)
>ChatGPT is the number 1 brand in AI and as such needs to learn what it's selling, not how its technology works.

I'm not as in tune as some people here so: don't they need both? With the rate at which things are moving, how can it be otherwise?

replies(3): >>vasco+r5 >>throwt+r8 >>godels+Ll1
◧◩
3. vasco+r5[view] [source] [discussion] 2024-05-15 12:09:18
>>itsokt+N
I guess their point is you already have a lot out there to create new products, and you can still read papers; you just won't be writing them.
◧◩
4. throwt+r8[view] [source] [discussion] 2024-05-15 12:28:19
>>itsokt+N
They do need both, but it seems like they have enough engineering talent to keep improving. Time will tell now that Ilya is out, but I expect they have enough cultural cache to attract excellent engineers even if they aren’t as famous as Ilya and Karpathy.

They have a strong focus on making the existing models fast and cheap without sacrificing capability, which is music to the ears of those looking to build with them.

replies(1): >>Zacobi+C97
5. CooCoo+no[view] [source] 2024-05-15 13:56:04
>>kinnth+(OP)
“Its biggest issue is in the productization.”

That’s not true at all. The biggest issue is that it doesn’t work. You can’t actually trust AI systems, and that’s not a product issue.

replies(5): >>startu+Hr >>sensei+2J >>tivert+4W >>tbrown+kY >>anonym+Um1
6. Action+rq[view] [source] 2024-05-15 14:06:29
>>kinnth+(OP)
> AI has now evolved beyond just the science

Pretty weak take there, bud. If we just look at the Gartner Hype Cycle that marketing and business people love so much, it would seem to me that we are at the peak, just before the downfall.

They are hyping hard to sell more, when they should be prepping for the coming dip and building up their tech and research side so they come out the other side stronger.

Regardless, a tech company without the inventors is doomed to fail.

replies(2): >>disqar+xI >>Zacobi+a97
◧◩
7. startu+Hr[view] [source] [discussion] 2024-05-15 14:12:57
>>CooCoo+no
Ilya is one of the founders of the original nonprofit. This is also an issue. It does look like he was not a founder of, or in any control of, the for-profit venture.
8. ChildO+Ou[view] [source] 2024-05-15 14:26:48
>>kinnth+(OP)
Eh maybe from a company point of view.

But this race to add 'AI' into everything is producing a lot of nonsense. I'd rather go full steam ahead on the science and the new models, because that is what will actually get us something decent, rather than milking what we already have.

9. apppli+Ry[view] [source] 2024-05-15 14:46:29
>>kinnth+(OP)
To this end, OpenAI is already off track. Their “GPT marketplace” or whatever they’re calling it is just misguided flailing from a product perspective.
replies(2): >>j45+3N >>javaun+KO
◧◩
10. disqar+xI[view] [source] [discussion] 2024-05-15 15:30:28
>>Action+rq
I'm siding with you here. The same is happening at Google, but they definitely have momentum from past decades, so even if they go "full Boeing", there's a long way to fall.

Meanwhile, OpenAI (and the rest of the folks riding the hype train) will soon enter the trough. They're not diversified and I'm not sure that they can keep running at a loss in this post-ZIRP world.

◧◩
11. sensei+2J[view] [source] [discussion] 2024-05-15 15:32:48
>>CooCoo+no
If the AI is the product, and the product isn't trustworthy, isn't that a product issue?
replies(1): >>shwaj+2W
◧◩
12. j45+3N[view] [source] [discussion] 2024-05-15 15:50:22
>>apppli+Ry
Or they were experimenting with letting people define agentic A.I. slightly before it became more widely popular.
◧◩
13. javaun+KO[view] [source] [discussion] 2024-05-15 15:57:45
>>apppli+Ry
Isn’t there a pattern where innovation comes in waves, and the companies of the first wave most often just die, but the second and third waves build upon their artefacts and can be successful in the longer run?

I see this coming for sure for OpenAI, and I do my part by just writing this comment on HN.

replies(1): >>JohnFe+9B1
◧◩◪
14. shwaj+2W[view] [source] [discussion] 2024-05-15 16:30:29
>>sensei+2J
It’s a core technology issue.

The AI isn’t the product; rather, the ChatGPT interface is the main product, layered above the core AI tech.

The issue is that trustworthiness isn’t solvable by applying standard product management techniques on a predictable schedule. It requires scientific research.

◧◩
15. tivert+4W[view] [source] [discussion] 2024-05-15 16:30:37
>>CooCoo+no
> That’s not true at all. The biggest issue is that it doesn’t work. You can’t actually trust AI systems, and that’s not a product issue.

I don't know about that, it seems to work just fine at creating spam and clone websites.

◧◩
16. tbrown+kY[view] [source] [discussion] 2024-05-15 16:40:54
>>CooCoo+no
It works fine for some things. You just need a clearly defined task where LLM + human reviewer is on average faster (i.e. cheaper) than a human doing the same task themselves without that assistance.
replies(1): >>still_+BN2
17. ronald+t11[view] [source] 2024-05-15 16:55:01
>>kinnth+(OP)
Agree in general. While there remain issues with making and using AI, there is plenty of utility that doesn't require new science, only maturation of deployment. For those who say it's junk, I can only speak for myself and disagree.
18. watt+O51[view] [source] 2024-05-15 17:14:20
>>kinnth+(OP)
> ChatGPT is the number 1 brand in AI

Not for long. They have no moat. Folks who did the science are now doing science for some other company, and will blow the pants off OpenAI.

replies(1): >>pembro+nh1
◧◩
19. pembro+nh1[view] [source] [discussion] 2024-05-15 18:12:38
>>watt+O51
I think you massively underestimate the power of viral media coverage and the role it plays in building a “brand.” You’ll never replicate the Musk/Altman/Satya soap opera again. ChatGPT will forever be in the history books as the Kleenex of LLM AI.
replies(1): >>menset+W13
◧◩
20. godels+Ll1[view] [source] [discussion] 2024-05-15 18:37:06
>>itsokt+N
> With the rate at which things are moving

Things have been moving fast because we had a bunch of top-notch scientists in companies paired with top-notch salesmen/hype machines. But you need both in combination.

Hypemen make promises that can't be kept, but get absurd amounts of funding for doing so. Scientists fill in as many of the gaps as possible, but also get crazy resources due to the aforementioned funding. Obviously this train can't go on forever, but I think you can see that one of these groups is a bit more important than the other, while the other is more of a catalyst (it makes things happen faster).

◧◩
21. anonym+Um1[view] [source] [discussion] 2024-05-15 18:43:13
>>CooCoo+no
> You can’t actually trust ai systems

For a lot of (very profitable) use cases, hallucinations and an 80/20 result are actually more than good enough. Especially when they are replacing solutions that are even worse.

replies(2): >>player+XI2 >>still_+TL2
◧◩◪
22. JohnFe+9B1[view] [source] [discussion] 2024-05-15 20:00:47
>>javaun+KO
Yes, "the pioneers get all the arrows".

From a business point of view, you don't want to be first to market. You want to be the second or third.

◧◩◪
23. player+XI2[view] [source] [discussion] 2024-05-16 07:19:27
>>anonym+Um1
What use cases? This kind of thing is stated all the time, but never with any examples.
replies(2): >>roguas+Io3 >>jedber+QP3
◧◩◪
24. still_+TL2[view] [source] [discussion] 2024-05-16 08:01:56
>>anonym+Um1
What are examples of these (very profitable) use cases?

Producing spam has some margin on it, but is it really very profitable? And what else?

◧◩◪
25. still_+BN2[view] [source] [discussion] 2024-05-16 08:25:01
>>tbrown+kY
Given that you need to review, research, and usually correct every detail of AI output, how can that be faster than just doing it right yourself in the first place? Do you have some examples of such tasks?
replies(1): >>tbrown+y94
◧◩◪
26. menset+W13[view] [source] [discussion] 2024-05-16 11:38:37
>>pembro+nh1
“You are a bad user, I am a good bing!”
◧◩◪◨
27. roguas+Io3[view] [source] [discussion] 2024-05-16 13:58:58
>>player+XI2
All the use cases we see. Take a look at Perplexity optimising short internet research. If I get this mostly right it's fine enough; it saved me 30 minutes of mindless clicking and reading, even if some errors are there.
replies(1): >>CooCoo+su3
◧◩◪◨⬒
28. CooCoo+su3[view] [source] [discussion] 2024-05-16 14:28:37
>>roguas+Io3
You make it sound like LLMs just make a few small mistakes when in reality they can hallucinate on a large scale.
◧◩◪◨
29. jedber+QP3[view] [source] [discussion] 2024-05-16 16:28:19
>>player+XI2
Any use case where you treat the output like the work of a junior person and check it. Coding, law, writing. Pretty much anywhere that you can replace a junior employee with an LLM.

Google or Meta (don't remember which) just put out a report about how many human-hours they saved last year using transformers for coding.

◧◩◪◨
30. tbrown+y94[view] [source] [discussion] 2024-05-16 18:15:41
>>still_+BN2
Yes. There's the task $employer built a POC app for and found it did in fact save time. There's also GitHub Copilot, which apparently a large chunk of people find saves time for them (and which $employer is reviewing to figure out if they can identify which people/job functions benefit precisely enough to pay for group licensing).
◧◩
31. Zacobi+a97[view] [source] [discussion] 2024-05-17 18:40:28
>>Action+rq
You are on point, my friend. OpenAI might keep going on its current momentum for several years, without a doubt. But they have already lost the long-term game. The true essence of the OpenAI empire is not its showmen or PR guys, it's scientists like Ilya. What a shame to see Ilya leaving. I hope whatever he creates is a rival AI product to stop this OpenAI monopoly and put up some interesting competition.
replies(1): >>Breza+5zg
◧◩◪
32. Zacobi+C97[view] [source] [discussion] 2024-05-17 18:43:03
>>throwt+r8
I love the way you used "cultural cache" here, lol. In any case, I do hope whatever Ilya is building is some sort of AI competition to stop this OpenAI monopoly.
◧◩◪
33. Breza+5zg[view] [source] [discussion] 2024-05-21 14:45:26
>>Zacobi+a97
OpenAI launched GPT-3 on June 11, 2020. That's probably the biggest lead they'll ever have over the competition. Over the past several months, I've gotten to the point where OpenAI could vanish tomorrow and I honestly wouldn't miss it. Claude 3 Opus, DBRX, and Llama 3 are at least as good at the tasks that I spend most of my time doing. And if Google can get itself figured out, Gemini Pro has a lot of potential.