zlacker

[parent] [thread] 19 comments
1. ralfd+(OP)[view] [source] 2024-05-15 07:49:38
How important are top science guys though? OpenAI has a thousand employees and almost unlimited money, and LLMs are better understood now, so I would guess continuous development will beat singular genius heroes?
replies(8): >>AdamN+r >>mrklol+i2 >>benter+R2 >>l5870u+26 >>Random+ha >>sheesh+cg >>belter+Qg >>spamiz+nI
2. AdamN+r[view] [source] 2024-05-15 07:54:05
>>ralfd+(OP)
Agreed - it's good to have some far-thinking innovation, but really that can be acquired as needed, so you just need a few people with their finger on the pulse of innovation - and there will always be more of those outside a given company than within it.

Right now it's all about reducing transaction costs, small-i innovating, onboarding integrations, maintaining customer and stakeholder trust, getting content, managing stakeholders, and selling.

3. mrklol+i2[view] [source] 2024-05-15 08:09:26
>>ralfd+(OP)
They definitely need them to find new approaches that you wouldn't find otherwise.
4. benter+R2[view] [source] 2024-05-15 08:13:35
>>ralfd+(OP)
> OpenAI has a thousand employees and almost unlimited money

You could say the same about Google - and yet they missed the consequences of their own discovery and got behind instead of being leaders. So you need specific talent to pull this off even if in theory you can hire anybody.

replies(1): >>wg0+J7
5. l5870u+26[view] [source] 2024-05-15 08:44:04
>>ralfd+(OP)
Difficult to quantify but as an example the 2017 scientific paper “Attention is all you need” changed the entire AI field dramatically. Without these landmark achievements delivered by highly skilled scientists, OpenAI wouldn’t exist or only be severely limited.
replies(1): >>belter+zb
6. wg0+J7[view] [source] [discussion] 2024-05-15 09:06:43
>>benter+R2
I am just curious how this happened to Google. Like, who were the product managers or others who didn't see an opportunity here, exactly where the whole thing was invented, when they already had huge amounts of data - basically the whole web, and an amount of video that no one else can ever hope to have?
replies(6): >>andy99+L8 >>aramat+M8 >>jack_r+U8 >>thebyt+e9 >>pembro+Kb >>iosjun+2t
7. andy99+L8[view] [source] [discussion] 2024-05-15 09:18:34
>>wg0+J7
Data volume isn't that important, that's becoming clearer now. What OpenAI did was pay for a bunch of good labelled data. I'm convinced that's basically the differentiator. It's not an academic or fundamental thing to do, which is why Google didn't do it - it's a pure practical product thing.
8. aramat+M8[view] [source] [discussion] 2024-05-15 09:18:59
>>wg0+J7
It's hard to invest millions in employees who are likely to leave for a competitor later. That's very risky, aka venture.
replies(1): >>JKCalh+ak
9. jack_r+U8[view] [source] [discussion] 2024-05-15 09:21:11
>>wg0+J7
I think the discovery of the power of the LLM was almost stumbled upon at OpenAI; they certainly didn't set out initially with the goal of creating them. Afaik they had one guy doing a project creating an LLM from Amazon review text data, and only off the back of playing around with that did they realise its potential.
10. thebyt+e9[view] [source] [discussion] 2024-05-15 09:24:30
>>wg0+J7
A lot of it was the unwillingness to take risk. LLMs were, and still are, hard to control, in terms of making sure they give correct and reliable answers, making sure they don't say inappropriate things that hurt your brand. When you're the stable leader you don't want to tank your reputation, which makes LLMs difficult to put out there. It's almost good for Google that OpenAI broke this ground for them and made people accepting of this imperfect technology.
11. Random+ha[view] [source] 2024-05-15 09:36:17
>>ralfd+(OP)
Most of every large business isn't science but getting organized, costs controlled, products made, risk managed, and so forth.
12. belter+zb[view] [source] [discussion] 2024-05-15 09:51:36
>>l5870u+26
And ironically, even the authors did not fully grasp the paper's importance at the time. Reminds me of when Larry Page and Sergey Brin tried to sell Google for $1 million ...
13. pembro+Kb[view] [source] [discussion] 2024-05-15 09:53:57
>>wg0+J7
I’m 100% positive lots of people at Google were chomping at the bit to productize LLMs early on.

But the reality is, LLMs are a cannibalization threat to Search. And the Search Monopoly is the core money making engine of the entire company.

Classic innovators dilemma. No fat-and-happy corporate executive would ever say yes to putting lots of resources behind something risky that might also kill the golden goose.

The only time that happens at a big established company, is when driven by some iconoclastic founder. And Google’s founders have been MIA for over a decade.

replies(1): >>JKCalh+2k
14. sheesh+cg[view] [source] 2024-05-15 10:47:03
>>ralfd+(OP)
Talk to Microsoft
15. belter+Qg[view] [source] 2024-05-15 10:54:34
>>ralfd+(OP)
OpenAI has less than 800 employees
16. JKCalh+2k[view] [source] [discussion] 2024-05-15 11:23:55
>>pembro+Kb
Golden goose is already being hoisted upon a spit - and your company is not even going to get the drippings of the fat. I am surprised by the short-sightedness of execs.
replies(1): >>pembro+Um
17. JKCalh+ak[view] [source] [discussion] 2024-05-15 11:24:54
>>aramat+M8
So the alternative is to...?
18. pembro+Um[view] [source] [discussion] 2024-05-15 11:44:32
>>JKCalh+2k
I don’t work there, I’ve just worked for lots of big orgs — they are all the same. Any claimed uniqueness in “Organizational structure” and “culture” are just window dressing around good ol’ human nature.

It’s not short sightedness, it’s rational self-interest. The rewards for taking risk as employee #20,768 in a large company are minimal, whereas the downside can be catastrophic for your career & personal life.

19. iosjun+2t[view] [source] [discussion] 2024-05-15 12:23:22
>>wg0+J7
Well for one, Ilya was poached from Google to work for OpenAI to eventually help build SOTA models.

Fast forward to today and we are discussing the implications of him leaving OpenAI on this very thread.

Evidence to support the notion that you can’t just throw mountains of cash and engineers at a problem to do something truly trailblazing.

20. spamiz+nI[view] [source] 2024-05-15 13:48:47
>>ralfd+(OP)
It depends on your views on LLMs

If your view is that LLMs only need minor improvements to their core technology and that the major engineering focus should be placed on productizing them, then losing a bunch of scientists might not be seen as that big of a deal.

But if your view is that they still need to overcome significant milestones to really unlock their value... then this is a pretty big loss.

I suppose there's a third view, which is: LLMs still need to overcome significant hurdles, but solutions to those hurdles are a decade or more away. So it's best to productize now, establish some positive cashflow and then re-engage with R&D when it becomes cheaper in the future and/or just wait for other people to solve the hard problems.

I would guess the dominant view of the industry right now is #1 or #3.
