zlacker

[parent] [thread] 22 comments
1. mikhae+(OP)[view] [source] 2023-11-17 23:50:29
It doesn't take a genius to figure out they are losing stupid amounts of money, with no major plan to recoup their investments.

Board probably took a look at updated burn-rate projections, saw that they have 6 months of runway, saw that they don't have enough GPUs, saw that Llama and Mistral and whatever other open-source models are awesome and run on personal computers, and thought to themselves - why the hell are we spending so much God damn money? For $20 a month memberships? For bots to be able to auto-signup for accounts, not prepay, burn compute, and skip the bill?

Then Grok gets released on Twitter, and they are left wondering - what exactly is it that we do, that is so much better, that we are spending 100x what cheapo Musk is?
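For what it's worth, the runway math the board would be staring at is just division: cash on hand over net monthly burn. A sketch with purely illustrative numbers (none of these figures are OpenAI's actuals):

```python
# Runway = cash on hand / monthly net burn.
# All figures below are hypothetical, only to illustrate the arithmetic.
cash_on_hand = 1_200_000_000    # assumed cash remaining
monthly_revenue = 80_000_000    # assumed $20/mo subscriptions + API income
monthly_costs = 280_000_000     # assumed GPUs, inference, salaries

net_burn = monthly_costs - monthly_revenue
runway_months = cash_on_hand / net_burn
print(f"Net burn: ${net_burn:,}/mo -> runway: {runway_months:.0f} months")
```

With those assumed inputs you get the "6 months of runway" shape of number; the point is only how fast the division turns scary when costs dwarf subscription revenue.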

replies(6): >>nwoli+H >>abi+O1 >>agleas+o3 >>sumedh+86 >>tevon+oc >>seedle+Ug
2. nwoli+H[view] [source] 2023-11-17 23:54:16
>>mikhae+(OP)
Feels like something like this, plus some deal with Microsoft for further funding, sama getting too aggressive with the terms, and the board having him ousted
3. abi+O1[view] [source] 2023-11-18 00:00:41
>>mikhae+(OP)
I mean GPT-4 is just so good. Have you compared GPT-4 vs. other models for coding? I'd pay more for GPT-4.
replies(2): >>kridsd+45 >>system+Rm
4. agleas+o3[view] [source] 2023-11-18 00:08:26
>>mikhae+(OP)
OpenAI’s models are, quality-wise, pretty far ahead of the competition. So that’s what they’re spending so much money on. There’s a history of them creating things that are expensive and then rapidly bringing down the cost, which is what they’ve been doing rather than creating GPT-5.
5. kridsd+45[view] [source] [discussion] 2023-11-18 00:15:52
>>abi+O1
Concur. GPT4 is like having an infinite-energy L3 engineer reporting to me. That's worth $10,000 per month according to the labor market. Sam has been giving away the farm!
replies(3): >>nightf+2c >>jahnu+Dg >>VirusN+Di
6. sumedh+86[view] [source] 2023-11-18 00:20:12
>>mikhae+(OP)
Try using Mistral and Llama first, then see if your statement is true.
replies(1): >>mikhae+gEq
7. nightf+2c[view] [source] [discussion] 2023-11-18 00:47:23
>>kridsd+45
People overestimating LLMs like this terrifies me so much
replies(4): >>rokkit+mq >>bmitc+es >>blitz_+Nt >>bigEno+wu
8. tevon+oc[view] [source] 2023-11-18 00:48:47
>>mikhae+(OP)
This also doesn't fit with their recent announcements significantly lowering prices. If they were that worried about losing $ they wouldn't have cut prices, they're the clear leader from a performance perspective and can command a premium.

And up to today they probably had one of the best fundraising prospects of any private company in the world.

9. jahnu+Dg[view] [source] [discussion] 2023-11-18 01:11:48
>>kridsd+45
lol. This is like the HN opposite of the infamous dropbox post.
10. seedle+Ug[view] [source] 2023-11-18 01:13:34
>>mikhae+(OP)
How does firing the CEO help in your scenario? Now they just burnt a tonne of trust.
replies(1): >>taf2+fr
11. VirusN+Di[view] [source] [discussion] 2023-11-18 01:22:58
>>kridsd+45
L3 engineer is a net negative bro.
12. system+Rm[view] [source] [discussion] 2023-11-18 01:49:45
>>abi+O1
GPT-4 is so expensive. For reading and editing a text I usually use ~6K tokens, which ends up being about 25 cents via the API. Do that thousands of times and you're going to spend a fortune. $0.03 per 1K prompt tokens + $0.06 per 1K completion tokens is extremely expensive.
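That arithmetic checks out at GPT-4's 8K-context launch pricing ($0.03/1K prompt, $0.06/1K completion). A quick sketch - the 4K/2K prompt-to-completion split is an assumption, just one way a 6K-token edit pass could break down:

```python
# GPT-4 (8K context) launch pricing, dollars per 1K tokens.
PROMPT_RATE = 0.03
COMPLETION_RATE = 0.06

def call_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of a single API call at the rates above."""
    return (prompt_tokens / 1000) * PROMPT_RATE \
         + (completion_tokens / 1000) * COMPLETION_RATE

# ~6K tokens per editing pass; 4K prompt / 2K completion is assumed.
per_call = call_cost(4000, 2000)
print(f"${per_call:.2f} per call, ${per_call * 1000:,.0f} for 1,000 calls")
# -> $0.24 per call, $240 for 1,000 calls
```

So "25 cents" per pass is about right, and a thousand passes really is a few hundred dollars.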
13. rokkit+mq[view] [source] [discussion] 2023-11-18 02:10:05
>>nightf+2c
I'd upgrade the parent comment to L4 with broad experience in every single open source tool in existence.

Historically, I'm a backend and distributed systems engineer, but integrating GPT4 into my workflows has unlocked an awe-inspiring ability to lay down fat beads of UI-heavy code in both professional and personal contexts.

But it's still an L3: gotta ask the right questions and doubt every line it produces until it compiles and the tests pass.

14. taf2+fr[view] [source] [discussion] 2023-11-18 02:18:38
>>seedle+Ug
I just bumped our Anthropic integration to P1, above all other priorities - we need a backup plan. I'm also going to put more time and money into investigating whether running a Llama 2 model on our own hardware is financially viable compared to OpenAI and Anthropic... not sure, but this is definitely the motivation I needed to see that OpenAI could be gone tomorrow.
15. bmitc+es[view] [source] [discussion] 2023-11-18 02:25:10
>>nightf+2c
I had this disagreement with people on this site just the other day. People were basically like "you're asking it questions that are too complicated", but my response was: then why does everyone make statements like the one in the comment you replied to?
replies(1): >>agentc+8v
16. blitz_+Nt[view] [source] [discussion] 2023-11-18 02:42:18
>>nightf+2c
Why?
17. bigEno+wu[view] [source] [discussion] 2023-11-18 02:49:09
>>nightf+2c
It’s wild, I actually downgraded back to gpt 3.5 cuz gpt 4 just wasn’t worth the money.
18. agentc+8v[view] [source] [discussion] 2023-11-18 02:53:42
>>bmitc+es
Because 99.9% of people who say things like this are just using ChatGPT itself and not any of the various awe-inspiring tools with full access to your codebase dynamically inserted into context via RAG. I have yet to talk to anyone who has actually worked for any amount of time against the GPT4 API or through Cursor, say, who underestimates their capabilities. Sincerely hoping this 'coup' doesn't mean the beginning of the end of that experience for most...
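The "codebase dynamically inserted into context via RAG" part is less exotic than it sounds: rank your code chunks by relevance to the question and prepend the top hits to the prompt. A toy sketch using naive keyword overlap in place of real embeddings (the function names and sample chunks are illustrative; production tools use vector search):

```python
def score(query: str, chunk: str) -> int:
    """Naive relevance: count how many query words appear in the chunk."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in chunk.lower())

def build_prompt(query: str, codebase_chunks: list[str], top_k: int = 2) -> str:
    """Stuff the top-k most relevant chunks into the model's context."""
    ranked = sorted(codebase_chunks, key=lambda c: score(query, c), reverse=True)
    context = "\n---\n".join(ranked[:top_k])
    return f"Relevant code:\n{context}\n\nQuestion: {query}"

chunks = [
    "def parse_config(path): ...  # loads YAML config",
    "class RetryPolicy: ...  # exponential backoff for HTTP calls",
    "def render_sidebar(): ...  # UI component",
]
print(build_prompt("why does config parsing fail?", chunks, top_k=1))
```

Same idea as Cursor-style tools, just with the retrieval step reduced to its skeleton.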
replies(2): >>bmitc+Qy >>janeta+pE
19. bmitc+Qy[view] [source] [discussion] 2023-11-18 03:19:50
>>agentc+8v
> who underestimates their capabilities

Did you happen to mean overestimates? Just trying to make sure I understand.

replies(1): >>agentc+TP
20. janeta+pE[view] [source] [discussion] 2023-11-18 04:01:38
>>agentc+8v
Context is very important in these kinds of use cases. If you work with something niche, I think these tools are less valuable because the training data becomes sparse.

For example, GPT-4 produces Javascript code far better than it produces Clojure code. Often, when it comes to Clojure, GPT-4 produces broken examples, contradictory explanations, or even circular reasoning.

replies(1): >>agentc+vQ
21. agentc+TP[view] [source] [discussion] 2023-11-18 05:24:15
>>bmitc+Qy
I just mean people who have actually used the API directly or through task-specific applications like Cursor that are meant to maximize use of AI for their needs know how much of a breakthrough we’ve had this year. People who doubt or downplay the already existing capabilities of this technology tend to have just played with ChatGPT a little bit (or have whatever ideological or other reason to do so).
22. agentc+vQ[view] [source] [discussion] 2023-11-18 05:27:42
>>janeta+pE
Have you tried Cursor out of curiosity? No ties to the company and long-time dev (Scala mostly), just genuinely found it to be transformative to my development practice like no tool before.
23. mikhae+gEq[view] [source] [discussion] 2023-11-25 17:29:25
>>sumedh+86
Extrapolate the trends dude. One day, those models will be just as good - you will be able to train them on your codebase’s context, and they will have similar performance.

They have no moat other than training data and computing power - over the long term, even if they become a huge company, Apple will keep making M-chip computers that can run these models locally.
