zlacker

[parent] [thread] 14 comments
1. UncleO+(OP)[view] [source] 2023-11-17 21:56:56
If OpenAI is somehow a scam there's going to be a lot of tech stocks crashing next week.
replies(4): >>ryanSr+Hc >>dragon+Wd >>Paul-C+Mr >>august+Pr
2. ryanSr+Hc[view] [source] 2023-11-17 22:59:36
>>UncleO+(OP)
Yeah but what could possibly be the scam? OpenAI's product works (most of the time).
replies(2): >>ZiiS+fn >>Scarbl+pq
3. dragon+Wd[view] [source] 2023-11-17 23:06:07
>>UncleO+(OP)
I think an internal scam of OpenAI is more likely than OpenAI being a scam, if “scam” is even the right framing.
4. ZiiS+fn[view] [source] [discussion] 2023-11-17 23:51:16
>>ryanSr+Hc
The Mechanical Turk really played chess.
replies(1): >>umanwi+Ro
5. umanwi+Ro[view] [source] [discussion] 2023-11-18 00:00:28
>>ZiiS+fn
Are you suggesting that ChatGPT is secretly backed by humans? That’s impossible; it is faster than the fastest humans in many areas.
replies(3): >>jamilt+3H >>thehap+II >>mycolo+TJ
6. Scarbl+pq[view] [source] [discussion] 2023-11-18 00:08:05
>>ryanSr+Hc
Altman didn't actually do his job; he just let ChatGPT run the company.
7. Paul-C+Mr[view] [source] 2023-11-18 00:13:57
>>UncleO+(OP)
I don't think so. LLMs are absolutely not a scam. There are LLMs out there that I can and do run on my laptop that are nearly as good as GPT-4. Replacing GPT-4 with another LLM is not the hardest thing in the world. I predict that, besides Microsoft, this won't be felt in the broader tech sector.
replies(1): >>int_19+f71
8. august+Pr[view] [source] 2023-11-18 00:14:06
>>UncleO+(OP)
It’s not really possible for it to be a scam. If you want to see its product, you can go and try it yourself.
9. jamilt+3H[view] [source] [discussion] 2023-11-18 01:31:47
>>umanwi+Ro
They're joking.
10. thehap+II[view] [source] [discussion] 2023-11-18 01:42:33
>>umanwi+Ro
They invented a relativistic device that slows time inside a chamber. A human can spend a whole day answering a prompt at their leisure.
replies(1): >>smegge+DS
11. mycolo+TJ[view] [source] [discussion] 2023-11-18 01:48:56
>>umanwi+Ro
A more plausible theory is that the training actually relies on a ton of human labeling behind the scenes (I have no idea if this is true or not).
replies(1): >>Intral+ea1
12. smegge+DS[view] [source] [discussion] 2023-11-18 02:56:30
>>thehap+II
That wouldn't be a scam; that would be an invention worthy of a Nobel Prize, world-altering beyond the impact of AI. I mean, controlling the flow of time without creating a supermassive black hole would allow all sorts of fun exploits in computation alone, not to mention other practical uses like instantly aging cheese or wine.
13. int_19+f71[view] [source] [discussion] 2023-11-18 04:44:18
>>Paul-C+Mr
They really aren't nearly as good as GPT-4, though. The best hobbyist stuff that we have right now is 70b LLaMA finetunes, which you can run on a high-end MacBook, but I would say it's only marginally better than GPT-3.5. As soon as it gets a hard task that requires reasoning, things break down. GPT-4 is leaps ahead of any of that stuff.
replies(1): >>Paul-C+Mp1
14. Intral+ea1[view] [source] [discussion] 2023-11-18 05:04:15
>>mycolo+TJ
> A more plausible theory is that the training actually relies on a ton of human labeling behind the scenes (I have no idea if this is true or not).

Isn't this already generally known to be true (and ironically involving Mechanical Turk-like services)?

Not sure if these are all the same sources I read a while ago, but e.g.:

https://www.theverge.com/features/23764584/ai-artificial-int...

https://www.marketplace.org/shows/marketplace-tech/human-lab...

https://www.technologyreview.com/2022/04/20/1050392/ai-indus...

https://time.com/6247678/openai-chatgpt-kenya-workers/

https://www.vice.com/en/article/wxnaqz/ai-isnt-artificial-or...

https://www.noemamag.com/the-exploited-labor-behind-artifici...

https://www.npr.org/2023/07/06/1186243643/the-human-labor-po...

15. Paul-C+Mp1[view] [source] [discussion] 2023-11-18 07:19:08
>>int_19+f71
No, by themselves they're not as good as GPT-4. (I disagree that they're only "marginally" better than GPT-3.5, but that's just a minor quibble.) If you use RAG and other techniques, you can get very close to GPT-4-level performance with other open models. Again, I'm not claiming open models are better than GPT-4, just that you can come close.
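
To give a sense of what RAG actually does, here's a toy sketch of the retrieval step in Python. Everything here is illustrative: the bag-of-words "embedding" and the tiny document list stand in for a real embedding model and corpus; a production setup would use learned embeddings and a vector store.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term counts. A real RAG system
    # would call a learned embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "GPT-4 outperforms open models on hard reasoning tasks.",
    "LLaMA 70b finetunes run on a high-end MacBook.",
    "Cheese is aged for months before sale.",
]

# The retrieved context gets pasted into the prompt, so the open
# model answers from relevant text instead of from memory alone.
context = retrieve("How do open LLMs compare to GPT-4?", docs)
prompt = "Answer using this context:\n" + "\n".join(context) + "\n\nQ: ..."
```

The point is that the model's weights no longer have to carry all the knowledge; retrieval supplies it at inference time, which is a big part of how smaller open models close the gap.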
[go to top]