zlacker

[parent] [thread] 22 comments
1. killer+(OP)[view] [source] 2023-11-18 12:27:32
Yeah, AI will totally fail if people don't ship untested crap at breakneck speed.

Shipping untested crap is the only known way to develop technology. Your AI assistant hallucinates? Amazing. We gotta bring more chaos to the world, the world is not chaotic enough!!

replies(4): >>konsch+9 >>kolink+Ce >>criley+vf >>fennec+7q7
2. konsch+9[view] [source] 2023-11-18 12:28:19
>>killer+(OP)
Yea, change is bad.
replies(1): >>Feepin+se
◧◩
3. Feepin+se[view] [source] [discussion] 2023-11-18 14:00:56
>>konsch+9
Numerically, most change is bad.
replies(1): >>skohan+eO
4. kolink+Ce[view] [source] 2023-11-18 14:01:39
>>killer+(OP)
You weren't around when Web 2.0 and the whole modern internet arrived, were you? You know, all the sites that you consider stable and robust now (Google, YT and everything else) shipped with a Beta sign plastered onto them.
replies(2): >>chpatr+bh >>killer+cs
5. criley+vf[view] [source] 2023-11-18 14:07:28
>>killer+(OP)
All AI and all humanity hallucinates, and AI that doesn't hallucinate will functionally obsolete human intelligence. Be careful what you wish for, as humans are biologically incapable of not "hallucinating".
replies(5): >>hgomer+El >>dudein+uq >>howrar+xq >>killer+5v >>roguec+3Zp
◧◩
6. chpatr+bh[view] [source] [discussion] 2023-11-18 14:16:24
>>kolink+Ce
You could also say that shipping social media algorithms with unknown effects on society as a whole is why we're in such a state right now. Maybe we should be more careful next time around.
◧◩
7. hgomer+El[view] [source] [discussion] 2023-11-18 14:40:01
>>criley+vf
I'm not supposing we're on this trajectory, but humans no longer needing to focus on being productive is how we might be able to focus on being better humans.
◧◩
8. dudein+uq[view] [source] [discussion] 2023-11-18 15:09:53
>>criley+vf
Some humans hallucinate more than others.
◧◩
9. howrar+xq[view] [source] [discussion] 2023-11-18 15:10:07
>>criley+vf
Well, that's the goal, isn't it? Having AI take over everything that needs doing, so that we can focus on doing things we want to do instead.
◧◩
10. killer+cs[view] [source] [discussion] 2023-11-18 15:20:14
>>kolink+Ce
I first got internet access in 1999, IIRC.

Web sites were quite stable back then. Not really much less stable than they are now; e.g. Twitter now has more issues than the web sites I used often back in the 2000s.

They had a "beta" sign because they had much higher quality standards: they warned users that things were not perfect. Now people just accept that software is half-broken, and there's no need for beta signs, because there's no expectation of quality.

Also, being down is one thing; sending random crap to a user is completely another. Consider web mail: if it is down for one hour, that's kinda OK. If it shows you random crap instead of your email, or sends your email to the wrong person, that would be very much not OK, and that's the sort of issue OpenAI is having now. Nobody complains that it's down sometimes; the problem is that it returns erroneous answers.

replies(1): >>dcow+Ax
◧◩
11. killer+5v[view] [source] [discussion] 2023-11-18 15:39:39
>>criley+vf
GPT is better than the average human at coding. GPT is worse than the average human at recognizing the bounds of its knowledge (i.e. it doesn't know that it doesn't know).

Is that fundamental? I don't think so. GPT was trained largely on random internet crap. One of the popular datasets is literally called The Pile.

If you just use The Pile as a training dataset, the AI will learn very little reasoning, but it will learn to make some plausible shit up, because that's the training objective. Literally. It's trained to guess the Pile.

Is that the only way to train an AI? No. E.g. check the "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 A small model trained on a high-quality dataset can beat much bigger models at code generation.

So why are you so eager to use a low-quality AI trained on crap? Can't you wait a few years until they develop better products?
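"Trained to guess the Pile" can be sketched with a toy next-token model. This is illustrative only: real LLMs are neural networks minimizing cross-entropy, not bigram counters, and the corpus here is invented. But the objective is the same in spirit: reproduce plausible continuations, true or not.

```python
from collections import Counter, defaultdict

# Toy corpus containing both a false and a true claim.
corpus = "the moon is made of cheese . the moon is made of rock .".split()

# "Training": count which word follows which in the corpus.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # Return the continuation seen most often after `word` in training.
    return counts[word].most_common(1)[0][0]

print(most_likely_next("moon"))  # "is" -- learned purely from counts
```

Nothing in the objective rewards truth: "cheese" and "rock" are equally good continuations of "made of" here, because both appear in the training data.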

replies(2): >>fennec+Aq7 >>roguec+8Zp
◧◩◪
12. dcow+Ax[view] [source] [discussion] 2023-11-18 15:53:43
>>killer+cs
But it’s not supposed to ship totally “correct” answers. It is supposed to predict which text is most likely to follow the prompt. It does that correctly, whether the answer is factually correct or not.
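The likelihood-vs-truth distinction fits in a two-line sketch (the prompt and probabilities below are invented for illustration, not real model output):

```python
# Hypothetical next-text probabilities for the prompt
# "The capital of Australia is" -- numbers made up for illustration.
continuations = {"Sydney": 0.6, "Canberra": 0.3, "Melbourne": 0.1}

# The model "succeeds" by emitting the most likely continuation,
# even though Canberra, not Sydney, is the factually correct answer.
prediction = max(continuations, key=continuations.get)
print(prediction)  # Sydney
```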
replies(2): >>killer+1s1 >>roguec+qZp
◧◩◪
13. skohan+eO[view] [source] [discussion] 2023-11-18 17:21:21
>>Feepin+se
And yet we make progress. It seems we've historically mostly been effective at hanging on to positive change and discarding negative change.
replies(1): >>Feepin+lR
◧◩◪◨
14. Feepin+lR[view] [source] [discussion] 2023-11-18 17:37:07
>>skohan+eO
Yes, but that's an active process. You can't just be "pro change".

Occasionally, in high risk situations, "good change good, bad change bad" looks like "change bad" at a glance, because change will be bad by default without great effort invested in picking the good change.

◧◩◪◨
15. killer+1s1[view] [source] [discussion] 2023-11-18 20:58:57
>>dcow+Ax
Chat-based AI like ChatGPT is marketed as an assistant. People expect that it can answer their questions, and often it can answer even complex questions correctly. Then it fails miserably on a basic question.

GitHub Copilot is an auto-completer, and that's, perhaps, a proper use of this technology. At this stage, make auto-completion better. That's nice.

Why is it necessary to release "GPTs"? This is a rush to deliver half-baked tech, just for the sake of hype. Sam was fired for a good reason.

Example: somebody markets a GPT called "Grimoire" as a "100x Engineer". I gave it a task to make a simple game, and it just gave a skeleton of the code instead of an actual implementation: https://twitter.com/killerstorm/status/1723848549647925441

Nobody needs this shit. In fact, AI progress can happen faster if people do real research instead of prompting GPTs.

replies(2): >>int_19+pL2 >>fennec+8r7
◧◩◪◨⬒
16. int_19+pL2[view] [source] [discussion] 2023-11-19 05:29:23
>>killer+1s1
Perhaps we need a better term for them then. Because they are immensely useful as is - just not as a, say, Wikipedia replacement.
17. fennec+7q7[view] [source] 2023-11-20 12:56:11
>>killer+(OP)
Nobody's forcing anybody to use these tools.

They'll fix the hallucinations and such later.

Imagine people not driving the Model T cause it didn't have an airbag lmao. Things take time to be developed and perfected.

replies(1): >>roguec+hZp
◧◩◪
18. fennec+Aq7[view] [source] [discussion] 2023-11-20 12:59:15
>>killer+5v
Because people are into tech? That's pretty much the whole point of this site?

Just imagining if we all only used proven products, no trying out cool experimental or incomplete stuff.

◧◩◪◨⬒
19. fennec+8r7[view] [source] [discussion] 2023-11-20 13:02:39
>>killer+1s1
Needlessly pedantic. Hold consumers accountable too. "Durr I thought autopilot meant it drove itself. Manual, nah brah I didn't read that shit, reading's for nerds. The huge warning and license terms, didn't read that either dweeb. Car trying to stop me for safety if I take my hands off the wheel? Brah I just watched a Tiktok that showed what to do and I turned that shit offff".
◧◩
20. roguec+3Zp[view] [source] [discussion] 2023-11-26 02:25:40
>>criley+vf
humanity is capable of taking feedback, citing its sources, and not outright lying

these models are built to sound like they know what they are talking about, whether they do or not. this violates our basic social coordination mechanisms in ways that usually only delusional or psychopathic people do, making the models worse than useless

◧◩◪
21. roguec+8Zp[view] [source] [discussion] 2023-11-26 02:27:26
>>killer+5v
Being better than the average human at coding is as easy as being better than the average human at surgery. Until it's better than actual skilled programmers, the people who are programming for a living are still responsible for learning to do the job well.
◧◩
22. roguec+hZp[view] [source] [discussion] 2023-11-26 02:29:28
>>fennec+7q7
The Model T killed a _lot_ of people, and almost certainly should have been banned: https://www.detroitnews.com/story/news/local/michigan-histor...

If it had been, we wouldn't now be facing an extinction event.

◧◩◪◨
23. roguec+qZp[view] [source] [discussion] 2023-11-26 02:31:22
>>dcow+Ax
If that were how it marketed itself, with big disclaimers like tarot readers have, that this is just for entertainment and not meant to be taken as factual advice, it might be doing a lot less harm. But Sam Altman would make fewer billions, so that is apparently not an option.