zlacker

[parent] [thread] 2 comments
1. ericmc+(OP)[view] [source] 2023-08-03 15:52:00
Seriously, the google generative AI actively suggests completely inaccurate things. It has no ability to say: "I don't know", which seems like a huge failing.

I just asked "what does the JS ** operator do" and it made up an answer about it being a bitwise XOR, claiming 1 ** 2 === 3. The fact that all these LLMs will confidently state wrong information makes me feel like LLMs are going to be a difficult path to AGI. It will be a big problem if an AI receptionist confidently spews misinformation and is unable to tell customers it simply doesn't know.
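For the record, ** in JavaScript is the exponentiation operator (added in ES2016), and ^ is the bitwise XOR it apparently had in mind; a quick console check shows the difference:

    // ** is exponentiation (ES2016); ^ is bitwise XOR
    console.log(1 ** 2);  // 1  (1 squared)
    console.log(2 ** 3);  // 8  (2 cubed)
    console.log(1 ^ 2);   // 3  (the XOR result the answer described)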

replies(1): >>incrud+N3
2. incrud+N3[view] [source] 2023-08-03 16:06:58
>>ericmc+(OP)
> It has no ability to say: "I don't know"

Neither can many humans. Expressions of ignorance and self-doubt are surely woefully underrepresented in the training data.

replies(1): >>galang+wa
3. galang+wa[view] [source] [discussion] 2023-08-03 16:38:42
>>incrud+N3
Yeah, no one posts just to say they don't know the answer. And that's the smallest of the problems that come from training on the internet. I realize these are just statistical text generators, but if we do end up training a real AGI on the internet, I find that prospect both appalling and terrifying. If I said my parenting strategy was to lock my genius newborn in a room with food, water, and a web browser, you'd call me insane and expect my child to grow up a sociopath...