zlacker

[return to "A New Mode of Cancer Treatment"]
1. lelag+Ic 2023-08-03 09:52:29
>>atomro+(OP)
If 2023 ends up giving us AGI, room-temperature superconductors, Starships and a cure for cancer, I think we will be able to call it a good year...
2. azinma+B41 2023-08-03 15:06:05
>>lelag+Ic
We’re not getting AGI anytime soon…
3. ericmc+ge1 2023-08-03 15:52:00
>>azinma+B41
Seriously, Google's generative AI actively suggests completely inaccurate things. It has no ability to say: "I don't know", which seems like a huge failing.

I just asked "what does the JS ** operator do" and it made up an answer about it being a bitwise XOR, which would make 1 ** 2 === 3. The fact that all these LLMs will confidently suggest wrong information makes me feel like LLMs are going to be a difficult path to AGI. It will be a big problem if an AI receptionist just confidently spews misinformation and is unable to tell customers when it is wrong.
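
For reference, ** in JavaScript is the exponentiation operator and bitwise XOR is ^, so a quick check in any JS console disproves the suggested answer:

    1 ** 2;  // 1  (exponentiation: 1 to the power 2)
    2 ** 3;  // 8
    1 ^ 2;   // 3  (bitwise XOR, the operator the answer described)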

4. incrud+3i1 2023-08-03 16:06:58
>>ericmc+ge1
> It has no ability to say: "I don't know"

Neither can many humans. The expression of ignorance and self-doubt must surely be woefully underrepresented in the training data.
