zlacker

6 comments
1. woadwa+(OP) 2023-07-05 21:52:22
Meanwhile, GPT-4 still can’t reliably multiply small numbers.

https://arxiv.org/abs/2304.02015

replies(4): >>fprott+X2 >>mhb+c8 >>famous+L8 >>Camper+ta
2. fprott+X2 2023-07-05 22:08:56
>>woadwa+(OP)
A minor inconvenience when GPT-4 has no problem learning how to use a code interpreter.
3. mhb+c8 2023-07-05 22:38:36
>>woadwa+(OP)
Do you find that comforting, when a system whose only objective is to predict the next word can, as an emergent property, make drawings?
replies(1): >>Strict+8g
4. famous+L8 2023-07-05 22:41:37
>>woadwa+(OP)
It does alright with algorithmic prompts - https://arxiv.org/abs/2211.09066

Also, it knows when to use a calculator if it has access to one, so it's not a big deal.
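
A minimal sketch of what "knows when to use a calculator" can look like in practice; the CALC(...) tool-call syntax, the calculator helper, and the dispatch function are illustrative assumptions here, not anything taken from the linked paper:

    import ast
    import operator

    # Binary operators the toy calculator tool supports.
    _OPS = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
    }

    def calculator(expression: str):
        """Safely evaluate a plain arithmetic expression such as '123 * 456'."""
        def _eval(node):
            if isinstance(node, ast.Expression):
                return _eval(node.body)
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
            raise ValueError("unsupported expression")
        return _eval(ast.parse(expression, mode="eval"))

    TOOL_PREFIX = "CALC("  # hypothetical tool-call syntax the model is prompted to emit

    def answer(model_output: str) -> str:
        """Route arithmetic tool calls to the calculator; pass everything else through."""
        if model_output.startswith(TOOL_PREFIX) and model_output.endswith(")"):
            return str(calculator(model_output[len(TOOL_PREFIX):-1]))
        return model_output

    # answer("CALC(123 * 456)") -> "56088"; answer("Paris") -> "Paris"

The point is just that the exact arithmetic happens outside the model's forward pass, so its unreliability at multiplication stops mattering once it learns to emit the tool call.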

5. Camper+ta 2023-07-05 22:52:34
>>woadwa+(OP)
"This Apple II is useless. It can't even run Crysis."
6. Strict+8g 2023-07-05 23:25:45
>>mhb+c8
Imagine you meet a human who is eloquent, expressive, speaks ten languages, can pass the bar or the medical board exams easily, but who cannot reliably distinguish between truth and falsehood on the smallest of questions ("what is 6x9? 42") and has no persistent memory or sense of self.

Would you be "comforted" that this mega-genius is worse at arithmetic than you are and doesn't remember what it did yesterday?

Probably not. You might well be worried that this weird psychopath is going to get a medical license and cut the wrong number of fingers off of a whole bunch of patients.

replies(1): >>mhb+5h
7. mhb+5h 2023-07-05 23:32:45
>>Strict+8g
We're agreeing, aren't we?