zlacker

1. Anthon+ (OP) 2023-05-23 08:41:42
> It would be best to wait till what you say can be evaluated. that is your hunch, not fact.

LLMs aren't even the right kind of thing to drive a car. We have AIs that attempt to drive cars and have access to cameras and vehicle controls and they still crash into stationary objects.

> No it's not. People fall for social engineering and do what you ask. if you think people can't be easily derailed, boy do i have a bridge for you.

Social engineering works because most human interactions aren't malicious and the default expectation is that any given one won't be.

That's a different thing from explicitly telling the model that this particular text is confirmed malicious and must not be heeded, only for it to immediately comply with it anyway.

And yes, you can always find that one guy, but that's this:

> Many people are of below average intelligence

It has to beat the median because if you go much below it, there are people with brain damage. Scoring equal to someone impaired or disinclined to make a minimal effort isn't a passing grade.

> "Problems" aren't made equal. Practically speaking, it's very unlikely the billion per second thinker is solving any of the caliber of problems the one attempt per day is solving.

Speed is unrelated to difficulty. You get from one attempt per day to a billion per second by running the same thing on a thousand supercomputers instead of a single dated laptop.

So the percentages are for problems of equal difficulty.

This is infinite monkeys on infinite typewriters. Except that we don't actually have infinite monkeys or infinite typewriters, so an AI which is sufficiently terrible can't be made great by any feasible amount of compute. Whereas one which is merely mediocre, failing 90% of the time, or even 99.9% of the time, can be compensated for in practice with brute force.
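To make the brute-force point concrete, here's a minimal sketch of the arithmetic, assuming each attempt succeeds independently with the same probability (real attempts are often correlated, so this is a best case):

```python
def p_at_least_one_success(p: float, attempts: int) -> float:
    """Probability that at least one of `attempts` independent tries succeeds."""
    # Complement of every attempt failing.
    return 1.0 - (1.0 - p) ** attempts

# A solver that fails 99.9% of the time (p = 0.001), given 10,000 tries,
# almost certainly succeeds at least once:
print(p_at_least_one_success(0.001, 10_000))

# But a solver with p = 0 never succeeds, no matter how much compute you throw at it:
print(p_at_least_one_success(0.0, 10_000))
```

The second call is the key asymmetry: brute force multiplies attempts, so any nonzero per-attempt success rate can be pushed arbitrarily close to certainty, but zero stays zero.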

But there are still problems that ChatGPT can't even solve 0.1% of the time.
