zlacker

[parent] [thread] 8 comments
1. Jensso+(OP)[view] [source] 2024-05-15 12:50:01
An AGI could replace human experts at tasks that don't require physical embodiment: diagnosing patients, drafting contracts, doing your taxes, etc. If you still do those manually rather than offloading all of it to ChatGPT, then you would greatly benefit from a real AGI that could do those tasks on its own.

And no, using ChatGPT the way you use a search engine isn't ChatGPT solving your problem; that is you solving your problem. ChatGPT solving your problem would mean it drives you, not you driving it as it works today. When I hired people to help me do my taxes, they told me what papers they needed and then did my taxes correctly, without me having to check their work and correct them. An AGI would work like that for most tasks, which means you would no longer need to think or learn in order to solve problems, since the AGI solves them for you.

replies(2): >>Zambyt+x2 >>xdenni+Dg
2. Zambyt+x2[view] [source] 2024-05-15 13:03:55
>>Jensso+(OP)
Let's take a step back from LLMs. Could you accept the network of all interconnected computers as a generally intelligent system? The key part here that drives me to ask this is:

> ChatGPT solving your problem would mean it drives you, not you driving it like it works today.

I had a very bad Reddit addiction in the past. It took me years of consciously trying to quit before I broke the habit. I think I could make a reasonable argument that Reddit was using me to solve its problems, rather than me using it to solve mine. I think this is also true of a lot of systems - Facebook, TikTok, YouTube, etc.

It's hard to pin down the network of all computers as an "agent" in the way we usually use that word and assign some degree of intelligence to, but I think it is at least an interesting exercise to try.

replies(1): >>Jensso+k3
3. Jensso+k3[view] [source] [discussion] 2024-05-15 13:10:13
>>Zambyt+x2
Companies are general intelligences, and they use people, yes. But that depends on humans interpreting the data Reddit users generate and updating their models, code, and algorithms to adapt to it; the computer systems alone aren't general intelligences if you remove the humans.

An AGI could run such a company without humans anywhere in the loop, just like humans can run such a company without an AGI helping them.

I'd say a strong signal that AGI has happened would be large, fully automated companies without a single human decision-maker, no CEO, etc. Until that happens I'd say AGI isn't here. If it does happen, it could be AGI, but I can also imagine a good enough script pulling it off for some simple business.

4. xdenni+Dg[view] [source] 2024-05-15 14:16:06
>>Jensso+(OP)
> An AGI could replace human experts at tasks that doesn't require physical embodiment, like diagnosing patients, drafting contracts, doing your taxes etc.

How come the goalposts for AGI are always the best of what people can do?

I can't diagnose anyone, yet I have GI.

Reminds me of:

> Will Smith: Can a robot write a symphony? Can a robot take a blank canvas and turn it into a masterpiece?

> I, Robot: Can you?

replies(1): >>Jensso+ew
5. Jensso+ew[view] [source] [discussion] 2024-05-15 15:27:34
>>xdenni+Dg
> How come the goal posts for AGI are always the best of what people can do?

Not the best; I just want it to be able to do what average professionals can do, because average humans can become average professionals in most fields.

> I can't diagnose anyone, yet I have GI.

You can learn to, and an AGI system should be able to learn to as well. And since we can copy an AGI's learning, if it hasn't learned to diagnose people yet then it probably isn't an AGI: an AGI should be able to learn that without humans changing its code, and once one copy has learned it, we can copy that forever and the entire AGI knows how to do it.

So the AGI should be able to do all the things you could do if we include all versions of you that learned different fields. If the AGI can't do that, then you are more intelligent than it in those areas, even if the singular you isn't better at those things than it is.

For these reasons it makes more sense to compare an AGI to humanity rather than to individual humans, because for an AGI there is no such thing as an "individual", at least not the way we make AI today.

replies(1): >>Heatra+uM
6. Heatra+uM[view] [source] [discussion] 2024-05-15 16:40:40
>>Jensso+ew
People with severe Alzheimer's cannot learn, but still have general intelligence.
replies(1): >>Jensso+KN
7. Jensso+KN[view] [source] [discussion] 2024-05-15 16:45:35
>>Heatra+uM
If they can't learn, then they don't have general intelligence; without learning there are many problems you won't be able to solve that average (or even very dumb) people can solve.

Learning is a core part of general intelligence, since general intelligence implies you can learn about new problems in order to solve them. Take that away and you are no longer a general problem solver.

replies(1): >>Zambyt+9t1
8. Zambyt+9t1[view] [source] [discussion] 2024-05-15 20:19:30
>>Jensso+KN
That's a really good point. I want to define what I think intelligence is so we are on the same page: it is the combination of knowledge and reason. An example of a system with high knowledge and low reason is Wikipedia. An example of a system with high reason and low knowledge is a scientific calculator. A highly intelligent system exhibits aspects of both.

A rule-based expert system can be highly intelligent, but it is not general, and maybe no arrangement of rules could make one that is general. A generally intelligent system must be able to learn and adapt to foreign problems, parameters, and goals dynamically.

replies(1): >>Jensso+6H1
9. Jensso+6H1[view] [source] [discussion] 2024-05-15 21:41:18
>>Zambyt+9t1
Yes, I think that makes sense; you can be intelligent without being generally intelligent. By some definitions the person with Alzheimer's can be more intelligent than someone without it, but the person without it is more generally intelligent thanks to the ability to learn.

The classic example of a task requiring general intelligence is to be given the rules of a new game and then play it adequately; there are AI contests for exactly that. It is easy for humans: games are enjoyed even by dumb people, yet we have yet to make an AI that can play arbitrary games as well as even dumb humans.

Note that LLMs are more general than previous AIs thanks to in-context learning, so we are making progress, but we are still far from as general as humans.
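The "new game from its rules" benchmark can be sketched in a few lines. This is only a toy illustration: real general game playing contests describe games in a formal language, whereas here the game arrives as a handful of hypothetical functions that a game-agnostic search calls. The dict-of-functions interface is invented for this sketch.

```python
def negamax(state, game):
    """Return (value, best_move) from the perspective of the player to move.

    The search knows nothing about the specific game; the game is handed
    in as plain functions (a made-up interface, not any real GGP API).
    """
    if game["terminal"](state):
        return game["utility"](state), None
    best_val, best_mv = float("-inf"), None
    for mv in game["moves"](state):
        # Zero-sum, alternating turns: the opponent's value is our negation.
        val, _ = negamax(game["apply"](state, mv), game)
        if -val > best_val:
            best_val, best_mv = -val, mv
    return best_val, best_mv

# Example game: normal-play Nim with one pile, take 1-3 stones per turn,
# and whoever takes the last stone wins. The state is just the pile size.
nim = {
    "terminal": lambda s: s == 0,
    # If it's your turn with no stones left, your opponent took the last one.
    "utility": lambda s: -1,
    "moves": lambda s: range(1, min(3, s) + 1),
    "apply": lambda s, mv: s - mv,
}

print(negamax(5, nim))  # → (1, 1): a pile of 5 is a win; take 1, leaving a losing pile of 4
```

The same `negamax` plays any two-player zero-sum game you express through those four functions, which is the (heavily simplified) spirit of the contests: the player's code is fixed, only the game description changes.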
