Engineering Level:
Get CO2 levels under control
End sickness/death
Enhance cognition by integrating with willing minds.
Safe and efficient interplanetary travel.
Harness vastly higher levels of energy (solar, nuclear) for global benefit.
Science: Uncover deeper insights into the laws of nature.
Explore fundamental mysteries like the simulation hypothesis, Riemann hypothesis, multiverse theory, and the existence of white holes.
Effective SETI
Misc: End of violent conflicts
Fair yet liberal resource allocation (if still needed): "from scarcity to abundance".
AI does not experience fatigue or distractions => consistent performance.
AI can scale its processing power significantly, despite the challenges associated with it (I understand the challenges)
AI can ingest and process new information at an extraordinary speed.
AIs can rewrite themselves
AIs can be replicated (solving the scarcity of intelligence in manufacturing)
Once we achieve AGI, progress could compound rapidly, for better or worse, due to the above points.

So, if you want to meet with someone, instead of opening your calendar app and looking for an opening, you'd ask your AGI assistant to talk to their AGI assistant and set up a 1h meeting soon. Or, instead of going on Google to find plane tickets, you'd ask your AGI assistant to find the most reasonable tickets for a certain date range.
This would not require any special intelligence more advanced than a human's, but it does require a very general understanding of the human world that is miles beyond what LLMs can achieve today.
Going only slightly further with assumptions about how smart an AGI would be, it could revolutionize education, at any level, by acting as a true personalized tutor for a single student, or even for a small group of students. The single biggest problem in education is that it's impossible to scale the highest quality education - and an AGI with capabilities similar to a college professor would entirely solve that.
And no, using ChatGPT like you use a search engine isn't ChatGPT solving your problem; that is you solving your problem. ChatGPT solving your problem would mean it drives you, not you driving it like it works today. When I hired people to help me do my taxes, they told me what papers they needed and then did my taxes correctly, without me having to look everything through and correct them. An AGI would work like that for most tasks: you would no longer need to think or learn to solve problems, since the AGI solves them for you.
I guess the issue here is: can a system be "generally intelligent" if it doesn't have access to general tools to act on that intelligence? I think so, but I also can see how the line is very fuzzy between an AI system and the tools it can leverage, as really they both do information processing of some sort.
Thanks for the insight.
> ChatGPT solving your problem would mean it drives you, not you driving it like it works today.
I had a very bad Reddit addiction in the past. It took me years of consciously trying to quit in order to break the habit. I think I could make a reasonable argument that Reddit was using me to solve its problems, rather than myself using it to solve mine. I think this is also true of a lot of systems - Facebook, TikTok, YouTube, etc.
It's hard to pin down all computers as an "agent" in the way we like to think about that word and assign some degree of intelligence to, but I think it is at least an interesting exercise to try.
An AGI could run such a company without humans anywhere in the loop, just like humans can run such a company without an AGI helping them.
I'd say a strong signal that AGI has happened would be large, fully automated companies without a single human decision-maker: no CEO, etc. Until that happens, I'd say AGI isn't here. If it does happen, it could be AGI, but I can also imagine a good-enough script doing it for some simple business.
GPT-4o's context window is 128k tokens, which is somewhere on the order of 128 kB. Your brain's context window (all the subliminal activations from the nerves in your gut and the parts of your visual field you aren't necessarily paying attention to) is on the order of 2 MB. So a similar order of magnitude, though GPT has a sliding window while your brain has more of an exponential decay in activations. That LLMs can accomplish everything they do with what seems analogous to human reflex, rather than human reasoning, is astounding and more than a bit scary.
How come the goal posts for AGI are always the best of what people can do?
I can't diagnose anyone, yet I have GI.
Reminds me of:
> Will Smith: Can a robot write a symphony? Can a robot take a blank canvas and turn it into a masterpiece?
> I, Robot: Can you?
People want their suburban lifestyle with their red meat and their pick-up truck or SUV. They drive fuel-inefficient vehicles long distances to urban work environments, and they seem to have very limited interest in changing that. People who like detached homes can't suddenly afford the rare instances of them closer to their work. We burn lots of oil because we drive fuel-inefficient vehicles long distances. This is a problem of changing human preferences, which you just aren't going to solve with an AGI.
I'm at the European AI Conference for our startup tomorrow, and they use a platform that just booked me 3 meetings automatically with other people there based on our availability... It's not rocket science.
And you don't even need those narrow tools. You could easily ask GPT-4o (or lesser versions) something along the lines of:
> "you're going to interact with another AI assistant to book meetings for me: [here would be the details about the meeting]. Come up with a protocol that you'll send to the other assistant so it can understand what the meetings are about, communicate their availability to you, etc. I want you to come up with the entire protocol, send it, and communicate with the other assistant end-to-end. I won't be available to provide any more context; I just want the meeting to be booked. Go."
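For what it's worth, the kind of protocol two assistants might converge on fits in a few lines. Everything below (the message shapes, field names, and slot format) is invented for illustration, not any real assistant API:

```python
# A toy version of the protocol the prompt asks for: structured
# proposal/reply messages exchanged between two assistants.

def propose(my_free_slots, duration_h=1):
    """Opening message: offer every slot I have free."""
    return {"type": "proposal", "duration_h": duration_h,
            "slots": sorted(my_free_slots)}

def respond(message, my_free_slots):
    """Reply: accept the first offered slot I also have free."""
    for slot in message["slots"]:
        if slot in my_free_slots:
            return {"type": "accept", "slot": slot}
    return {"type": "reject", "reason": "no overlap"}

a_free = {"Tue 10:00", "Tue 15:00", "Wed 09:00"}
b_free = {"Tue 15:00", "Wed 14:00"}
print(respond(propose(a_free), b_free))  # accepts 'Tue 15:00'
```

The hard part, of course, isn't this exchange; it's getting both models to invent, agree on, and reliably stick to a format like this without a human specifying it.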
With abundant electric cars (at this future point in time) and clean electricity powering heating, transportation, and manufacturing, some AIs could be repurposed for CO2 capture.
It sounds deceptively easy, but from an engineering standpoint it likely holds up. With free energy and AGI handling labor and thinking, we could achieve what a whole civilization could do, and more (since no individual incentives come into play).
However, human factors could be a problem: protests (Luddites), wireheading, misuse of AI, and AI-induced catastrophes (alignment failures).
Not the best, I just want it to be able to do what average professionals can do because average humans can become average professionals in most fields.
> I can't diagnose anyone, yet I have GI.
You can learn to, and an AGI system should be able to learn to as well. And since we can copy what an AGI has learned, if it hasn't learned to diagnose people yet then it probably isn't an AGI: an AGI should be able to learn that without humans changing its code, and once it has learned it once, we can copy that forever, and now every instance of the AGI knows how to do it.
So, the AGI should be able to do all the things you could do if we include all versions of you that learned different fields. If the AGI can't do that then you are more intelligent than it in those areas, even if the singular you isn't better at those things than it is.
For these reasons it makes more sense to compare an AGI to humanity rather than individual humans, because for an AGI there is no such thing as "individuals", at least not the way we make AI today.
Do you work in education? Because I don't think many who do would agree with this take.
Where I live, the single biggest problem in education is that we can't scale staffing without increasing property taxes, and people don't want to pay higher property taxes. And no, AGI does not fix this problem, because you need staff to be physically present in schools to deal with children.
Even if we had an AGI that could do actual presentation of coursework and grading, you need a human being in there to make sure they behave and to meet the physical needs of the students. Humans aren't software to program around.
Learning is a core part of general intelligence, as general intelligence implies you can learn about new problems so you can solve them. Take that away and you are no longer a general problem solver.
A model that is as good as an average human but costs $10,000 per effective man-hour to run is not very useful, but it is still an AGI.
Geohot (https://geohot.github.io/blog/) estimates that a human brain equivalent requires 20 PFLOPS. Current top-of-the-line GPUs are around 2 PFLOPS and consume up to 500 W. Scaling that linearly gives 5 kW, which translates to approximately 3 EUR per hour, if I calculate correctly.
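The back-of-the-envelope math, spelled out. One assumption made explicit: an electricity price of about 0.60 EUR/kWh is needed to land on the ~3 EUR figure (typical European rates are lower, which would make it cheaper):

```python
# Cost of running a "brain equivalent" on GPUs, per the estimate above.
brain_pflops = 20      # Geohot's estimate for a human-brain equivalent
gpu_pflops = 2         # rough figure for a current top-of-the-line GPU
gpu_watts = 500
eur_per_kwh = 0.60     # assumed electricity price (my assumption)

gpus_needed = brain_pflops / gpu_pflops       # 10 GPUs
power_kw = gpus_needed * gpu_watts / 1000     # 5 kW
cost_per_hour = power_kw * eur_per_kwh        # ~3 EUR/h
print(f"{gpus_needed:.0f} GPUs, {power_kw} kW, {cost_per_hour:.2f} EUR/h")
```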
Does it? I am quite certain those things are achievable right now without anything like AI in the sense being discussed here.
A rule based expert intelligence system can be highly intelligent, but it is not general, and maybe no arrangement of rules could make one that is general. A general intelligence system must be able to learn and adapt to foreign problems, parameters, and goals dynamically.
The classical example of a generally intelligent task is to receive the rules of a new game and then play it adequately; there are AI contests for exactly that. It is easy for humans: games are enjoyed even by dumb people. But we have yet to make an AI that can play arbitrary games as well as even a dumb human.
Note that LLMs are more general than previous AIs thanks to in-context learning, so we are making progress, but we are still far from being as general as humans are.
And getting it to actually buy stuff like plane tickets on your behalf would be entirely crazy.
Sure, it can be made to do some parts of this for very narrowly defined scenarios, like the specific platform of a single three day conference. But it's nowhere near good enough for dealing with the general case of the messy general world.
Sure, this doesn't mean you could just fire all teachers and dissolve all schools. You still need people to physically be there and interact with the children in various ways. But if you could separate the actual teaching from the child care part, and if you could design individualized courses for each child with something approaching the skill of the best teachers in the whole world, you would get an inconceivably better educational system for the entire population.
And I don't need to work in education to know much of this. Like everyone else, I was intimately acquainted with the educational system (in my country) for 16 years of my life through direct experience, and with much more of it since then, though increasingly less directly. I have very good, very direct experience of the variance between teachers and of the impact it has on how well students understand and engage with the material.
The difference, though, is the amount of work. Today, if you wanted GPT-4 to work as I describe, you would have to write an integration for Gmail, another one for Office365, another one for Proton, and so on. You would probably have to create a management interface to give OpenAI access to your auth tokens for each of these so they can activate these integrations. The person you want to sync with would have to do the same.
In contrast, an AGI with only average human intelligence, or even below, would just need access to, say, Firefox APIs, and should easily be able to achieve all of this. And it would work regardless of whether the other side is a different AGI from a different provider, or even just a regular human assistant.
> given only my and your email address.
AI or not, such an application would need more than just email addresses. It would need access to our schedules.
If you were in a room with no computer, would you consider yourself to be not intelligent enough to send an email? Does the tooling you have access to change your level of intelligence?
If you're looking for insight into the problems faced in education, speak to educators. I really doubt they would tell you that the quality of individual instructors is their biggest problem.
I had a (human) assistant in my previous business, super-smart MBA type, and by your definition she wasn't a general intelligence on the day of onboarding:
- she didn't have access to my email account or calendar
- she didn't know my usual lunch time hours
- she didn't have a company card yet.
All of those points you're raising are logistics, not intelligence.
Intelligence is "When trying to achieve a goal, can you conceive of a plan to get there despite adverse conditions, by understanding them and charting/reviewing a sequence of actions".
You can definitely be an intelligent entity without hands or tools.
> AI or not, such an application would need more than just email addresses. It would need access to our schedules.
It needs access to my schedule, yes, but it only needs your email address. It can then ask you (or your own AGI assistant) if a particular date and time is convenient. If you then propose another time, it can negotiate appropriately.
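That negotiation loop is simple to sketch: my assistant knows only my schedule plus your email address, proposes times, and handles counter-proposals. All names, the reply format, and the slot strings below are invented for illustration:

```python
# One side's schedule is hidden from the other; they only exchange
# proposals and counter-proposals, as described above.
THEIR_FREE = {"Tue 15:00", "Wed 14:00"}   # hidden from my assistant

def other_side(proposed_slot):
    """Your assistant: accept, or counter with its earliest free slot."""
    return "ok" if proposed_slot in THEIR_FREE else min(THEIR_FREE)

def negotiate(my_free, reply_fn, max_rounds=5):
    """Propose my free slots one at a time; take a counter-proposal
    if it also fits my schedule; give up after max_rounds."""
    for slot in sorted(my_free)[:max_rounds]:
        reply = reply_fn(slot)
        if reply == "ok":
            return slot
        if reply in my_free:   # their counter-proposal works for me
            return reply
    return None                # no agreement: escalate to the humans

print(negotiate({"Tue 10:00", "Wed 14:00"}, other_side))  # Wed 14:00
```

The point isn't the loop itself; it's that everything around it (finding the right address, interpreting free-form replies, knowing when to give up and ask a human) is the part that needs general intelligence.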
Educators are the best people to ask about how to make their jobs easier. They are not necessarily the best people to ask about how to make children's education better.
Edit:
> That's like claiming you know how to run a restaurant because you like to eat out.
No, it's like claiming you know some things about the problems of restaurants, and about the difference between good and bad restaurants, after spending 8+ hours a day almost every day, for 16 years, eating out at restaurants. Which I think would be a decent claim.
But you certainly didn't have to write a special program for your assistant to integrate with your inbox, they just used an existing email/calendar client and looked at their screen.
GPT-4 is nowhere near able to interact with, say, the Gmail web page at this level. And even if you created the proper integrations, it's nowhere near the level that it could read all incoming email and intelligently decide, with high accuracy, which emails necessitate updates to your calendar, which don't, and which necessitate back-and-forth discussions to negotiate a better date for you.
Sure, your assistant didn't know all of this on day one, but they learned how to do it on their own, presumably with a few dozen examples at most. That is the mark of a general intelligence.
I'm pretty sure, from previous interactions with GPT-4o and from their demos, that if you used their desktop app (which enables screensharing) and asked it to tell you where to click, step-by-step, in the Gmail web page, it would be able to do a pretty good job of navigating through it.
Let's remember that the Gmail UI is one of the most heavily documented (in blogs, FAQs, support pages, etc.) in the world. I can't see GPT-4o having any difficulty locating elements in there.