I have to admit, of the four, Karpathy and Sutskever were the two I was most impressed with. I hope he goes on to do something great.
When the next wave of new deep learning innovations sweeps the world, Microsoft eats what's left of them. They make lots of money, but don't have a future unless they replace what they lost.
They have the next iteration of GPT, which Sutskever helped finalize. OpenAI has lost its future unless they find new people of the same caliber.
Right now it's all about reducing transaction costs, small-i innovating, onboarding integrations, maintaining customer and stakeholder trust, getting content, managing stakeholders, and selling.
Before the Apple partnership, maybe it seemed like the moat was shrinking, but I'm not so sure now.
Likely they have access to a LOT of data now too.
You could say the same about Google - and yet they missed the consequences of their own discovery and fell behind instead of leading. So you need specific talent to pull this off, even if in theory you can hire anybody.
How do you know that they have the next GPT?
How do you know what Sutskever contributed? (There was talk that the most valuable contributions came from the less well-known researchers, not from him.)
By the 90s they were still mainly used as fancy typewriters by “normal” people (my parents, school, etc) although the ridiculous potential was clear from day one.
It just took a looong time to go from pong to ping and then to living online. I’m still convinced even this stage is temporary and only a milestone on the way to bigger and better things. Computing and computational thought still have to percolate into all corners of society.
Again, not saying LLMs are the same, but AI in general will probably walk a similar path. It just takes a long time - think decades, not years.
Edit: wanted to mention The Mother of All Demos by Engelbart (1968), which to me looks like it captures all essential aspects of what distributed online computing can do. In a “low resolution”, of course.
AKA 'the four horsemen of enshittification'.
But the reality is, LLMs are a cannibalization threat to Search. And the Search Monopoly is the core money making engine of the entire company.
Classic innovators dilemma. No fat-and-happy corporate executive would ever say yes to putting lots of resources behind something risky that might also kill the golden goose.
The only time that happens at a big established company is when it's driven by some iconoclastic founder. And Google’s founders have been MIA for over a decade.
that won't happen, the next scam will be different
It was crypto until FTX collapsed; then the usual suspects, led by a16z, leaned on OpenAI to rush whatever they had to market, hence the odd naming of ChatGPT 3.5.
When the hype is finally realized to be just mass printing bullshit -- relevant bullshit, yes, which sometimes can be useful but not billions of dollars of useful -- there will be something else.
Same old, same old. The only difference is there are no new catchy tunes. Yet? https://youtu.be/I6IQ_FOCE6I https://locusmag.com/2023/12/commentary-cory-doctorow-what-k...
They became viable in the 2000s - let's say 2007, with the iPhone - and by the late 2010s everyone was living online, so "decades" is a stretch.
Being first at the start (i.e. first mover advantage) is huge.
If something like Q* is provided organically with GPT5 (which may have a different name), and allows proper planning, error correction and direct interaction with tools, that gap is getting really close to 0.
E.g. Oppenheimer’s team created the bomb, then the experts who followed fine-tuned the subsequent weapon systems and payload designs. Etc.
1978: the Apple ][. 1 MHz 8-bit microprocessor, 4 KB of RAM, monochrome all-caps display.
1990: Mac IIci, 25 MHz 32-bit CPU, 4 MB RAM, 640x480 color graphics, and an easy-to-use GUI.
Ask any of us who used both of these at the time: it was really amazing.
AI, on the other hand, has a near infinite potential. It's conceivable that it will grow the global economy by 2% OR MORE per MONTH for decades or more.
AI is going to be much more impactful than the internet. Probably more than internal combustion, the steam engine and electricity combined.
The question is about the timescale. It could take 2 years before it really starts to generate profits, or it could take 10 or even more.
Attention and scale is all you need
Anything else you do will be overtaken by the LLM when it builds its internal structures
Well, LLM and MCTS
The rest is old news. Like Cyc
Enlighten us
By 1990 home computer use was still a niche interest. They were still toys, mainly. DTP, word processing and spreadsheets were a thing, but most people had little use for them - I had access to a Mac IIci with an ImageWriter dot matrix around that time and I remember nervously asking a teacher whether I would be allowed to submit a printed typed essay for a homework project - the idea that you could do all schoolwork on a computer was crazy talk. By then, tools like Mathematica existed but as a curiosity not an essential tool like modern maths workbooks are.
The internet is what changed everything.
I have integrated 6 independent, specialized "AI attorneys" into a project management system where they are collaborating with "AI web developers", "AI creative writers", "AI spreadsheet gurus", "AI negotiators", "AI financial analysts" and an "AI educational psychologist" that looks at the user, the nature and quality of their requests, and makes a determination of how much help the user really needs, modulating how much help the other agents provide.
I've got a separate implementation that is all home solar do-it-yourself, that can guide someone from nothing all the way to their own self made home solar setup.
Currently working on a new version that exposes my agent creation UI with a boatload of documentation, aimed at general consumers. If one can write well - as in, write quality prose - that person can master these LLMs and get superior results.
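To make the pattern concrete (this is not the system described above, just a minimal sketch of the idea: a coordinator agent estimates how much help the user needs and scales the specialists' answers accordingly; call_llm, estimate_help_level and the prompts are hypothetical stand-ins):

    from dataclasses import dataclass

    def call_llm(system_prompt: str, user_message: str) -> str:
        """Placeholder for a real model call (OpenAI client, local model, etc.)."""
        return f"[{system_prompt[:30]}...] response to: {user_message[:40]}"

    @dataclass
    class Agent:
        name: str
        system_prompt: str

        def respond(self, request: str, help_level: int) -> str:
            # Scale how much hand-holding the specialist provides.
            detail = {1: "Answer briefly; the user mostly knows what they are doing.",
                      2: "Answer with moderate detail and one example.",
                      3: "Answer step by step, assuming no prior knowledge."}[help_level]
            return call_llm(f"{self.system_prompt} {detail}", request)

    def estimate_help_level(request: str) -> int:
        """Stand-in for the 'educational psychologist' role: rate needed guidance 1-3."""
        verdict = call_llm("Rate 1-3 how much guidance this user needs. Reply with a digit.", request)
        return int(next((c for c in verdict if c in "123"), "2"))

    specialists = [
        Agent("attorney", "You are a contracts attorney."),
        Agent("financial_analyst", "You are a financial analyst."),
    ]

    request = "Can you review this vendor agreement and flag payment risks?"
    level = estimate_help_level(request)
    for agent in specialists:
        print(agent.name, "->", agent.respond(request, level))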
I would say instead, stay tuned.
You don't really need to look at history, that's basically science vs engineering in a nutshell.
Maybe history could tell us whether that's an accident or a division that arose 'naturally', but I suppose it's a question for an economist, psychologist or sociologist how natural it could really be anyway - or whether it's biased by, e.g., academics who aren't financially motivated because there happens to be no money there, so they don't care about productising and leave it to others who are so motivated.
Regular people shrug and say, yeah sure, but what can I do with it. They still do to this day.
ChatGPT is the number 1 brand in AI and as such needs to learn what it's selling, not how its technology works. It always sucks when mission and vision don't align with the nerds' ideas, but I think it's probably the best move for both parties.
I'm not as in tune as some people here so: don't they need both? With the rate at which things are moving, how can it be otherwise?
It’s not short sightedness, it’s rational self-interest. The rewards for taking risk as employee #20,768 in a large company are minimal, whereas the downside can be catastrophic for your career & personal life.
Ah yes, "it's so obvious no one sees it but me". Until you show people your work, and have real experts examining the results, I'm going to remain skeptical and assume you have LLMs talking nonsense to each other.
Bank fees don't disappear into the ether when they're collected, so I doubt they have this much effect.
Oh, made my very first retail purchase with Bitcoin the other day. While the process was pretty slick and easy, the network charged $15.00 in fees. Long way to go until "free".
Engineering Level:
Solve CO2 Levels
End sickness/death
Enhance cognition by integrating with willing minds.
Safe and efficient interplanetary travel.
Harness vastly higher levels of energy (solar, nuclear) for global benefit.
Science: Uncover deeper insights into the laws of nature.
Explore fundamental mysteries like the simulation hypothesis, Riemann hypothesis, multiverse theory, and the existence of white holes.
Effective SETI
Misc: End of violent conflicts
Fair yet liberal resource allocation (if still needed), "from scarcity to abundance"

It's going to get orders of magnitude less expensive, but for now, the capital requirements feel like a pretty deep moat.
1-3% was intended as a ceiling for what cryptocurrency could bring to the economy, after adjusting for the reduction in inflation once those costs are gone.
Fast forward to today and we are discussing the implications of him leaving OpenAI on this very thread.
Evidence to support the notion that you can’t just throw mountains of cash and engineers at a problem to do something truly trailblazing.
One day at work about 10-15 years ago I looked at my daily schedule and found that on that day my team were responsible for delivering a 128kb build of Tetris and a 4GB build of Real Racing.
They have a strong focus on making the existing models fast and cheap without sacrificing capability which is music to the ears of those looking to build with them.
Your claim was that people should care about compute based on what the provider has done in the AI space, but Microsoft was pretty far behind on that side until OpenAI - Google was really the only player in town. Should they have wanted GCP credits instead? Do you care about their AI results or the ex post facto GPU shipments?
Or, if what you actually want to argue is that Anthropic would be able to get more GPUs with Azure than AWS or GCP then this is a different argument which is going to require different evidence than raw GPU shipments.
AI has a certain mystique that helps get money. In the 1980s I was on a DARPA neural network tools advisory panel, and I concurrently wrote a commercial product that included the 12 most common network architectures. That allowed me to step in when a project was failing (a bomb detector we developed for the FAA) that used a linear model, with mediocre results. It was a one day internal consult to provide software for a simple one hidden layer backprop model. During that time I was getting mediocre results using symbolic AI for NLP, but the one success provided runway internally in my company to keep going.
AI does not experience fatigue or distractions => consistent performance.
AI can scale its processing power significantly, despite the challenges associated with it (I understand the challenges)
AI can ingest and process new information at an extraordinary speed.
AIs can rewrite themselves
AIs can be replicated (solving the scarcity of intelligence in manufacturing)
Once AGI is achieved, progress could compound rapidly, for better or worse, due to the above points.

So, if you want to meet with someone, instead of opening your calendar app and looking for an opening, you'd ask your AGI assistant to talk to their AGI assistant and set up a 1h meeting soon. Or, instead of going on Google to find plane tickets, you'd ask your AGI assistant to find the most reasonable tickets for a certain date range.
This would not require any special intelligence more advanced than a human's, but it does require a very general understanding of the human world that is miles beyond what LLMs can achieve today.
Going only slightly further with assumptions about how smart an AGI would be, it could revolutionize education, at any level, by acting as a true personalized tutor for a single student, or even for a small group of students. The single biggest problem in education is that it's impossible to scale the highest quality education - and an AGI with capabilities similar to a college professor would entirely solve that.
A tiny fraction of the current funding. 2-4 orders of magnitude less.
> It's just that fundamental scientific discovery bears little relationship to the pallets of cash
Heavy funding may not automatically lead to breakthroughs such as Special Relativity or Quantum Mechanics (though it helps there too). But once the most basic ideas are in place, massive funding is what causes the breakthroughs, as in the Manhattan Project and the Apollo Program.
And it's not only the money itself. It's the attention and all the talent that is pulled in due to that.
And in this case, there is also the fear that the competition will reach AGI first, whether the competition is a company or a foreign government.
It's certainly possible that the ability to monetize the investments may lead to some kind of slowdown at some point (like if there is a recession).
But it seems to me that such a recession will have no more impact on the development of AGI than the dotcom bust had for the importance of the internet.
Yes, you may have short-term growth, but this is solely due to there being less regulation.
Despite what many people think, regulation is a good thing, put in place to avoid the excesses that lead to lost livelihoods. It stops whales from exploiting the poor, and provides tools for central banks to try to avoid depressions.
Cost-wise, banks acting as trust authorities actually can theoretically be cheaper too.
Broadband. Dial-up was still too much of an annoyance, too expensive.
Once broadband was ubiquitous in the US and Europe, that's when the real explosion of computer usage happened.
But compared to the 100s of billions (possibly trillions, globally) that are currently being plowed into AI, that's peanuts.
I think the closest recent analogy to the current spending on AI, was the nuclear arms race during the cold war.
If China is able to field ASI before the US even has full AGI, nukes may not matter much.
We don't know if full AGI can be built using just current technology (like transformers) given enough scale, or if 1 or more fundamental breakthroughs are needed beyond just the scale.
My hypothesis has always been that AGI will arrive roughly when the compute power and model size matches the human brain. That means models of about 100 trillion params, which is not that far away now.
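For a sense of what 100 trillion parameters implies in hardware terms, a quick sanity check (the parameter count above and the 2 bytes per parameter are assumptions for illustration, not established requirements):

    params = 100e12                      # 100 trillion parameters
    bytes_per_param = 2                  # fp16 weights, assumed
    weights_tb = params * bytes_per_param / 1e12
    print(f"~{weights_tb:.0f} TB of weights")    # ~200 TB
    # A 1-trillion-parameter model at the same precision is ~2 TB, so this would be
    # roughly a 100x jump in weight storage alone, before optimizer state and
    # activations needed for training.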
And no, using ChatGPT like you use a search engine isn't ChatGPT solving your problem; that is you solving your problem. ChatGPT solving your problem would mean it drives you, not you driving it like it works today. When I hired people to help me do my taxes, they told me what papers they needed and then they did my taxes correctly, without me having to look it over and correct them. An AGI would work like that for most tasks: you no longer need to think or learn to solve problems, since the AGI solves them for you.
I guess the issue here is: can a system be "generally intelligent" if it doesn't have access to general tools to act on that intelligence? I think so, but I also can see how the line is very fuzzy between an AI system and the tools it can leverage, as really they both do information processing of some sort.
Thanks for the insight.
Operational costs were correspondingly lower, as they didn't need to pay electricity and compute bills for tens of millions of concurrent users.
> But once the most basic ideas are in place, massive funding is what causes the breakthroughs, as in the Manhattan Project and the Apollo Program.
There is no reason to think that the ideas are in place. It could be that a local optimum has been reached, as happened in many other technology advances before. The current model is mass-scale and data-driven; the Internet has been sucked dry for data and there's not much more coming. This may well require a substantial change in approach, and so far there are no indications of that.
From this pov monetization is irrelevant, as except for a few dozen researchers the rest of the crowd are expensive career tech grunts.
Basically, crypto is more like a gold rush than a tech breakthrough. And gold rushes rarely lead to much more than increased inflation.
> ChatGPT solving your problem would mean it drives you, not you driving it like it works today.
I had a very bad Reddit addiction in the past. It took me years of consciously trying to quit in order to break the habit. I think I could make a reasonable argument that Reddit was using me to solve its problems, rather than myself using it to solve mine. I think this is also true of a lot of systems - Facebook, TikTok, YouTube, etc.
It's hard to pin down all computers as an "agent" in the way we like to think about that word and assign some degree of intelligence to, but I think it is at least an interesting exercise to try.
An AGI could run such a company without humans anywhere in the loop, just like humans can run such a company without an AGI helping them.
I'd say a strong signal that AGI has happened would be large, fully automated companies without a single human decisionmaker in the company, no CEO etc. Until that has happened I'd say AGI isn't here; if it does happen it could be AGI, but I can also imagine a good enough script doing it for some simple thing.
GPT4o's context window is 128k tokens which is somewhere on the order of 128kB. Your brain's context window, all the subliminal activations from the nerves in your gut and the parts of your visual field you aren't necessarily paying attention to is on the order of 2MB. So a similar order of magnitude though GPT has a sliding window and your brain has more of an exponential decay in activations. That LLMs can accomplish everything they do just with what seems analogous to human reflex rather than human reasoning is astounding and more than a bit scary.
You can really flip the entire ad supported industry upside down if you integrate with a bunch of publishers and offer them a deal where they are paid every time an article from their website is returned. If they make this good enough people will pay $15-20 a month for no ads in a search engine.
That isn't the case, at all. All I'm stating is what the chart clearly shows - Azure has invested deeply in this technology and at a rate that far exceeds AWS.
This round of AI is only capable of producing bullshit. Relevant bullshit but bullshit. This can be useful https://hachyderm.io/@inthehands/112006855076082650 but it doesn't mean it's more impactful than the Internet.
That depends on what you mean when you say "ideas". If you consider ideas at the level of transformers, well then I would consider those ideas of the same magnitude as many of the ideas the Manhattan Project or Apollo Program had to figure out on the way.
If you mean ideas like going from expert systems to neural networks with backprop, then that's more fundamental and I would agree.
It's certainly still conceivable that Penrose is right in that "true" AGI requires something like microtubules to be built. If so, that would be on the level of going from expert systems to NNs. I believe this is considered extremely exotic in the field, though. Even LeCun probably doesn't believe that. Btw, this is the only case where I would agree that funding is more or less irrelevant.
If we require 1-2 more breakthroughs on par with Transformers, then those could take anything from 2-15 years to be discovered.
For now, though, those who have predicted that AI development will mostly be limited by network size and the compute to train it (like Sutskever or implicitly Kurzweil) have been the ones most accurate in the expected rate of progress. If they're right, then AGI some time between 2025-2030 seems most likely.
Those AGIs may be very large, though, and not economical to run for a wider audience until some time in the 2030s.
So, to summarize: Unless something completely fundamental is needed (like microtubules), which happens to be a fringe position, AGI some time between 2025 and 2040 seems likely. The "pessimists" (or optimists, in term of extinction risk) may think it's closer to 2040, while the optimists seem to think it's arriving very soon.
If your view is that LLMs only need minor improvements to their core technology and that the major engineering focus should be placed on productizing them, then losing a bunch of scientists might not be seen as that big of a deal.
But if your view is that they still need to overcome significant milestones to really unlock their value... then this is a pretty big loss.
I suppose there's a third view, which is: LLMs still need to overcome significant hurdles, but solutions to those hurdles are a decade or more away. So it's best to productize now, establish some positive cashflow and then re-engage with R&D when it becomes cheaper in the future and/or just wait for other people to solve the hard problems.
I would guess the dominant view of the industry right now is #1 or #3.
Compare that to the 6+ trillion dollars that were spent in the US alone on nuclear weapons, and then consider: what is of greater strategic importance, ASI or nukes?
That’s not true at all. The biggest issue is that it doesn’t work. You can’t actually trust ai systems and that’s not a product issue.
I actually expected objections on the opposite direction. But then, this is not twitter/X.
The point is that something that can easily generate 20%-100% growth per year (AGI/ASI) is so much more important that the best-case predictions for crypto's effect on the economy are not even noticeable.
That's why comparing the crypto bubble to AI is so meaningless. Crypto was NEVER* going to be something hugely important, while AI is potentially almost limitless.
*If crypto had anything to offer at all, it would be ways to avoid fees, taxes and the ability to trace transactions.
The thing is, if crypto at any point seriously threatens to replace traditional currencies as stores of value in the US or EU, it will be banned instantly. Simply because it would make it impossible for governments to run budget deficits, prevent tax evasion, and do several other things that governments care about.
Pretty weak take there, bud. If we just look at the Gartner Hype Cycle that marketing and business people love so much, it would seem to me that we are at the peak, just before the downfall.
They are hyping hard to sell more, when they should be prepping for the coming dip, building their tech and research side more to come out the other side.
Regardless, a tech company without the inventors is doomed to fail.
How come the goal posts for AGI are always the best of what people can do?
I can't diagnose anyone, yet I have GI.
Reminds me of:
> Will Smith: Can a robot write a symphony? Can a robot take a blank canvas and turn it into a masterpiece?
> I, Robot: Can you?
But this race to add 'AI' into everything is producing a lot of nonsense. I'd rather go fullsteam ahead on the science and the new models, because that is what will actually get us something decent, rather than milking what we already have.
People want their suburban lifestyle with their red meat and their pick-up truck or SUV. They drive fuel inefficient vehicles long-distances to urban work environments and they seem to have very limited interest in changing that. People who like detached homes aren't suddenly affording the rare instances of that closer to their work. We burn lots of oil because we drive fuel inefficient vehicles long distances. This is a problem of changing human preferences which you just aren't going to solve with an AGI.
I've created a version of one of the resume GPTs that analyses my resume's fit to a position when fed the job description along with a lookup of said company. I then have a streamlined manner in which it points out what needs to be further highlighted or omitted in my resume. It then helps me craft a cover letter based on a template I put together. Should I stop using it just because I can't feed it 50 job roles and have it automatically select which ones to apply to and then create all necessary changes to documents and then apply?
I'm at the European AI Conference for our startup tomorrow, and they use a platform that just booked me 3 meetings automatically with other people there based on our availability... It's not rocket science.
And you don't even need those narrow tools. You could easily ask GPT-4o (or lesser versions) something along the lines of :
> "you're going to interact with another AI assistant to book meetings for me: [here would be the details about the meeting]. Come up with a protocol that you'll send to the other assistant so it can understand what the meetings are about, communicate you their availability, etc. I want you to come up with the entire protocol, send it, and communicate with the other assistant end-to-end. I won't be available to provide any more context ; I just want the meeting to be booked. Go."
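For what it's worth, the mechanics of that are easy to sketch with today's APIs. A rough illustration in Python using the OpenAI client (the availability strings, the AGREED: marker and the prompts are made up for the example; a real version would read each side's calendar):

    from openai import OpenAI

    client = OpenAI()  # requires OPENAI_API_KEY in the environment

    def assistant(name: str, availability: str):
        # Each assistant keeps its own conversation history and negotiates on behalf of its owner.
        history = [{"role": "system", "content":
                    f"You are {name}'s scheduling assistant. Your owner is free: {availability}. "
                    "Negotiate a 1h meeting with the other assistant. When both sides agree, "
                    "reply with a line starting with AGREED: and the chosen slot."}]
        def step(incoming: str) -> str:
            history.append({"role": "user", "content": incoming})
            reply = client.chat.completions.create(model="gpt-4o", messages=history)
            text = reply.choices[0].message.content
            history.append({"role": "assistant", "content": text})
            return text
        return step

    alice = assistant("Alice", "Tue 10-12, Wed 14-17")
    bob = assistant("Bob", "Wed 9-11, Wed 15-18, Thu 10-12")

    message = "Hello, I'd like to book a 1h meeting between our owners this week."
    for _ in range(6):                      # cap the back-and-forth
        message = alice(message)
        print("Alice:", message)
        if "AGREED:" in message:
            break
        message = bob(message)
        print("Bob:", message)
        if "AGREED:" in message:
            break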
With abundant electric cars (at this future point in time) and clean electricity powering heating, transportation, and manufacturing, some AIs could be repurposed for CO2 capture.
It sounds deceptively easy, but from an engineering standpoint, it likely holds up. With free energy and AGI handling labor and thinking, we can achieve what a civilization could do and more (because no individual incentives come into play).
However, human factors could be a problem: protests (luddites), wireheading, misuse of AI, and AI-induced catastrophes (alignment).
Will they though? Last I heard OpenAI isn't profitable, and I don't know if it's safe to assume they ever will be.
People keep saying that LLMs are an existential threat to search, but I'm not so sure. I did a quick search (didn't verify in any way if this is a feasible number) to find that Google on average makes about 30 cents in revenue per query. They make a good profit on that because processing the query costs them almost nothing.
But if processing a query takes multiple seconds on a high-end GPU, is that still a profitable model? How can they increase revenue per query? A subscription model can do that, but I'd argue that a paywalled service immediately means they're not a threat to traditional ad-supported search engines.
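To make the question concrete, here is a back-of-envelope with openly assumed inputs (GPU rental price, seconds of GPU time per answer; the ~30 cents/query figure is the one cited above, unverified there as well):

    revenue_per_query = 0.30        # USD, figure cited above
    gpu_cost_per_hour = 3.00        # USD, assumed rental price for a high-end GPU
    gpu_seconds_per_query = 10      # assumed generation time per answer

    cost_per_query = gpu_cost_per_hour / 3600 * gpu_seconds_per_query
    print(f"compute cost per query: ~${cost_per_query:.4f}")                 # ~$0.0083
    print(f"margin vs. cited revenue: ~${revenue_per_query - cost_per_query:.2f}")
    # Whether that revenue materializes at all is the real question: an LLM answer
    # gives far fewer opportunities to show ads than a page of links does.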
Not the best, I just want it to be able to do what average professionals can do because average humans can become average professionals in most fields.
> I can't diagnose anyone, yet I have GI.
You can learn to, and an AGI system should be able to learn to as well. And since we can copy AGI learning, if it hasn't learned to diagnose people yet then it probably isn't an AGI, because an AGI should be able to learn that without humans changing its code - and once it has learned it once, we copy that forever, and now the entire AGI knows how to do it.
So, the AGI should be able to do all the things you could do if we include all versions of you that learned different fields. If the AGI can't do that then you are more intelligent than it in those areas, even if the singular you isn't better at those things than it is.
For these reasons it makes more sense to compare an AGI to humanity rather than individual humans, because for an AGI there is no such thing as "individuals", at least not the way we make AI today.
Meanwhile, OpenAI (and the rest of the folks riding the hype train) will soon enter the trough. They're not diversified and I'm not sure that they can keep running at a loss in this post-ZIRP world.
I see this coming for sure for OpenAI, and I do my part by just writing this comment on HN.
Do you work in education? Because I don't think many who do would agree with this take.
Where I live, the single biggest problem in education is that we can't scale staffing without increasing property taxes, and people don't want to pay higher property taxes. And no, AGI does not fix this problem, because you need staff to be physically present in schools to deal with children.
Even if we had an AGI that could do actual presentation of coursework and grading, you need a human being in there to make sure they behave and to meet the physical needs of the students. Humans aren't software to program around.
The AI isn’t the product, e.g. the ChatGPT interface is the main product that is layered above the core AI tech.
The issue is trustworthiness isn’t solvable by applying standard product management techniques on a predictable schedule. It requires scientific research.
I don't know about that, it seems to work just fine at creating spam and clone websites.
Learning is a core part to general intelligence, as general intelligence implies you can learn about new problems so you can solve those. Take away that and you are no longer a general problem solver.
Not for long. They have no moat. Folks who did the science are now doing science for some other company, and will blow the pants off OpenAI.
A model that is as good as an average human but costs $10 000 per effective manhour to run is not very useful, but it is still an AGI.
I think academia and startups are currently better suited to optimize tinyml and edge ai hardware/compilers/frameworks etc.
We absolutely do and the answer is such a resounding no it's not even funny.
For some strange reason HTML forms are an incredibly impotent technology. Pretty standard things are missing, like radio buttons with an "other" text input. 5000+ years ago the form labels aligned perfectly with the values.
I can picture it already, ancient Mesopotamia, the clay tablet needs name and address fields for the user to put their name and address behind. They pull out a stamp or a roller.
Of course if you have a computer you can have stamps with localized name and address formatting complete with validation as a basic building block of the form. Then you have a single clay file with all the information neatly wrapped together. You know, a bit like that e-card no one uses, only without half the data mysteriously hidden from the record by some ignorant clerk saboteur.
We've also failed to hook up devices to computers. We went from the beautiful serial port to IoT hell with subscriptions for everything. One could go on all day like that: payments, arithmetic, identification, etc. Much work still remains. I'm unsure what kind of revolution would follow.
Talking thinking machines will no doubt change everything. That people believe it is possible is probably the biggest driver. You get more people involved, more implementations, more experiments, more papers, improved hardware, more investments.
Now, using transformers doesn't mean they have to be assembled like LLMs. There are other ways to stitch them together to solve a lot of other problems.
We may very well have the basic types of lego pieces needed to build AGI. We won't know until we try to build all the brain's capacities into a model on the order of a few hundred trillion parameters.
And if we actually lack some types of pieces, they may even be available by then.
Geohot (https://geohot.github.io/blog/) estimates that a human brain equivalent requires 20 PFLOPS. Current top-of-the-line GPUs are around 2 PFLOPS and consume up to 500W. Scaling that linearly results in 5kW, which translates to approximately 3 EUR per hour if I calculate correctly.
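Spelling out the arithmetic (the electricity price is my assumption; roughly 0.60 EUR/kWh is needed to land on ~3 EUR/hour, and it varies a lot by country):

    brain_pflops = 20          # Geohot's estimate for a human brain
    gpu_pflops = 2             # rough figure for a current top-end GPU
    gpu_watts = 500

    gpus_needed = brain_pflops / gpu_pflops          # 10 GPUs
    power_kw = gpus_needed * gpu_watts / 1000        # 5 kW
    eur_per_kwh = 0.60                               # assumed electricity price
    print(f"{gpus_needed:.0f} GPUs, {power_kw:.1f} kW, {power_kw * eur_per_kwh:.2f} EUR/hour")
    # -> 10 GPUs, 5.0 kW, 3.00 EUR/hour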
Things have been moving fast because we had a bunch of top notch scientists in companies paired with top notch salesmen/hype machines. But you need both in combination.
Hypemen make promises that can't be kept, but get absurd amounts of funding for doing so. Scientists fill in as many of the gaps as possible, but also get crazy resources due to the aforementioned funding. Obviously this train can't go forever, but I think you might understand that one of these groups is a bit more important than the other while one of these groups is more of a catalyst (makes things happen faster) for the other.
For a lot of (very profitable) use cases, hallucinations and 80/20 are actually more than good enough. Especially when they are replacing solutions that are even worse.
From a business point of view, you don't want to be first to market. You want to be the second or third.
Right, because those are two very different things. Science is about figuring out truths of how reality works. Engineering is about taking those truths and using them to make useful things.
People often talk in a way that conflates the two, but they are completely different activities.
Does it? I am quite certain those things are achievable right now without anything like AI in the sense being discussed here.
A rule based expert intelligence system can be highly intelligent, but it is not general, and maybe no arrangement of rules could make one that is general. A general intelligence system must be able to learn and adapt to foreign problems, parameters, and goals dynamically.
The classical example of a general intelligent task is to get the rules for a new game and then play it adequately, there are AI contests for that. That is easy for humans to do, games are enjoyed even by dumb people, but we have yet to make an AI that can play arbitrary games as well as even dumb humans.
Note that LLMs are more general than previous AIs thanks to in-context learning, so we are making progress, but still far from as general as humans are.
And getting it to actually buy stuff like plane tickets on your behalf would be entirely crazy.
Sure, it can be made to do some parts of this for very narrowly defined scenarios, like the specific platform of a single three day conference. But it's nowhere near good enough for dealing with the general case of the messy general world.
Sure, this doesn't mean you could just fire all teachers and dissolve all schools. You still need people to physically be there and interact with the children in various ways. But if you could separate the actual teaching from the child care part, and if you could design individualized courses for each child with something approaching the skill of the best teachers in the whole world, you would get an inconceivably better educational system for the entire population.
And I don't need to work in education for much of this. Like almost everyone, I was intimately acquainted with the educational system (in my country) for 16 years of my life through direct experience, and for much more since then through increasingly less direct experience. I have very, very good and very direct experience of the variance between teachers and the impact that has on how well students understand and interact with the material.
The difference though is the amount of work. Today if you wanted GPT-4 to work as I describe, you would have to write an integration for Gmail, another one for Office365, another one for Proton etc. You would probably have to create a management interface to give access to your auth tokens for each of these to OpenAI so they can activate these interactions. The person you want to sync with would have to do the same.
In contrast, an AGI that only has average human intelligence, or even below, would just need access to, say, Firefox APIs, and should easily be able to achieve all of this. And it would work regardless if the other side is a different AGI using a different provider, or even if they are just a regular human assistant.
Producing spam has some margin on it, but is it really very profitable? And else?
(And, irrelevant, but my parents were in fact both posting to Usenet in 1983.)
> given only my and your email address.
AI or not, such an application would need more than just email addresses. It would need access to our schedules.
If you were in a room with no computer, would you consider yourself to be not intelligent enough to send an email? Does the tooling you have access to change your level of intelligence?
Google or Meta (don't remember which) just put out a report about how many human-hours they saved last year using transformers for coding.
If you're looking for insight into the problems faced in education, speak to educators. I really doubt they would tell you that the quality of individual instructors is their biggest problem.
I had a (human) assistant in my previous business, super-smart MBA type, and by your definition she wasn't a general intelligence on the day of onboarding:
- she didn't have access to my email account or calendar
- she didn't know my usual lunch time hours
- she didn't have a company card yet.
All of those points you're raising are logistics, not intelligence.
Intelligence is "When trying to achieve a goal, can you conceive of a plan to get there despite adverse conditions, by understanding them and charting/reviewing a sequence of actions".
You can definitely be an intelligent entity without hands or tools.
> AI or not, such an application would need more than just email addresses. It would need access to our schedules.
It needs access to my schedule, yes, but it only needs your email address. It can then ask you (or your own AGI assistant) if a particular date and time is convenient. If you then propose another time, it can negotiate appropriately.
Educators are the best people to ask about how to make their jobs easier. They are not necessarily the best people to ask about how to make children's education better.
Edit:
> That's like claiming you know how to run a restaurant because you like to eat out.
No, it's like claiming you know some things about the problems of restaurants, and about the difference between good and bad restaurants, after spending 8+ hours a day almost every day, for 16 years, eating out at restaurants. Which I think would be a decent claim.
But you certainly didn't have to write a special program for your assistant to integrate with your inbox, they just used an existing email/calendar client and looked at their screen.
GPT-4 is nowhere near able to interact with, say, the Gmail web page at this level. And even if you created the proper integrations, it's nowhere near the level that it could read all incoming email and intelligently decide, with high accuracy, which emails necessitate updates to your calendar, which don't, and which necessitate back-and-forth discussions to negotiate a better date for you.
Sure, your assistant didn't know all of this on day one, but they learned how to do it on their own, presumably with a few dozen examples at most. That is the mark of a general intelligence.
I'm pretty sure, from previous interactions with GPT-4o and from their demos, that if you used their desktop app (which enables screensharing) and asked it to tell you where to click, step-by-step, in the Gmail web page, it would be able to do a pretty good job of navigating through it.
Let's remember that the Gmail UI is one of the most heavily documented (in blogs, FAQs, support pages, etc) in the world. I can't see GPT-4o having any difficulty locating elements in there.