1. trasht+(OP)[view] [source] 2024-05-15 10:36:45
Unlike AI in the past, there are now massive amounts of money going into AI. And the number of things humans still do significantly better than AI is continuously shrinking.

If something like Q* ships natively with GPT-5 (which may have a different name) and enables proper planning, error correction, and direct interaction with tools, that gap gets really close to zero.

replies(1): >>varjag+z1
2. varjag+z1[view] [source] 2024-05-15 10:56:50
>>trasht+(OP)
AI in the past (adjusted for the 1980s) was pretty well funded. It's just that fundamental scientific discovery bears little relationship to pallets of cash.
replies(2): >>mark_l+Re >>trasht+mf
3. mark_l+Re[view] [source] [discussion] 2024-05-15 12:31:46
>>varjag+z1
Funding in the 1980s was sometimes very good. My company bought me an expensive Lisp Machine in 1982, and after that, even in “AI winters”, it mostly seemed that money was available.

AI has a certain mystique that helps get money. In the 1980s I was on a DARPA neural network tools advisory panel, and I concurrently wrote a commercial product that included the 12 most common network architectures. That allowed me to step in when a project was failing: a bomb detector we developed for the FAA that used a linear model, with mediocre results. It was a one-day internal consult to provide software for a simple one hidden layer backprop model. During that time I was getting mediocre results using symbolic AI for NLP, but that one success provided internal runway in my company to keep going.
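
For anyone who hasn't seen one, here is roughly what a “simple one hidden layer backprop model” looks like (a minimal numpy sketch on toy XOR data; everything here is illustrative, nothing like the actual FAA detector code). The point is that one hidden layer fits nonlinear patterns a linear model cannot:

    import numpy as np

    # One-hidden-layer network trained with plain backprop.
    # XOR is the classic pattern a linear model cannot fit.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0, 1, (2, 4))  # input -> hidden weights
    b1 = np.zeros(4)
    W2 = rng.normal(0, 1, (4, 1))  # hidden -> output weights
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(10000):
        h = sigmoid(X @ W1 + b1)    # forward: hidden activations
        out = sigmoid(h @ W2 + b2)  # forward: predictions

        # backward: gradients of mean squared error
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # gradient-descent updates
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0)

    print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]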

replies(1): >>trasht+eg
4. trasht+mf[view] [source] [discussion] 2024-05-15 12:35:44
>>varjag+z1
> AI in the past (adjusted for the 1980s) was pretty well funded.

A tiny fraction of the current funding. 2-4 orders of magnitude less.

> It's just that fundamental scientific discovery bears little relationship to the pallets of cash

Heavy funding may not automatically lead to breakthroughs such as Special Relativity or Quantum Mechanics (though it helps there too). But once the most basic ideas are in place, massive funding is what drives breakthroughs like the Manhattan Project and the Apollo Program.

And it's not only the money itself; it's also the attention and all the talent that the money pulls in.

And in this case, there is also the fear that the competition will reach AGI first, whether the competition is a company or a foreign government.

It's certainly possible that difficulty monetizing the investments may lead to some kind of slowdown at some point (like if there is a recession).

But it seems to me that such a recession would have no more impact on the development of AGI than the dotcom bust had on the importance of the internet.

replies(1): >>varjag+4j
5. trasht+eg[view] [source] [discussion] 2024-05-15 12:41:58
>>mark_l+Re
That funding may have felt good at the time compared to some other academic fields.

But compared to the hundreds of billions (possibly trillions, globally) that are currently being plowed into AI, that's peanuts.

I think the closest recent analogy to the current spending on AI is the nuclear arms race during the Cold War.

If China is able to field ASI before the US even has full AGI, nukes may not matter much.

replies(1): >>mark_l+Ho
6. varjag+4j[view] [source] [discussion] 2024-05-15 12:58:19
>>trasht+mf
> A tiny fraction of the current funding. 2-4 orders of magnitude less.

Operational costs were correspondingly lower, as they didn't need to pay electricity and compute bills for tens of millions of concurrent users.

> But once the most basic ideas are in place, massive funding is what drives breakthroughs like the Manhattan Project and the Apollo Program.

There is no reason to think that the ideas are in place. It could be that a local optimum has been reached, as has happened in many other technology advances before. The current model is mass-scale data driven, the Internet has been sucked dry for data, and there's not much more coming. This may well require a substantial change in approach, and so far there are no indications of that.

From this pov, monetization is irrelevant: except for a few dozen researchers, the rest of the crowd are expensive career tech grunts.

replies(1): >>trasht+as
7. mark_l+Ho[view] [source] [discussion] 2024-05-15 13:29:07
>>trasht+eg
You are right about funding levels, even taking inflation into account. Some of the infrastructure, like Connection Machines and Butterfly Machines, seemed really expensive at the time, though.
replies(1): >>trasht+Vt
8. trasht+as[view] [source] [discussion] 2024-05-15 13:44:36
>>varjag+4j
> There is no reason to think that the ideas are in place.

That depends on what you mean by "ideas". If you mean ideas at the level of transformers, then I would consider those of the same magnitude as many of the ideas the Manhattan Project or Apollo Program had to figure out along the way.

If you mean ideas like going from expert systems to neural networks with backprop, then that's more fundamental, and I would agree.

It's certainly still conceivable that Penrose is right and "true" AGI requires building something like microtubules. If so, that would be on the level of going from expert systems to NNs. I believe this is considered extremely exotic in the field, though; even LeCun probably doesn't believe it. Btw, this is the only case where I would agree that funding is more or less irrelevant.

If we require 1-2 more breakthroughs on par with transformers, then those could take anywhere from 2 to 15 years to be discovered.

For now, though, those who have predicted that AI development will mostly be limited by network size and the compute to train it (like Sutskever, or implicitly Kurzweil) have been the most accurate about the rate of progress. If they're right, then AGI some time between 2025 and 2030 seems most likely.

Those AGIs may be very large, though, and not economical to run for a wider audience until some time in the 2030s.

So, to summarize: unless something completely fundamental is needed (like microtubules), which happens to be a fringe position, AGI some time between 2025 and 2040 seems likely. The "pessimists" (or optimists, in terms of extinction risk) may think it's closer to 2040, while the optimists seem to think it's arriving very soon.

9. trasht+Vt[view] [source] [discussion] 2024-05-15 13:54:47
>>mark_l+Ho
They only seemed expensive because they weren't expected to generate a lot of value (or military/strategic benefit).

Compare that to the 6+ trillion dollars spent in the US alone on nuclear weapons, and then consider: what is of greater strategic importance, ASI or nukes?
