I have to admit, of the four, Karpathy and Sutskever were the two I was most impressed with. I hope he goes on to do something great.
When the next wave of deep learning innovations sweeps the world, Microsoft eats what's left of them. They make lots of money, but they don't have a future unless they replace what they lost.
E.g. Oppenheimer’s team created the bomb, then the experts who followed fine-tuned the subsequent weapon systems and payload designs. Etc.
And no, using ChatGPT the way you use a search engine isn't ChatGPT solving your problem; that's you solving your problem. ChatGPT solving your problem would mean it drives you, not you driving it the way it works today. When I hired people to help me do my taxes, they told me what papers they needed and then did my taxes correctly, without me having to look through and correct their work. An AGI would work like that for most tasks: you would no longer need to think or learn to solve problems, because the AGI would solve them for you.
How come the goal posts for AGI are always the best of what people can do?
I can't diagnose anyone, yet I have GI.
Reminds me of:
> Will Smith: Can a robot write a symphony? Can a robot take a blank canvas and turn it into a masterpiece?
> I, Robot: Can you?
Not the best; I just want it to be able to do what average professionals can do, because average humans can become average professionals in most fields.
> I can't diagnose anyone, yet I have GI.
You can learn to, and an AGI system should be able to learn to as well. And since we can copy an AGI's learning, if it hasn't learned to diagnose people yet, it probably isn't an AGI. An AGI should be able to learn that skill without humans changing its code, and once one instance has learned it, we can copy that knowledge forever, so every instance of the AGI now knows how to do it.
So the AGI should be able to do all the things you could do if we include every version of you that learned a different field. If the AGI can't do that, then you are more intelligent than it in those areas, even if the singular you isn't better at those things than it is.
For these reasons it makes more sense to compare an AGI to humanity rather than to individual humans, because for an AGI there is no such thing as an "individual", at least not the way we build AI today.