zlacker

[return to "Ilya Sutskever to leave OpenAI"]
1. zoogen+Ix[view] [source] 2024-05-15 04:50:43
>>wavela+(OP)
Interesting, both Karpathy and Sutskever are gone from OpenAI now. Looks like it is now the Sam Altman and Greg Brockman show.

I have to admit, of the four, Karpathy and Sutskever were the two I was most impressed with. I hope Sutskever goes on to do something great.

◧◩
2. nabla9+pH[view] [source] 2024-05-15 06:45:38
>>zoogen+Ix
The top 6 science guys are long gone. OpenAI is run by marketing, business, software, and productization people.

When the next wave of new deep learning innovations sweeps the world, Microsoft eats what's left of them. They make lots of money, but they don't have a future unless they replace what they lost.

◧◩◪
3. fsloth+O21[view] [source] 2024-05-15 10:40:27
>>nabla9+pH
If we look at the history of innovation and invention, it's very typical that the original discovery and the final productization are done by different people. For many reasons, but a lot of them are universal, I would say.

E.g. Oppenheimer’s team created the bomb, and later experts fine-tuned the subsequent weapon systems and payload designs. Etc.

◧◩◪◨
4. fprog+I51[view] [source] 2024-05-15 11:12:12
>>fsloth+O21
Except OpenAI hasn’t yet finished discovery on its true goal: AGI. I wonder if they risk plateauing at a local maximum.
◧◩◪◨⬒
5. Zambyt+hc1[view] [source] 2024-05-15 11:58:23
>>fprog+I51
I'm genuinely curious: what do you expect an "AGI" system to be able to do that we can't do with today's technology?
◧◩◪◨⬒⬓
6. tsimio+Th1[view] [source] 2024-05-15 12:34:44
>>Zambyt+hc1
The simplest answer, without adding any extraordinary capabilities to the AGI that veer into magical intelligence, is to have AI assistants that can seamlessly interact with technology the way a human assistant would.

So, if you want to meet with someone, instead of opening your calendar app and looking for an opening, you'd ask your AGI assistant to talk to their AGI assistant and set up a 1h meeting soon. Or, instead of going on Google to find plane tickets, you'd ask your AGI assistant to find the most reasonable tickets for a certain date range.

This would not require any special intelligence more advanced than a human's, but it does require a very general understanding of the human world that is miles beyond what LLMs can achieve today.

Going only slightly further with assumptions about how smart an AGI would be, it could revolutionize education, at any level, by acting as a true personalized tutor for a single student, or even for a small group of students. The single biggest problem in education is that it's impossible to scale the highest-quality education - and an AGI with capabilities similar to a college professor's would entirely solve that.

◧◩◪◨⬒⬓⬔
7. Zambyt+Xk1[view] [source] 2024-05-15 12:53:47
>>tsimio+Th1
This is definitely an interesting way to look at it. My initial reaction is to consider that I can enhance the capabilities of a system without increasing its intelligence. For example, if I give a monkey a hammer, it can do more than it could when it didn't have the hammer, but it is not more intelligent (though it could probably learn things by interacting with the world with the hammer). That leads me to think: can we enhance the capabilities of what we call "AI systems" to do these things, without increasing their intelligence? It seems like you can glue GPT-4o to some calendar APIs to do exactly this. This seems more like an issue of tooling rather than an issue of intelligence to me.
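To make the "glue GPT-4o to some calendar APIs" idea concrete, here is a minimal sketch of a function-calling loop, with stubbed calendar data in place of a real provider. Everything here is hypothetical for illustration: the busy/free data, the tool name, and the dispatcher; the schema just follows common function-calling conventions.

```python
import json

# Toy "calendar API": busy hours per person on one day (all data hypothetical).
BUSY = {
    "alice": {9, 10, 13},
    "bob": {9, 14, 15},
}

def find_free_hour(people, hours=range(9, 17)):
    """Return the first hour at which none of the given people are busy."""
    for h in hours:
        if all(h not in BUSY.get(p, set()) for p in people):
            return h
    return None

# A tool schema like the one you'd hand to a function-calling LLM.
TOOL_SCHEMA = {
    "name": "find_free_hour",
    "description": "Find the first shared free hour for a list of people.",
    "parameters": {
        "type": "object",
        "properties": {"people": {"type": "array", "items": {"type": "string"}}},
        "required": ["people"],
    },
}

def dispatch(tool_call):
    """Run a tool call in the shape an LLM emits it: a name plus JSON args."""
    if tool_call["name"] == "find_free_hour":
        args = json.loads(tool_call["arguments"])
        return find_free_hour(args["people"])
    raise ValueError(f"unknown tool: {tool_call['name']}")

# Simulate the model asking to schedule alice and bob:
slot = dispatch({"name": "find_free_hour",
                 "arguments": json.dumps({"people": ["alice", "bob"]})})
print(slot)  # 11 (alice is busy at 9 and 10; 11 is free for both)
```

The tooling-vs-intelligence point shows up clearly here: all the "intelligence" in scheduling lives in `find_free_hour`, not in the model - the model only has to decide to call the tool.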

I guess the issue here is: can a system be "generally intelligent" if it doesn't have access to general tools to act on that intelligence? I think so, but I also can see how the line is very fuzzy between an AI system and the tools it can leverage, as really they both do information processing of some sort.

Thanks for the insight.

◧◩◪◨⬒⬓⬔⧯
8. tsimio+HD3[view] [source] 2024-05-16 04:11:46
>>Zambyt+Xk1
I'm sure some aspects of this can be achieved by manually programming GPT-4 links to other specific services. And obviously, some interaction tools would have to be written manually even for an AGI.

The difference though is the amount of work. Today if you wanted GPT-4 to work as I describe, you would have to write an integration for Gmail, another one for Office365, another one for Proton, etc. You would probably have to create a management interface to give OpenAI access to your auth tokens for each of these services so they can activate these interactions. The person you want to sync with would have to do the same.
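For a sense of what that per-provider work looks like, here's a toy sketch: each backend needs its own hand-written translation of the same event, and none of it transfers to the next provider. The payload shapes below are invented for illustration, not the real Gmail or Office365 APIs.

```python
from dataclasses import dataclass

@dataclass
class Event:
    title: str
    start_hour: int

# Adapter #1: invented stand-in for a Gmail-style event payload.
def gmail_payload(ev: Event) -> dict:
    return {"summary": ev.title, "start": {"hour": ev.start_hour}}

# Adapter #2: invented stand-in for an Office365-style payload.
# Same information, different field names -- and a third provider
# (Proton, say) means writing and maintaining yet another adapter.
def office365_payload(ev: Event) -> dict:
    return {"subject": ev.title, "startHour": ev.start_hour}

ev = Event("Sync with Bob", 11)
print(gmail_payload(ev))      # {'summary': 'Sync with Bob', 'start': {'hour': 11}}
print(office365_payload(ev))  # {'subject': 'Sync with Bob', 'startHour': 11}
```

An agent that can just drive a browser sidesteps this whole adapter matrix, which is the contrast drawn below.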

In contrast, an AGI that only has average human intelligence, or even below, would just need access to, say, Firefox APIs, and should easily be able to achieve all of this. And it would work regardless of whether the other side is a different AGI using a different provider, or even just a regular human assistant.

◧◩◪◨⬒⬓⬔⧯▣
9. Zambyt+EG4[view] [source] 2024-05-16 14:51:37
>>tsimio+HD3
What if you ask GPT-4 to write the integration between its API and an email provider? You're not really "manually" creating the integration then.
◧◩◪◨⬒⬓⬔⧯▣▦
10. tsimio+qS5[view] [source] 2024-05-16 21:53:07
>>Zambyt+EG4
You can try that. I don't think it will be as reliable as you'd want for something like this.
[go to top]