zlacker

[return to "Ilya Sutskever to leave OpenAI"]
1. zoogen+Ix[view] [source] 2024-05-15 04:50:43
>>wavela+(OP)
Interesting, both Karpathy and Sutskever are gone from OpenAI now. Looks like it is now the Sam Altman and Greg Brockman show.

I have to admit, of the four, Karpathy and Sutskever were the two I was most impressed with. I hope he goes on to do something great.

2. nabla9+pH[view] [source] 2024-05-15 06:45:38
>>zoogen+Ix
The top six science people are long gone. OpenAI is now run by marketing, business, software, and productization people.

When the next wave of deep learning innovations sweeps the world, Microsoft will eat what's left of them. They make lots of money, but they don't have a future unless they replace what they lost.

3. fsloth+O21[view] [source] 2024-05-15 10:40:27
>>nabla9+pH
If we look at the history of innovation and invention, it's very typical that the original discovery and the final productization are done by different people. There are many reasons for this, and I would say a lot of them are universal.

E.g. Oppenheimer’s team created the bomb; the experts who followed fine-tuned the subsequent weapon systems and payload designs. Etc.

4. fprog+I51[view] [source] 2024-05-15 11:12:12
>>fsloth+O21
Except OpenAI hasn’t yet finished discovery on its true goal: AGI. I wonder if they risk plateauing at a local maximum.
5. Zambyt+hc1[view] [source] 2024-05-15 11:58:23
>>fprog+I51
I'm genuinely curious: what do you expect an "AGI" system to be able to do that we can't do with today's technology?
6. jagrsw+ae1[view] [source] 2024-05-15 12:10:55
>>Zambyt+hc1
Some first ideas coming to mind:

Engineering Level:

  Solve CO2 levels.
  End sickness and death.
  Enhance cognition by integrating with willing minds.
  Safe and efficient interplanetary travel.
  Harness vastly higher levels of energy (solar, nuclear) for global benefit.

Science:

  Uncover deeper insights into the laws of nature.
  Explore fundamental mysteries like the simulation hypothesis, the Riemann hypothesis, multiverse theory, and the existence of white holes.
  Effective SETI.

Misc:

  End violent conflicts.
  Fair yet liberal resource allocation (if still needed): "from scarcity to abundance".
7. Zambyt+3g1[view] [source] 2024-05-15 12:22:09
>>jagrsw+ae1
Do you believe the average human has general intelligence, and do you believe the average human can intellectually achieve these things in ways existing technology cannot?
8. jagrsw+Ih1[view] [source] 2024-05-15 12:33:46
>>Zambyt+3g1
Yes. And considering that AI operates differently from human minds, there are several advantages:

  AI does not experience fatigue or distractions => consistent performance.
  AI can scale its processing power significantly, despite the challenges associated with doing so (I understand the challenges).
  AI can ingest and process new information at extraordinary speed.
  AIs can rewrite themselves.
  AIs can be replicated (solving the scarcity of intelligence in manufacturing).
  Once AGI is achieved, progress could compound rapidly, for better or worse, due to the above points.
9. Jensso+of2[view] [source] 2024-05-15 17:19:24
>>jagrsw+Ih1
The first AGI will probably take way too much compute to have a significant effect. Unless there is a revolution in architecture that gets us fast and cheap AGI all at once, the AGI revolution will be very slow and gradual.

A model that is as good as an average human but costs $10,000 per effective man-hour to run is not very useful, but it is still an AGI.
