> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration
Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. They never had a chance. I think only a minority of the general public truly cares about AI Safety; the rest are happy seeing ChatGPT help with their homework. I know it's easy to ridicule the sheer stupidity with which the board acted (and justifiably so), but take a moment to consider the other side. If you truly believed that superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?
Honestly, I can't take the threat seriously myself. But I do want to understand it more deeply than before; maybe it isn't as insubstantial as I thought. Hopefully there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."
[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...
Above that in the charter is "Broadly distributed benefits", with details like:
"""
Broadly distributed benefits
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
"""
In that sense, I definitely hate to see the rapid commercialization and Microsoft's hands in it. I feel like the only person on HN who actually wanted to see Team Sam lose, although it's pretty clear Team Helen/Ilya never had a chance. To me the org looks hijacked by SV tech bros, but HN seems to have a blind spot here: it either doesn't see that at all, or sees it and considers it nothing but a good thing.
That said, GPT barely looks like the language module of an AGI to me, and I don't see any path from here to there (part of the reason I don't see a safety concern). The big breakthrough relative to earlier AI research is massively more compute and a giant pile of data, but it isn't doing any truly novel information synthesis. It can describe quantum mechanics from a giant pile of data, but I don't think it has a chance of discovering quantum mechanics, and I don't think that's just because it can't see, hear, etc.; it's a limitation of the kind of information manipulation it's doing. It looks impressive because it's reflecting our own intelligence back at us.