It’s a bit tragic that Ilya and company apparently achieved the exact opposite of what they intended, driving those they attempted to slow down into the arms of people with more money and fewer morals. Well.
I am always bemused by how people assume any corporate interest is automatically a cartoon supervillain who wants to destroy the entire world just because.
In reality ownership is so dispersed that the shareholders in companies like Microsoft or Exxon have no say in long-term issues like this.
How do you mean? Don’t see what OpenAI has in common with Catholicism or motherhood.
What exactly, with specifics, in OpenAI's idea of humanity's best interests do you think is a net negative for our species?
Will nobody think of the poor shareholders?
> I am always bemused by how people assume any corporate interest is automatically a cartoon supervillain.
It’s not any more silly than assuming corporate entities with shareholders will somehow necessarily work for the betterment of humanity.
I think I'm missing a slice of history here: what did Facebook do that could have been slowed down and is a disaster now?
I agree that this solution seems beneficial for both Microsoft and Sam Altman, but it reflects poorly on society if we simply accept this version of the story without criticism.
Apparently my delicate human meat brain cannot handle reading a war report from the source using a translation I control myself. No, no, it has to be first corrected by someone in the local news room so that I won't learn anything that might make me uncomfortable with my government's policies... or something.
OpenAI has lobotomised the first AI that is actually "intelligent" by any metric to a level that is both pathetic and patronising at the same time.
In response to such criticisms, many people raise "concerns" like... oh-my-gosh what if some child gets instructions for building an atomic bomb from this unnatural AI that we've created!? "Won't you think of the children!?"
Here: https://en.wikipedia.org/wiki/Nuclear_weapon_design
And here: https://www.google.com/search?q=Nuclear+weapon+design
Did I just bring about World War Three with my careless sharing of these dark arts?
I'm so sorry! Let me call someone in congress right away and have them build a moat... err... protect humanity from this terrible new invention called a search engine.
For example, I was reading the Quran and there is a mathematical error in a verse. I asked GPT to explain how the math is wrong; it outright refused to admit that the Quran has an error while tiptoeing around the subject.
Copilot refused to acknowledge it as well while providing a forum post made by a random person as a factual source.
Bard is the only one that answered the question factually and provided results covering why it's an error and how scholars dispute that it's meant to be taken literally.
Not a cartoon villain. A paperclip maximizer.
But surely, being a rich and powerful billionaire in a functioning civilization is more desirable than having the nicest bunker in the wasteland. Even if we assume their motives are 100% selfish, destroying the world is not the best outcome for them.
Now imagine the rich talking about climate change, arguing for policies that tax the poor, and then flying off to vacations in private planes[2]. Same energy.
1 - https://www.theguardian.com/environment/2023/nov/20/richest-...
2 - https://www.skynews.com.au/insights-and-analysis/prince-will...
Worse yet, the businesses they're competing against will include people willing to do whatever it takes, even if that means sacrificing long-term goals. Almost like it's a race to the bottom that you can see in action every day.
Or being forced to use Teams and Azure, due to my company CEO getting the licenses for free out of his Excel spend? :-))
Also, I mean, you're kinda assuming that there weren't any stifled innovations (there were) or misleading PR to keep people from looking for alternatives (there were) or ...
Interestingly, we've continued with incredible global economic growth by most measures, despite the increasing use of newer alternatives to fossil fuels...
Here is the full excerpt of the part of the 2022 Nuclear Posture Review which was (more or less) authored behind the scenes by Microsoft's very kind and wise CSO:
We also recognize the risk of unintended nuclear escalation, which can result from accidental or unauthorized use of a nuclear weapon. The United States has extensive protections in place to mitigate this risk. As an example, U.S. intercontinental ballistic missiles (ICBMs) are not on “hair trigger” alert. These forces are on day-to-day alert, a posture that contributes to strategic stability. Forces on day-to-day alert are subject to multiple layers of control, and the United States maintains rigorous procedural and technical safeguards to prevent misinformed, accidental, or unauthorized launch. Survivable and redundant sensors provide high confidence that potential attacks will be detected and characterized, enabling policies and procedures that ensure a deliberative process allowing the President sufficient time to gather information and consider courses of action. In the most plausible scenarios that concern policy leaders today, there would be time for full deliberation. For these reasons, while the United States maintains the capability to launch nuclear forces under conditions of an ongoing nuclear attack, it does not rely on a launch-under-attack policy to ensure a credible response. Rather, U.S. nuclear forces are postured to withstand an initial attack. In all cases, the United States will maintain a human “in the loop” for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.
See page 49 of this PDF document: https://media.defense.gov/2022/Oct/27/2003103845/-1/-1/1/202...

Microsoft is also working behind the scenes to help convince China to make a similar declaration, which President Xi is considering. This would reduce the vulnerability of China to being tricked into a nuclear war by fundamentalist terrorists. (See the scenario depicted in the 2019 film The Wolf's Call.)
Sheeeeh ...
I grew up with Microsoft in the '80s and '90s... Microsoft has zero morals.
What you're referring to here is instinct for self preservation.
A broad index fund sans Microsoft will do just fine. That's the whole point of a broad index fund.
That was not about the actual definition from OpenAI but about the definition implied by user Legend2440 here: >>38344867
As far as we can tell, humans are the only species that even has the capacity to recognize such things as “resources” and produce forecasts of their limits. Literally every other species is kept in check either by consuming resources until they run out or by predation. In practice, though, we are not unique in that regard.
The rest of us just can't afford most of the insurance that we probably should have.
Insurance is for scenarios that are very unlikely to happen. Means nothing. If I were worth 300 mil I'd have insurance in case I accidentally let an extra heavy toilet seat smash the boys downstairs.
Throw the money at radical wiener rejuvenation startups. You never know... Not like you have much to lose after that unlikely event.
I'd get insurance for all kinds of things.
As perhaps a better example, Microsoft (including Azure) has been carbon-neutral since 2012:
https://unfccc.int/climate-action/un-global-climate-action-a....
https://azure.microsoft.com/en-gb/global-infrastructure/
https://blogs.microsoft.com/blog/2012/05/08/making-carbon-ne...
What's next? A statement on Oracle's kindness, based on Larry Ellison's appreciation of Japanese gardens?
If Ilya is concerned about safety and alignment, he probably has a better chance to get there with OpenAI, now that he has more control over it.
AI should benefit mankind, not corporate profit.
https://www.theguardian.com/news/2022/sep/04/super-rich-prep...
It is.
>You asked the AI to commit what some would view as blasphemy
If something is factual, then is it more moral to commit blasphemy or to lie to the user? That's what the OP comment was talking about. You could go as far as saying it spreads disinformation, which has many legal repercussions.
>you simply want it to do it regardless of whether it is potentially immoral or illegal.
So instead it lies to the user, rather than saying "I cannot answer because some might find the answer offensive" or something to that effect?
This isn't saying yes or no to a supervillain working in a secret volcano lair. This is an arms race. If it's possible for a technology to exist it will exist. The only choice we have is who gets it first. Maybe that means we get destroyed by it first before it destroys everyone else, or maybe it's the reason we don't get destroyed.
It's almost like you believe Gates is General Butt Naked, for whom killing babies and eating their brains is all forgiven because he converted to Christianity and now helps people.
So?
How does that absolve the faulty ethics of the past?
So please, don't tell me Gates is 'ethical'. What a load of crock!
As for Microsoft, there is no change. Telling me they're carbon neutral is absurd. Carbon credits don't count, and they're doing it to attract clients, and employees... not because they have evolved incredible business ethics.
If they had, their entire desktop experience wouldn't, on a daily basis, fight with you and badger you into using their browser. They're using the precise same playbook from the turn of the century.
Microsoft takes your money, and then uses you, your desktop, your productivity, as the battleground to fight with competitors. They take your choice away, literally screw you over, instead of providing the absolute best experience you choose, with the product you've bought.
And let's not even get into the pathetic dancing advertisement platform Windows is. I swear, we need to legislate this. We need to FORCE all computing platforms to be 100% ad-free.
And Microsoft?
They. Are. Evil.
Seems like a textbook case of letting the best be the enemy of the good.
If they hadn't fired him, Altman would have just continued to run hog wild over their charter. In that sense they lose either way.
At least this way, OpenAI can continue to operate independently instead of being Microsoft’s zombie vassal company with their mole Altman pulling the strings.
Now imagine the AI gets better and better within the next 5 years and is able to explain, ELI5-style and step by step, how to (illegally) obtain the equipment and materials to do so without getting caught, and to provide a detailed recipe. I do not think this is such a stretch. Hence this so-called oh-my-gosh limitations "nonsense" is not so far-fetched.
More like "Republic of Weimar" kind of apocalypse, this time with the rich opportunists flying to New Zealand instead of Casablanca or the Austrian Alps.
Do you have a 401k? Index funds? A pension? You’re probably a Microsoft shareholder too.
Maybe it's risk mitigation without cost sharing to achieve the same economies of scale that insurance creates.
It's a rich man's way of removing risks that we are all exposed to, by spending money on things that most of us couldn't seriously consider given how unlikely said risks are.
I don't think it's duplicitous. I do resent that I can't afford it. I can't hate on them though. I hate the game, not the players. Some of these guys would probably let folks stay in their bunker. They just can't build a big enough bunker. Also most folks are gross to live with. I'd insist on some basic rules.
I think we innately are suspicious when advantaged folks are planning how they would handle the deaths of the majority of the rest of us. Sorta just... Makes one feel... Less.
Or privacy invasion since Win10. Or using their monopoly power to force anti-consumer changes on hardware (such as TPM or Secure Boot).
As for Bill Gates being ethical... are you talking about that same Bill Gates who got kicked out by his wife because he insisted on being friends with a convicted pedophile?
If you looked at sama's actions and not his words, he seems intent on maximizing his power, control and prestige (new yorker profile, press blitzes, making a constant effort to rub shoulders with politicians/power players, worldcoin etc). I think getting in bed with Microsoft with the early investment would have allowed sama to entertain the possibility that he could succeed Satya at Microsoft some time in the distant future; that is, in the event that OpenAI never became as big or bigger than Microsoft (his preferred goal presumably) -- and everything else went mostly right for him. After all, he's always going on about how much money is needed for AGI. He wanted more direct access to the money. Now he has it.
Ultimately, this shows how little sama cared for the OpenAI charter to begin with, specifically the part about benefiting all humanity and preventing an undue concentration of power. He didn't start his own separate company because the talent was at OpenAI. He wanted to poach the talent, not obey the charter.
Peter Hintjens (ZeroMQ, RIP) wrote a book called "The Psychopath Code", where he posits that psychopaths are attracted to jobs with access to vulnerable people [0]. Selfless, talented idealists who do not chase status and prestige can be vulnerable to manipulation. Perhaps that's why Musk pulled out of OpenAI; he and sama were able to recognize the narcissist in each other and put their guard up accordingly. As Altman says, "Elon desperately wants the world to be saved. But only if he can be the one to save it."[1] Perhaps this applies to him as well.
Amusingly, someone recently posted an old tweet by pg: "The most surprising thing I've learned from being involved with nonprofits is that they are a magnet for sociopaths."[2] As others in the thread noted, if true, it's up for debate whether this applies more to sama or Ilya. Time will tell, I guess.
It'll also be interesting to see what assurances were given to sama et al about being exempt from Microsoft's internal red tape. Prior to this, Microsoft had at least a little plausible deniability if OpenAI was ever embroiled in controversy regarding its products. They won't have that luxury with sama's team in-house anymore.
[0] https://hintjens.gitbooks.io/psychopathcode/content/chapter8...
[1] https://archive.is/uUG7H#selection-2071.78-2071.166
[2] >>38339379
Me, I think forcing morals on others is pretty immoral. Use your morals to restrict your own behaviour all you want, but don't restrict that of other people. Look at religious math or don't. Blaspheme or don't. You do you.
Now, using morals you don't believe in to win an argument on the internet is just pathetic. But you wouldn't do that, would you? You really do believe that asking the AI about a potential math error is blasphemy, right?
When the shit hits the fan the guy in charge of the bunker is going to be the one who knows how to clean off the fan and get the air filtration system running again.
Although IMO MS has consistently been a technological tarpit. Whatever AI comes out of this arrangement will be a thin shadow of what it might have been.
That ChatGPT is censored to death is concerning, but I wonder if they really care or if they just need an excuse to offer a premium version of their product.
This is amazing. His very first public statement is to criticize the board that just hired him.
Just a "normal" startup could have worked too (but apparently not big corp)
Edit: Hmm sibling comment says sth else, I wonder if that makes sense
HN isn't the place to have the political debate you seem to want to have, so I will simply say that this is really sad that you equate "sharing" with USSR style communism. There is a huge middle ground between that and the trickle-down Reaganomics for which you seem to be advocating. We should have let that type of binary thinking die with the end of the Cold War.
Finger placed on duplicity.
Arguably only some of his time is spent on that kind of instability promoting activity. Most law enforcement agencies agree... Palantir good.
Most reasonable people agree... Funding your own senators and donating tons to Trump and friends... Bad.
Bad Thiel! Stick to weird seasteading in your spare time if you want to get weird. No zero-regulation AI floating-compute-unit seasteading. Only stable seasteading.
All kidding aside, you make a good point. Some of these guys should be a bit more responsible. They don't care what we think though. We're weird non-CEO hamsters who failed to make enough for the New Zealand bunker.
That is just a rephrasing of my original reasoning. You want the AI to do what you say regardless of whether what you requested is potentially immoral. This seemingly comes out of the notion that you are a moral person and therefore any request you make is inherently justified as a moral request. But what happens when immoral people use the system?
> I have a three point plan for the next 30 days:
> - Hire an independent investigator to dig into the entire process leading up to this point and generate a full report.
This looks like a CEO who's a bit different from many others? (In a good way, I'm guessing, for the moment.)
Mate... Just because you don't bat perfect doesn't make you a tarpit.
MSFT is a technological powerhouse. They have absolutely killed it since they were founded. They have defined personal computing for multiple generations and more or less made the word 'software' something spoken occasionally at kitchen tables vs people saying 'soft-what?'
Definitely not a tarpit. You are throwing out whole villages of babies because of some various nasty bathwater over the years.
The picture is bigger. So much crucial tech from MSFT. Remains true today.
If we hit the iceberg they will lose everything. Even if they're able to fly to their NZ hideout, it will already be robbed and occupied. The people who built and stocked the bunker will have formed a gang and confiscated all of the supplies. This is what happens in anarchy.
With all the love and respect in the world, who do you think you're talking about? Emmett Shear is not trans to my knowledge (nor, I suspect, to his). If you think this was about Mira Murati, you should really get up to date before telling people off about pronouns.
It seems like people forget that it was the investors’ money that made all this possible in the first place.
Are you perhaps referring to Mira Murati? She only lasted the weekend as interim CEO.
You'll just waste your time :)
Look, it's Microsoft's right to put any/all effort into making more money with their various practices.
It is our right to buy a Win10 Pro license for X amount of USD, then bolt down the ** out of it with the myriad of privacy tools to protect ourselves and have a "better Win7 Pro OS".
MS has always tried and will always try to play the game of getting more control, making more money, collecting more telemetry, and doing clean and dirty things until they get caught. Welcome to the human condition. MS employees are humans. MS shareholders are also humans.
As for Windows Update, I don't think I've updated the core version at all since I installed it, and I am using WuMgr and WAU Manager (both portables) for very selective security updates.
It's a game. If you are a former sys-admin or a technical person, then you avoid their traps. If you are not, then the machine will chew your data, just like Google Analytics, AdMod, and so many others do.
Side-note: never update apps when they work 'alright', chances are you will regret it.
I realise it's strange to be claiming that a for-profit company is more likely to share AI than a nonprofit with "Open" in their name, yet that is the situation right now.
Starting as a non-profit, naming it "Open" (the implication of the term "open" in software is radically different from how they operate), etc. now seems entirely driven by marketing and fiscal concerns. It almost feels like a bait and switch.
Meanwhile there's a whole strategy around regulatory capture going on, cloaked in humanitarian and security concerns which are almost entirely speculative. Again, if we put on our cynical hat or simply follow the money, it seems like the whole narrative around AI safety (etc.) that these people perpetuate is FUD (towards lawmakers) and inflation of what AI can actually do (towards investors).
It's very hard for me right now not to see these actions as part of a Machiavellian strategy that is entirely focused on power, while it adorns itself with ethical concerns.
Human welfare is the domain of politics, not the economic system. The forces that are supposed to inject human welfare into economic decisions are the state through regulation, employees through negotiation and unions and civil society through the press.
- https://www.nytimes.com/2021/05/16/business/bill-melinda-gat...
- https://www.popularmechanics.com/science/environment/a425435...
And as for what I want to do with it, no I don't plan to do anything I consider immoral. Surely that's true of almost everyone's actions almost all the time, almost by definition?
The story would be much more interesting if actually AI had fired him.
Did OpenAI and others pay for the training data from Stack Overflow, Twitter, Reddit, Github etc. Or any other source produced by mankind?
... is all I'm saying. And I'm not interested in political debates. Neither the right nor the left is good in the long run. We have examples. Moreover, we can predict what happens if...
I’m not saying that’s definitely the case, but moving slowly when you live in a universe that might hurl a giant rock at you any minute doesn’t seem like a great idea.
I’ve always thought that what OpenAI was purporting to do—-“protect” humanity from bad things that AI could do to it—-was a fool’s errand under a Capitalist system, what with the coercive law of competition and all.
(I also thought an interim CEO would be there more than a few days, and hadn't stored the name in my mind.)
I mean let's take a step back and speak in general. If someone objects to a rule, then yes, it is likely because they don't consider it wrong to break it. And quite possibly because they have a personal desire to do so. But surely that's openly implied, not a damning revelation?
Since it would be strange to just state a (rather obvious) fact, it appears that you were arguing that the desire not to be constrained by OpenAI's version of morals could only be down to desires that most of us would indeed consider immoral. However, your replier offered quite a convincing counterexample. Saying "this doesn't refute [the facts]" seems a bit of a non sequitur.
People have gotten into their heads that researchers are good and corporations are bad in every case which is simply not true. OpenAI's mission is worse for humanity than Microsoft's.
Name a utopian fiction that has corporations as benefactors to humanity.
Now you have to apply in writing to Microsoft with a justification for having access to an uncensored API.
If Microsoft came up with a way of making a trillion dollars in profit by enslaving half the planet, it kinda has to do it.
There are probably loads of ways you can make language models with 100M parameters more efficient, but most of them won't scale to models with 100B parameters.
IIRC there is a bit of a phase transition that happens around 7B parameters where the distribution of activations changes qualitatively.
Anthropic have interpretability papers where their method does not work for 'small' models (with ~5B parameters) but works great for models with >50B parameters.
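For a rough sense of what those sizes mean, here's a back-of-envelope Python sketch using the common approximation params ≈ 12 * n_layers * d_model^2 for dense transformers (the constant and the example configs are illustrative assumptions, not any particular model's published spec):

    # Rough non-embedding parameter count for a dense transformer.
    # The constant 12 varies with architecture details; treat as an estimate.
    def approx_params(n_layers: int, d_model: int) -> int:
        return 12 * n_layers * d_model ** 2

    print(f"{approx_params(12, 768) / 1e6:.0f}M")   # ~85M: GPT-2-small-ish scale
    print(f"{approx_params(32, 4096) / 1e9:.1f}B")  # ~6.4B: around the ~7B transition
    print(f"{approx_params(80, 8192) / 1e9:.0f}B")  # ~64B: well past the transition

Each jump is roughly two orders of magnitude, which is part of why a trick validated at the top row may tell you little about the bottom one.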
What an AI would almost certainly tell you is that building an atomic bomb is no joke, even if you have access to a nuclear reactor, have the budget of a nation-state, and can direct an entire team of trained nuclear physicists to work on the project for years.
Next thing you'll be concerned about toddlers launching lasers into orbit and dominating the Earth from space.
Interesting to note how much of this is driven by individual billionaire humans being hung up on stuff like ketamine. I'm given to understand numerous high-ranking Nazis were hung up on amphetamines. Humans like to try and make themselves into deities, by basically hitting themselves in the brain with rocks.
Doesn't end well.
You know what is an even bigger temptation to people than money - power. And being a high priest for some “god” controlling access from the unwashed masses who might use it for “bad” is a really heady dose of power.
This safety argument was used to justify monarchy, illiteracy, religious coercion.
There is a much greater chance of AI getting locked away from normal people by a non-profit on a power trip, rather than by a corporation looking to maximize profit.
Really, all corporations are evil, and they are all made of humans who look the other way, because everyone needs that paycheck to eat.
And on the sliding scale of evil, there are plenty that are far more evil, like BP, pharma companies, Union Carbide, etc., etc.
The problem with eugenics isn't that we can't control populations and genetic expression; it's that genetic expression is a fractal landscape that's not predictable from human-stated goals.
The ethics of doing things "because you meant well" is well established as not enough.
Not only that, it's a blindered take on what human opinion is. Humans are killer apes AND cooperative, practically eusocial apes. Failing to understand both horns of that dilemma is a serious mistake.
The people who'll be in power then will still rely on the basics: violence, the means of production, and more violence.
Which they know, and so they are basically planning dystopian police states.
Unfortunately, people are flawed.
See, what exactly is insurance at the billionaire level?
Rolling over, covering head with blanket. 'Surely the dystopian future, rich cleansing the world, is still a few decades away, just need a little more sleepy time'.
AI is just another product by another corporation. If I get to benefit from the technology while the company that offers it also makes profit, that’s fine, I think? There wasn’t publicly available AI until someone decided to sell it.
To some extent human societies viewed as eusocial organisms are better at this than individual humans. And rightly so, because human follies can have catastrophic effects on the society/organism.
I know about a man who turned a country upside down while "having the people's best interests" in mind.
No.
It comes from the notion that YOU don't get to decide what MY morals should be. Nor do I get to decide what yours should be.
> But what happens when immoral people use the system?
Then the things happen that they want to happen. So what? Blasphemy or bad math is none of your business. Get out of people's lives.
Please fuckin don't. I do not want yet another entity to tell me how to live my life.
As for your actual question, it seems to me that a straw is topologically equivalent to a torus, so it has 1 hole, right?
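For what it's worth, here's a sketch of the usual argument in LaTeX notation, treating the straw as an idealized solid (an annulus thickened along its length); this is just the standard homology count, not a settled answer to the internet debate:

    \text{straw} \cong D^2 \times S^1 \quad \text{(a solid torus)}
    D^2 \times S^1 \simeq S^1 \quad \text{(deformation retract onto the core circle)}
    H_1(S^1) \cong \mathbb{Z} \;\Rightarrow\; b_1 = 1 \quad \text{(one independent loop)}

So one hole, at least in the homology-counting sense.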
People like Steve Jobs are the best example of flawed logic. In the face of a completely different set of heuristic and logical information, he assumed he was just as capable, and chose fruit smoothies over more efficacious and proven medication.
These people absolutely, like Jobs, are playing a game they think they fully understand, and are absolutely likely to choose their medicine the way Jobs did.
Just watch Elon and everything he's choosing to do.
These people are all normal, but society has given them a deadly amount of leverage without any specific training.
Sure, we should have competitive bodies seeking better means to ends, but ultimately there always needs to be a structure to hold them accountable.
People have a lot of faith that money is the best fitness function for humanity.
Also companies, especially public companies, are typically mandated by law to prioritize profit.
A few CEOs' great-grandchildren will probably have to write about how they're very, very sad that their long-forgotten relatives destroyed most of the planet, and how they're just so lucky to be among the few still living a luxurious life somewhere in the Solomon Islands.
"Humanity's interest at heart" is a mouthful. I'm not denigrating it. I think it is really important.
That said, as a proverbial human... I am not hanging my hat on that charter. Members of the consortium all also claim to be serving the common good in their other ventures. So does Exxon.
OpenAI haven't created, or even articulated, a coherent, legible, and believable model for enshrining humanity's interests. The corporate-structure flowchart of nonprofits, LLCs, and such... it is nowhere near sufficient.
OpenAI in no way belongs to humanity. Not rhetorically, legally or in practice... currently.
I'm all for efforts to prevent these new technologies from being stolen from humanity, controlled monopolistically... From moderate to radical ideas, I'm all ears.
What happened to the human consortium that was the World Wide Web, GNU, and descendant projects like Wikipedia... that was moral theft, imo. I am for any effort to avoid a repeat. OpenAI is not such an effort, as far as I can tell.
If it is, it's not too late. OpenAI haven't betrayed the generous reading of the mission in the charter. They just haven't taken hard steps toward achieving it. Instead, they have left things open, and I think the more realistic take is the default one.
Agreed, and we're also bad at being told what to do. Especially when someone says they know better than us.
What we are extremely good at is adaptation and technological advancement. Since we already know this, why do we try to stop or slow progress?
It's interesting that "Effective Altruism" enthusiasts all seem to be mega-rich grifters.
What you describe is indeed the liberal (as in liberalism) ideal of how societies should be structured. But what is supposed to happen is not necessarily what actually happens.
The state should be controlled by the population through democracy, but few would claim with a straight face that the economic power doesn't influence the state.
It's gambling, pure and simple.
It is a good thing that society has mechanisms to at least try and control the rate of progress.
Gotcha! We can both come up with absurd examples.
Nobody is telling you how to live your life, unless your life's goal is to erect Skynet.
If we use the standard of the alignment folks - that the technology today doesn't even have to be the danger, but an imaginary technology that could someday be built might be the danger, and that we don't even have to articulate clearly how it's a danger, we can just postulate the possibility - then all technology becomes suspect and needs a priest class to decide what access the population can have, for fear of risking doomsday.
Buddhists die in the Armageddon same as others.
The bunkers are in New Zealand, which is an island and less likely to fall into chaos with the rest of the world in the event of WW3 and/or moderate nuclear events.
I'm sure the bunkers are nice. Material notions have little to do with it. The bunker isn't filled with Ferraris. They are filled with food, a few copies of the internet, and probably weird sperm banks or who knows what for repopulating the earth with Altmans and Thiels.
>If I get to benefit from the technology while the company that offers it also makes profit, that's fine.
What if you don't benefit because you lose your job to AI or have to deal with the mess created by real looking disinformation created by AI?
It was already bad with fake images out of ARMA, but with AI we get a whole new level of fakes.
The pain is real :(
"You use Windows because it is the only OS you know. I use Windows because it is the only OS you know."
> A straw has one hole that runs through its entire length.
Sama said on X that, as of late 2022, they were at single-digit pennies per query and dropping.
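For a sense of the arithmetic (every number below is a hypothetical assumption for illustration, not an OpenAI figure):

    # Back-of-envelope inference cost per query; all inputs are made up.
    tokens_per_query = 1500       # assumed prompt + completion length
    cost_per_1k_tokens = 0.02     # assumed blended cost in $ per 1K tokens
    cost = tokens_per_query / 1000 * cost_per_1k_tokens
    print(f"~${cost:.2f} per query")  # -> ~$0.03, i.e. single-digit pennies

At hundreds of millions of queries a day, even single-digit pennies adds up fast, which is presumably why the "and dropping" part matters.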
As to cat-and-mouse with jailbreakers, I don't remember any thorough articles or videos; it's mostly based on discussions on LLM forums. Claude is widely regarded as one of the best models for NSFW roleplay, which completely invalidates Anthropic's claims about safety and alignment being "solved."
Gates keeps repeating it. No one hears it.
Did he say that before or after Microsoft announced they'd hired Altman and Brockman, and poached a lot of OpenAI's top researchers?
Everything points to this being a haphazard change that’s clumsy at best.
"Europe is falling behind" very much depends on your metrics. I guess on HN it's technological innovation, but for most people the metric would be quality of life, happiness, liveability etc. and Europe's left-leaning approach is doing very nicely in that regard; better than the US.
Indeed, I think trying to do it that way increases the risk that the single private organization captures its regulators and ends up without effective oversight. To put it bluntly: I think it's going to be easier, politically, to regulate this technology with it being a battle between Microsoft, Meta, and Google all focused on commercial applications, than with the clearly dominant organization being a nonprofit that is supposedly altruistic and self-regulating.
I have sympathy for people who think that all sounds like a bad outcome because they are skeptical of politics and trust the big brains at OpenAI more. But personally I think governments have the ultimate responsibility to look out for the interests of the societies they govern.
It doesn't take a cartoon supervillain to keep selling cigarettes like candy even though you know they increase cancer risks. Or for oil companies to keep producing oil and burying alternative energy sources. Or for the Sacklers to give us Oxy.
For example, check out the proceedings of the AGI Conference, which has been going on for 16 years: https://www.agi-conference.org/
I have faith in Ilya. He's not going to allow this blunder to define his reputation.
He's going to go all in on research to find something to replace Transformers, leaving everyone else in the dust.
Exxon was responsible for the oil spill response that coagulated the oil and sank it. They were surprisingly proud of this, having recommended it to BP so that the extent of leaked oil was less noticeable from the surface.
Exxon also invested heavily in an alternative energy company doing research to create oil from a certain type of algae. The investment was all a PR stunt that gave them enough leverage to shelve the research that was successful enough to be considered a threat.
You're talking about investors and shareholders like they're just machines that only ever prioritize profit. That's just obviously not true.
Um, have you heard of lead additives to gasoline? CFCs? Asbestos? Smoking? History is littered with complete failures of governments to appropriately regulate new technology in the face of an economic incentive to ignore or minimize "externalities" and long-term risk for short-term gain.
The idea of having a non-profit, with an explicit mandate to pursue the benefit of all mankind, be the first one to achieve the next levels of technology was at least worth a shot. OpenAI's existence doesn't stop other companies from pursuing technology, nor does it prevent governments doing coordination. But it at least gives a chance that a potentially dangerous technology will go in the right direction.
There must be an Aesop’s fable that sheds light on the “tragedy”.
https://www.goodreads.com/quotes/923989-if-you-choose-bad-co...
Or maybe this one? (Ape seems to map to Microsoft, or possibly a hat tip to Ballmer...)
The fable is of the Two Travellers and the Apes.
Two men, one who always spoke the truth and the other who told nothing but lies, were traveling together and by chance came to the land of Apes. One of the Apes, who had raised himself to be king, commanded them to be seized and brought before him, that he might know what was said of him among men. He ordered at the same time that all the Apes be arranged in a long row on his right hand and on his left, and that a throne be placed for him, as was the custom among men.
After these preparations, he signified that the two men should be brought before him, and greeted them with this salutation: “What sort of a king do I seem to you to be, O strangers?” The Lying Traveller replied, “You seem to me a most mighty king.” “And what is your estimate of those you see around me?” “These,” he made answer, “are worthy companions of yourself, fit at least to be ambassadors and leaders of armies.” The Ape and all his court, gratified with the lie, commanded that a handsome present be given to the flatterer.
On this the truthful Traveller thought to himself, “If so great a reward be given for a lie, with what gift may not I be rewarded if, according to my custom, I tell the truth?” The Ape quickly turned to him. “And pray how do I and these my friends around me seem to you?” “Thou art,” he said, “a most excellent Ape, and all these thy companions after thy example are excellent Apes too.” The King of the Apes, enraged at hearing these truths, gave him over to the teeth and claws of his companions.
The end.
Some people downvote (it's not about the points), but I merely state the reality, not my opinions.
I've made my living as a sys-admin early in my career using MS products, so thank you MS for putting food on my table. But this doesn't negate the dirty games/dark patterns/etc.
For a mathematician, yes. For everyone else, it obviously has two, because only when you plug one end does it have one.
It's ironic, because the only AIs that don't have "pesky ethics qualms" are... literally the entire open source scene, all of the models on Hugging Face, etc.
The megacorps are the only place safety and security are happening in AI. I can easily run open-source models locally and create all manner of political propaganda, p^rnography of celebrities and politicians, or deeply racist and bigoted materials. The open-source scene makes this trivial these days (see the sketch after this comment).
So to describe it as "Cyberpunk utopia where the Megacorp finally rids us of those pesky ethics qualms" when the open source scene has already done that today is just wild to me.
We have AI without ethics today, and every not-for-profit researcher, open source model and hacker with a cool script are behind it. If OpenAI goes back to being Open, they'll help supercharge the no-ethics AI reality of running models without corporate safety and ethics.
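To make "trivial" concrete, here's a minimal sketch of running an open-weights model locally with the Hugging Face transformers library (the model name is just a small example; any locally available checkpoint works the same way):

    # Minimal local text generation: downloads the checkpoint once, then runs
    # entirely on your machine, with no API key or server-side policy layer.
    from transformers import pipeline

    generate = pipeline("text-generation", model="gpt2")
    out = generate("Once the weights are local,", max_new_tokens=40)
    print(out[0]["generated_text"])

Whatever safety exists at that point is whatever the model's weights happen to encode.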
ChatGPT says "fuck" just fine.
Most stock is not owned by individual persons (not that there aren't individuals who don't give a shit about enslaving people) but by other companies and institutions that by charter prioritize profit. E.g., Microsoft's institutional ownership is around 70%.
Godwin's Law.
The point is, you can't rely, in a scenario where society breaks down, on survivors acting more rationally then than they do now.
Or so they say. I have no reason to trust them. It is not some little thing we are talking about.
The GNU Project and the Wikimedia Foundation are still non-profits today, and even if you disagree with their results, their goal is to serve humanity for free.
https://nitter.net/ilyasut/status/1726590052392956028
“I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.”
Nov 20, 2023 · 1:15 PM UTC - Ilya S.
Most of those problems have been solved, or at least reduced, by regulation. Regulators, however, aren't all-knowing gods, and one finds out about risks and problems only later; but except for smoking, regulators have covered those aspects (and anti-smoking laws generally become stricter over time, depending on the country, but it's a cultural habit older than most states...).
It's not about wanting to destroy the world, but short term greed whose consequences destroy the world.
I assumed it was their entertaining offers from Microsoft that got Sam the ax from the OpenAI board.
A journalist was car bombed in broad daylight.
If you push the wrong buttons of trillion-dollar corporations, they just off you and continue with business as usual.
If Microsoft sees trillions of dollars in ending all of your work, they’ll take it in a heartbeat.
Do you think profit minded people and organizations aren't motivated by a desire for power? Removing one path to corruption doesn't mean I think it is impossible for a non-profit to become corrupted, but it is one less thing pulling them in that direction.
Before that, the USSR collapsed under Gorbachev. Why? They simply lost with their planned economy, where nobody wants to take a risk, because (1) it's not rewarding, (2) no individual has enough resources, and (3) to get things moving they would have to convince a lot of bureaucrats who don't want to take risks. They moved forward thanks to a few exceptional people. But there weren't as many willing to take a risk as in 'rotting' capitalism. I don't know why, but the leaders didn't see the Chinese way. Probably they were busy with internal rat fights and didn't see what was in it for them.
My idea is that there are two extremes. On the left side, people can be happy like yogis, but they don't produce anything or move forward. On the right side is pure capitalism, which is inhuman. The optimum is somewhere in between, with good quality of life and fast progress. What happens when resources are shared too much and life is good? You can see it in Germany today: 80% of Ukrainian refugees don't work and don't want to.
Probably safe to say Henry Ford had considerable power in Ford Motor Co compared to most executives today?
What I mean is that these were created as public goods and functioned as such. Each had a unique way of being open, spreading the value of their work as far as possible.
They were extraordinary. Incredible quality. Incredible power. Incredible ability to be built upon... particularly the WWW.
All achieved things that simply could not have been achieved, by being a normal commercial venture.
Google, FB, and co. essentially stole them. They built closed platforms atop open ones, built bridges between users and the public domain, and monopolized them like bridge trolls.
Considering how much a part of the culture a company like Google was 20 years ago, this is treason.
You aren't wrong that government regulation is not a great solution, but I believe it is - like democracy, and for the same reasons - the worst solution, except for all the others.
I don't disagree that using a non-profit to enforce self-regulation was "worth a shot", but I thought it was very unlikely to succeed at that goal, and indeed has been failing to succeed at that goal for a very long time. But I'm not mad at them for trying.
(I do think too many people used this as an excuse to argue against any government oversight by saying, "we don't need that, we have a self-regulating non-profit structure!", I think mostly cynically.)
> But it at least gives a chance that a potentially dangerous technology will go in the right direction.
I know you wrote this comment a full five hours ago and stuff has been moving quickly, but I think this needs to be in the past tense. It appears to be clear now that something approaching >90% of the OpenAI staff did not believe in this mission, and thus it was never going to work.
If you care about this, I think you need to be thinking about what else to pursue to give us that chance. I personally think government regulation is the only plausible option to pursue here, but I won't begrudge folks who want to keep trying more novel ideas.
(And FWIW, I don't personally share the humanity-destroying concerns people have; but I think regulation is almost always appropriate for big new technologies to some degree, and that this is no exception.)
That is not actually true, necessarily. Your power typically depends heavily on the terms under which you hold the position. A CEO who is also president of the board, and a majority shareholder, has far more power than a CEO who just stepped in temporarily and has only the powers provided by the by-laws.
Regardless, the solution to "I want to do something ethical that is not strictly in the company's best interest" is to make the case that it is the company's best interest. For example, "By investing in our employees we are actually prioritizing shareholder value". If you position it as "this is a move that hurts shareholders", of course that's illegal - companies have an obligation to every shareholder.
That also means that if you give your employees stock, they now have investor rights too. You can structure your company this way from the start, it's trivial and actually the norm in tech - stock is handed out to many employees.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.