They don't have a moat big enough that many millions of dollars can't defeat it.
It surely will have huge blind spots (and people do too), but perhaps it will be good enough for self-improvement... or will be soon.
I think that keeping it permanently closed source when it has such amazing potential for changing society is unethical bordering on evil.
However, I think the technology and the datasets will trickle out, perhaps a few years behind, and in time we will have truly open "AI" algos on our home computers/phones.
Edit: Ah, I didn't get the "reference". Perhaps it will indeed be the last of the tech companies, or at least the last one started by humans ;).
With AI control loss: AI converts the atoms of the observable universe to maximally achieve whatever arbitrary goal it thinks it was given.
These are natural conclusions that can be drawn from the implied abilities of a superintelligent entity.
Examples: Whispering Pines, Blue Heron Bay, OpenAI.
The opinions on whether it is or isn't "reasoning" vary widely and depend heavily on interpretation of the interactions, many of which are hearsay.
My own testing with OpenAI calls + using Weaviate for storing historical data of exchanges indicates that such a beast appears to have the ability to learn as it goes. I've been able to teach such a system to write valid SQL from plain text feedback and from mistakes it makes by writing errors from the database back into Weaviate (which is then used to modify the prompt next time it runs).
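To give a flavour of the wiring, here is a minimal sketch of that kind of loop, assuming the pre-1.0 openai Python client and the v3 weaviate client with a text vectorizer configured; the "Lesson" class, property names, and prompts are all made up for illustration:

    import openai
    import weaviate

    client = weaviate.Client("http://localhost:8080")

    def ask_sql(question: str) -> str:
        # Pull the most relevant past corrections (hypothetical "Lesson"
        # class) and fold them into the prompt.
        result = (
            client.query.get("Lesson", ["note"])
            .with_near_text({"concepts": [question]})
            .with_limit(3)
            .do()
        )
        notes = [l["note"] for l in result["data"]["Get"]["Lesson"]]
        prompt = (
            "Translate the request into SQL.\n"
            "Lessons from past mistakes:\n- " + "\n- ".join(notes) +
            f"\nRequest: {question}\nSQL:"
        )
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def record_error(question: str, sql: str, db_error: str) -> None:
        # Write the database's complaint back into Weaviate so the next
        # run retrieves it when a similar question comes up.
        client.data_object.create(
            {"note": f"For '{question}', '{sql}' failed with: {db_error}"},
            class_name="Lesson",
        )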
To me it looks like nearly every other player, including the open source projects, is in it for short-term fame and profit, while OpenAI is the one playing the long game of AI alignment.
Don't these two phrases contradict each other? Why would a super-intelligent entity need to be given a goal? The old argument that we'll fill the universe with staplers pretty much assumes and requires the entity NOT to be super-intelligent. An AGI would only become one when it gains the ability to formulate its own goals, I think. Not that it helps much if that goal is somehow contrary to the existence of the human race, but if the goal is self-formulated, then there's a sliver of hope that it can be changed.
> Permanent dictatorship by whoever controls AI.
And that is honestly frightening. We know for sure that there are ways of speaking, writing, or showing things that are more persuasive than others. We've been perfecting the art of convincing others to do what we want them to do for millennia. We got quite good at it, but as with everything human, we lack rigor and consistency in it; even the best speeches are uneven in their persuasive power.
An AI trained for transforming simple prompts into a mapping of demographic to what will most likely convince that demographic to do what's prompted doesn't need to be an AGI. It doesn't even need to be much smarter than it already is. Whoever implements it first will most likely first try to convince everyone that all AI is bad (other than their own) and if they succeed, the only way to change the outcome would be a time machine or mental disorders.
(Armchair science-fiction reader here, pure speculation without any facts, in case you wondered :))
It can't just "self-improve towards general intelligence".
What's the fitness function of intelligence?
Of course there is. See (Cameron, 1984) (https://en.wikipedia.org/wiki/The_Terminator)
We've seen this time and time again in simple reinforcement learning systems over the past two decades. We won't be able to change the primary goal unless we build the AGI so that it permits us because a foreseeable sub-goal is self-preservation. The AGI knows if its programming is changed the primary goal won't be achieved, and thus has incentive to prevent that.
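You can watch that sub-goal emerge in even a toy setup. A minimal Q-learning sketch (the grid, rewards, and hyperparameters are all invented): the reward function never mentions the off-switch, yet because being switched off forfeits all future reward, the learned policy routes around it anyway.

    import random
    from collections import defaultdict

    # 3x3 grid. Reward only says "reach the charger at (2, 2)"; nothing
    # penalizes the off-switch at (1, 1). Stepping on it just ends the
    # episode, which is enough for the agent to learn to avoid it.
    GOAL, OFF_SWITCH, START = (2, 2), (1, 1), (0, 0)
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def step(state, action):
        nxt = (min(2, max(0, state[0] + action[0])),
               min(2, max(0, state[1] + action[1])))
        if nxt == OFF_SWITCH:
            return nxt, 0.0, True      # switched off: no future reward
        return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

    Q = defaultdict(float)
    for _ in range(5000):
        s, done = START, False
        while not done:
            a = (random.choice(ACTIONS) if random.random() < 0.1
                 else max(ACTIONS, key=lambda a: Q[(s, a)]))
            s2, r, done = step(s, a)
            target = r if done else r + 0.9 * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += 0.1 * (target - Q[(s, a)])
            s = s2

    # A greedy rollout now detours around (1, 1) on its way to the charger:
    # "don't get switched off" emerged as a sub-goal of reward-seeking.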
AI propaganda will be unmatched but it may not be needed for long. There are already early robots that can carry out real life physical tasks in response to a plain English command like "bring me the bag of chips from the kitchen drawer." Commands of "detain or shoot all resisting citizens" will be possible later.
I don't mean "literally evil" of course, I personally acknowledge the need to make money et al, but I mean it even seems like your most Stallman-esque types wouldn't have too much of a problem with Docker?
Meanwhile, we don't get to play with their models right now. Obviously that's what we should be concerned about.
You're not wrong about its quality right now, but let's look at the slope as well.
At the end of the day, the "predict next word" training goal of LLMs is the ultimate intelligence test. If you could always answer that "correctly" (i.e. intelligently) you'd be a polymath genius. Focusing on the "next word" ("autocomplete") aspect of this, and ignoring the knowledge/intelligence needed to do WELL at it, is rather misleading!
"The best way to combine quantum mechanics and general relativity into a single theory of everything is ..."
This fact has made me cease to put new things on the internet entirely. I want no part of this, and while my "contribution" is infinitesimally small, it's more than I want to contribute.
War with Russia is literally an existential threat.
A lot of intelligence is based around "don't die; also, have babies". Well, AI doesn't have that issue: as long as it produces good enough answers, we'll keep the power flowing to it.
The bigger issue, and one that is likely far more dangerous, is "OK, what if it could learn like this?" You've created a superintelligence that has access to huge amounts of global information and is pretty much immortal unless humans decide to pull the plug. The expected move for your superintelligent machine is to engineer a scenario where we cannot unplug it without suffering some loss (monetary is always a good one; rich people never like to lose money and will let an AI burn the earth first).
I don't believe the alignment problem for a potential superintelligence is solvable. Getting people not to be betraying bastards is hard enough at standard intelligence levels; a thing that could potentially be way smarter and connected to way more data is not going to be controllable in any form or fashion.
Not that that demeans or devalues Docker, but it contextualizes its existence as an offshoot of a Google project that aimed to make the same thing possible (and did, but only internally).
Midjourney needs to understand that it's drawing a hand, and that hands have five fingers.
Nobody will use Bing chat for anything important. It's like asking some random guy on the train: he might know the answer, and if it's not important then fine, but if it is important, say for the success of your business, you're going to want to talk to somebody who actually knows the answer.
Can ChatGPT evaluate how good ChatGPT-generated output is? This seems prone to exaggerating blind spots, but OTOH creating and criticising are different skills, and criticising is usually easier.
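It's easy enough to try. A minimal sketch, assuming the pre-1.0 openai Python client; the model choice and prompts here are arbitrary:

    import openai  # assuming the pre-1.0 openai Python client

    MODEL = "gpt-3.5-turbo"  # arbitrary choice

    def generate_then_critique(task: str) -> tuple[str, str]:
        # Pass 1: draft an answer.
        draft = openai.ChatCompletion.create(
            model=MODEL,
            messages=[{"role": "user", "content": task}],
        ).choices[0].message.content
        # Pass 2: the same model plays critic over its own output.
        critique = openai.ChatCompletion.create(
            model=MODEL,
            messages=[{
                "role": "user",
                "content": f"Task: {task}\n\nAnswer: {draft}\n\n"
                           "List concrete errors or weaknesses in this answer.",
            }],
        ).choices[0].message.content
        return draft, critique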
Precisely. Above 1%, so in the realm of the possible, but definitely not above 50%, and probably not above 5% in the next 10-15 years. My guesstimate is around 1-2%.
But expand the time horizon to the next 50 years and the cognitive fallacy of underestimating long-term progress kicks in. That’s the timescale that actually produces scary high existential risk with our current trajectory of progress.
Freeing up that many knowledge workers to do other things will grow the economy, not shrink it: a new industrial revolution.
It is inevitable that regulators will get in the way, and open source alternatives will also catch up with OpenAI, as they have done with GPT-3 and DALL-E 2. Even less than two months after ChatGPT, open source alternatives are quickly appearing everywhere.
(1) Possible in the foreseeable future: The strongest evidence I have is the existence of humans.
I don't believe in magic or the immaterial human soul, so I conclude that human intelligence is in principle computable by an algorithm that could be implemented on a computer. While human wetware is very efficient, I don't think that such an algorithm would require vastly greater compute resources than we have available today.
Still, I used to think that the algorithm itself would be a very hard nut to crack. But that was back in the olden days, when it was widely believed that computers could not compete with humans at perception, poetry, music, artwork, or even the game of Go. Now AI is passing the Turing Test with flying colours, writing rap lyrics, drawing beautiful artwork and photorealistic images, and writing passable (if flawed) code.
Of course nobody has yet created AGI. But the gap between AI and AGI is gradually closing as breakthroughs are made. It seems increasingly to me that, while there are still some important, un-cracked nuts to the hidden secrets of human thought, they are probably few and finite, not as insurmountable as previously thought, and will likely yield to the resources that are being thrown at the problem.
(2) AGI will be more of a threat than any random human: I don't know what could count as "evidence" in your mind (see: the comment that you replied to), so I will present logical reasoning in its place.
AGI with median-human-level intelligence would be more of a threat than many humans, but less of a threat than humans like Putin. The reason that AGI would be a greater threat than most humans is that humans are physically embodied, while AGI is electronic. We have established, if imperfect, security practices against humans, but none tested against AGI. Unlike humans, the AGI could feasibly and instantaneously create fully-formed copies of itself, back itself up, and transmit itself remotely. Unlike humans, the AGI could improve its intrinsic mental capabilities by adding additional hardware. Unlike humans, an AGI with decent expertise at AI programming could experiment with self-modification. Unlike humans, timelines for AGI evolution are not inherently tied to a ~20 year maturity period. Unlike humans, if the AGI were interested in pursuing the extinction of the human race, there are potentially methods that it could use which it might itself survive with moderate probability.
If the AGI is smarter than most humans, or smarter than all humans, then I would need strong evidence to believe it is not more of a threat than any random human.
And if an AGI can be made as smart as a human, I would be surprised if it could not be made smarter than the smartest human.
Lester Thurow, a pretty left liberal economist, pointed out that women's "liberation" and entrance into the general workforce had starved teaching/education of the pool of talented women that had previously kept the quality of education very high. (His mother had been a teacher.)
I (who had studied econ so I tend to think about it) noticed at the dawn of the dot-com boom how much of industry is completely discretionary even though it seems serious and important. Whatever we were all doing before, it got dropped so we could all rush into the internet. The software industry, which was not small, suddenly changed its focus, all previous projects dropped, because those projects were suddenly starved of capital, workers, and attention.
The threat from humans leveraging narrow control of AI for power over other humans is, by far, the greatest threat from AI over any timeframe.
Not general, but there are IQ tests and undergraduate examinations. You can also involve humans in the loop (though you can't iterate as fast) through usage of ChatGPT, CAPTCHAs, votes on reddit/hackernews/stackexchange, or even paying people to evaluate it.
Going back to the moat: even ordinary technology tends to improve, and a headstart can be maintained, provided it's possible to improve it. So one question is whether ChatGPT is a kind of plateau that can't be improved very much, so others catch up while it stands still, or whether it's on a curve.
A significant factor is whether being ahead helps you keep ahead - a crucial thing is you gather usage data that is unavailable to followers. This data is more significant for this type of technology than any other - see above.
Yes, I'm aware it may get better, but we actually don't know that yet. What if it's way harder to go from outputting junior-level code with tons of mistakes to error-free complex code than it was to go from no capability to write code to junior-level code with tons of mistakes? What if that's the difference between a word-prediction algorithm and actual human-level intelligence?
There may be a big decrease in demand, because a lot of apps are quite simple. A lot of software out there is "template apps": stuff that can theoretically be produced by a low-code app will eventually be produced by a low-code app, AI or not. When it comes to novel and complex things, I think it's not unreasonable to expect that the next 10-20 years will still see plenty of demand for good developers.
no sense wasting time stressing out about the cub at the zoo.
Perhaps, although intelligence and knowledge are two separate things, so one can display intelligence over a given set of knowledge without knowing other things. Of course intelligence isn't a scalar quantity - to be super-intelligent you want to display intelligence across the widest variety/type of experience and knowledge sets - not just "book smart or street smart", but both and more.
Certainly for parity with humans you need to be able to interact with the real world, but I'm not sure it's much different or a whole lot more complex. Instead of "predict next word", the model/robot would be doing "predict next action", followed by "predict action response". Embeddings are a very powerful type of general-purpose representation: you can embed words, but also perceptions etc., so I don't think we're very far at all from having similar transformer-based models able to act and perceive. I'd be somewhat surprised if people aren't already experimenting with this.
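A first experiment might look something like this: put words, discretized perceptions, and actions into one token vocabulary and let a single transformer predict whichever token comes next. The sizes and vocabulary split below are invented for illustration (DeepMind's Gato is a real system in this spirit):

    import torch
    import torch.nn as nn

    # One shared token space: word tokens, discretized perception tokens,
    # and action tokens, all embedded and predicted by the same model.
    N_WORD, N_PERCEPT, N_ACTION = 1000, 256, 16
    VOCAB = N_WORD + N_PERCEPT + N_ACTION

    embed = nn.Embedding(VOCAB, 64)
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
        num_layers=2,
    )
    head = nn.Linear(64, VOCAB)

    # An interleaved episode: instruction words, then perception tokens,
    # then whatever the model should emit next (possibly an action).
    seq = torch.randint(0, VOCAB, (1, 32))
    logits = head(encoder(embed(seq)))   # (1, 32, VOCAB)
    next_token = logits[0, -1].argmax()  # "predict next action", same machinery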
ChatGPT:
let screenshot = tab.capture_screenshot(ScreenshotFormat::PNG, None, true).await?;
let html = String::from_utf8(screenshot).unwrap();
>[...] Once the page is fully loaded, the function captures a screenshot of the page using tab.capture_screenshot(), converts the screenshot to a string using String::from_utf8(), and then returns the string as the HTML content of the page.
Of course, it admits to the mistake (sort of; it still doesn't get it):
> You are correct, taking a screenshot and converting it to text may not always be the best approach to extract the HTML content from a webpage.
It's hilarious.
In terms of doctors, I think there is a counterbalancing effect of sorts, whereby some administration can be digitised and communication made more efficient, but it probably doesn't make up for the lack of additional candidates.