Case example: I tried to see what its limits on chemical knowledge were, starting with simple electron structures of molecules, and it does OK - remarkably, it got the advanced high-school-level picture of methane's electronic structure right. It choked when it came to the molecular orbital picture: while it managed to list the differences between old-school hybrid orbitals and modern molecular orbitals, it couldn't really go into any interesting detail about the molecular orbital structure of methane. Searching the web, I notice such details are mostly found in places like figures in research papers, not so much in text.
On the other hand, since I'm a neophyte when it comes to database architecture, it was great at answering what I'm sure any expert would consider basic questions.
Allowing comment sections to be clogged up with ChatGPT output would thus be like going to a restaurant that only served averaged-out, mediocre but mostly-acceptable takes on recipes.
I asked it a few questions for which I consider myself a subject matter expert and the answers were laughably wrong.
The code looked right: it initialized boto3 correctly and called a function, get_account_numbers_by_tag, on the organizations object.
I wondered why I had never heard of that function, nor could I find it when searching. Turns out, there is no such function.
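For the curious: boto3's Organizations client really has no get_account_numbers_by_tag. The closest real calls are list_accounts and list_tags_for_resource, so a minimal sketch of doing that job with APIs that do exist might look like this (the helper's name and tag parameters are my own, purely illustrative):

```python
import boto3

def get_account_ids_by_tag(tag_key, tag_value):
    """Return IDs of member accounts that carry the given tag."""
    org = boto3.client("organizations")
    matching = []
    # Page through every account in the organization.
    for page in org.get_paginator("list_accounts").paginate():
        for account in page["Accounts"]:
            # Look up the tags attached to this account and check for a match.
            tags = org.list_tags_for_resource(ResourceId=account["Id"])["Tags"]
            if any(t["Key"] == tag_key and t["Value"] == tag_value for t in tags):
                matching.append(account["Id"])
    return matching
```

The point stands, though: the hallucinated one-liner looked far more plausible than the paginate-and-filter loop you actually have to write.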
1. AN AI MODEL IS GIVEN ENOUGH CAPACITY to capture (some of) our human perspective, a snapshot of our world as reflected in its training data. <== We've been here for a while
2. AN AI MODEL IS GIVEN ENOUGH CAPACITY to fabulate and imagine things. <== We're unambiguously here now
For now, with ChatGPT, the fabulations are of a charmingly naive "predict the most probable next token" sort. But even as a future model is (inevitably) given the ability to probe and correct its errors, the initial direction of its fabulations will still reflect that "inception worldview" snapshot.
For example, if a particular fashion trend or political view was popular around the time the model was trained (with training data typically skewing toward the "recent", simply because "recent" is when most digital data will have been produced), that model can be expected to fabulate along the lines of that imprinted political view.
3. AN AI MODEL IS GIVEN ENOUGH CAPACITY to make the is-vs-ought choice between "CORRECT ITSELF" = adapt to the world, or "CORRECT THE WORLD" = imprint its worldview back onto the world (probably indirectly, through humans paying attention to its outputs and acting as actuators, but that makes no difference). <== We're getting there rapidly
Will it be more reasonable or unreasonable?
And which mode wins out long-term, i.e. proves more energy-efficient in that entropic struggle for survival that all physical systems go through?