I asked it a few questions on topics where I consider myself a subject matter expert, and the answers were laughably wrong.
In so many instances, it's just wrong, but it continues on so confidently.
One of the things I find most interesting is that it has no idea when it's wrong, so it just keeps going. We already have a significant and growing problem of people who refuse to admit (even to themselves) what they don't know, and this will only exacerbate it.
It's like the worst of debateBro culture has just been automated.
Sounds sociopathic, and also like many politicians and people in leadership positions.
The code looked right: it initialized boto3 correctly and called a function named get_account_numbers_by_tag on the organizations object.
I wondered why I had never heard of that function and why I couldn't find it when searching. Turns out, there is no such function.
Just now I asked:
Write a Python script that returns all of the accounts in an AWS organization with a given tag where the user specifies the tag key and value using command line arguments
I thought the code had to be wrong because it used concepts I had never heard of. This time it used the Resource Groups API.
I had never heard of that API, but it does exist. I also couldn't find sample code on the internet that did anything similar, but from looking at the documentation it should work. I learned something new today.
BTW, for context on the “subject matter expert” claim above: I work at AWS in Professional Services, I code most days against the AWS APIs, and I would never have thought of the solution it gave me.
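For anyone who wants to try that prompt themselves, here is a minimal sketch of one working approach. It sticks to the plain Organizations API (list_accounts plus list_tags_for_resource) rather than the Resource Groups route the model suggested; the argument names and output format are just illustrative, and it assumes credentials with organizations:ListAccounts and organizations:ListTagsForResource permissions.

    import argparse

    import boto3


    def main():
        parser = argparse.ArgumentParser(
            description="List AWS Organization accounts carrying a given tag."
        )
        parser.add_argument("tag_key", help="Tag key to match")
        parser.add_argument("tag_value", help="Tag value to match")
        args = parser.parse_args()

        org = boto3.client("organizations")

        # Page through every account in the organization.
        for page in org.get_paginator("list_accounts").paginate():
            for account in page["Accounts"]:
                # Collect the tags attached to this account (also paginated).
                tags = []
                tag_pages = org.get_paginator("list_tags_for_resource").paginate(
                    ResourceId=account["Id"]
                )
                for tag_page in tag_pages:
                    tags.extend(tag_page["Tags"])
                # Print accounts whose tags include the requested key/value pair.
                if any(
                    t["Key"] == args.tag_key and t["Value"] == args.tag_value
                    for t in tags
                ):
                    print(f'{account["Id"]}\t{account["Name"]}')


    if __name__ == "__main__":
        main()

The Resource Groups route the parent describes may well work too; I haven't verified it, so this sketch stays with the Organizations calls I know exist.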
That sounds pretty damn human to me.
1. AN AI MODEL IS GIVEN ENOUGH CAPACITY to capture (some of) our human perspective, a snapshot of our world as reflected in its training data. <== We've been here for a while
2. AN AI MODEL IS GIVEN ENOUGH CAPACITY to fabulate and imagine things. <== We're unambiguously here now
For now, with ChatGPT, the fabulations are of a charmingly naive "predict the most probable next token" sort. But even as a future model is (inevitably) given the ability to probe and correct its errors, the initial direction of its fabulations will still reflect that "inception worldview" snapshot.
For example, if a particular fashion trend or political view was popular around the time the model was trained (with training data typically skewing toward the "recent", simply because "recent" is when most digital data will have been produced), that model can be expected to fabulate along the lines of that imprinted political view.
3. AN AI MODEL IS GIVEN ENOUGH CAPACITY to make the is-vs-ought choice between "CORRECT ITSELF" = adapt to the world, and "CORRECT THE WORLD" = imprint its worldview back onto the world (probably indirectly, through humans paying attention to its outputs and acting as actuators, but that makes no difference). <== We're getting there rapidly
Will it be more reasonable or unreasonable?
And which mode will win out long-term, i.e. be more energy efficient in that entropic struggle for survival that all physical systems go through?
One thing I noticed: it's either trained naturally or tweaked by humans not to be political or say anything controversial. I asked it a simple question, “Does Opendoor have a good business model?” It punted like any good politician.