zlacker

[return to "Ask HN: Should HN ban ChatGPT/generated responses?"]
1. photoc+K8 2022-12-11 18:54:13
>>djtrip+(OP)
Yes, ban it. I've been playing around with ChatGPT, and where it starts failing is just where things start becoming interesting. What that means is that it's Wikipedia-smart, i.e. it doesn't really tell you anything you can't find out with a minimal Google search. It does, however, cut the time-to-answer quite a bit, particularly if it's an area of knowledge you're not really that familiar with. But it bottoms out right as things start getting interesting, expertise-wise.

Case example: I tried seeing what its limits on chemical knowledge were, starting with simple electronic structures of molecules, and it does OK - remarkably, it got the advanced high-school-level picture of methane's electronic structure right. It choked when it came to the molecular orbital picture: while it managed to list the differences between old-school hybrid orbitals and modern molecular orbitals, it couldn't really go into any interesting detail about the molecular orbital structure of methane. Searching the web, I notice such details are mostly found in places like figures in research papers, not so much in text.

On the other hand, since I'm a neophyte when it comes to database architecture, it was great at answering what I'm sure any expert would consider basic questions.

Allowing comment sections to be clogged up with ChatGPT output would thus be like going to a restaurant that only served averaged-out, mediocre but mostly-acceptable takes on recipes.

2. scarfa+Km1 2022-12-12 04:30:35
>>photoc+K8
The problem with ChatGPT is that it often reads as authoritative, but it is often just flat-out wrong.

I asked it a few questions on topics where I consider myself a subject-matter expert, and the answers were laughably wrong.

3. culanu+dq1 2022-12-12 05:07:24
>>scarfa+Km1
For me it happened when I asked it to write a function using BigQuery. It wrote a function that made a lot of sense but was wrong, because the command didn't exist in BigQuery. When I replied that the function didn't work, it told me something like: you're right, the function I used only worked in beta mode, now you have to use the following... And again it was wrong. I did a little research and there never was such a beta command. That's when I realized it just makes up things it doesn't know, but says them with authority.
4. scarfa+1r1 2022-12-12 05:15:39
>>culanu+dq1
I asked it to write a function in Python that would return the list of AWS accounts in an organization with a given tag key and value.

The code looked right: it initialized boto3 correctly and called a function named get_account_numbers_by_tag on the organizations client.

I wondered why I had never heard of that function, and I couldn't find it when searching either. Turns out, there is no such function.
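
For anyone curious, there's no one-call shortcut like that in boto3's Organizations API; here's a rough sketch of how you'd actually have to do it, combining the real list_accounts and list_tags_for_resource calls (the helper name accounts_with_tag is just something I made up):

    # Rough sketch: no get_account_numbers_by_tag exists, so combine
    # list_accounts with list_tags_for_resource manually.
    import boto3

    def accounts_with_tag(tag_key, tag_value):
        org = boto3.client("organizations")
        matches = []
        # list_accounts is paginated, so walk every page.
        for page in org.get_paginator("list_accounts").paginate():
            for account in page["Accounts"]:
                tags = org.list_tags_for_resource(ResourceId=account["Id"])["Tags"]
                if any(t["Key"] == tag_key and t["Value"] == tag_value for t in tags):
                    matches.append(account["Id"])
        return matches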

5. Radim+O82 2022-12-12 12:14:00
>>scarfa+1r1
It gives the old saying "The reasonable man adapts himself to the world; the unreasonable man adapts the world to himself; therefore all progress depends on the unreasonable man." a new twist, doesn't it?

1. AN AI MODEL IS GIVEN ENOUGH CAPACITY to capture (some of) our human perspective, a snapshot of our world as reflected in its training data. <== We've been here for a while

2. AN AI MODEL IS GIVEN ENOUGH CAPACITY to fabulate and imagine things. <== We're unambiguously here now

The fabulations are of a charmingly naive "predict the most probable next token" sort for now, with ChatGPT. But even as a future model is (inevitably) given the ability to probe and correct its errors, the initial direction of its fabulations will still reflect that "inception worldview" snapshot.

For example, if a particular fashion trend or political view was popular around the time the model was trained (with training data typically skewing toward the "recent", simply because "recent" is when most digital data will have been produced), that model can be expected to fabulate along the lines of that imprinted political view.

3. AN AI MODEL IS GIVEN ENOUGH CAPACITY to make the is-vs-ought choice between "CORRECT ITSELF" = adapt to the world; or "CORRECT THE WORLD" = imprint its worldview back onto the world (probably indirectly through humans paying attention to its outputs and acting as actuators, but that makes no difference). <== We're getting there rapidly

Will it be more reasonable or unreasonable?

And which mode will win out long-term, i.e. be more energy-efficient in that entropic struggle for survival that all physical systems go through?
