zlacker

[return to "Ask HN: Should HN ban ChatGPT/generated responses?"]
1. photoc+K8 2022-12-11 18:54:13
>>djtrip+(OP)
Yes, ban it. I've been playing around with ChatGPT, and where it starts failing is just where things start becoming interesting. What that means is that it's Wikipedia-smart, i.e. it doesn't really tell you anything you can't find out with a minimal Google search. It does, however, cut the time-to-answer quite a bit, particularly if it's an area of knowledge you're not really that familiar with. But it bottoms out right as things start getting interesting, expertise-wise.

Case example: I tried seeing what its limits on chemical knowledge were, starting with simple electron structures of molecules, and it does OK; remarkably, it got the advanced high-school picture of methane's electronic structure right. It choked when it came to the molecular orbital picture, and while it managed to list the differences between old-school hybrid orbitals and modern molecular orbitals, it couldn't really go into any interesting detail about the molecular orbital structure of methane. Searching the web, I noticed such details are mostly found in places like figures in research papers, not so much in text.

On the other hand, since I'm a neophyte when it comes to database architecture, it was great at answering what I'm sure any expert would consider basic questions.

Allowing comment sections to be clogged up with ChatGPT output would thus be like going to a restaurant that only served averaged-out, mediocre but mostly-acceptable takes on recipes.

2. scarfa+Km1 2022-12-12 04:30:35
>>photoc+K8
The problem with ChatGPT is that it often reads as authoritative. But it is often just flat-out wrong.

I asked it a few questions on topics where I consider myself a subject matter expert, and the answers were laughably wrong.

3. culanu+dq1 2022-12-12 05:07:24
>>scarfa+Km1
For me it happened when I asked it to write a function using BigQuery. It wrote a function that made a lot of sense but was wrong, because the command didn't exist in BigQuery. When I replied that the function didn't work, it told me something like this: "You're right, the function I used was only working in beta mode; now you have to use the following..." And again it was wrong. I did a little research, and there never was any such beta command. That's when I got that it just makes up things it doesn't know, but says them with authority.
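
To make the failure mode concrete, here's a sketch (not my actual exchange; FUZZY_PARSE_DATE is a made-up stand-in for whatever invented function it produced) of what this looks like through the real google-cloud-bigquery client:

    # Sketch only: FUZZY_PARSE_DATE stands in for the kind of
    # plausible-but-nonexistent function ChatGPT suggests.
    from google.api_core.exceptions import BadRequest
    from google.cloud import bigquery

    client = bigquery.Client()
    sql = """
        SELECT FUZZY_PARSE_DATE(created_at) AS created_day  -- no such function
        FROM `my_project.my_dataset.events`
    """

    try:
        rows = client.query(sql).result()  # raises BadRequest for invalid SQL
    except BadRequest as err:
        print(err)  # something like: "Function not found: FUZZY_PARSE_DATE"
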
4. scarfa+1r1 2022-12-12 05:15:39
>>culanu+dq1
I asked it to write a function in Python that would return the list of AWS accounts in an organization with a given tag key and value.

The code looked right: it initialized boto3 correctly and called a function, get_account_numbers_by_tag, on the organizations object.

I wondered why I had never heard of that function, nor could I find it when searching. Turns out, there is no such function.
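
For contrast, a sketch of how this can be done with calls that do exist in boto3's Organizations API (untested here, so treat it as an outline): page through list_accounts and check each account's tags with list_tags_for_resource.

    # Sketch using real boto3 Organizations calls; needs credentials
    # that can call Organizations (management or delegated-admin account).
    import boto3

    def accounts_with_tag(tag_key: str, tag_value: str) -> list[str]:
        """Return IDs of organization accounts carrying tag_key=tag_value."""
        org = boto3.client("organizations")
        matches = []
        # list_accounts is paginated; the paginator handles NextToken.
        for page in org.get_paginator("list_accounts").paginate():
            for account in page["Accounts"]:
                # Accounts carry at most a handful of tags, so one
                # unpaginated call per account is fine here.
                tags = org.list_tags_for_resource(ResourceId=account["Id"])["Tags"]
                if {"Key": tag_key, "Value": tag_value} in tags:
                    matches.append(account["Id"])
        return matches

    print(accounts_with_tag("team", "data-platform"))

That the real version needs one API call per account is probably why a single get_account_numbers_by_tag looked plausible enough to make up.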

5. lhuser+Is1 2022-12-12 05:33:36
>>scarfa+1r1
It sounds a lot like the normal thinking errors that we make.
6. scarfa+sw1 2022-12-12 06:15:09
>>lhuser+Is1
The second time, it gave me code that was almost right.

Just now I asked:

"Write a Python script that returns all of the accounts in an AWS organization with a given tag, where the user specifies the tag key and value using command line arguments."

I thought the code had to be wrong because it used concepts I had never heard of. This time it used the resource group API.

I had never heard of the API, but it does exist. I also couldn't find sample code on the internet that did anything similar, but from looking at the documentation it should work. I learned something new today.
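
For the curious, here's a rough sketch along those lines (my reconstruction, not its verbatim output): the resourcegroupstaggingapi client and its get_resources call are real boto3; whether tagged organization accounts actually come back from it is exactly the part to verify against the docs.

    # Rough reconstruction, not verbatim ChatGPT output. Whether tagged
    # Organizations accounts appear in get_resources results is the
    # assumption worth double-checking in the docs.
    import argparse

    import boto3

    def main() -> None:
        parser = argparse.ArgumentParser(
            description="Print ARNs of resources carrying a given tag."
        )
        parser.add_argument("--tag-key", required=True)
        parser.add_argument("--tag-value", required=True)
        args = parser.parse_args()

        tagging = boto3.client("resourcegroupstaggingapi")
        for page in tagging.get_paginator("get_resources").paginate(
            TagFilters=[{"Key": args.tag_key, "Values": [args.tag_value]}]
        ):
            for resource in page["ResourceTagMappingList"]:
                print(resource["ResourceARN"])

    if __name__ == "__main__":
        main()

Invoked as, e.g., python find_by_tag.py --tag-key team --tag-value data-platform (the script name is whatever you save it as).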

BTW, for context: when I claimed to be a “subject matter expert” above, it's because I work at AWS in Professional Services, I code most days using the AWS API, and I would never have thought of the solution it gave me.
