zlacker

[return to "Memory and new controls for ChatGPT"]
1. anothe+Pf[view] [source] 2024-02-13 19:29:04
>>Josely+(OP)
This is a bit off-topic from the actual article, but I see a lot of top-ranked comments complaining that ChatGPT has become lazy at coding. I wanted to make two observations:

1. Yes, GPT-4 Turbo is quantitatively getting lazier at coding. I benchmarked the last 2 updates to GPT-4 Turbo, and it got lazier each time.

2. For coding, asking GPT-4 Turbo to emit code changes as unified diffs causes a 3X reduction in lazy coding.

Here are some articles that discuss these topics in much more detail.

https://aider.chat/docs/unified-diffs.html

https://aider.chat/docs/benchmarks-0125.html
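
To make point 2 concrete: instead of asking for whole files back, you ask the model to reply with just the changed hunks in unified diff format. A minimal illustration (the file name and the change are made up for the example):

  --- a/greet.py
  +++ b/greet.py
  @@ -1,2 +1,2 @@
   def greet(name):
  -    print("Hello")
  +    print(f"Hello, {name}!")

The format gives the model no natural way to say "the rest of the file is unchanged" except by simply not repeating it, which seems to be why it cuts down on lazy replies; the linked articles go into the details and numbers.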

◧◩
2. omalle+6q[view] [source] 2024-02-13 20:21:14
>>anothe+Pf
Can you say in one or two sentences what you mean by “lazy at coding” in this context?
◧◩◪
3. Me1000+Mq[view] [source] 2024-02-13 20:24:55
>>omalle+6q
It has a tendency to do:

"// ... the rest of your code goes here"

in its responses, rather than writing it all out.

◧◩◪◨
4. asaddh+Ds[view] [source] 2024-02-13 20:36:32
>>Me1000+Mq
It's incredibly lazy. I've tried to coax it into returning the full code, and it will claim to follow the instructions while regurgitating the same output you complained about. GPT-4 was great; the first version of GPT-4 Turbo was pretty terrible, bordering on unusable; then they came out with the second Turbo version, which almost feels worse to me. I haven't compared them directly, though, and if someone claims they fixed an issue but you still see it, that will bias you to notice it more.

Claude is doing much better in this area, and local/open LLMs are getting quite good. It feels like OpenAI is not heading in a good direction here, and I hope they course-correct.

◧◩◪◨⬒
5. mister+PA[view] [source] 2024-02-13 21:21:44
>>asaddh+Ds
I have a feeling full-powered LLMs are reserved for the more equal animals.

I hope some people remember and document the details of this era; future generations may be so impressed with their reality that they may not even think to question its fidelity, if that concept even exists in the future.

◧◩◪◨⬒⬓
6. bbor+tE[view] [source] 2024-02-13 21:43:28
>>mister+PA
…could you clarify? Is this about “LLMs can be biased, thus making fake news a bigger problem”?
◧◩◪◨⬒⬓⬔
7. _puk+AP[view] [source] 2024-02-13 22:47:23
>>bbor+tE
Imagine if the first version of ChatGPT we all saw was fully sanitised...

We know it knows how to make gunpowder (for example), but only because it would initially tell us.

Now it won't without a lot of trickery. Would we even be pushing to try and trick it into doing so if we didn't know it actually could?

◧◩◪◨⬒⬓⬔⧯
8. bbor+Qk1[view] [source] 2024-02-14 02:58:25
>>_puk+AP
Ah, so it’s more about “forbidden knowledge” than “fake news”; that makes sense. I don’t personally see that as too much of an issue, since other sources still exist, e.g. Wikipedia, the Internet Archive, libraries, or that one Minecraft Library of Alexandria project. So I see knowledge storage staying there, and LLMs staying put in the interpretation/transformation role, for the foreseeable future.

But obviously all that social infrastructure is fragile… so you’re not wrong to be alarmed, IMO

◧◩◪◨⬒⬓⬔⧯▣
9. antupi+Zq1[view] [source] 2024-02-14 03:51:23
>>bbor+Qk1
It is not that much about censorship; even that would be somewhat fine if OpenAI did it at the dataset level, so ChatGPT would not have any knowledge of bomb-making. But it is happening lazily, so system prompts get bigger, which makes the signal-to-noise ratio worse, and so on. I don't care about racial bias or what to call the pope when I want ChatGPT to write Python code.
◧◩◪◨⬒⬓⬔⧯▣▦
10. ben_w+ZY1[view] [source] 2024-02-14 10:23:12
>>antupi+Zq1
> It is not that much about censorship; even that would be somewhat fine if OpenAI did it at the dataset level, so ChatGPT would not have any knowledge of bomb-making

While I would agree that "don't tell it how to make bombs" seems like a nice idea at first glance, and indeed I think I've had that attitude myself in previous HN comments, I currently suspect that it may be insufficient and that a censorship layer may be necessary (partly as an addition, partly as an alternative).

I was taught, in secondary school, two ways to make a toxic chemical using only things found in a normal kitchen. In both cases, I learned this by being warned of what not to do because of the danger it posed.

There are a lot of ways to be dangerous, and I'm not sure how to get an AI to avoid dangers without it knowing about them. That said, we've got a sense of disgust that tells us to keep away from rotting flesh without any explicit knowledge of germ theory, so something similar may be possible, although research would be necessary; and as a proxy rather than the real thing, it would suffer from increased rates of both false positives and false negatives. Nevertheless, I certainly hope it is possible, because anyone with the model weights can extract directly modelled dangers, which may be a risk all by itself if you want to prevent terrorists from using one to make an NBC weapon.

> I don't care about racial bias or what to call the pope when I want ChatGPT to write Python code.

I recognise my mirror image. It may be a bit of a cliché for a white dude to say they're "race blind", but I have literally been surprised to learn coworkers have faced racial discrimination for being "black" when their skin looks like mine in the summer.

I don't know any examples of racial biases in programming[1], but I can see why it matters. None of the code I've asked an LLM to generate has involved `Person` objects in any sense, so while I've never had an LLM inform me about racial issues in my code, this is neither positive nor negative anecdata.

The etymological origin of the word "woke" is from the USA about 90-164 years ago (the earliest examples preceding and being intertwined with the Civil War), meaning "to be alert to racial prejudice and discrimination". In the later years of that era, that discrimination included (amongst other things) redlining[0], the original form of which was withholding services from neighbourhoods with significant numbers of ethnic minorities: constructing a status quo where the people in charge can say "oh, we're not engaging in illegal discrimination on the basis of race, we're discriminating against the entirely unprotected class of 'being poor' or 'living in a high-crime area' or 'being uneducated'".

The reason I bring that up is that all kinds of things like this can seep into our mental models of how the world works, from one generation to the next, and lead people who would never knowingly discriminate to perpetuate the same things.

Again, I don't actually know any examples of racial biases in programming, but I do know it's a thing with gender. It's easy (even "common sense") to mark gender as a boolean, but even ignoring trans issues: if that's a non-optional field, what's the default gender? And what's it being used for? Because if it is only used for a title (Mr./Mrs.), what about other titles? "Doctor" is un-gendered in English, but in Spanish it's "doctor"/"doctora". What matters here is what you're using the information for, rather than just what you're storing in an absolute sense: in a medical context, you wouldn't need to offer cervical cancer screening to trans women (unless the medical tech is more advanced than I realised).
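
As a minimal sketch of the modelling choice I mean (the names here are purely illustrative, not from any real codebase): store the thing you actually use, and make it optional, rather than deriving it from a gender boolean with a baked-in default.

  import enum
  from dataclasses import dataclass
  from typing import Optional

  class Title(enum.Enum):
      """Store the title itself, rather than deriving it from gender."""
      MR = "Mr."
      MRS = "Mrs."
      MS = "Ms."
      DR = "Dr."   # un-gendered in English, gendered in Spanish
      MX = "Mx."

  @dataclass
  class Person:
      name: str
      # A non-optional `is_male: bool` forces a default gender on every
      # record; optional fields keep "unknown" representable.
      title: Optional[Title] = None
      gender: Optional[str] = None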

[0] https://en.wikipedia.org/wiki/Redlining

[1] unless you count AI needing a diverse range of examples, which you may or may not count as "programming"; other than that, the closest would be things like "master branch" or "black-box testing" which don't really mean the things being objected to, but were easy to rename anyway
