zlacker

[return to "ChatGPT Containers can now run bash, pip/npm install packages and download files"]
1. dangoo+821[view] [source] 2026-01-27 01:14:20
>>simonw+(OP)
Giving agents Linux has compounding benefits in our experience. They're able to sort through weirdness that normal tooling wouldn't allow. For example, they can read an image, get an error back from the API, and see it wasn't the expected format. They read the magic bytes, see it was a JPEG despite being named .png, and handle it correctly.
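What the agent did there can be reproduced by hand. A minimal sketch (the filename sun.png and the written bytes are illustrative, not from the thread): JPEG files begin with the marker bytes ff d8, while real PNG files begin with 89 50 4e 47, so reading the first few bytes exposes a misnamed file.

```shell
# Create a "JPEG misnamed as .png": write the JPEG SOI/APP0 marker bytes
# (ff d8 ff e0) using octal escapes, which POSIX printf supports.
printf '\377\330\377\340' > sun.png

# Inspect the first two bytes. A genuine PNG would begin 89 50 ('\x89P');
# ff d8 identifies the file as a JPEG regardless of its extension.
head -c 2 sun.png | od -An -tx1
```

`od -An -tx1` prints the raw bytes in hex without offsets, so the extension never enters into it.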
2. lpcvoi+Ds2[view] [source] 2026-01-27 13:30:41
>>dangoo+821
I don't understand why this is something special that somebody would need some LLM slop generation for. Any human can do this in a few seconds with normal Unix tooling.
3. darkno+Aw2[view] [source] 2026-01-27 13:54:37
>>lpcvoi+Ds2
I think you'd find that it's far from "any human" who can do this without looking anything up. I have 15 years of dev experience and couldn't do this from memory on the CLI. Maybe in C, but that's less helpful for getting stuff done!
4. lpcvoi+Ez2[view] [source] 2026-01-27 14:10:18
>>darkno+Aw2

  # curl -s https://upload.wikimedia.org/wikipedia/commons/6/61/Sun.png | file -
  /dev/stdin: PNG image data, 256 x 256, 8-bit/color RGBA, non-interlaced
That's it, two utilities almost everybody has installed.
5. simonw+FA2[view] [source] 2026-01-27 14:15:06
>>lpcvoi+Ez2
ChatGPT has 800 million monthly users. The fraction of those who are comfortable opening a terminal and running those commands is pretty tiny.
6. lpcvoi+iC2[view] [source] 2026-01-27 14:21:41
>>simonw+FA2
If 800m people think delegating their thinking to a slop generator is fine, that's not my loss. It's bad for humanity, but who even cares anymore in 2026, right?
7. simonw+VI2[view] [source] 2026-01-27 14:48:34
>>lpcvoi+iC2
"Delegating thinking" and "figuring out how to determine an image format from the first few bytes of a file" are not the same thing.
8. lpcvoi+FK2[view] [source] 2026-01-27 14:56:07
>>simonw+VI2
I disagree; in my opinion it's the exact same process, just on a much smaller scale. It's a problem, and we humans are good at solving problems. That is, until LLMs arrived; now we are supposed to become good at prompting, or something.