zlacker

[return to "ChatGPT Is a Gimmick"]
1. danlit+ag[view] [source] 2025-05-22 07:43:31
>>blueri+(OP)
It is refreshing to see I am not the only person who cannot get LLMs to say anything valuable. I have tried several times, but the cycle "You're right to question this. I actually didn't do anything you asked for. Here is some more garbage!" gets really old really fast.

It makes me wonder whether everyone else is kidding themselves, or if I'm just holding it wrong.

2. lovepa+Hi[view] [source] 2025-05-22 08:07:36
>>danlit+ag
I use LLMs to check solutions for graduate level math and physics problem I'm working on. Can I 100% trust their final output? Of course not, but I know enough about the domain to tell whether they discovered mistakes in my solutions or not. And they do a pretty good job and have found mistakes in my reasoning many times.

I also use them for various coding tasks; together with agent frameworks, they regularly do refactorings or small feature implementations in 1-2 minutes that would've taken me 10-20 minutes. They've probably increased my developer productivity by 2-3x overall, and by a lot more when I'm working with technology stacks I'm not so familiar with or haven't touched for a while. And I've been an engineer for almost 30 years.

So yeah, I think you're just using them wrong.

3. bsaul+lk[view] [source] 2025-05-22 08:22:22
>>lovepa+Hi
i could have written all of this myself. I use it exactly for the same purposes ( except i don't do undergrad physics, just maths) and with the same outcome.

They're also pretty useful for brainstorming: talking to an AI helps you refine your thoughts. It probably won't give you any innovative ideas, only a survey of mainstream ones, but that's a pretty good starting point for thinking about a problem.
