zlacker

[return to "My AI skeptic friends are all nuts"]
1. fjfaas+U91[view] [source] 2025-06-03 07:52:59
>>tablet+(OP)
When I ask ChatGPT to generate code for an M4F MCU that implements VirtIO access to the GPIO over RPMsg using FreeRTOS, it produces two answers that are both incomplete and incorrect.
2. cdrini+Uf1[view] [source] 2025-06-03 08:58:24
>>fjfaas+U91
This is very outside my domain :P I asked ChatGPT to explain the acronyms in your comment and I still don't understand it. But I think one reason LLMs might struggle is that there are too many complex up-front requirements.

I've found it more effective to give it only one or two complex requirements at a time, and then have it iterate.

Most LLMs don't "think", so when asking an LLM something, I generally ask myself: "would I be able to do this without thinking, if I had all the knowledge but just had to start typing and go?"

You could break down your prompt into separate prompts like this maybe: https://chatgpt.com/share/683eb7d7-e7ec-8012-8b3b-e34d523dc9...

I think it broke things down in a weird way, but I definitely can't analyse the correctness of anything it outputs in this domain :P

Coding-specific agents like Copilot might handle a complex initial prompt better, since they take the initial prompt and use LLMs to break it down into smaller steps, which ChatGPT doesn't do. They can sort of "think". Deep-research AIs have a sort of thinking too, so they might do better as well.
