I think in code.
To me, having to translate that into natural language just so the LLM can translate it back into code makes very little sense.
Am I alone in this camp? What am I missing?
But if you don't have the shape of a solution? It might be faster to have an AI find it, and then either accept the AI's solution as-is or work off it.
I quite often prompt with code in a different language, or pseudo-code describing roughly what I am trying to achieve, or a Python function signature without the function body.
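For example, a prompt can be nothing more than a bare signature with a one-line docstring. The names below are hypothetical, just to show the shape:

```python
from datetime import datetime

def fetch_recent_commits(repo: str, since: datetime, limit: int = 50) -> list[dict]:
    """Return up to `limit` commits made to `repo` after `since`."""
    ...
```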
Or I will paste in a bunch of code I have already written with a comment somewhere that says "TODO: retrieve the information from the GitHub API" and have the model finish it for me.
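Something along these lines, say (names are hypothetical; the TODO comment is effectively the whole prompt):

```python
def open_issue_titles(repo: str) -> list[str]:
    """Return the titles of currently open issues for a GitHub repo."""
    # TODO: retrieve the information from the GitHub API
    ...

def report(repo: str) -> None:
    # Code I have already written; the model sees how the result gets used.
    for title in open_issue_titles(repo):
        print(f"- {title}")
```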
This, and it also works for multiple functions that end up composing well together, as per their signatures. Maybe there's one public function I want to document well, so I write the docstring myself, and its result is composed from 3-4 other internal functions which I let the LLM implement.
The nice part is that even if the LLM fails, none of that work is lost, unlike some weird spec that's too verbose for a human reader, or a series of throwaway prompts.
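A sketch of that pattern, with hypothetical names: one documented public function whose body is just composition, and internal helpers left as typed stubs for the model to fill in:

```python
def repo_activity_summary(repo: str, days: int = 7) -> str:
    """Summarize recent activity in `repo` over the last `days` days.

    Combines commit volume and open issue counts into a short
    human-readable report.
    """
    commits = _recent_commits(repo, days)
    issues = _open_issues(repo)
    return _format_summary(repo, commits, issues)

# Internal helpers: I write only the signatures; the LLM fills in the bodies.
# If it fails, the decomposition and the docstring above still stand.
def _recent_commits(repo: str, days: int) -> list[dict]: ...
def _open_issues(repo: str) -> list[dict]: ...
def _format_summary(repo: str, commits: list[dict], issues: list[dict]) -> str: ...
```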
Natural language is just a terrible interface, and fundamentally not an appropriate one for communicating with a computer.
I wonder if I'm in the minority here because I'm neurodivergent.