As with any other project, it's better to specify your wants and needs than to let someone, or an LLM, guess.
I've been developing my prompting skills for nearly three years now, and I still constantly find new and better ways to prompt.
I also consider knowing what "use a reasoning model" means to be part of that skill!
I just tell the AI what I want, with sufficient context. Then I read the reasoning trace to confirm it understood what I wanted. You need to be clear in your prompts, sure, but I don't really see it as "prompt engineering" any more.