I, knowing far less than he does, would've written a much more elaborate prompt, and o3 would've come across as far more competent and capable. But my friend knows so much already, and has such a high bar, that he expects the AI to do a lot more with just a few basic words in a prompt... and, for that very reason, he (understandably) doubts the inevitable sub-par output.
That's what makes all these debates about "Why are smart people doubting LLMs??" so pointless. The smarter you are, the less help you need, so the less prompting you do, the less context the model has, the less impressive the output, and the more you conclude that LLMs suck. By this logic, of course the smartest people are also the biggest skeptics!