We expect images that look like photographs — at least when taken by amateurs — to be the result of a documentary process, rather than an artistic one. They might be slightly filtered or airbrushed, but they won't be made up out of whole cloth.
But amateur photography is actually the outlier, in the history of "capturing memories"!
Imagine yourself before the invention of photography, describing your vacation to an illustrator you've commissioned to create some woodblock-print artwork for a set of Christmas cards you're having made up. The conversation you've laid out here is exactly how things would go: they'd ask you to recount what you saw, do a sketch, and then you'd give feedback and iterate together to get a final visual down — one that reflects things the way you remember them, rather than the way they were, per se.
Indeed, people viewing photographs have always been susceptible to manipulations that present something untrue as fact: you dress up smart, in borrowed clothes, when you're really poor; you stand with a stranger to imply an association; you get photographed with a dead person posed as if they're alive; you use a backdrop or set; et cetera.
I think a use case for AI image manipulation could be more like this: I need a picture where I'm poor but wearing smart borrowed clothes, standing with a stranger posed as an associate and a dead person posed as alive, against a backdrop — with the only source image being a selfie of someone else that incidentally caught half of me way in the background.
The intents or use cases for these two kinds of (lacking a better term) manipulation aren't equivalent here. The purpose of AI image generation is, well, images generated by AI. It could technically generate images that misrepresent information, but that's a side effect, reached in a totally different way than staging a scene in an actual photo. Using manipulation to stage misleading photos, by contrast, seems to be done primarily for deceptive activities or subversive fuckery.