Indeed, photographs have always been able to manipulate viewers by presenting as fact something that is not true: you dress up smart in borrowed clothes when you're really poor; you stand next to a stranger to imply association; you get photographed with a dead person posed as if they're alive; you use a backdrop or set; et cetera.
I think a use case for AI image manipulation would be more like: I need a picture where I'm poor but wearing smart borrowed clothes, standing with a stranger and a dead person posed as alive, in front of a backdrop, and the only source image is someone else's selfie that incidentally caught half of me way off in the background.
The intents or use cases for these two (for lack of a better term) manipulators aren't the same here. The purpose of AI image generation is, well, images generated by AI. It can technically generate images that misrepresent information, but that's more of a side effect, reached in a totally different way than staging a scene in an actual photo. Using manipulation to stage misleading photos, by contrast, seems to exist primarily for deceptive activities or subversive fuckery.