Not necessarily, if the model is trained with an appropriate adversarial loss. The reason VAEs are blurry isn't directly the KL divergence term but the L1/L2 reconstruction loss. Since VAEs sample from a Gaussian, a high KL weight pushes the posteriors of different images to overlap (collapsing them all toward the standard normal prior, with mean 0 and variance 1), and the decoder's output then tends toward the mean of all plausible values in order to minimise the pixel-wise loss.
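A minimal sketch of the standard beta-VAE objective makes the two terms explicit (PyTorch assumed; the encoder/decoder and the KL weight `beta` are placeholders, not anyone's specific implementation):

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    # Pixel-wise L2 loss: pushes the decoder toward the per-pixel mean
    # of all plausible outputs for a given latent -- the source of blur.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ). A large `beta`
    # squeezes every image's posterior toward the same prior, so the
    # latents of different images overlap.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```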
With an appropriate GAN loss, you will instead get a plausible, sharp image that differs more and more from the original as you weight the KL term more heavily. A classic GAN, which samples directly from the normal prior, in fact achieves the best possible KL divergence (its latent distribution matches the prior exactly, so the KL is zero) with none of the blurriness of a VAE's pixel-based loss.
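For illustration, here is a hedged sketch of swapping the blur-inducing pixel loss for an adversarial one (in the spirit of VAE-GAN); `discriminator` is a hypothetical module producing a real-vs-fake logit, and the KL term is the same as above:

```python
import torch
import torch.nn.functional as F

def vae_gan_generator_loss(x_recon, mu, logvar, discriminator, beta=1.0):
    # Non-saturating GAN loss: the decoder is rewarded for fooling the
    # discriminator, i.e. for producing a *plausible* sharp image rather
    # than the per-pixel average of all plausible ones.
    logits_fake = discriminator(x_recon)
    adv = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Raising `beta` still collapses posteriors toward the prior, but the
    # output stays sharp; it just drifts further from the specific input.
    return adv + beta * kl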