1. zimpen+(OP) 2022-05-24 09:55:20
I don't have access to DALL-E 2 or Imagen, but I do have [1] and [2] running locally, and they produced [3] with that prompt.

[1] https://github.com/nerdyrodent/VQGAN-CLIP.git
[2] https://github.com/CompVis/latent-diffusion.git
[3] https://imgur.com/a/dCPt35K
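
For reference, a minimal sketch of how each repo is typically invoked for text-to-image generation; the entry points and flags below are taken from the upstream READMEs and are assumptions, not something stated in the comment:

    # [1] VQGAN-CLIP (nerdyrodent): pass the text prompt with -p
    python generate.py -p "your prompt here"

    # [2] latent-diffusion (CompVis): text-to-image sampling script
    python scripts/txt2img.py --prompt "your prompt here" --ddim_steps 50 --scale 5.0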

replies(1): >>Nition+CT1
2. Nition+CT1 2022-05-24 20:40:31
>>zimpen+(OP)
Nice. The latent-diffusion results have come out very traditional, but the VQGAN+CLIP ones are fairly original.
replies(1): >>zimpen+YW1
3. zimpen+YW1 2022-05-24 21:04:54
>>Nition+CT1
From my experiments, the LD model doesn't seem to have been trained on as big or as well-tagged a data set - there's a whole bunch of "in the style of X" prompts that the VQGAN knows about but the LD doesn't. That might have something to do with it.