Interesting to me that this one can draw legible text. DALLE models seem to generate weird glyphs that only look like text. The examples they show here have perfectly legible characters and correct spelling. The difference between this and DALLE makes me suspicious / curious. I wish I could play with this model.
>>ALittl+(OP)
Imagen takes text embeddings; the OpenAI model takes image embeddings instead, and that's the reason. There are other models that can generate text: latent diffusion trained on LAION-400M, GLIDE, DALL-E (1).
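Roughly, as a toy sketch (PyTorch, with made-up dimensions and layers, not the actual Imagen/DALL-E 2 code), the difference is what the denoiser gets to attend to:

    import torch
    import torch.nn as nn

    # Invented dimensions, just to show what the diffusion decoder is conditioned on.
    seq_len, txt_dim, img_emb_dim = 77, 1024, 768

    # Imagen-style: a full sequence of per-token text embeddings, so the
    # identity of each word/subword in the caption survives.
    text_tokens = torch.randn(1, seq_len, txt_dim)       # [batch, tokens, dim]

    # DALL-E 2 (unCLIP)-style: a single pooled CLIP image embedding, which
    # summarizes the caption's "meaning" but not its spelling.
    clip_image_embedding = torch.randn(1, img_emb_dim)   # [batch, dim]

    # Toy cross-attention: the image features (queries) can look at individual
    # caption tokens only in the first setup.
    cross_attn = nn.MultiheadAttention(embed_dim=txt_dim, num_heads=8, batch_first=True)
    image_features = torch.randn(1, 64 * 64, txt_dim)    # flattened latent "pixels"
    attended, _ = cross_attn(image_features, text_tokens, text_tokens)
    print(attended.shape)  # torch.Size([1, 4096, 1024])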
>>GaggiX+Ka
My understanding of the terms "text embeddings" and "image embeddings" is that they are ways of representing text or images as vectors. But I don't understand how that would help with the process of actually drawing the symbols for those letters.
>>sdento+gs
I blame it on the surprising structural cleverness of a bicycle. Opposing triangles probably aren't the first thing most people think of when they think of a bicycle (vs. two wheels and some handlebars).
>>ALittl+Ql
If the model takes text embeddings/tokens as an input, it can create a connection between the caption and the text on the image (sometimes they are really similar).
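Concretely (a hypothetical example: Imagen conditions on a frozen T5 text encoder, but "t5-small" here is just a stand-in for illustration), the string you want drawn is literally present as subword tokens in the conditioning sequence:

    from transformers import AutoTokenizer

    # Stand-in checkpoint; Imagen actually uses a much larger frozen T5 encoder.
    tok = AutoTokenizer.from_pretrained("t5-small")
    caption = 'a storefront with a sign that says "diffusion"'

    # The word that should appear in the image survives as concrete subword
    # tokens that the denoiser can cross-attend to and effectively copy.
    print(tok.tokenize(caption))

    # A single pooled image embedding, by contrast, summarizes the caption as
    # one vector, so the literal spelling is no longer directly recoverable.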
>>ALittl+(OP)
The latent-diffusion[1] one I've been playing with is not terrible at drawing legible text, but it's generally awful at actually drawing the text you want (cf. [2]), or it draws text when you don't want any.
>>ALittl+(OP)
DALL-E 1 was able to render text[0]. That DALL-E 2 can't is probably a tradeoff introduced by unCLIP in exchange for more diverse results. Now the Google model is better still and doesn't have to make that tradeoff.
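A very rough sketch of why (toy code with invented dimensions; the real unCLIP prior is itself a diffusion or autoregressive model over CLIP embeddings):

    import torch
    import torch.nn as nn

    text_dim = img_dim = 768  # invented, just for illustration

    # unCLIP stage 1 ("prior"): map a pooled CLIP text embedding to a CLIP
    # image embedding. Sampling different image embeddings for the same
    # caption is where the diversity comes from.
    prior = nn.Sequential(nn.Linear(text_dim, 1024), nn.GELU(), nn.Linear(1024, img_dim))

    text_emb = torch.randn(2, text_dim)                        # pooled caption embeddings
    img_emb = prior(text_emb) + 0.1 * torch.randn(2, img_dim)  # stochastic sample

    # unCLIP stage 2 ("decoder"): a diffusion model conditioned only on img_emb.
    # By this point the token-level spelling of the caption has been squeezed
    # through two pooled vectors, which is the tradeoff being described.
    print(img_emb.shape)  # torch.Size([2, 768])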
>>Tehdas+Q6
I only see the problem for the paintings. If you choose a photo it's good. Could be a problem in the source data (i.e. paintings of mechanical objects are imperfect).