I’m the Co-founder and CTO of Krea. We’re excited about this one; we’ve wanted to release the weights for our model and share it with the HN community for a long time.
My team and I will try to stay online throughout the day to answer any questions you may have.
It’s simple: hackability and recruiting!
Having the open-source community hack on and play with the model, plus reaching talented engineers who may be interested in working with us, already makes this release worth it. A single talented distributed-systems engineer has a lot of impact here.
Also, the company ethos is built around AI hackability/controllability, a high bar for talent, and AI for creatives, so this aligns perfectly.
The fact that Krea serves both in-house and third-party models tells you we are not that bullish on models being a moat.
My reasoning: if the user types "a cat reading a book", the result should look like a real cat that is actually reading a book. It obviously shouldn't have an "AI style", but it also shouldn't come out as an illustration, painting, or anything otherwise unrealistic. Without further context, a "cat" means a photorealistic cat, not an illustration, painting, or cartoon of one.
In short, it seems that users who want something other than realism should be expected to mention it in the prompt. Or am I missing some other nuances here?