
1. cellis+(OP) 2022-05-23 22:04:23
Inference times are key. If an image can't be produced within reasonable latency, there will be no real-world use case for it, because it's simply too expensive to run inference at scale.
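
To make "too expensive at scale" concrete, here's a rough back-of-envelope sketch in Python (every number below is an assumption, not a measurement):

    # Illustrative inference-cost arithmetic; all figures are assumed.
    GPU_COST_PER_HOUR = 3.00   # assumed cloud price for one high-end GPU, $/hr
    SECONDS_PER_IMAGE = 60     # assumed sampling time per image on that GPU

    cost_per_image = GPU_COST_PER_HOUR * SECONDS_PER_IMAGE / 3600
    print(f"cost per image: ${cost_per_image:.3f}")   # -> $0.050

    # At consumer scale, the pennies add up quickly:
    images_per_day = 10_000_000
    print(f"daily cost: ${cost_per_image * images_per_day:,.0f}")  # -> $500,000

Under those assumptions you're paying ~5 cents per image, which is fine for a one-off logo but ruinous for anything generated on every page load.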
replies(2): >>thepti+n2 >>dougmw+ul1
2. thepti+n2 2022-05-23 22:16:46
>>cellis+(OP)
There are plenty of use cases for generating art/images where a latency of days or weeks would be competitive with the current state of the art.

For example, corporate graphics design, logos, brand photography, etc.

I really do think inference time is a red herring for the first generation of these models.

Sure, the more transformative use cases, like real-time content generation to replace movies/games, will need fast inference, but there is a lot of value to be created prior to that point.

3. dougmw+ul1 2022-05-24 11:15:00
>>cellis+(OP)
There's been much prior work on taking these models down from datacenter size to single-GPU size. Given continued work in that area and improving GPU performance, it seems like it's only a matter of years before inference is cheap and local even for the most impressive generations.
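
As a minimal sketch of the memory side of that argument (the parameter count is an assumption, not a figure from any specific model):

    # Why lower precision helps a model fit on a single GPU.
    params = 3e9                    # assumed ~3B-parameter image model
    bytes_per_param = {"fp32": 4, "fp16": 2, "int8": 1}

    for dtype, nbytes in bytes_per_param.items():
        gb = params * nbytes / 1e9
        print(f"{dtype}: ~{gb:.0f} GB of weights")
    # fp32: ~12 GB, fp16: ~6 GB, int8: ~3 GB -- the reduced-precision
    # variants fit in a 24 GB consumer GPU, with room left for activations.

That's weights only, but it shows why precision reduction alone closes much of the datacenter-to-desktop gap.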