zlacker

[parent] [thread] 2 comments
1. accrua+(OP)[view] [source] 2025-12-06 23:56:10
Did you use PyTorch Native or Diffusers Inference? I couldn't get the former working yet, so I used Diffusers, but it's terribly slow on my 4080 (4 min/image). Trying again with PyTorch now; it seems Diffusers is expected to be slow.
replies(1): >>Wowfun+D
2. Wowfun+D[view] [source] 2025-12-07 00:01:47
>>accrua+(OP)
Uh, not sure? I downloaded the portable build of ComfyUI and ran the CUDA-specific batch file it comes with.

(I'm not used to using Windows and I don't know how to do anything complicated on that OS. Unfortunately, the computer with the big GPU also runs Windows.)
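For anyone else on Windows, the portable-build route sketched above is roughly the following (file and launcher names are assumptions based on typical ComfyUI portable releases; check your actual download):

```shell
# Assumed steps for the ComfyUI portable build on Windows (names may vary by release):
# 1. Download and extract the portable archive from the ComfyUI releases page.
# 2. From the extracted folder, double-click (or run) the CUDA launcher batch file,
#    commonly named something like run_nvidia_gpu.bat.
# 3. Open the local URL it prints (typically http://127.0.0.1:8188) in a browser,
#    then drag the provided workflow file into the UI to load it.
```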

replies(1): >>accrua+23
3. accrua+23[view] [source] [discussion] 2025-12-07 00:19:38
>>Wowfun+D
Haha, I know how it goes. Thanks, I'll give that a try!

Update: works great and much faster via ComfyUI + the provided workflow file.
