zlacker

[parent] [thread] 74 comments
1. crazyg+(OP)[view] [source] 2025-12-05 00:58:24
Wow. To me, the big news here is that ~30% of devices now support AV1 hardware decoding. The article lists a bunch of examples of devices that have gained it in the past few years. I had no idea it was getting that popular -- fantastic news!

So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support, I wonder what will be the next one?

replies(10): >>JoshTr+o >>dylan6+F >>dehrma+M1 >>snvzz+y2 >>thrdbn+06 >>0manrh+Xa >>vitorg+Jy >>alex_d+XE >>mort96+4Z >>hulitu+0t5
2. JoshTr+o[view] [source] 2025-12-05 01:01:21
>>crazyg+(OP)
> So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support, I wonder what will be the next one?

Hopefully AV2.

replies(2): >>jshear+W >>hulitu+Ct5
3. dylan6+F[view] [source] 2025-12-05 01:02:51
>>crazyg+(OP)
how does that mean "~30% of devices now support AV1 hardware encoding"? I'm guessing you meant hardware decoding???
replies(1): >>crazyg+l1
◧◩
4. jshear+W[view] [source] [discussion] 2025-12-05 01:04:22
>>JoshTr+o
H266/VVC has a five-year head start over AV2, so probably that first, unless hardware vendors decide to skip it entirely. The final AV2 spec is due this year, so any day now, but it'll take a while to make its way into hardware.
replies(5): >>adgjls+u1 >>kevinc+K2 >>adzm+W3 >>shmerl+Ba >>lambda+SY4
◧◩
5. crazyg+l1[view] [source] [discussion] 2025-12-05 01:06:43
>>dylan6+F
Whoops, thanks. Fixed.
◧◩◪
6. adgjls+u1[view] [source] [discussion] 2025-12-05 01:07:51
>>jshear+W
H266 is getting fully skipped (except possibly by Apple). The licensing is even worse than H265, the gains are smaller, and Google+Netflix have basically guaranteed that they won't use it (in favor of AV1 and AV2 when ready).
replies(2): >>johnco+VX >>TitaRu+kX1
7. dehrma+M1[view] [source] 2025-12-05 01:10:05
>>crazyg+(OP)
Not trolling, but I'd bet something that's augmented with generative AI. Not to the level of describing scenes with words, but context-aware interpolation.
replies(5): >>randal+z2 >>km3r+C3 >>afiori+iE >>mort96+fZ >>cubefo+Jy1
8. snvzz+y2[view] [source] 2025-12-05 01:16:34
>>crazyg+(OP)
>So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support

That'd be h264 (associated patents expired in most of the world), vp9 and av1.

h265 aka HEVC is less common due to dodgy, abusive licensing. Some vendors even disable it in their drivers despite hardware support, because it is nothing but legal trouble.

replies(1): >>ladyan+MC
◧◩
9. randal+z2[view] [source] [discussion] 2025-12-05 01:16:37
>>dehrma+M1
for sure. macroblock hinting seems like a good place for research.
◧◩◪
10. kevinc+K2[view] [source] [discussion] 2025-12-05 01:17:46
>>jshear+W
If it has a five-year head start and we've seen almost zero hardware shipping, that is a pretty bad sign.

IIRC AV1 decoding hardware started shipping within a year of the bitstream being finalized. (Encoding took quite a bit longer but that is pretty reasonable)

replies(1): >>jshear+t4
◧◩
11. km3r+C3[view] [source] [discussion] 2025-12-05 01:25:32
>>dehrma+M1
https://blogs.nvidia.com/blog/rtx-video-super-resolution/

We already have some of the stepping stones for this. But honestly it's much better for upscaling poor-quality streams; on a better-quality stream it just gives things a weird feeling.

◧◩◪
12. adzm+W3[view] [source] [discussion] 2025-12-05 01:29:07
>>jshear+W
VVC is pretty much a dead end at this point. Hardly anyone is using it; its benefits over AV1 are extremely minimal and no one wants the royalty headache. Basically everyone learned their lesson with HEVC.
replies(1): >>ksec+sb1
◧◩◪◨
13. jshear+t4[view] [source] [discussion] 2025-12-05 01:33:22
>>kevinc+K2
https://en.wikipedia.org/wiki/Versatile_Video_Coding#Hardwar...

Yeah, that's... sparse uptake. A few smart TV SoCs have it, but aside from Intel it seems that none of the major computer or mobile vendors are bothering. AV2 next it is then!

14. thrdbn+06[view] [source] 2025-12-05 01:45:40
>>crazyg+(OP)
I'm not too surprised. It's similar to the metric that "XX% of Internet is on IPv6" -- it's almost entirely driven by mobile devices, specifically phones. As soon as both mainstream Android and iPhones support it, the adoption of AV1 should be very 'easy'.

(And yes, even for something like Netflix lots of people consume it with phones.)

◧◩◪
15. shmerl+Ba[view] [source] [discussion] 2025-12-05 02:29:47
>>jshear+W
When even H.265 is being dropped by the likes of Dell, adoption of H.266 will be even worse, making it basically DOA for anything promising. It's plagued by the same problems H.265 is.
replies(1): >>SG-+UE
16. 0manrh+Xa[view] [source] 2025-12-05 02:33:39
>>crazyg+(OP)
> To me, the big news here is that ~30% of devices now support AV1 hardware decoding

Where did it say that?

> AV1 powers approximately 30% of all Netflix viewing

Is admittedly a bit non-specific; it could be interpreted as 30% of users or 30% of hours-of-video-streamed, which are very different metrics. If 5% of your users are using AV1, but that 5% watches far above the average, you can have a minority userbase with an outsized representation in hours viewed.
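
(Made-up numbers to illustrate: if that 5% averaged 8 hours a day and the other 95% averaged 1 hour, the AV1 group would account for 0.05×8 / (0.05×8 + 0.95×1) ≈ 30% of hours viewed while still being only 5% of users.)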

I'm not saying that's the case, just giving an example of how it doesn't necessarily translate to 30% of devices using Netflix supporting AV1.

Also, the blog post identifies that there is an effective/efficient software decoder, which allows people without hardware acceleration to still view AV1 media in some cases (the case they defined was Android-based phones). So that kinda complicates what "X% of devices support AV1 playback" means, as it doesn't necessarily mean they have hardware decoding.

replies(3): >>sophie+nt >>endorp+cx >>cogman+bA1
◧◩
17. sophie+nt[view] [source] [discussion] 2025-12-05 06:20:46
>>0manrh+Xa
“30% of viewing” I think clearly means either time played or items played. I’ve never worked with a data team that would possibly write that and mean users.

If it was a stat about users they’d say “of users”, “of members”, “of active watchers”, or similar. If they wanted to be ambiguous they’d say “has reached 30% adoption” or something.

replies(2): >>0manrh+ju >>csdrea+J42
◧◩◪
18. 0manrh+ju[view] [source] [discussion] 2025-12-05 06:29:56
>>sophie+nt
Agreed, but this is the internet, the ultimate domain of pedantry, and they didn't say it explicitly, so I'm not going to put words in their mouth just to have a circular discussion about why I'm claiming they said something they didn't technically say, which is why I asked "Where did it say that" at the very top.

Also, either way, my point was and still stands: it doesn't say 30% of devices have hardware decoding.

◧◩
19. endorp+cx[view] [source] [discussion] 2025-12-05 07:09:11
>>0manrh+Xa
In either case, it is still big news.
20. vitorg+Jy[view] [source] 2025-12-05 07:32:29
>>crazyg+(OP)
I mean... I bought a Samsung TV in 2020, and it already supported AV1 HW decoding.

2020 feels close, but that's 5 years.

replies(2): >>cubefo+7c1 >>usrusr+1j1
◧◩
21. ladyan+MC[view] [source] [discussion] 2025-12-05 08:13:10
>>snvzz+y2
I have the feeling that H265 is more prevalent than VP9
replies(1): >>seabro+UU4
◧◩
22. afiori+iE[view] [source] [discussion] 2025-12-05 08:25:03
>>dehrma+M1
AI embeddings can be seen as a very advanced form of lossy compression
◧◩◪◨
23. SG-+UE[view] [source] [discussion] 2025-12-05 08:31:45
>>shmerl+Ba
Dell is significant in the streaming and media world?
replies(1): >>close0+eJ
24. alex_d+XE[view] [source] 2025-12-05 08:32:17
>>crazyg+(OP)
That's not at all how I read it.

They mentioned they delivered a software decoder on Android first, then they also targeted web browsers (presumably through wasm). So out of these 30%, a good chunk of it is software, not hardware.

That being said, it's a pretty compelling argument for phone and tv manufacturers to get their act together, as Apple has already done.

replies(1): >>danude+4V2
◧◩◪◨⬒
25. close0+eJ[view] [source] [discussion] 2025-12-05 08:59:21
>>SG-+UE
Dell and HP are significant in the "devices" world and they just dropped support for HEVC hardware encoding/decoding [1] to save a few cents per device. You can still pay for the Microsoft add-in that does this. It's not just streaming; your Teams background blur was handled like that.

Eventually people and companies will associate HEVC with "that thing that costs extra to work", and software developers will start targeting AV1/2 so their software performance doesn't depend on whether the laptop manufacturer or user paid for the HEVC license.

[1] https://arstechnica.com/gadgets/2025/11/hp-and-dell-disable-...

replies(3): >>nolok+vT >>shmerl+NO2 >>danude+ZT2
◧◩◪◨⬒⬓
26. nolok+vT[view] [source] [discussion] 2025-12-05 09:35:26
>>close0+eJ
Along the same lines, Synology dropped it on their NAS too (for their video, media, etc. Even for thumbnails, they ask the sending device to generate one locally and send it; the NAS won't do it for HEVC anymore)
◧◩◪◨
27. johnco+VX[view] [source] [discussion] 2025-12-05 10:06:44
>>adgjls+u1
Did anybody, including the rightsholders, come out ahead on H265? From the outside it looked like the mutually assured destruction situation with the infamous mobile patents, where they all end up paying lawyers to demand money from each other for mostly paper gains.
replies(2): >>tux3+n61 >>gary_0+qP1
28. mort96+4Z[view] [source] 2025-12-05 10:18:05
>>crazyg+(OP)
> So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support, I wonder what will be the next one?

Hopefully, we can just stay on AV1 for a long while. I don't feel any need to obsolete all the hardware that's now finally getting hardware decoding support for AV1.

◧◩
29. mort96+fZ[view] [source] [discussion] 2025-12-05 10:19:52
>>dehrma+M1
I don't want my video decoder inventing details which aren't there. I'd much rather have obvious compression artifacts than a codec where the "compression artifacts" look like perfectly realistic, high-quality hallucinated details.
replies(1): >>cubefo+Xa1
◧◩◪◨⬒
30. tux3+n61[view] [source] [discussion] 2025-12-05 11:17:03
>>johnco+VX
Why, the patent office did. There are many ideas that cannot be reinvented for the next few decades, and thanks to submarine patents it is simply not safe to innovate without your own small regiment of lawyers.

This is a big victory for the patent system.

replies(1): >>Dylan1+YJ1
◧◩◪
31. cubefo+Xa1[view] [source] [discussion] 2025-12-05 11:54:36
>>mort96+fZ
In case of many textures (grass, sand, hair, skin etc) it makes little difference whether the high frequency details are reproduced exactly or hallucinated. E.g. it doesn't matter whether the 1262nd blade of grass from the left side is bending to the left or to the right.
replies(1): >>mort96+mm1
◧◩◪◨
32. ksec+sb1[view] [source] [discussion] 2025-12-05 11:57:36
>>adzm+W3
It is being used in China and India for streaming. Brazil chose it with LCEVC for their TV 3.0. The broadcasting industry is also preparing for VVC. So it is not popular in terms of Web and Internet usage, but it is certainly not dead.

I am eagerly awaiting AV2 test results.

replies(2): >>adzm+KK3 >>Sunspa+6L4
◧◩
33. cubefo+7c1[view] [source] [discussion] 2025-12-05 12:02:16
>>vitorg+Jy
Two years ago I bought a Snapdragon 8+ Gen 1 phone (TSMC 4nm, with 12 GB LPDDR RAM, 256 GB NAND flash, and a 200 megapixel camera). It still feels pretty modern but it has no AV1 support.
◧◩
34. usrusr+1j1[view] [source] [discussion] 2025-12-05 12:49:00
>>vitorg+Jy
Is that supposed to be long-lived for a TV?

I'm running an LG initially released in 2013 and the only thing I'm not happy with is that about a year ago Netflix ended their app for that hardware generation (likely for phasing out whatever codec it used). Now I'm running that unit behind an Amazon fire stick and the user experience is so much worse.

(that LG was a "smart" TV from before they started enshittifying, such a delight - had to use and set up a recent LG once on a family visit and it was even worse than the fire stick, omg, so much worse!)

replies(3): >>windex+tp1 >>Stiles+Py1 >>Dylan1+nL1
◧◩◪◨
35. mort96+mm1[view] [source] [discussion] 2025-12-05 13:08:03
>>cubefo+Xa1
And in the case of many others, it makes a very significant difference. And a codec doesn't have enough information to know.

Imagine a criminal investigation. A witness happened to take a video as the perpetrator did the crime. In the video, you can clearly see a recognizable detail on the perpetrator's body in high quality; a birthmark perhaps. This rules out the main suspect -- but can we trust that the birthmark actually exists and isn't hallucinated? Would a non-AI codec have just shown a clearly compression-artifact-looking blob of pixels which can't be determined one way or the other? Or would a non-AI codec have contained actual image data of the birthmark in sufficient detail?

Using AI to introduce realistic-looking details where there was none before (which is what your proposed AI codec inherently does) should never happen automatically.

replies(4): >>cubefo+Gz1 >>beala+Sz1 >>mapt+QI1 >>amiga3+BN1
◧◩◪
36. windex+tp1[view] [source] [discussion] 2025-12-05 13:26:29
>>usrusr+1j1
If, by chance, you're not running the latest version, RootMyTV [0] may be an option. Or a downgrade might still be an option [1].

[0] https://github.com/RootMyTV/RootMyTV.github.io [1] https://github.com/throwaway96/downgr8

◧◩
37. cubefo+Jy1[view] [source] [discussion] 2025-12-05 14:13:42
>>dehrma+M1
Neural codecs are indeed the future of audio and video compression. A lot of people / organizations are working on them and they are close to being practical. E.g. https://arxiv.org/abs/2502.20762
◧◩◪
38. Stiles+Py1[view] [source] [discussion] 2025-12-05 14:13:53
>>usrusr+1j1
Fire Stick is the most enshittified device (which is why it was so cheap). AppleTV is fantastic if you're willing to spend $100. You don't need the latest gen; previous gen are just as good.
◧◩◪◨⬒
39. cubefo+Gz1[view] [source] [discussion] 2025-12-05 14:18:57
>>mort96+mm1
Maybe there could be a "hallucination rate" parameter in the encoder: More hallucination would enable higher subjective image quality without increased accuracy. It could be used for Netflix streaming, where birthmarks and other forensic details don't matter because it's all just entertainment. Of course the hallucination parameter would need to be embedded somewhere in the output so that its reliability can be determined.
◧◩◪◨⬒
40. beala+Sz1[view] [source] [discussion] 2025-12-05 14:20:01
>>mort96+mm1
There’s an infamous case of Xerox photocopiers substituting in incorrect characters due to a poorly tuned compression algorithm. No AI necessary.

https://en.wikipedia.org/wiki/JBIG2#:~:text=Character%20subs...

replies(1): >>mort96+hB1
◧◩
41. cogman+bA1[view] [source] [discussion] 2025-12-05 14:21:36
>>0manrh+Xa
That was one of the best decisions of AOMedia.

AV1 was specifically designed to be friendly for a hardware decoder and that decision makes it friendly to software decoding. This happened because AOMedia got hardware manufacturers on the board pretty early on and took their feedback seriously.

VP8/9 took a long time to get decent hardware decoding, and part of the reason for that was that the stream was more complex than the AV1 stream.

replies(2): >>Neywin+AO1 >>galad8+Zk2
◧◩◪◨⬒⬓
42. mort96+hB1[view] [source] [discussion] 2025-12-05 14:27:32
>>beala+Sz1
Yeah, I had that case in mind actually. It's a perfect illustration of why compression artifacts should be obvious and not just realistic-looking hallucinations.
◧◩◪◨⬒
43. mapt+QI1[view] [source] [discussion] 2025-12-05 15:01:32
>>mort96+mm1
> a codec doesn't have enough information to know.

The material belief is that modern trained neural network methods that improve on ten generations of variations of the discrete cosine transform and wavelets can bring a codec from "1% of knowing" to "5% of knowing". This is broadly useful. The level of abstraction does not need to be "The AI told the decoder to put a finger here", it may be "The AI told the decoder how to terminate the wrinkle on a finger here". An AI detail overlay. As we go from 1080p to 4K to 8K and beyond we care less and less about individual small-scale details being 100% correct, and there are representative elements that existing techniques are just really bad at squeezing into higher compression ratios.

I don't claim that it's ideal, and the initial results left a lot to be desired in gaming (where latency and prediction is a Hard Problem), but AI upscaling is already routinely used for scene rips of older videos (from the VHS Age or the DVD Age), and it's clearly going to happen inside of a codec sooner or later.

replies(1): >>mort96+wL1
◧◩◪◨⬒⬓
44. Dylan1+YJ1[view] [source] [discussion] 2025-12-05 15:06:21
>>tux3+n61
The patent office getting $100k or whatever doesn't sound like a win for them either.

I'm not sure what you mean by "patent system" having a victory here, but it's certainly not that the goal of promoting innovation is being achieved.

◧◩◪
45. Dylan1+nL1[view] [source] [discussion] 2025-12-05 15:12:29
>>usrusr+1j1
> Is that supposed to be long-lived for a TV?

I don't see anything in that comment implying such a thing. It's just about the uptake of decoders.

◧◩◪◨⬒⬓
46. mort96+wL1[view] [source] [discussion] 2025-12-05 15:12:43
>>mapt+QI1
I'm not saying it's not going to happen. I'm saying it's a terrible idea.

AI upscaling built in to video players isn't a problem, as long as you can view the source data by disabling AI upscaling. The human is in control.

AI upscaling and detail hallucination built in to video codecs is a problem.

replies(1): >>mapt+wN1
◧◩◪◨⬒⬓⬔
47. mapt+wN1[view] [source] [discussion] 2025-12-05 15:22:17
>>mort96+wL1
The entire job of a codec is subjectively authentic, but lossy, compression. AI is our best and in some ways easiest method of lossy compression. All lossy compression produces artifacts; JPEG macroblocks are effectively a hallucination, albeit one that is immediately identifiable because it fails to simulate anything else we're familiar with.

AI compression doesn't have to be the level of compression that exists in image generation prompts, though. A SORA prompt might be 500 bits (~1 bit per character of natural English), while a decompressed 4K frame that you're trying to bring to 16K level of simulated detail starts out at 199 million bits. It can be a much finer level of compression.
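
(That second number is presumably just 3840 × 2160 pixels × 24 bits per pixel ≈ 199 million bits, i.e. one uncompressed 8-bit-per-channel 4K frame.)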

◧◩◪◨⬒
48. amiga3+BN1[view] [source] [discussion] 2025-12-05 15:22:36
>>mort96+mm1
> And in the case of many others, it makes a very significant difference.

This is very true, but we're talking about an entertainment provider's choice of codec for streaming to millions of subscribers.

A security recording device's choice of codec ought to be very different, perhaps even regulated to exclude codecs which could "hallucinate" high-definition detail not present in the raw camera data, and the limitations of the recording media need to be understood by law enforcement. We've had similar problems since the introduction of tape recorders, VHS and so on; they always need to be worked out. Even the Phantom of Heilbronn (https://en.wikipedia.org/wiki/Phantom_of_Heilbronn) turned out to be DNA contamination of swabs by someone who worked for the swab manufacturer.

replies(1): >>mort96+bh2
◧◩◪
49. Neywin+AO1[view] [source] [discussion] 2025-12-05 15:26:12
>>cogman+bA1
Hmmm, disagree on your chain of reasoning there. Plenty of easy hardware algorithms are hard for software. For example, in hardware (including FPGAs), bit movement/shuffling is borderline trivial if it's constant, while in software you have to shift and mask and OR over and over. In hardware you literally just switch which wire is connected to what on the next stage. Same for weird bit widths. Hardware doesn't care (too much) if you're operating on 9-bit quantities or 33 or 65. Software isn't that granular and often you'll double your storage and waste a bunch.

I think they certainly go hand in hand, in that algorithms that are relatively easier for software than before are also relatively easier for hardware than before, and vice versa, but they are good at different things.
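
To make the "shift and mask and OR" point concrete, here's a minimal C sketch (a hypothetical helper, not taken from any real decoder) of reading a 9-bit field at an arbitrary bit offset; in hardware the same thing is just routing nine wires:

    #include <stddef.h>
    #include <stdint.h>

    /* Read a 9-bit field starting at an arbitrary bit offset (MSB-first).
     * Assumes at least 3 readable bytes from the starting byte onward. */
    static uint32_t read_bits9(const uint8_t *buf, size_t bit_off)
    {
        size_t byte    = bit_off >> 3;   /* byte the field starts in      */
        unsigned shift = bit_off & 7;    /* bit position inside that byte */

        /* Gather 24 bits so a field crossing byte boundaries is covered. */
        uint32_t window = (uint32_t)buf[byte]     << 16
                        | (uint32_t)buf[byte + 1] << 8
                        | (uint32_t)buf[byte + 2];

        return (window >> (15 - shift)) & 0x1FF; /* keep the 9 wanted bits */
    }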

replies(1): >>cogman+3T1
◧◩◪◨⬒
50. gary_0+qP1[view] [source] [discussion] 2025-12-05 15:30:00
>>johnco+VX
MBAs got to make deals and lawyers got to file lawsuits. Everyone else got to give them money. God bless the bureaucracy.
◧◩◪◨
51. cogman+3T1[view] [source] [discussion] 2025-12-05 15:45:23
>>Neywin+AO1
I'm not claiming that software will be more efficient. I'm claiming that things that make it easy to go fast in hardware make it easy to go fast in software.

Bit masking/shifting is certainly more expensive in software, but it's also about the cheapest software operation. In most cases it's a single cycle transform. In the best cases, it's something that can be done with some type of SIMD instruction. And in even better cases, it's a repeated operation which can be distributed across the array of GPU vector processors.

What kills both hardware and software performance is data dependency and conditional logic. That's the sort of thing that was limited in the AV1 stream.

replies(1): >>IshKeb+NA5
◧◩◪◨
52. TitaRu+kX1[view] [source] [discussion] 2025-12-05 16:00:47
>>adgjls+u1
For smart TVs Netflix is obviously a very important partner.
◧◩◪
53. csdrea+J42[view] [source] [discussion] 2025-12-05 16:29:34
>>sophie+nt
I am not in data science so I cannot validate your comment, but "30% of viewing" I would assume means users or unique/discrete viewing sessions and not watched minutes. I would appreciate it if Netflix would clarify.
◧◩◪◨⬒⬓
54. mort96+bh2[view] [source] [discussion] 2025-12-05 17:21:25
>>amiga3+BN1
I don't understand why it needs to be a part of the codec. Can't Netflix use relatively low bitrate/resolution AV1 and then use AI to upscale or add back detail in the player? Why is this something we want to do in the codec and therefore set in stone with standard bodies and hardware implementations?
replies(1): >>amiga3+Mk2
◧◩◪◨⬒⬓⬔
55. amiga3+Mk2[view] [source] [discussion] 2025-12-05 17:38:24
>>mort96+bh2
We're currently indulging a hypothetical, the idea of AI being used to either improve the quality of streamed video, or provide the same quality with a lower bitrate, so the focus is what both ends of the codec could agree on.

The coding side of "codec" needs to know what the decoding side would add back in (the hypothetical AI upscaling), so it knows where it can skimp and get a good "AI" result anyway, versus where it has to be generous in allocating bits because the "AI" hallucinates too badly to meet the quality requirements. You'd also want it specified, so that any encoding displays the same on any decoder, and you'd want it in hardware as most devices that display video rely on dedicated decoders to play it at full frame rate and/or not drain their battery. If it's not in hardware, it's not going to be adopted. It is possible to have different encodings, so a "baseline" encoding could leave out the AI upscaler, at the cost of needing a higher bitrate to maintain quality, or switching to a lower quality if bitrate isn't there.

Separating out codec from upscaler, and having a deliberately low-resolution / low-bitrate stream be naively "AI upscaled" would, IMHO, look like shit. It's already a trend in computer games to render at lower resolution and have dedicated graphics card hardware "AI upscale" (DLSS, FSR, XeSS, PSSR), because 4k resolutions are just too much work to render modern graphics consistently at 60fps. But the result, IMHO, noticeably and distractingly glitches and errs all the time.

◧◩◪
56. galad8+Zk2[view] [source] [discussion] 2025-12-05 17:39:15
>>cogman+bA1
All I read about is that it's less hardware friendly than H.264 and HEVC, and they were all complaining about it. AV2 should be better in this regard.

Where did you read that it was designed to make creating a hardware decoder easier?

replies(2): >>cogman+vu2 >>hulitu+ft5
◧◩◪◨
57. cogman+vu2[view] [source] [discussion] 2025-12-05 18:21:57
>>galad8+Zk2
It was a presentation on AV1 before it was released. I'll see if I can find it but I'm not holding my breath. It's mostly coming from my own recollection.

Ok, I don't think I'll find it. I think I'm mostly just regurgitating what I remember watching at one of the research symposiums. IDK which one it was unfortunately [1]

[1] https://www.youtube.com/@allianceforopenmedia2446/videos

replies(1): >>danude+yT2
◧◩◪◨⬒⬓
58. shmerl+NO2[view] [source] [discussion] 2025-12-05 19:50:08
>>close0+eJ
Also, you can just use Linux; Dell/HP have no control over the actual GPU for that, I think they just disabled it at the Windows level. Linux has no gatekeepers for that and you can use your GPU as you want.

But this just indicates that HEVC etc. is a dead end anyway.

◧◩◪◨⬒
59. danude+yT2[view] [source] [discussion] 2025-12-05 20:15:51
>>cogman+vu2
I've heard that same anecdote before, that hardware decoding was front of mind. Doesn't mean that you (we) are right, but at least if you're hallucinating it's not just you.
◧◩◪◨⬒⬓
60. danude+ZT2[view] [source] [discussion] 2025-12-05 20:18:05
>>close0+eJ
Dell is dropping it to save 4 cents per device, so users will have to pay $1 to Microsoft instead. Go figure.
◧◩
61. danude+4V2[view] [source] [discussion] 2025-12-05 20:24:26
>>alex_d+XE
This is something that infuriates me to no end - companies forcing software decoding on my devices rather than shipping me a codec my device supports.

When I'm watching something on YouTube on my iPhone, they're usually shipping me something like VP9 video which requires a software decoder; on a sick day stuck in bed I can burn through ten percent of my battery in thirty minutes.

Meanwhile, if I'm streaming from Plex, all of my media is h264 or h265 and I can watch for hours on the same battery life.

replies(1): >>Sunspa+MN4
◧◩◪◨⬒
62. adzm+KK3[view] [source] [discussion] 2025-12-06 02:23:10
>>ksec+sb1
Right, I know some studios have used it for high quality/resolution/framerate exports too, so it is definitely not dead. But still a dead end, from everything I've seen. No one seems to want to bother with it unless it is already within the entire pipeline. Every project I've seen that worked with it and went to consumers or editors ended up running into issues of some sort, ended up using something else entirely, and any VVC support was basically abandoned or deprecated. It's a shame because VVC really is pretty awesome, but the only people using it seem to be those that adopted it earlier assuming broader support that never materialized.
◧◩◪◨⬒
63. Sunspa+6L4[view] [source] [discussion] 2025-12-06 15:01:59
>>ksec+sb1
China and India are two-thirds of the human race. With population numbers like those, this is large scale adoption.. just not in our market.
replies(1): >>cyberl+vL4
◧◩◪◨⬒⬓
64. cyberl+vL4[view] [source] [discussion] 2025-12-06 15:06:32
>>Sunspa+6L4
Your math is way off; China and India combined are roughly 35%.
◧◩◪
65. Sunspa+MN4[view] [source] [discussion] 2025-12-06 15:23:25
>>danude+4V2
Another example of why Android is better for this use case. With Firefox for Android you can install an extension to force h264 from YouTube and the problem is solved. With iPhone, you cannot. You must buy a new device when you need a feature or support.
◧◩◪
66. seabro+UU4[view] [source] [discussion] 2025-12-06 16:16:44
>>ladyan+MC
YouTube uses VP9, so it depends on whether you're talking "number of applications that use it" or "number of hours watched".
◧◩◪
67. lambda+SY4[view] [source] [discussion] 2025-12-06 16:47:41
>>jshear+W
AV2's mission is to nip VVC in the bud. They seem to be more or less at parity, and given that, why would anyone want to use a royalty-based codec when they could use an essentially equivalent free one? There's no massive hurry to implement either - we have existing codecs that are largely good enough for now - this is technology which takes 5 to 10 years to fully deploy, as has been seen with other codecs.
68. hulitu+0t5[view] [source] 2025-12-06 21:02:09
>>crazyg+(OP)
> So now that h.264, h.265, and AV1 seem to be the three major codecs with hardware support, I wonder what will be the next one?

Continuous improvement? CADT? What shall the next one bring? Free meals?

◧◩◪◨
69. hulitu+ft5[view] [source] [discussion] 2025-12-06 21:04:37
>>galad8+Zk2
> AV2 should be better in this regard

Will it, though?

Why create a SW spec and hope that the HW will support it? Why not design it together with the HW?

◧◩
70. hulitu+Ct5[view] [source] [discussion] 2025-12-06 21:08:01
>>JoshTr+o
> Hopefully AV2.

This is so '90s. Why not AV2026 or AV26.0? /s

◧◩◪◨⬒
71. IshKeb+NA5[view] [source] [discussion] 2025-12-06 22:15:30
>>cogman+3T1
> Bit masking/shifting is certainly more expensive in software, but it's also about the cheapest software operation. In most cases it's a single cycle transform.

He's not talking about simple bit shifts. Imagine if you had to swap every other bit of a value. In hardware that's completely free; just change which wires you connect to. In software it takes several instructions. The 65-bit example is good too. In hardware it makes basically no difference to go from 64 bits to 65 bits. In software it is significantly more complex - it can more than double computation time.
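
For illustration, a sketch in C of one reading of "swap every other bit" (swapping each pair of adjacent bits); even with the well-known mask trick it's still a handful of instructions where hardware would simply cross the wires:

    #include <stdint.h>

    /* Swap each pair of adjacent bits in a 32-bit value.
     * Hardware: zero gates, just reroute the wires.
     * Software: two masks, two shifts and an OR, at minimum. */
    static uint32_t swap_adjacent_bits(uint32_t x)
    {
        return ((x & 0xAAAAAAAAu) >> 1) | ((x & 0x55555555u) << 1);
    }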

I think where software has the advantage is sheer complexity. It's harder to design and verify complex algorithms in hardware than it is in software, so you need to keep things fairly simple. The design of even state-of-the-art CPUs is surprisingly simple; a cycle accurate model might only be a few tens of thousands of lines of code.

replies(2): >>Neywin+BG5 >>cogman+8H5
◧◩◪◨⬒⬓
72. Neywin+BG5[view] [source] [discussion] 2025-12-06 23:04:26
>>IshKeb+NA5
Right. It's bit packing and unpacking. Currently dealing with a 32-bit system that needs to pack eight 11-bit quantities, each consisting of 3 multi-bit values, into a 96-bit word. As you can imagine, the assembly is a mess of bit manipulation and it takes forever. Ridiculously, it's to talk to a core that extracts them effortlessly. I'm seriously considering writing an accelerator to do this for me
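
Roughly the shape of that packing in C, as a sketch only (MSB-first layout, 8 bits of padding at the end, and ignoring the three sub-fields inside each 11-bit value; those are all assumptions, not the actual format):

    #include <stdint.h>

    /* Pack eight 11-bit fields into a 96-bit word held as three 32-bit
     * words, MSB-first, leaving the last 8 bits as padding. */
    static void pack8x11(const uint16_t val[8], uint32_t out[3])
    {
        out[0] = out[1] = out[2] = 0;
        unsigned bit = 0;                     /* running bit position 0..95 */

        for (int i = 0; i < 8; i++, bit += 11) {
            uint32_t v = val[i] & 0x7FFu;     /* keep only the low 11 bits  */
            unsigned word = bit >> 5;         /* which 32-bit word          */
            unsigned off  = bit & 31;         /* bit offset inside it       */

            if (off <= 21) {                  /* field fits in one word     */
                out[word] |= v << (21 - off);
            } else {                          /* field straddles two words  */
                out[word]     |= v >> (off - 21);
                out[word + 1] |= v << (53 - off);
            }
        }
    }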
◧◩◪◨⬒⬓
73. cogman+8H5[view] [source] [discussion] 2025-12-06 23:09:37
>>IshKeb+NA5
I just have to repeat myself

> I'm not claiming that software will be more efficient. I'm claiming that things that make it easy to go fast in hardware make it easy to go fast in software.

replies(1): >>IshKeb+PI5
◧◩◪◨⬒⬓⬔
74. IshKeb+PI5[view] [source] [discussion] 2025-12-06 23:21:33
>>cogman+8H5
Right... That's what I was disagreeing with. Hardware and software have fairly different constraints.
replies(1): >>cogman+6Q5
◧◩◪◨⬒⬓⬔⧯
75. cogman+6Q5[view] [source] [discussion] 2025-12-07 00:21:46
>>IshKeb+PI5
I don't think you have an accurate view on what makes an algorithm slow.

The actual constraints on what makes hardware or software slow are remarkably similar. It's not ultimately the transforms on the data which slow down software; it's when you inject conditional logic or data loads. The same is true for hardware.

The only added constraint software has is a limited number of registers to operate on. That can cause software to put more pressure on memory than hardware does. But otherwise, similar algorithms accomplishing the same task will have similar performance characteristics.

Your example of the bitshift is a good illustration of that. Yes, in hardware it's free. And in software it's 3 operations, which is pretty close to free. Both will spend far more time waiting on main memory to load up the data for the masking than they will spend doing the actual bit shuffling. The constraint on the software is you are burning maybe 3 extra registers. That might get worse if you have no registers to spare, forcing you to constantly load and store.

This is the reason SMT has become ubiquitous on x86 platforms: CPUs spend so much time waiting on data to arrive that we can make them do useful work while we wait for those cache lines to fill up.

Saying "hardware can do this for free" is an accurate statement, but you are missing the 80/20 of the performance. Yes, it can do something subcycle that costs software 3 cycles to perform. Both will wait for 1000 cycles while the data is loaded up from main memory. A fast video codec that is easy to decode with hardware gets there by limiting the amount of dataloads that need to happen to calculates a given frame. It does that by avoiding wonky frame transformations. By preferring compression which uses data-points in close memory proximity.

[go to top]