I don't feel like ever going back to x86, to be honest. At this point there's nothing lacking or unable to run, and once the neural engine drivers come online, now that the GPU is starting to mature, people will be able to squeeze every last bit of computation out of this machine.
For the record, I switched to the edge branch a couple of months ago and honestly noticed no actual difference in my day-to-day tasks, which says a lot about how powerful even the M1 is when it can handle software rendering so effortlessly alongside everything else that's running.
Really, thank god Asahi is a thing.
Send me the link, that sounds amazing.
There are no "equivalents" to any Apple devices (I don't see Apple licensing the M1/M2 chips to anyone else anytime soon). But depending on what you care about, a "comparable" Apple device could be 10x more expensive. Of course, for other tasks a "comparable" Apple device can also be _cheaper_ than any non-Apple device available!
Looking only at "longevity" (since the parent is using that as a benchmark), there are plenty of devices with a useful life comparable to Apple devices at 1/4 to 1/10 the price.
As for longevity, if you consider the end of software support as EoL: Apple dropped software/OS support for a huge swath of Intel iMacs (especially those with dGPUs) quite a few OS releases ago, and you have to run community patches to keep them working. Meanwhile, similar decade-old hardware still runs Win 10 and Linux out of the box.
*: Don't get me wrong though, the markups exist for good reason. x86 platforms don't offer anything close to the memory bandwidth of Apple's ARM chips (which is closer to dGPU levels). The same goes for their flash/SSDs.