Yes, it's a big reason.
I tried to port the yuzu Switch emulator to macOS a few years ago, and to make that work you end up having to write compute shaders that emulate the geometry shaders.
Even fairly modern games like Mario Odyssey use geometry shaders.
Needless to say, I was not enough of a wizard to make this happen!
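For the curious, the workaround looks roughly like the sketch below: a compute kernel that performs the primitive expansion a geometry shader would do, in this case turning each point into a quad, then writing the results to a plain vertex buffer that gets drawn with an ordinary draw call afterward. This is a minimal illustration in Metal Shading Language (a C++ dialect); the struct names and layout are mine, not yuzu's.

```cpp
#include <metal_stdlib>
using namespace metal;

// Illustrative vertex layouts, not taken from any real emulator.
struct PointIn   { float4 position; float  size; };
struct VertexOut { float4 position; float2 uv; };

// One thread per input point; each thread writes the six vertices
// (two triangles) that a geometry shader would have emitted.
kernel void expand_points(device const PointIn* points [[buffer(0)]],
                          device VertexOut*     verts  [[buffer(1)]],
                          uint gid [[thread_position_in_grid]])
{
    const float4 p = points[gid].position;
    const float  s = points[gid].size;

    const float2 corners[6] = {
        float2(-1.0f, -1.0f), float2(1.0f, -1.0f), float2(-1.0f, 1.0f),
        float2(-1.0f,  1.0f), float2(1.0f, -1.0f), float2( 1.0f, 1.0f)
    };

    for (uint i = 0; i < 6; ++i) {
        VertexOut v;
        // Offset in clip space for brevity; a real implementation
        // would scale the offset by p.w to keep the quad screen-sized.
        v.position = p + float4(corners[i] * s, 0.0f, 0.0f);
        v.uv       = corners[i] * 0.5f + 0.5f;
        // Drawn later as a non-indexed draw of 6 * pointCount vertices.
        verts[gid * 6 + i] = v;
    }
}
```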
If the reason they don't support it in hardware is that they don't want to support it in software, then the logic gets a bit circular.
I'm interested in which came first, or if it's a little of both.
Geometry shaders are generally considered less necessary in modern graphics pipelines due to the rise of more flexible and efficient alternatives like mesh shaders, which can perform similar geometry-manipulation tasks, often with better performance and a more streamlined workflow.
If you’re using geometry shaders, you’re almost always going to get better performance with compute shaders plus indirect draws, or with mesh shaders.
A lot of hardware vendors handle them in software, which tanks performance. Apple decided to do away with them in Metal rather than carry the baggage of something that all vendors agree is bad.
It takes up valuable die space for very little benefit.
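To make the mesh-shader alternative concrete, here is a minimal sketch of a Metal 3 mesh shader (again MSL, a C++ dialect) that emits a single triangle directly from the shader, with no vertex buffer and no geometry stage. All names are illustrative; real code would also set up a mesh render pipeline on the CPU side.

```cpp
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
};

// Up to 3 vertices and 1 triangle primitive per threadgroup;
// `void` means no per-primitive data.
using TriangleMesh = mesh<VertexOut, void, 3, 1, topology::triangle>;

// A mesh shader builds its output primitives in place, which is the
// role geometry shaders (and the compute-expansion trick above) played.
[[mesh]] void emit_one_triangle(TriangleMesh output,
                                uint tid [[thread_index_in_threadgroup]])
{
    if (tid == 0) {
        output.set_primitive_count(1);
    }
    if (tid < 3) {
        const float2 corners[3] = { float2(-0.5f, -0.5f),
                                    float2( 0.5f, -0.5f),
                                    float2( 0.0f,  0.5f) };
        VertexOut v;
        v.position = float4(corners[tid], 0.0f, 1.0f);
        output.set_vertex(tid, v);
        output.set_index(tid, uchar(tid));
    }
}
```

On the API side this is launched with drawMeshThreadgroups on the render command encoder rather than a classic draw call, which is also where the indirect-draw variants come in.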
Tessellation falling short is just classic Apple, though. It shows how little games figure into their decision-making, despite deciding every other year that they need a AAA game to showcase their hardware.
(Apologies for the crude answer. I would genuinely be interested in a technical perspective defending the decision. My only conclusion is that the kind of software their customers need, like art or editing tools, does not need that much tessellation.)
If you are talking about Vulkan, that is much more complicated. My guess is that they want to maintain their independence as a hardware and software innovator, which is hard to do if you are locked into a design-by-committee API. Apple has had some bad experiences with these things in the past (e.g. they donated OpenCL to Khronos only to see it sabotaged by Nvidia). Also, Apple wanted a lean and easy-to-learn GPU API for their platform, and Vulkan is neither.
While their stance can be annoying to both developers and users, I think it can be understood at some level. My feelings about Vulkan are mixed at best. I don't think it is a very good API, and I think it makes too many unnecessary compromises. Compare, for example, VK_EXT_descriptor_buffer and Apple's argument buffers. Vulkan's approach is extremely convoluted: you are required to query descriptor sizes at runtime and perform manual offset computation. Apple's implementation is just 64-bit handles/pointers and memcpy, extremely lean and immediately understandable to anyone with basic C experience. I understand that Vulkan needs to support different types of hardware where these details can differ. However, I do not understand why they have to penalize developer experience in order to support some crazy hardware with 256-byte data descriptors.
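To illustrate the contrast, here is a rough C++ sketch of both paths. The Vulkan side uses the real VK_EXT_descriptor_buffer entry points (vkGetDescriptorSetLayoutBindingOffsetEXT, vkGetDescriptorEXT), which in actual code must be loaded via vkGetDeviceProcAddr; the Metal side assumes metal-cpp, where a buffer descriptor is just the 64-bit value returned by MTL::Buffer::gpuAddress(). Resource creation and error handling are omitted.

```cpp
#include <cstring>
#include <vulkan/vulkan.h>

// Vulkan: writing one storage-buffer descriptor into a mapped
// descriptor buffer. Descriptor sizes are opaque and queried from
// VkPhysicalDeviceDescriptorBufferPropertiesEXT at runtime.
void write_buffer_descriptor_vk(VkDevice device,
                                VkDescriptorSetLayout layout,
                                VkDeviceAddress bufferAddress,
                                VkDeviceSize bufferRange,
                                size_t storageBufferDescriptorSize,
                                char* mappedDescriptorBuffer)
{
    // Step 1: ask the driver where binding 0 lives inside the set layout.
    VkDeviceSize offset = 0;
    vkGetDescriptorSetLayoutBindingOffsetEXT(device, layout,
                                             /*binding=*/0, &offset);

    // Step 2: have the driver encode an opaque, implementation-sized blob.
    VkDescriptorAddressInfoEXT addr{VK_STRUCTURE_TYPE_DESCRIPTOR_ADDRESS_INFO_EXT};
    addr.address = bufferAddress;
    addr.range   = bufferRange;

    VkDescriptorGetInfoEXT info{VK_STRUCTURE_TYPE_DESCRIPTOR_GET_INFO_EXT};
    info.type = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
    info.data.pStorageBuffer = &addr;

    vkGetDescriptorEXT(device, &info, storageBufferDescriptorSize,
                       mappedDescriptorBuffer + offset);
}

// Metal argument buffer equivalent: the descriptor is just the
// resource's 64-bit GPU address, so "encoding" it is a memcpy.
void write_buffer_descriptor_mtl(uint64_t gpuAddress, // MTL::Buffer::gpuAddress()
                                 char* argumentBufferContents,
                                 size_t slot)
{
    std::memcpy(argumentBufferContents + slot * sizeof(uint64_t),
                &gpuAddress, sizeof(uint64_t));
}
```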
I honestly wonder how much the rallying around Vulkan is just that it is a) newer than OpenGL and b) not DirectX.
I understand it’s good to have a graphics API that isn’t owned by one company and is cross-platform. But I get the impression that that’s kind of Vulkan’s main strong suit: technically there’s a lot of stuff people aren’t thrilled with, but it has points a) and b) above, and that makes it their preference.
(This is only in regard to how it’s talked about; I’m not suggesting people stop using it or switch away from it.)