If you are talking about Vulkan, that is much more complicated. My guess is that they want to maintain their independence as a hardware and software innovator. It's hard to do that if you are locked into a design-by-committee API. Apple has had some bad experiences with these things in the past (e.g. they donated OpenCL to Khronos only to see it sabotaged by Nvidia). Also, Apple wanted a lean and easy-to-learn GPU API for their platform, and Vulkan is neither.
While their stance can be annoying to both developers and users, I think it can be understood at some level. My feelings about Vulkan are mixed at best. I don't think it is a very good API, and I think it makes too many unnecessary compromises. Compare, for example, VK_EXT_descriptor_buffer with Apple's argument buffers. Vulkan's approach is extremely convoluted: you are required to query descriptor sizes at runtime and perform manual offset computation. Apple's implementation is just 64-bit handles/pointers and memcpy, extremely lean and immediately understandable to anyone with basic C experience. I understand that Vulkan needs to support different types of hardware where these details can differ. However, I do not understand why they have to penalize developer experience in order to support some crazy hardware with 256-byte data descriptors.
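To make the contrast concrete, here is a rough sketch (my own illustration, not code from either vendor's docs) of writing a single uniform-buffer descriptor with VK_EXT_descriptor_buffer. It assumes the extension is enabled, the set layout was created with the descriptor-buffer flag, and the extension entry points have already been loaded via vkGetDeviceProcAddr; the Metal side is reduced to the comment at the end.

```c
/* Sketch only: Vulkan 1.3 + VK_EXT_descriptor_buffer, recent Vulkan headers. */
#include <stdint.h>
#include <vulkan/vulkan.h>

void write_ubo_descriptor(
    VkDevice device,
    VkPhysicalDevice physical_device,
    VkDescriptorSetLayout layout,      /* created with DESCRIPTOR_BUFFER_BIT_EXT */
    uint32_t binding,                  /* binding index inside that layout */
    VkDeviceAddress ubo_address,       /* from vkGetBufferDeviceAddress */
    VkDeviceSize ubo_range,
    uint8_t *mapped_descriptor_buffer, /* host-mapped descriptor buffer memory */
    PFN_vkGetDescriptorSetLayoutBindingOffsetEXT pfnGetBindingOffset,
    PFN_vkGetDescriptorEXT pfnGetDescriptor)
{
    /* 1. Descriptor sizes are implementation-defined, so query them at runtime. */
    VkPhysicalDeviceDescriptorBufferPropertiesEXT db_props = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DESCRIPTOR_BUFFER_PROPERTIES_EXT,
    };
    VkPhysicalDeviceProperties2 props2 = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2,
        .pNext = &db_props,
    };
    vkGetPhysicalDeviceProperties2(physical_device, &props2);

    /* 2. Ask the driver where this binding lives inside the set layout. */
    VkDeviceSize binding_offset = 0;
    pfnGetBindingOffset(device, layout, binding, &binding_offset);

    /* 3. Have the driver encode an opaque descriptor blob... */
    VkDescriptorAddressInfoEXT addr_info = {
        .sType   = VK_STRUCTURE_TYPE_DESCRIPTOR_ADDRESS_INFO_EXT,
        .address = ubo_address,
        .range   = ubo_range,
        .format  = VK_FORMAT_UNDEFINED,
    };
    VkDescriptorGetInfoEXT get_info = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_GET_INFO_EXT,
        .type  = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
        .data  = { .pUniformBuffer = &addr_info },
    };

    /* ...directly at the offset you computed by hand. */
    pfnGetDescriptor(device, &get_info,
                     db_props.uniformBufferDescriptorSize,
                     mapped_descriptor_buffer + binding_offset);
}

/* The Metal 3 equivalent (Objective-C, shown only as a comment) is roughly:
 *   uint64_t addr = resourceBuffer.gpuAddress;
 *   memcpy((uint8_t *)argBuffer.contents + slot * sizeof(uint64_t),
 *          &addr, sizeof(addr));
 * i.e. a descriptor is just a 64-bit GPU address you memcpy to a fixed slot.
 */
```

And that is only the write path; sizing the descriptor buffer itself also means calling vkGetDescriptorSetLayoutSizeEXT and respecting descriptorBufferOffsetAlignment from the same properties struct, which is exactly the kind of runtime bookkeeping Metal never asks you to do.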
I honestly wonder how much the rallying around Vulkan is just that it is a) newer than OpenGL and b) not DirectX.
I understand it's good to have a graphics API that isn't owned by one company and is cross-platform. But I get the impression that that's kind of Vulkan's main strong suit: technically there's a lot of stuff people aren't thrilled with, but it has points (a) and (b) above, so that makes it their preference.
(This is only in regard to how it's talked about; I'm not suggesting people stop using it or switch off it or anything.)