OpenGL was the nicer API to use, on all platforms, because it hid all the nasty business of graphics buffer and context management, but in those days it was also targeted much more at CAD/CAM and other professional use. The games industry wasn't really a factor in road maps and features. Since OpenGL did so much in the driver for you, you were dependent on driver support for all kinds of use cases. Different hardware had different capabilities, and GL's extension system was used to discover what was available, but it wasn't uncommon to have to write radically different code paths for some rendering features based on the capabilities present. These capabilities could change across driver versions, so your game could break when the user updated their drivers. The main issue here was quite sloppy support from driver vendors.
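To give a flavour of what that discovery looked like, here's a minimal sketch of the sort of capability check every GL renderer grew. GL_ARB_multitexture is just an illustrative example, and a real engine needed a current GL context and more careful string matching than a bare strstr:

```cpp
#include <GL/gl.h>
#include <cstring>

// Illustrative helper: report whether an extension name appears in the
// driver's GL_EXTENSIONS string (requires a current GL context).
static bool HasExtension(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext != nullptr && std::strstr(ext, name) != nullptr;
}

void ChooseRenderPath()
{
    if (HasExtension("GL_ARB_multitexture")) {
        // Single-pass path: blend lightmaps with a second texture unit.
    } else {
        // Fallback path: draw the geometry twice and blend the passes.
    }
}
```

Multiply that by a dozen features and a handful of vendors and you get the "radically different code paths" problem.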
DirectX was disgusting to work with. All the buffer management that OpenGL hid was now your responsibility, as was resource management for textures and vertex arrays. In DirectX, if some other 3D app was running at the same time, your textures and vertex buffers would be lost every frame, and you'd have to reconstruct everything. OpenGL did that automatically behind the scenes. That's just one example. What DirectX did have, though, was some form of certification, e.g. "DirectX 9", which guaranteed some level of features, so if you wrote your code to a DX spec, it was likely to work on lots of computers, because Microsoft did some thorough verification of drivers and pushed manufacturers to do better. Windows was the most popular home OS, and MacOS was insignificant. OpenGL ruled on IRIX, SunOS/Solaris, HP/UX, etc., basically along the home/industry split, and that's where engineering effort went.
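For anyone who never had to do it, the "lost device" dance looked roughly like this. The sketch uses the later Direct3D 9 idiom (the DX7-era details differed, but the burden on the app was the same), and ReleaseDefaultPoolResources / RecreateDefaultPoolResources are hypothetical stand-ins for the app's own reconstruction code:

```cpp
#include <d3d9.h>

// Hypothetical app functions that free and re-upload everything living in
// D3DPOOL_DEFAULT (render targets, dynamic vertex buffers, textures, etc.).
void ReleaseDefaultPoolResources();
void RecreateDefaultPoolResources();

void RenderFrame(IDirect3DDevice9* device, D3DPRESENT_PARAMETERS* params)
{
    HRESULT hr = device->TestCooperativeLevel();
    if (hr == D3DERR_DEVICELOST) {
        return; // Another app owns the device right now; try again next frame.
    }
    if (hr == D3DERR_DEVICENOTRESET) {
        ReleaseDefaultPoolResources();  // Must free default-pool objects first.
        if (FAILED(device->Reset(params)))
            return;
        RecreateDefaultPoolResources(); // Re-upload textures, vertex buffers, ...
    }
    // ... normal drawing ...
    device->Present(nullptr, nullptr, nullptr, nullptr);
}
```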
So, we game developers targeted the best supported API on Windows, and that was DX, despite having to hold your nose to use it. It didn't hurt that Microsoft provided great compilers and debuggers, and when the Xbox came out, using the same toolchain, that finally clinched DX's complete victory: you could debug console apps the same way you did desktop apps, making the dev cycle so much easier. The PS1/PS2 and GameCube were really annoying to work with from an API standpoint.
Microsoft did kill OpenGL, but it was mainly because they provided a better alternative. They also did sabotage OpenGL directly, by limiting the DLLs shipped with Windows to OpenGL 1.1, so you ended up having to work around this by poking into your driver vendor's OpenGL DLL and looking up symbols by name before you could use them. Anticompetitive as that technically was, though, they did provide better tools.
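Concretely, the workaround meant something like this: the stock opengl32.dll only exported the old entry points, so anything newer had to be pulled out of the driver by name with wglGetProcAddress. glActiveTextureARB here is just one example; the function-pointer typedef is the kind of thing glext.h later standardized:

```cpp
#include <windows.h>
#include <GL/gl.h>

// Function-pointer type for the entry point we want (normally from glext.h).
typedef void (APIENTRY *PFNGLACTIVETEXTUREARBPROC)(GLenum texture);

PFNGLACTIVETEXTUREARBPROC glActiveTextureARB = nullptr;

bool LoadMultitextureEntryPoints()
{
    // Must be called with a GL context current; the returned pointer is only
    // guaranteed valid for that context's pixel format.
    glActiveTextureARB = reinterpret_cast<PFNGLACTIVETEXTUREARBPROC>(
        wglGetProcAddress("glActiveTextureARB"));
    return glActiveTextureARB != nullptr;
}
```

Every engine grew a pile of these loaders, which is roughly the busywork that libraries like GLEW later automated.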
HW OpenGL was not available on the consumer machines (Win95, Win2K) at all. GLQuake used a so-called "mini-driver", which was just a wrapper around a few Glide APIs, and it was a way to circumvent id's contract with Rendition, which forbade them from using any proprietary API other than Verite (the first HW-accelerated game they released had been VQuake). By the time full consumer HW OpenGL drivers became available, circa the OpenGL 2.0 era, DirectX 9 already reigned supreme. You can tell by the number of OpenGL games released after 2004 (mobile games did not use OpenGL but OpenGL ES, which is a different API).
I don't remember console development fondly. This is 25 years ago, so memory is hazy, but the GameCube compiler and toolchain were awful to work with, while the PS2 TOOL compile/test cycle was extremely slow and the APIs were hard to work with, though that was more hardware craziness than anything. Xbox was the easiest when it came out. Dreamcast was on the way out, but I remember really enjoying being clever with the various SH4 math instructions. Anyhow, I think we're both right, just in different times. In the DX7 days, NVIDIA and ATI were shipping OpenGL libraries which were usable, but yes, by then DX was the 800 lb gorilla on Windows. The only reason OpenGL worked at all was that professional applications and big companies kept pushing against Microsoft's restrictions.