Ah yes, as shown by the example of OpenAI.
OpenAI's weird insistence that its "open" research only be accessible to entities it trusts not to do harm with it (like... Microsoft, of course!) is indicative of a divide between academic research and corporate research that exists in all fields. While there are serious concerns about AI risk, when I hear them raised in a corporate context, they always seem to be a thin veneer of affected concern painted over an all-too-familiar drive to keep innovation under lock and key.
When talking about ML's preference for open access, the article was probably referring to the fact that ML literature is overwhelmingly published on arXiv or similar open-access venues (even if it's also published elsewhere), to a degree most fields don't match. Personally, I think this is purely because it's a relatively new field without as many entrenched norms.