AMD GPUs are relatively well tested. Anybody who's looked at NVIDIA's architecture could tell you it's not perfect for every application. Similarly, AMD's isn't either.
If you know what your application will be and have the $300 million, custom chips may be a much wiser choice. That's something you'd only get if you build things in-house or at a startup.
>>toaste+(OP)
For which applications are AMD GPUs more suited? Last I looked at the available chips, AMD sometimes had higher FLOPS or memory throughput (and generally lower cost), but I don't recall any qualitative advantages. In contrast, just to pick something I care about, NVIDIA's memory and synchronisation model allows operations like prefix sums to be significantly more efficient.
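To make the prefix-sum point concrete: NVIDIA warps can exchange registers directly via shuffle intrinsics such as `__shfl_up_sync`, so a 32-lane inclusive scan finishes in log2(32) = 5 steps with no shared-memory round trips. Here's a plain-Python sketch of that log-step (Hillis-Steele) pattern; the function and variable names are my own illustration, not a real GPU API:

```python
WARP_SIZE = 32

def warp_inclusive_scan(lanes):
    """Simulate the log-step inclusive prefix sum one NVIDIA warp performs.

    Each loop iteration corresponds to a single __shfl_up_sync call on
    real hardware, so the whole scan takes log2(WARP_SIZE) = 5 steps.
    """
    assert len(lanes) == WARP_SIZE
    vals = list(lanes)
    offset = 1
    while offset < WARP_SIZE:
        # On hardware: v = __shfl_up_sync(mask, vals[lane], offset);
        # lanes below `offset` receive nothing and keep their value.
        shifted = [vals[i - offset] if i >= offset else 0
                   for i in range(WARP_SIZE)]
        vals = [v + s for v, s in zip(vals, shifted)]
        offset *= 2
    return vals

print(warp_inclusive_scan([1] * 32)[:5])  # → [1, 2, 3, 4, 5]
```

Doing the same on an architecture without register-level lane exchange means bouncing every step through local/shared memory plus a barrier, which is where the efficiency gap the comment mentions comes from.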
>>Athas+l5
They may have had an edge in 64-bit floating-point performance, which is pretty much useless for deep learning, but can be useful for e.g. physics simulations or other natural-science applications.