That's an incredibly bold claim that would need quite a bit of evidence, and just waving "$500k in GPUs" around isn't it. Especially when individuals are reporting more than enough tok/s at native int4 on sub-$80k setups, without any of the batching and scaling benefits that commercial inference providers have.
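For what it's worth, here's a back-of-envelope sketch of why batch-1 decode throughput is mostly a memory-bandwidth question rather than a dollar question. The 70B model size, int4 weights, and ~2 TB/s aggregate bandwidth are illustrative assumptions on my part, not numbers anyone in this thread reported:

```python
# Rough upper bound on decode tok/s for a memory-bandwidth-bound transformer.
# Assumption: at batch size 1, generating each token requires streaming
# all model weights through memory once, so bandwidth / weight-bytes
# bounds tokens per second.

def decode_tok_s(params_billions: float, bits_per_weight: int, mem_bw_gb_s: float) -> float:
    """Theoretical max batch-1 decode tok/s (ignores KV cache and activations)."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return mem_bw_gb_s * 1e9 / weight_bytes

# Hypothetical example: a 70B-parameter model at int4 (~35 GB of weights)
# on hardware with ~2 TB/s of aggregate memory bandwidth:
print(f"{decode_tok_s(70, 4, 2000):.0f} tok/s")  # ~57 tok/s per stream
```

Under those assumptions, even one box well under $80k clears "decent tok/s" for a single stream, which is why the $500k figure needs actual justification.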
I know you need to cope because your competency is 1:1 correlated with the quality and quantity of tokens you can afford, so have fun with your think-for-me SaaS while you can still afford it. You have no clue how much engineering goes into providing inference at scale. I wasn't even including the cost of labor.
> You still need $500k in GPUs and a boatload of electricity to serve like 3 concurrent sessions at a decent tok/ps.
as patent bullshit, at which point the burden is squarely on you to back up the rest of your claims.