- Tens of millions of dollars in GPU time for training?
- Curation of data to train on
- Maybe tens of thousands of man-hours for reinforcement learning (RLHF)?
- How many lines of code are written for the nets and data pipelines?
Does anyone have any insight on these numbers?
Get Nvidia, AMD, or Apple to help fund the new entity, and/or get some chip designers on board to push things further than OpenAI can without reaching into Microsoft's pocket. A pocket I'm sure will be much tighter after the recent chicanery.
Capital would NOT be a problem at this point, as it's beyond proof of concept. A normal startup trying to prove itself, sure, but at this point Altman has proven both the idea and himself at the helm. I'd also argue the dataset they used to train it is not that relevant long term, since the data itself was agglomerated from the internet and can be had again. Perhaps even better data, because the copyright holders can become investors. You really just need the capacity to deal with it, from ingestion to legal, which is a capital problem.