zlacker

[parent] [thread] 1 comments
1. 10000t+(OP)[view] [source] 2023-05-16 20:24:10
You can't strangle the development of such models because the data comes from anywhere and everywhere. Short of shutting off the entire Internet, there's nothing a government can do to prevent some guy on the opposite side of the world from hoovering up publicly accessible human text into a corpus befitting an LLM training set.
replies(1): >>bootsm+x8
2. bootsm+x8[view] [source] 2023-05-16 21:11:23
>>10000t+(OP)
It costs a lot of money to train foundation models; that's a big hurdle for open source models and can strangle further development.

Open source AI needs players with low stakes (like Meta AI) who keep open-sourcing foundation models for the community to tinker with.

[go to top]