zlacker

1. buffer+(OP)[view] [source] 2023-11-20 05:59:46
I thought GPT-4 was not trained on labeled data, but simply on a large volume of text/code. Most of it is publicly accessible: Wikipedia, archives of scientific articles, books, GitHub, plus probably purchased data from text-heavy sites like Reddit.
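
To illustrate the "no labels" point: the pretraining objective is plain next-token prediction, so the targets are just the text itself shifted by one token. Rough PyTorch-style sketch (my own illustration, not OpenAI's actual code; `model` and `token_ids` are hypothetical names, with `model` assumed to map token ids to logits):

    import torch.nn.functional as F

    def pretraining_loss(model, token_ids):
        # token_ids: (batch, seq_len) tensor of tokenized raw text -
        # no human annotation involved, the text supervises itself
        inputs = token_ids[:, :-1]   # model sees tokens 0..n-2
        targets = token_ids[:, 1:]   # and must predict tokens 1..n-1
        logits = model(inputs)       # (batch, seq_len-1, vocab_size)
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            targets.reshape(-1),
        )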
replies(3): >>enigmu+61 >>lyu072+C2 >>frabcu+r3
2. enigmu+61[view] [source] 2023-11-20 06:06:29
>>buffer+(OP)
I'm assuming that's a reference to RLHF? Not sure
3. lyu072+C2[view] [source] 2023-11-20 06:14:00
>>buffer+(OP)
No, it's reinforcement learning from human feedback (RLHF) - lots of labeling involved
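
The labeling there is mostly rankings: per the InstructGPT paper, labelers compare model responses and pick the better one, and a reward model is trained on those comparisons. Minimal sketch of that pairwise objective (illustrative only; `reward_model`, `prompt`, `chosen`, `rejected` are my placeholder names, with `reward_model` assumed to return a scalar score per prompt/response pair):

    import torch.nn.functional as F

    def reward_model_loss(reward_model, prompt, chosen, rejected):
        # scores for the human-preferred and the rejected response
        r_chosen = reward_model(prompt, chosen)
        r_rejected = reward_model(prompt, rejected)
        # Bradley-Terry style loss: push the chosen score above the rejected one
        return -F.logsigmoid(r_chosen - r_rejected).mean()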
4. frabcu+r3[view] [source] 2023-11-20 06:19:17
>>buffer+(OP)
Whatever they've built this year presumably uses all the positive/negative feedback on ChatGPT - they have a year's worth of that data now...
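
Pure speculation on how that thumbs up/down signal could be folded in (nothing confirmed, and all names are hypothetical): it would amount to a binary-feedback variant of reward-model training.

    import torch.nn.functional as F

    def thumbs_feedback_loss(reward_model, prompt, response, thumbs_up):
        # thumbs_up: 1.0 for an upvoted response, 0.0 for a downvoted one
        score = reward_model(prompt, response)  # scalar logit per example
        return F.binary_cross_entropy_with_logits(score, thumbs_up)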

Another example is the Be My Eyes data - presumably the vision part of GPT-4 was trained on the archive of data the blind-assistance app has, and that could be an exclusive deal with OpenAI.
