zlacker

[parent] [thread] 4 comments
1. leonid+(OP)[view] [source] 2023-05-17 00:42:23
Two years ago we already had GPT-2, which was capable of some problem solving and instruction following. It was primitive, sure, and it produced a lot of gibberish, yes, but if you followed OpenAI releases closely, you wouldn't think that something like GPT-3.5 was "pure science fiction"; it would just look like the inevitable evolution of GPT-2 in a couple of years, given the right conditions.
replies(2): >>ux-app+Q >>canjob+Vi
2. ux-app+Q[view] [source] 2023-05-17 00:47:39
>>leonid+(OP)
that's pedantic. switch 2 years to 5 years and the point still stands.
replies(1): >>edgyqu+xm
3. canjob+Vi[view] [source] 2023-05-17 03:43:35
>>leonid+(OP)
In hindsight it’s an obvious evolution, but in practice vanishingly few people saw it coming.
replies(1): >>leonid+3P1
4. edgyqu+xm[view] [source] [discussion] 2023-05-17 04:27:14
>>ux-app+Q
No it isn’t. Even before transformers, people were doing cool things with LSTMs, and with RNNs before that. People following this space haven’t really been surprised by any of these advancements. It’s a straightforward path imo
5. leonid+3P1[view] [source] [discussion] 2023-05-17 15:37:58
>>canjob+Vi
Few people saw it coming in just two years, sure. But most people following this space were already expecting a big leap like the one we saw within 5-ish years.

For example, take this thread: https://news.ycombinator.com/item?id=21717022

It's a text RPG game built on top of GPT-2 that could follow arbitrary instructions. It was a full project with custom training for something you can get with a single prompt on ChatGPT nowadays, but it clearly showcased what LLMs were capable of, including things we take for granted now. Even back then, it was clear that something like ChatGPT would eventually happen.
