zlacker

[parent] [thread] 2 comments
1. tarran+(OP)[view] [source] 2022-12-15 18:31:24
The thing is though, it's trained on human text, and most humans are, by definition, fallible. Unless someone makes it so that it can never be trained on subtly wrong code, how will it ever improve? Imho AI can be great for suggesting which method to use (Visual Studio has this, and I think there's an extension for Visual Studio Code for a couple of languages). Fine-grained things like that are very useful, but code snippets are just too coarse to actually be helpful.
replies(1): >>tintor+yw
2. tintor+yw[view] [source] 2022-12-15 20:56:51
>>tarran+(OP)
It could improve itself through experimentation with reinforcement learning. That's how humans improve too, and it's what AlphaZero does.
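
To make "improve through its own experimentation rather than human examples" concrete, here's a minimal sketch (a toy epsilon-greedy bandit in Python, nothing like AlphaZero's actual self-play setup; the action count and reward probabilities are made up for illustration) of an agent whose estimates get better purely from trial-and-error feedback:

    import random

    TRUE_REWARDS = [0.2, 0.5, 0.8]   # hidden payoff odds per action (unknown to the agent)
    estimates = [0.0] * len(TRUE_REWARDS)
    counts = [0] * len(TRUE_REWARDS)
    EPSILON = 0.1                    # fraction of the time the agent explores at random

    for step in range(10_000):
        # Explore occasionally, otherwise exploit the current best estimate.
        if random.random() < EPSILON:
            action = random.randrange(len(TRUE_REWARDS))
        else:
            action = max(range(len(TRUE_REWARDS)), key=lambda a: estimates[a])

        # The environment, not a human-written label, provides the feedback signal.
        reward = 1.0 if random.random() < TRUE_REWARDS[action] else 0.0

        # Incremental average: estimates improve only through experimentation.
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]

    print("learned estimates:", [round(e, 2) for e in estimates])

There's no human-labelled data anywhere in that loop; the quality of the behaviour is bounded by the reward signal rather than by the text it was trained on.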
replies(1): >>lostms+zC
3. lostms+zC[view] [source] [discussion] 2022-12-15 21:27:42
>>tintor+yw
There's a substantial amount of work in that area of research. You'll see world-shattering results in a few years.

Current SOTA: https://openai.com/blog/vpt/
