zlacker

[parent] [thread] 4 comments
1. staina+(OP)[view] [source] 2024-02-13 20:51:31
It was really good at some point last fall, solving problems it had previously failed at completely, albeit after a lot of iterations via autogpt. At least for the tests I was giving it, which usually involved heavy stats and complicated algorithms, I was surprised it passed. Even though it passed, the code was slower than what I had personally solved the problem with, but I was still impressed because I was asking hard problems.

Nowadays the autogpt gives up sooner, seems less competent, and doesn't even come close to solving the same problems.

replies(2): >>anon11+X >>thelit+Oe
2. anon11+X[view] [source] 2024-02-13 20:56:59
>>staina+(OP)
This is exactly what I noticed too.
3. thelit+Oe[view] [source] 2024-02-13 22:13:51
>>staina+(OP)
Hamstringing high-value tasks (complete code) to give forthcoming premium offerings greater differentiation could be a strategy. The counterargument is that doing so would open the door for competitors.
replies(1): >>wolpol+QQ
4. wolpol+QQ[view] [source] [discussion] 2024-02-14 03:07:53
>>thelit+Oe
The question I have been wondering about is whether they are hamstringing high-value tasks to create room for premium offerings, or whether they are trying to minimize cost per task.
replies(1): >>djmips+9m3
5. djmips+9m3[view] [source] [discussion] 2024-02-14 21:21:10
>>wolpol+QQ
I think it's the latter. Reading between the lines gives me the impression they have been striving to lower computational costs. They already added a cap of 30 queries per 3 hours...