I don't want to sound grumpy, but this doesn't achieve anything; it's just a showcase of how a "calculator with a small probability of failure" can still succeed.
Move on, do something useful, don't stop being amazed by AI, but please stop throwing it in my face.
> Move on, do something useful, don't stop being amazed by AI, but please stop throwing it in my face.
Do you see the irony in what you did? So, how about you move on, do something useful, don't stop being annoyed by AI, but please stop throwing your opinion in anyone's face.
Peter and a friend of his wrote an article over a year ago discussing whether or not LLMs are already AGI, and after re-reading that article my opinion shifted a bit: LLMs are AGI in broad digital domains. I still need to see embodied AI in robots and physical devices before I think we are 100% of the way there. Still, I use Gemini and also a lot of open-weight models for two things: 1. coding problems, and 2. after I read or watch material on Philosophy, I almost always ask Gemini for a summary, references, and a short discussion based on what Gemini knows about me.
Well, you didn't try very hard :)
If you think that every model behaves the same way in terms of programming, you don't have a lot of experience with them.
I find it useful to see how other people use all kinds of tools. AI is no different.
It's like getting upset when someone compares what it's like to use bun vs deno vs node.
pattern_start = 1 if half_digits == 1 else 10 ** (half_digits - 1)

when

10 ** (half_digits - 1)

on its own is fine, since 10 ** 0 == 1 already covers the half_digits == 1 case (see the sketch below).

P.S. Anyone who gets upset that folks are experimenting with LLMs to generate code or solve AoC should have their programmer's card revoked.
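For what it's worth, here's a minimal sketch of why the branch is redundant, assuming half_digits is a positive int as in the snippet above (the function names are just illustrative):

    # As generated, with the redundant special case:
    def pattern_start_verbose(half_digits: int) -> int:
        return 1 if half_digits == 1 else 10 ** (half_digits - 1)

    # Equivalent without the branch: 10 ** 0 == 1 already covers half_digits == 1.
    def pattern_start_simple(half_digits: int) -> int:
        return 10 ** (half_digits - 1)

    # Sanity check that the two agree for small positive inputs.
    assert all(pattern_start_verbose(n) == pattern_start_simple(n) for n in range(1, 10))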
Most of the hubbub I saw was because AI-generated code making it onto those leaderboards very clearly violates the spirit of the competition.