zlacker

[return to "Gemini 3 Pro: the frontier of vision AI"]
1. knolli+9P[view] [source] 2025-12-05 19:58:32
>>xnx+(OP)
I do some electrical drafting work for construction and throw basic tasks at LLMs.

I gave it a shitty harness and it almost one-shotted laying out outlets in a room based on a shitty PDF. I think if I gave it better control it could do a huge portion of my coworkers' jobs very soon.
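For anyone curious, a minimal sketch of what a harness like that could look like, assuming the google-genai Python SDK; the model id, file name, prompt, and output schema here are illustrative guesses, not what I actually ran:

    import json
    from google import genai
    from google.genai import types

    # Assumed setup: google-genai SDK with GEMINI_API_KEY in the environment.
    client = genai.Client()

    # Hand the model the room drawing as an inline PDF part.
    with open("floor_plan.pdf", "rb") as f:
        pdf = types.Part.from_bytes(data=f.read(), mime_type="application/pdf")

    prompt = (
        "This PDF is a single-room floor plan. Propose receptacle (outlet) "
        "locations so no point along a wall is more than 6 ft from an outlet. "
        "Return a JSON list of objects with wall_id, distance_from_corner_ft, "
        "and mounting_height_in."
    )

    # Ask for JSON back so the layout can be checked or pushed into CAD later.
    response = client.models.generate_content(
        model="gemini-3-pro-preview",  # illustrative model id
        contents=[pdf, prompt],
        config=types.GenerateContentConfig(response_mime_type="application/json"),
    )

    outlets = json.loads(response.text)
    for outlet in outlets:
        print(outlet)

The real value would be in the checking step afterwards (spacing rules, circuits, panel schedule), which is the part I'd still do by hand.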

◧◩
2. reduce+d51[view] [source] 2025-12-05 21:21:07
>>knolli+9P
"AI could never replace the creativity of a human"

"Ok, I guess it could wipe out the economic demand for digital art, but it could never do all the autonomous tasks of a project manager"

"Ok, I guess it could automate most of that away but there will always be a need for a human engineer to steer it and deal with the nuances of code"

"Ok, well it could never automate blue collar work, how is it gonna wrench a pipe it doesn't have hands"

The goalposts will continue to move until we have no idea if the comments are real anymore.

Remember when the Turing test was a thing? No one seems to remember that it was still considered a serious benchmark in 2020.

◧◩◪
3. Frater+6g1[view] [source] 2025-12-05 22:21:58
>>reduce+d51
The Turing test is still a thing. No LLM could pass for a person for more than a couple of minutes of chatting. That’s a world of difference compared to a decade ago, but I would emphatically not call that “passing the Turing test”.

Also, none of the other things you mentioned have actually happened. I don't really know why I bother responding to this stuff.

◧◩◪◨
4. Workac+Xv2[view] [source] 2025-12-06 12:58:05
>>Frater+6g1
Ironically, the main tell of LLMs is that they are too smart and write too well. No human can discuss topics at the depth they can, and no human writes like an author/journalist all the time.

i.e. the tell that it's not human is that it is too perfectly human.

However, if we could transport people from 2012 to today and run the test on them, none would guess the LLM output came from a computer.

◧◩◪◨⬒
5. visarg+zE2[view] [source] 2025-12-06 14:13:13
>>Workac+Xv2
Yesterday I stumbled onto a well-written comment on Reddit; it was a bit contrarian, but good. Then I got curious, looked at the comment history, and found it was a one-month-old account with many comments of similar length and structure. I put an LLM to work reading that feed and it spotted LLM writing. The argument? The account displayed too broad a knowledge across topics. Yes, it gave itself away by being too smart. Does that count as a Turing test fail?