AI has gone through many cycles of "only a human can do X" -> "X is done by AI" -> "oh, that's just some engineering, that's not really human" or "X is no longer in the category of mystical things we can't explain that only a human can do".

LLMs are just the latest iteration: "wow, it can do this amazing human-only thing X (write a paper indistinguishable from a human's)" -> "doh, it's just some engineering (a fancy autocomplete)".
Just because AI is a bunch of linear algebra and statistics does not mean the brain isn't doing something similar. You don't like the terminology, but how is reinforcement "learning" not essentially the same as reading books to a toddler, pointing at a picture, and having them repeat what it is?
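To make the analogy concrete, here's a toy sketch (every name in it is made up, and this is only the bare skeleton of reinforcement learning, not any real training algorithm): a learner that starts out guessing randomly and, like the toddler being corrected, shifts toward whatever answer gets rewarded.

```python
import random

labels = ["dog", "cat", "bird"]
weights = {label: 1.0 for label in labels}  # initial preference for each answer

def guess():
    # Sample a label in proportion to its current weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for label, w in weights.items():
        r -= w
        if r <= 0:
            return label
    return labels[-1]

correct = "dog"  # the picture we keep pointing at

for _ in range(200):
    answer = guess()
    reward = 1.0 if answer == correct else -0.1  # "yes!" vs. "no, try again"
    weights[answer] = max(0.01, weights[answer] + 0.1 * reward)

print(weights)  # "dog" ends up dominating, purely from feedback
```

Nothing in that loop "understands" dogs, but nothing in the toddler's first hundred repetitions obviously does either; the feedback loop is the same shape.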
Start digging into the human with the same engineering view, and suddenly it too just becomes a bunch of parts. Where is the human in the human once all the parts are explained the way an engineer would explain them? What would be left? The human is computation too, unless you believe in souls or other otherworldly mysticism. So why not think that AI, as computation, can eventually be equal to a human?
That GitHub Copilot can write bad code isn't a knock on AI; it's realistic. A lot of humans write bad code.
The trouble with using them for "creative" things is that they can only parrot things back in the statistically average way, or maybe attempt to echo an existing style.

Copilot cannot use something because it prefers it, or because it thinks it's better than what's common. It can only repeat what is currently popular (which will likely become self-reinforcing over time).
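Here's a toy sketch of that claim (the corpus and names are invented purely for illustration; a real LLM is vastly more sophisticated, but the objection is about the objective, not the scale): a model that only ever emits the most frequent continuation it has seen has no notion of "better", only "more common".

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_popular_next(word):
    # Always return the statistically dominant continuation.
    return following[word].most_common(1)[0][0]

print(most_popular_next("the"))  # "cat" -- the majority choice, every time
```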
When you write prose or code you develop preferences and opinions. "Everyone does it this way, but I think X is important."
You can take your learning and create a new language or framework based on your experiences and opinions working in another.
You develop your own writing style.
LLMs cut out this chance to develop.
---
Images, prose, and (maybe) code are not the result of computation.

If two different people compute the same thing, they get the same answer. When I ask different people to write the same thing, I get wildly different answers.
Sure, ChatGPT may give different answers, but they will always be in the ChatGPT style (or parrot the style of someone who already exists).
"ChatGPT will get started and I'll edit my voice into what it generated" is not how writing works.
It's difficult for me to see how a world where people are communicating back and forth with the most statistically likely manner is good
Humans are also regurgitating what they 'inputted' into their brains. For programming, isn't it an old joke that everyone just copy/pastes from Stack Overflow?

Why, if an AI does it (copy/paste), is it somehow a lesser accomplishment than when a human does it?