The difference, it seems, is that I’ve been looking at these tools and thinking about how I can use them in creative ways to accomplish a goal - and not just treating them like a magic button that solves all problems without fine-tuning.
To give you a few examples:
- There is something called the Picture Superiority Effect, which states that humans remember images better than words alone. I have been interested in applying this to language learning – imagine a unique image for each word you’re learning in German, for example. A few years ago I was about to hire an illustrator to make these images for me, but now with Midjourney or other image generators, I can produce functionally unlimited unique images for $30 a month. This is a massive new development that wasn’t possible before.
- I have been working on a list of AI tools that would be useful for “thinking” or analyzing a piece of writing. Things like: analyze the assumptions in this piece; find related concepts with genealogical links; check whether this idea is original; rephrase this argument as a series of Socratic dialogues. And so on (a rough sketch of scripting both ideas follows this list). This kind of thing has been immensely helpful in evaluating my own personal essays and ideas, and again, before AI tools it was not really possible unless I hired someone to critique my work.
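For anyone curious what this looks like in practice, here is a minimal sketch of scripting both use cases, assuming the OpenAI Python client; the model names, word list, file name, and prompt list are illustrative placeholders, not a fixed recipe:

```python
# Minimal sketch: batch-generate one image per vocabulary word, then run a
# battery of analysis prompts over a draft essay. Model names, the word
# list, and "draft.txt" are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Use case 1: a unique image per word (Picture Superiority Effect)
words = {"der Hund": "dog", "die Brücke": "bridge"}
for german, english in words.items():
    img = client.images.generate(
        model="dall-e-3",
        prompt=f"A vivid, memorable illustration of '{english}' "
               f"for learning the German word '{german}'",
        size="1024x1024",
        n=1,
    )
    print(german, img.data[0].url)

# Use case 2: "thinking" prompts applied to a piece of writing
prompts = [
    "Analyze the assumptions in this piece.",
    "Find related concepts with genealogical links to the central idea.",
    "Check whether this idea is original; name prior art if not.",
    "Rephrase this argument as a series of Socratic dialogues.",
]
essay = open("draft.txt", encoding="utf-8").read()
for p in prompts:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{p}\n\n---\n\n{essay}"}],
    )
    print(f"## {p}\n{reply.choices[0].message.content}\n")
```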
The key for both of these example use cases is that I have absolutely no expectation of perfection. I don’t expect the AI images or text to be free of errors. The point is to use them as messy, creative tools that open up possibilities and unconsidered angles, not to have them do all the work.
The one area where I would agree that AI and ML tools have been surprisingly good is art generation.
But then I see the flood of AI-generated pictures and, overall, feel it has made an already troublesome world even more troublesome. I am starting to see "the picture is AI-made, or AI-modified" excuses entering the mainstream.
A picture has now lost all meaning.
> be useful for “thinking” or analyzing a piece of writing
This, I am highly skeptical of. If you train an LLM on text saying "trains can fly", then it spits that out. They may be good as summarizing or search tools, but to claim they are "thinking" and "analyzing", nah.
And I meant myself thinking and analyzing a piece of writing with the help of ChatGPT, not ChatGPT itself “thinking.” (Although I frankly think the question of whether the machine is thinking is somewhat irrelevant.) Because I have absolutely gained tons of new insights and knowledge by asking ChatGPT to analyze an idea and suggest similar concepts.
I don't think that's an accurate summary of this article. Are you basing that just on the title, or do you fundamentally disagree with the author here?
> We call something a gimmick, the literary scholar Sianne Ngai points out, when it seems to be simultaneously working too hard and not hard enough. It appears both to save labor and to inflate it, like a fanciful Rube Goldberg device that allows you to sharpen a pencil merely by raising the sash on a window, which only initiates a chain of causation involving strings, pulleys, weights, levers, fire, flora, and fauna, including an opossum. The apparatus of a large language model really is remarkable. It takes in billions of pages of writing and figures out the configuration of words that will delight me just enough to feed it another prompt. There’s nothing else like it.
In my own experience, that is absolute nonsense, and I have gotten immense amounts of value from it. Most of the critical arguments (like the link) are almost always from people that use them as basic chatbots without any sort of deeper understanding or exploration of the tools.
Please consider that there are some very clever people out there. I can respond to your point about languages personally: I speak three, and have lived and operated for extended periods in two others, which I wouldn't call myself "fluent" in, as it's been a number of years. I would not use an LLM to generate images for each word, as I already have methods that work for me, and I would consider that a wasteful use of resources. I am into permacomputing, minimising resources, etc.
When I see you put the idea forward, I think, oh, neat, but surely it'd be much more effective if you did a 30s sketch for each word, and improved your drawing as you went.
In summary - do read the article, it's very good! You're responding to an imagined argument based on a headline, ignoring a nuanced and serious argument, by saying: "yeah, but I use it well, so?! It's not a gimmick then, for me!"
30-second sketches are also not nearly as effective as detailed images, and would likely be of dubious value for implementing the Picture Superiority Effect.
Nowhere did I say that people who write essays about AI being useless are idiots. That's your terminology, not mine. Merely that they lack imagination and creativity when it comes to exploring the potential of a new tool and instead just make weak criticisms.
Or, you know, just imagine something. Which is what I have done to learn to speak three languages other than my mother tongue fluently.
Are you going to test them by building something or using these concepts in conversation with specialists?
And likewise, using AI to critique a piece of writing is already “testing it,” as it definitely makes useful suggestions.
Either that or different people have different views on life, tech, &c. If you're not going through life as some sort of min-max RPG, not using an LLM to "optimise" every single aspect of your life is perfectly fine. I don't need an LLM to summarise an article; I want to read it during my 15 min coffee time in the morning. I don't need an LLM to tell me how my text should be rewritten to look like the statistical average of a good text...
1. In a couple of contexts, as a non-expert, I'm getting excellent use out of these LLM tools, because I'm imaginative and creative in my use of them.
2. I get such great use out of them, as a non-expert, in these areas, that any expert claiming they are gimmicks is simply wrong. They just need to get more imaginative and creative, like me.
Am I misunderstanding you here? Is this really what you're saying?
The holes in the thinking seem obvious, if I may be blunt. I would suggest you ask an LLM to help you analyse it, but I think they're quite bad at that, as they are programmed to reflect your biases back at you in a positive way. That is probably their largest epistemic issue: it is only possible to overcome this tendency to placate the user if the user has great knowledge of their own biases, an issue even the best experts face!
If you're not part of a very small subset of tech enthusiasts or companies directly profiting from it, it really isn't that big of a deal.
This isn’t that complicated. Someone wrote an article saying X is a gimmick and made a weak argument. I said no, in my experience that isn’t the case, and here are a few examples.
Your patronizing tone is pretty irritating and distracts from whatever point you’re trying to make. But I’m not sure you’re actually engaging in good faith here, so I think that’s the end of this conversation.
It was written after the author attended a workshop where the presenter tried and seemingly failed to show how AI was able to write essays when prompted with the word "innovative" or produce a podcast on a book. The author also mentions an article by a university lecturer who claims that "Human interaction is not as important to today’s students" and that AI will basically replace it.
The subtitle of the article is "AI cannot save us from the effort of learning to live and die."
In other words, the article is about a specific trend in higher education to present AI as some sort of revolutionary tool that will completely change the way students learn.
The author disagrees and contends that pretending to replace most human interactions with genAI is a gimmick, and pretending that AI can make learning effortless is lying to students.
The way you use AI for learning languages is certainly imaginative, but you are not claiming that it replaces the quality of interacting with native speakers or possibly immersion in the culture. Your tool may be useful and clever, but claiming it makes learning a language effortless (as some AI apologists in education might) would make it a gimmick.
This Swiss Army knife is totally useless!
I just want to point out that this is precisely how they described your perspective. It’s hard to see how you find their tone patronizing given they’re just explaining their point of view. It’s worth noting that others may find your words to be patronizing:
> These “AI is a gimmick that does nothing” articles mostly just communicate to me that most people lack imagination.
> Most of the critical arguments (like the link) are almost always from people that use them as basic chatbots without any sort of deeper understanding or exploration of the tools.
> I said that people making blanket statements about LLMs being gimmicks need to be more creative.
You didn't need AI for the things you list, and using AI has lowered the credibility and quality of your work.
I don't use any AI in my work. Which makes my work worth scanning by AI, but not yours.
If you talk to workers being forced to use these tools you come away with a different conclusion.
> Either that or different people have different views on life, tech, &c.
Most definitely. When Stable Diffusion came out, I recall AI enthusiasts gushing and saying things like it was "creating the most beautiful art they've ever seen," which just made me scratch my head and wonder what they were smoking.
> If you're not going through life as some sort of min-max RPG, not using an LLM to "optimise" every single aspect of your life is perfectly fine. I don't need an LLM to summarise an article; I want to read it during my 15 min coffee time in the morning.
Exactly. Some people want to actually understand stuff, in their own minds, and some people seem to be fine teetering on top of an output machine they don't really understand.
On that summarization thing specifically, pre-LLM there were services that did that (my manager subscribed me to one once), and it never clicked with me. Why would I waste my time consuming such shallow stuff? I never felt it had any impact; it felt like fluff meant to make you feel like you were learning something when you weren't. Which is ironic, because the people who seem enthusiastic about that stuff are under the false impression that everything is just three bullet points wrapped in fluff.
> I don't need an LLM to tell me how my text should be rewritten to look like the statistical average of a good text...
Especially since I roll my eyes at ChatGPT-ese.
We’re going to see similar emotional outbursts in the future. Probably going to need new strategies to convince people why they are wrong. Even harder when the parrot says they’re actually right.
For everyone else, they obviously understood I was critiquing the article and showing how I found some genuine value in AI tools by thinking a little outside the box. I.e., they aren't gimmicks.
The other poster's comments are full of a smarmy, holier-than-thou attitude of insisting that I didn't read the article, that I'm just posting this comment to brag about my creative brilliance, and that "only idiots see value in AI - but you're not an idiot, of course."
This kind of writing is by someone that's trying to be clever, not have an honest conversation.
AI receives so much funding and support from the wealthy because they believe that they can use it to replace humans and reduce labor costs. I strongly suspect that AI being available to us at all is merely a plot to get us to train and troubleshoot the tech for them so it can more perfectly imitate us. Then, eventually, when the tech is "good enough" it will rapidly become too expensive for normal people to use and thus become inaccessible.
Companies are already mass-firing their staff in favor of AI agents even though those agents don't even do a good job. Imagine how it will be when they do.
The futility is already apparent, but I'll make the same point a third time, even though you've already shown a commitment to not understanding.
> For everyone else, they obviously understood I was critiquing the article and showing how I found some genuine value in AI tools by thinking a little outside the box. I.e., they aren't gimmicks.
The logic this sentence hinges on is that you using the tools and getting "genuine value" out of them proves that the tools are not gimmicks. This is nonsensical.
Case in point: a PIAAC report back in 2013 said that only 9% of US adults were considered proficient in math and complex reasoning. And the questions used by PIAAC were arguably just high-school level. Anecdotally, how many people have heard their professors or high-school teachers complain that most students can't even really grasp linear equations or the distributive property? (Students can memorize rules and pass high-school tests, but many of them would be hopeless if they had to pass national entrance exams in other countries.)
I mean sure, from a purely profit-oriented point of view it's good, but you need to realize the human being replaced isn't feeling too good about it. Especially when the AI works because it has used input from the works of people like them.
The people possessing the capital for AI are pretty happy about the results, for sure, but they need to think about sharing the wealth created, because otherwise this is just an unfair transfer of value to an already rich and powerful small group of people.
In my situation there were no humans being replaced, because I didn’t have the budget to spend thousands of dollars hiring illustrators to make images or academics to review my writing. No one lost work because of AI, but I was able to create new value by using AI. That’s pretty much the pattern of every new technology, from photography replacing portrait painters to widespread literacy replacing letter writers.
Maybe one could make the argument that across the economy as a whole, some jobs are being replaced by AI. Which is indeed true - however, my point was about using these tools for creative individual purposes.
This isn't to say that the technology can't be useful, but that the hype around it is exhausting. This bubble is much bigger than the dot-com one, and the effects when it bursts will be far worse.
You say you didn't have the budget, which basically means "a human illustrator is too expensive for the value I get out of it". From this we could infer either that the value actually created is extremely low, or that you want a larger share of the profit, or that there is a market problem.
Yet you said you created value with AI. This means that even though the finished work may not have broad commercial value with a price that is relatively easy to figure out, it does have value, at the very least for one person. You may not be able to make money off it, but that's not very relevant; people spend a lot of money on all kinds of things that mostly (and sometimes only) create value for themselves.
So, it is pretty clear that you value having artwork; it's just that you are unwilling to pay the price that a human who has the skill, and has put in the time, would require. If a human illustrator could produce the artwork at the same price (close to zero), you would take it without much more consideration.
And this is exactly where your "point of view" is extremely dishonest. You are acting as if having no artwork at all is the same thing as having an AI artwork. It is absolutely not. And you pretend this doesn't affect illustrators. Sure, there is no direct link, but what you just did is redefine the market value of artwork to be as close to zero as possible. Over time, many more people will make the same choice, because they "figure out" that paying an illustrator is too expensive (regardless of whether they have the means or not), and they go with AI. Over time, the illustrator job market dries up, and soon enough you won't find many of them, if any at all.
The irony is that AI cannot exist without the previous work of illustrators; the only reason it has been able to produce artwork so cheaply is that it could use a massive amount of past work without paying for it. If AI companies had to charge people not just for the capital costs of compute, power, and R&D, but also for the value of all the works they have ingested, suddenly the AI wouldn't look so good purely on price.
But I guess it's nothing new under the sun; people are willing to exploit the labor of others the moment they get an imbalance of power in the form of capital.
Your examples are pretty bad; they are hardly comparable. Photography is not a strict equivalent of portrait painting. Portrait painters still exist and still sell their craft to rich people; it's just that it's much less popular (especially with new-rich types), or they sell paintings of other people/things. At 20 I had a friend who made and sold large-scale paintings (usually 1:1 scale) to rich people; for example, he had a sexy rendition of "Little Red Riding Hood" (it hung in our shared flat's living room for a bit) which he sold for 12K. His work cannot be replaced by photography, it's not an equivalent at all, and of course he is not getting replaced by AI any time soon.
Even if we agree that photography replaced portrait painters, it is still not comparable at all. First of all, photography still needs a human operator, and there is still a lot of skill/knowledge involved even with modern technology. And if we want to pretend there is a relative equivalency, you need a physical output, which involves more humans and cannot be fully replaced by AI.
As for widespread literacy, I have a hard time seeing the parallel. It's something we decided to teach to all humans in order to make life fairer; the equivalent would be to teach all humans how to be illustrators, not to replace them with an AI. On top of that, you take a weirdly specific and reductive definition of the job. It's not as if letter writing were ever a very widespread job, and it's not as if it doesn't exist in some other form. If you look at many "social work" type jobs, plenty of them do exactly this kind of work: they write/correct/enhance CVs, letters of recommendation, and the like for people whose literacy isn't very good (and is unlikely to ever improve). This is just one example; there are many more if you don't take a narrow, antiquated definition.
But I guess they should get replaced by AI as soon as possible. And the problem is not just that they are getting replaced; it's that the profit from the value created goes into the pockets of the capitalists controlling the AI tools, and to some extent to the people who can exploit the AI to "improve" their output without involving humans (so, people like you).
The people getting replaced don't even get to see the value circulate in their community/country; it just goes somewhere else entirely. In other cases of technological disruption, people could at least adapt and learn new skills, and the new technology usually created new types of jobs that were still constrained by physical/geographical limits.
AI as it is currently pushed/exploited is nothing like what we have seen before. It is much closer to theft than its defenders would like to admit, and it's not even good enough to actually replace humans at the jobs they typically don't like to do. But we surely are going to replace the interesting ones, like illustration and writing. Having artwork and writing be expensive was not a bug but a feature. In fact, one could argue that they had already become too cheap, in no small part because of IT; making them cheaper was not really a problem that needed solving. But since profit can be made from it, it will happen, because the ones profiting will never have to pay for the externalities.
Now, you may find AI useful, but don't kid yourself about the ethics.