The difference, it seems, is that I’ve been looking at these tools and thinking about how I can use them in creative ways to accomplish a goal - not just treating them like a magic button that solves all problems without fine-tuning.
To give you a few examples:
- There is something called the Picture Superiority Effect, which states that humans remember images better than words alone. I have been interested in applying this to language learning – imagine a unique image for each word you’re learning in German, for example. A few years ago I was about to hire an illustrator to make these images for me, but now with Midjourney or other image generators I can make effectively unlimited unique images for $30 a month. This is a massive development that simply wasn’t possible before.
- I have been working on a list of AI tools that are useful for “thinking” about or analyzing a piece of writing. Things like: analyze the assumptions in this piece; find related concepts with genealogical links; check whether this idea is original; rephrase this argument as a series of Socratic dialogues. And so on. This kind of thing has been immensely helpful in evaluating my own personal essays and ideas, and again, prior to AI tools it was not really possible unless I hired someone to critique my work.
The key for both of these example use cases is that I have absolutely no expectation of perfection. I don’t expect the AI images or text to be free of errors. The point is to use them as messy, creative tools that open up possibilities and unconsidered angles, not to do all the work for you.
I don't think that's an accurate summary of this article. Are you basing that just on the title, or do you fundamentally disagree with the author here?
> We call something a gimmick, the literary scholar Sianne Ngai points out, when it seems to be simultaneously working too hard and not hard enough. It appears both to save labor and to inflate it, like a fanciful Rube Goldberg device that allows you to sharpen a pencil merely by raising the sash on a window, which only initiates a chain of causation involving strings, pulleys, weights, levers, fire, flora, and fauna, including an opossum. The apparatus of a large language model really is remarkable. It takes in billions of pages of writing and figures out the configuration of words that will delight me just enough to feed it another prompt. There’s nothing else like it.
In my own experience, that is absolute nonsense; I have gotten immense amounts of value from these tools. The critical arguments (like the linked article) almost always come from people who use them as basic chatbots, without any deeper understanding or exploration of the tools.
Please consider that there are some very clever people out there. I can respond to your point about languages personally - I speak three, and have lived and operated for extended periods in two others, which I wouldn't call myself "fluent" in since it's been a number of years. I would not use an LLM to generate images for each word, as I already have methods I like that work for me, and I would consider that a wasteful use of resources. I am into permacomputing, minimising resources, etc.
When I see you put the idea forward, I think: oh, neat, but surely it'd be much more effective if you did a 30-second sketch for each word and improved your drawing as you went.
In summary - do read the article, it's very good! You're responding to an imagined argument based on a headline, ignoring a nuanced and serious argument, by saying: "yeah, but I use it well, so?! It's not a gimmick then, for me!"
30-second sketches are also not nearly as effective as detailed images and would likely have dubious value for implementing the Picture Superiority Effect.
Nowhere did I say that people who write essays about AI being useless are idiots. That's your terminology, not mine. Merely that they lack imagination and creativity when it comes to exploring the potential of a new tool and instead just make weak criticisms.
1. In a couple of contexts, as a non-expert, I'm getting excellent use out of these LLM tools, because I'm imaginative and creative in my use of them.
2. I get such great use out of them, as a non-expert, in these areas, that any expert claiming they are gimmicks, is simply wrong. They just need to get more imaginative and creative, like me.
Am I misunderstanding you here? Is this really what you're saying?
The holes in the thinking seem obvious, if I may be blunt. I would suggest you ask an LLM to help you analyse it, but I think they're quite bad at that, as they are programmed to reflect your biases back at you in a positive way. That is probably their largest epistemic issue: the tendency to placate the user can only be overcome if the user has great knowledge of their own biases, a problem even the best experts face!
This isn’t that complicated. Someone wrote an article saying X is a gimmick and made a weak argument. I said no, in my experience that isn’t the case, and here are a few examples.
Your patronizing tone is pretty irritating and distracts from whatever point you’re trying to make. But I’m not sure you’re actually engaging in good faith here, so I think that’s the end of this conversation.