AI is great. ChatGPT is incredible. But I feel tired when I see so many new products being built that incorporate AI in some way, like "AI for this..." "AI for that..." I think it misapplies AI. But more than that, it's just too much. Right? Right? Anyone else feel like this? Everything is about ChatGPT, AI, prompts or startups we can build with that. It's like the crypto craze all over again, and I'm a little in dread of the shysters again, the waste, the opportunity cost of folks pursuing this like a mad crowd rather than being a little more thoughtful about where to go next. Not a great look for the "scene" methinks. Am I alone in this view?
Sorry, I couldn't help it; that is the ChatGPT response to your question. More informatively, AI is clearly at the height of inflated expectations. It will be a helpful tool, but it will not push people out of jobs. Furthermore, right now it gives a much better search experience than Google, as it is not yet filled with ads, nor has it been gamed extensively by SEO. It is doubtful it will stay this way in the future.
In other words, if you’re fatigued already, I have some bad news regarding the rest of your life.
It will be increasingly tiresome until it becomes commonplace, then the disastrous consequences will become the next tedium.
When I started with the topic I watched a documentary with Joseph Weizenbaum ([1]) and felt weirded out that someone would step away from such an interesting and future-shaping topic. But the older I get, the more I feel that technology is not the solution to everything and AI might actually make more problems than it solves. I still think Bostrom's paperclip maximizer ([2]) lacks a fundamental understanding of the status quo and just generates unnecessary commotion.
[1] http://www.plugandpray-film.de/en/ [2] https://www.lesswrong.com/tag/paperclip-maximizer
I think we're now past that and people can see that tools like ChatGPT are powerful enough to be applied in many pre-existing contexts and industries in unpredictable and inventive ways without huge amounts of manual configuration, which makes it more exciting.
(…or take a good step back from the news cycle, check in once or twice a week instead of several times daily. News consumption reduction is good for mental health.)
At least all the previous crazes didn't threaten to replace humans, so I suppose this tech hype bubble is arguably even more irritating.
In the meantime, all the attention and media is easing people into thinking about some difficult questions that we may end up having to deal with sooner than we'd like.
The hype can be annoying, and I'm sure there'll be suckers who lose a lot of money chasing it, but I'm also sure AI will get better, and be better understood too, as a result of all of the attention and attempts to shoehorn it into new roles and environments.
It's not AI, it's an IF statement, for crying out loud :-(
But this is the industry we're in, and buzzword-driven headlines and investment are how it goes.
Actual proper AI getting some attention makes a pleasant change tbh :-)
Just like with most of these hype cycles there is an actual useful interesting technology, but the hype beasts take it way overboard and present it as if it's the holy grail or whatever. It's not.
That's tiring, and really annoying.
It's incredibly cool technology, it is great at certain use cases, but those use cases are somewhat limited. In the case of GPT-3 it's good at generative writing, summarization, information search and extraction, and similar things.
It also has plenty of issues and limitations. Let's just be realistic about it, apply it where it works, and let everything else be. Now it's becoming a joke.
Also, a lot of products I've seen in the space are really really bad and I'm kinda worried AI will get a scam/shitty product connotation.
It seems too exciting to me and I am eager to see more AI. It's fascinating stuff.
The sad truth is that ChatGPT is about as good an AI as ELIZA was in 1966, it's just better (granted: much better) at hiding its total lack of actual human understanding. It's nothing more than an expensive parlor trick, IMHO.
Github CoPilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...) while writing new code. I'm beyond thrilled ...
So, no, I don't have an AI fatigue, because we absolutely have no AI anywhere. But I have a massive bullshit and hype fatigue that is getting worse all the time.
Language models are right now at the very top of the peak of inflated expectations. It's still too early to tell what the real impact will be, but it won't be even remotely close to what you read on the headlines.
Far more impressive technology (like Wolfram Alpha) has existed for almost a decade now, and it's directly comparable to language models for many applications.
My guess is they will end up being something like Rust. Very cool to look at, little impact on your day-to-day.
It seems more likely that we'll surpass the hype than not in the next few decades. I think people have forgotten how quickly technology can move after the last 20 years of relative stability where more powerful hardware didn't really change what a computer can do.
Note that nobody is pretending that ChatGPT is "true" intelligence (whatever that means), but I believe the excitement comes from seeing something that could have real application (and so, yes, everybody is going to pretend to have incorporated "AI" in their product for the next 2 years probably). After 50 years of unfulfilled hopes from the AI field, I don't think it's totally unfair to see a bit of (over)hype.
I've got "AI Fatigue" not in the sense that it is overhyped, but just like "JS Fatigue": It is all very exciting, and new genuinely useful and impressive things are coming up all the time, but it's too much to deal with. I feel like it's difficult to start a product based on AI these days due to the feeling that it will become obsolete next week when something 10x better will come out.
Just like with JS Fatigue back in the day, the reasonable solution for me is something like "Let the dust settle a bit before jumping into the latest cool thing"
And you're not alone, I feel the same since ~2015
While crypto or VR tech still hasn't arrived in our daily lives, most of my friends are already using tools like ChatGPT on a regular basis.
If writing boilerplate becomes effortless, then you'll write more of it, instead of feeling the pain of writing it and then trying to reduce it, because you don't want to spend time writing it.
And since Copilot was accepted as a way to help the developers on the teams, the increase in boilerplate has been immense.
I'm borderline pissed, but mostly at our own development processes, not at Copilot per se. But damn if I didn't wish it existed somehow, although it was inevitable it would at one point.
What I have seen about it ranged from things that can be nearly just as well handled by your $EDITOR's snippet functionality to things where my argument kicked in - I have to verify this generated code does what I want, ergo I have to read and understand something not written by me. Paired with the at least somewhat legally and ethically questionable source of the training data, this is not for me.
However, in this case, it does seem that there is a level of fraudulence and deception. Given that “fake” often is used exactly the way you say, maybe “fake intelligence” would indeed be a more appropriate term.
The thing is that AI is just about the most general term for the type of computing that gives the illusion of intelligence. Machine learning is a more specific region of the space of AI, and generally is made of statistical models that lead to algorithms that can train and modify their behavior based on data. But this includes "mundane" algorithms like k-means clustering or line-fitting. Deep learning (aka neural networks) is yet a more specific subfield of ML.
I think the term AI just has more "sex appeal" because people confuse it with the concept of AGI, which is the holy grail of machine intelligence. But we don't even know if this is achievable, or what technology it will use.
So in terms of conceptual spaces, we can say that AI > ML > DL, and we can say (by definition) that AI > AGI. And it seems very likely that AGI > ML. But it's not known, for instance, whether AGI > DL, ie, we don't know for sure that deep learning/neural networks are sufficient to obtain AGI.
In any case, people should put less weight on the term AI, as it's a pretty low bar. But also yes, the term is way overhyped.
They are not addressing the public or swaying opinion
I'd rather we have bitcoin crazes, scaling crazes, nosql crazes and GPT crazes than this industry commoditizes itself to hell and I have to spend the rest of my career gluing AWS cognito to AWS lambdas for $55k / year.
At the same time I'm pretty sure that it will wildly change any industry where creativity is critically important and quality control either isn't that important or can be done by amateurs. There is substance at the core of the hype.
ChatGPT is the "new" booster shot, it's a hell of a boost and this one might stick. What will not stick is the copious amount of wishful thinking and bullshit the usual suspects are bringing in. ChatGPT is a godsend after crypto went bust and the locusts had to go somewhere else.
I suspect we will have to endure a crypto-craze-like environment for a couple of years at least.
This is all speculative, of course, but I have seen the fall of the Soviet system, and I am well aware that forms of government are not eternal.
tl;dr but yes. Crypto of the future will look more or less similar to the crypto of today. Governments of the future will look nothing like today’s nation-states.
At the same time people's actual quality of life or economic standing is going nowhere, there is fragility that bursts in the open with every stress, politics has become toxic and the environment gets degraded irreversibly.
Yet people simply refuse to see and they keep chasing unicorns.
BUT the rate of change in AI is enormous and it will be a much bigger deal than the internet over the next 10 years. Not because of API wrappers, but because the cost of many types of labor will effectively go to zero.
It's the same kind of people that were hyping cryptocurrencies in the past. People who understand nothing about the technology, but shout the loudest about how amazing it is (probably to make money off of it). Those are also the kind of people that will be the cause of the next AI winter.
I wish I could derive as much utility as everyone else that's praising it. I mean, it's great fun but it doesn't wow me in the slightest when it comes to augmenting anything beyond my pleasure.
[1] Synonyms of artificial has "faked" : https://www.thesaurus.com/browse/artificial
[2] Synonyms of fake has "artificial": https://www.thesaurus.com/browse/fake
Just wait for it to underdeliver. Investors will get scared and we will be back to calling it machine learning.
And this happens in the artistic world as well with the other branch of NN : "mood boards" can now be generated from prompts infinitely.
I don't understand how some engineers still fail to see that a threshold was passed.
But defining "intelligence" is a philosophical question that doesn't necessarily have one answer for everything and everyone.
made up bullshit
> summarization
except you can't possibly know the output has any relation whatsoever to the text being summarized
> information search and extraction
except you can't possibly know the output has any relation whatsoever to the information being extracted
people still fall for this crap?
AGI could be ML-driven; most likely it is not. Neural nets are still AI tech. Even Bayesian inference is weakly AI tech.
The public always misuses words. Words change to match that meaning.
As folks that work in tech we can tell the difference between stuff that's got some form of depth to it in "proper" AI: ML, DL, AGI as you suggest, vs the over-hyped basic computation stuff. And the selling of the latter as the former can rankle.
I think there's an argument to be made that AI is being used here to help you tackle the more trivial tasks so you have more time to focus on the more important and challenging tasks. Albeit I recognise GitHub CoPilot is legally questionable.
But yes, I agree with your overall point that AI has still not been able to 'think' like a human but rather can only still pretend to think like a human, and history has shown that users are often fooled by this.
Hype or not, it's incredibly useful and has increased my productivity by at least 20%. Worth every penny.
Lack of actual human understanding? Of course, by definition a machine will always lack human understanding. Why does that matter so much if it's a helpful tool?
For what it's worth, I do agree that there is a lot of hype. But contrary to blockchain, NFTs, web3, etc., this is actually useful for many people in many everyday use cases.
I see it as more similar to the dot com hype - buying a domain and creating a silly generic website didn't really multiply the value of your company as some people thought in that era, but that doesn't mean that websites weren't a useful technology with staying power, as time has shown.
But yes indeed, there are many, many AI products launched during this era of rapid progress. Even kind of shoddy products can be monetized if they provide value over what we had before. I think the crowded market and all the bullshit and all the awesome, all at once, is a sign of very rapid progress in this space. It will probably not always be like this and who knows what we are approaching.
Moreover, its first opinion on the things I'm good at has been a special kind of awful. It generates sentences that are true on their face but, as a complete idea, are outright wrong. I mean, you're effectively gaslighting yourself by learning these half truths. And as someone with unfortunate lengthy experience in being gaslit as a kid, I can tell you that depending on how much you learn from it, you could end up needing to spend 3x as much time learning what you originally sought to learn (if you're lucky and the only three things you need to do are learn it very poorly, unlearn it, and relearn it the right way).
It just feels like a waste of time having read the comment. Even if the information is there, I don't trust the user to be able to distinguish between what's true and what's confidently false. If it's not my skill set or knowledge base, I assume it's wrong because I can't tell and can't ask follow-up questions.
Me using it as an assistant? Love it. Others using it as an assistant? I don't trust them to be doing it right.
In any case I want to read your opinion, copy paster, not a robot I could just ask in my own time! Just don't post if you've got no thoughts lol
AI has gone through a lot of stages of “only X can be done by a human”-> “X is done by AI” -> “oh, that’s just some engineering, that’s not really human” or “no longer in the category of mystical things we can’t explain that a human can do”.
LLM is just the latest iteration of, “wow it can do this amazing human only thing X (write a paper indistinguishable from a human)” -> “doh, it’s just some engineering (it’s just a fancy auto complete)”.
Just because AI is a bunch of linear algebra and statistics does not mean the brain isn’t doing something similar. You don’t like terminology, but how is reinforcement “learning” not exactly the same as reading books to a toddler and pointing at a picture and having them repeat what it is?
Start digging into the human with the same engineering view, and suddenly it also just becomes a bunch of parts. Where is the human in the human once all the human parts are explained the way an engineer would explain them? What would be left? The human is computation also, unless you believe in souls or otherworldly mysticism. So why not think that AI, as computation, can eventually be equal to a human?
The fact that Github CoPilot can write bad code isn't a knock on AI; it's real, and a lot of humans write bad code too.
Well, they were right...
Has it really? Or are you worried that this is something that will happen?
Of course I don't know how other people use it but I find that it's very much like having a fairly skilled pair programmer on board. I still need to do a lot of work but I get genuine help. I don't find that I personally write more boilerplate code than before, every programming principle applies as it always has.
ad.: Code review takes less time than writing code for the same reason reading a book takes less time than writing one. Distillation and organization of ideas requires expertise gained through experience and long thought. Reading a book requires reading ability.
Understanding a book (and the intricacies underlying it) takes effort on the order of the original writing, but most people don't seek that level of understanding. The same is true of code.
ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or, to use the loaded term, a bullshitter.
When provided with an analytic prompt, it is reliably a translator.
Terms, etc: https://www.williamcotton.com/articles/chatgpt-and-the-analy...
In terms of closing the gap between AI hype and useful general purpose AI tools, no one can reasonably deny that it's an absolute quantum leap.
It's just not a daily driver for technical experts yet.
> The business plans of the next 10,000 startups are easy to forecast: Take X and add AI.
I think the AI hype cycle isn't done building. A few days ago, Paul Graham tweeted[2] this:
> One of the differences between the AI boom and previous tech booms is that AI is technically more difficult. That combined with VC funds' shift toward earlier stage investing with less analysis will mean that, for a while, money will be thrown at any AI startup.
[1]: https://twitter.com/kevin2kelly/status/718166465216512001
bring me npmGPT
Why not get some of the freed-up, Copilot-augmented developer labor budget moved to testing and do more there, or build more tools to make your personal, boilerplate, repetitive tasks more efficient?
If the coders are truly just dumping bad code your way, that's an externality and the cost should be called out.
Try to focus on the bright side - now that you've seen behind the curtain, you can more easily avoid the hacks and shysters. They will try to cast the "ML/AI" spell on you and it won't take.
I think a big part of my success with it is that I'm used to providing good specifications for tasks. This is, apparently, non-trivial for people to the point where it drives the existence of many middle-management or high-level engineering roles whose primary job is translating between business people / clients / and the technical staff.
I thought of a basic chess position with a mate in 1 and described it to chatGPT, and it correctly found the mate. I don't expect much in chess skill from it, but by god it has learned a LOT about chess for an AI that was never explicitly trained in chess itself with positions as input and moves as output.
I asked it to write a brief summary of the area, climate, geology, and geography of a location I'm doing a project in for an engineering report. These are trivial, but fairly tedious to write, and new interns are very marginal at this task without a template to go off of. I have to look up at least 2 or 3 different maps, annual rainfall averages over the last 30 years, general effects of the geography on the climate, average & range of elevations, names of all the jurisdictions & other things, population estimates, zoning and land-use stats, etc, etc. And it instantly produced 3 or 4 paragraphs with well-worded and correct descriptions. I had already done this task and it was eerily similar to what I'd already written a few months earlier. The downside is, it can't (or rather won't) give me a confidence value for each figure or phrase it produces. ...So given it's prone to hallucinations, I'd presumably still have to go pull all the same information anyway to double check. But nevertheless, I was pretty impressed. It's also frankly probably better than I am at bringing in all that information and figuring out how to phrase it all. (And certainly MUCH more time efficient)
I think it's evident that the intelligence of these systems is indeed evolving very rapidly. The difference between GPT-2 and GPT-3 is substantial. With the current level of interest and investment I think we're going to see continued rapid development here for at least the near future.
With that, all the hype-sters and shady folks rush in and it can quickly become hard to differentiate between what’s good, what’s misplaced enthusiasm, and what’s just a scam.
These scenarios are also a big case study in the Dunning-Kruger effect. I’ve already got folks that haven’t written a line of code in their life trying to “explain” to me why I’m not understanding why some random junk AI thing that’s not really AI is the next big thing. Sometimes you gotta just sit there and be like “thanks for your perspective”.
Often I have times where I'm thinking about a specific piece of code that I need and I have it partially in my head and github copilot "just completes" it. I press tab and that's it.
I'm not talking about writing entire functions where you have to mentally strain yourself to understand what it wrote.
But I've never seen any autocompleter do it as well as GitHub Copilot. Even for documentation purposes like JSDoc and related commenting systems it's amazing.
It's a tool I pay for now since it's proven to be a tool that increases my productivity.
Is it gonna replace us? I hope not, but it does look promising as one of those tools people will talk about in the future.
sounds like most people tbf
Cloud for this cloud for that! Blockchain for this blockchain for that! Big Data for this, big data for that! Web scale all the things!
The marketing-driven development is exhausting and has done nothing to improve technology. This happened because of 0% interest rates and free money. People have been vying for all the VC money by creating solutions looking for problems, which end up being useless solutions for which no problems exist.
If you have a task or are trying to accomplish something, and the way you do it is by moving a mouse around or typing on a keyboard then it is very likely that an AI will be able to do that task. Doing so is a more or less straightforward extension of existing techniques in AI. All that is necessary is to record you performing the task and then an AI will be able to imitate your behavior. GPT3 can already do this for text, and doing it instead with trajectories of screen, mouse and keyboard is not fundamentally different.
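To make the "record, then imitate" idea concrete, here is a deliberately toy sketch (my own illustration with made-up screen features and actions, not how GPT-3 or any shipping product actually does it): log each step of a task as a state plus the action taken, then fit a model that predicts the action from the state.

    # Toy behavior-cloning sketch: learn "which action follows which screen state"
    # from recorded demonstrations. Features and actions are invented for illustration.
    from sklearn.neural_network import MLPClassifier

    # Each row: simplified "screen state" features logged while a person did the task,
    # e.g. [dialog_open, cursor_in_text_field, unread_items].
    states = [
        [1, 0, 3],
        [0, 1, 3],
        [0, 1, 0],
        [1, 0, 1],
    ]
    # The action the person took in each state: 0 = click OK, 1 = type, 2 = scroll.
    actions = [0, 1, 2, 0]

    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(states, actions)

    # Later, given a new screen state, the model proposes the next action to imitate.
    print(model.predict([[1, 0, 2]]))

Scaling that idea up to raw screen pixels and full mouse/keyboard trajectories is the hard part, but the training loop has the same shape.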
So yes, it is true that there is a lot of hype right now, but I suspect it is a small fraction of what we will see in the near future. I also expect there will be an enormous backlash at some point.
There are so few permutations in tic-tac-toe that its lack of memory and lack of ability to understand extremely simple rules make it difficult for me to have confidence in anything it says. I mean, I barely had confidence left before I ran that "experiment" but that was the final nail in the coffin for me.
We're about six minutes away from "AI bros" becoming a thing.
The same kind of grifters who always latch onto the latest thing and hype it up in order to make a quick buck are already knocking on AI's door.
See also: Cryptocurrency, and Beanie Babies.
ChatGPT isn't as good as a human who puts in a lot of effort, but in many jobs it can easily outperform humans who don't care very much.
ofc HN over-analysing is killing the fun
I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.
Copilot is amazing for reducing the tedium of typing obvious but lengthy code (and strings!). And it’s inline and passive; it’s not like you go edit -> insert -> copilot function and it dumps in 100 lines of code you have to debug. Which is what it sounds like parent is mistaking it for.
I’m reminded of 1995, when an elderly relative told me everything wrong with the internet based on TV news and not having ever actually seen the internet.
If you ask it to go through and comment code, it does a pretty good job of that.
Some things it does better than others (it's not that great at CSS).
Need a basic definition of something? Got it.
Tell it to write a function and it's not bad.
As a BA, just tell it what you're trying to do and what questions it should ask users. It will get you some good ideas.
Want it to be a PM? Have it create a loop asking every 10 minutes if you're done yet.
Is it a senior engineer? No. Can it pass a senior engineering interview? Quite possibly.
Debugging code is hit or miss.
I think the big thing is that it's not that great at front-end code. It can't see, so that probably makes sense. A fine-tuned version of CLIP that interacted with a browser would probably be pretty scary.
You don't need AI to move a mouse around or type on a keyboard. A simple automation is enough.
The value is not in moving a mouse or typing on a keyboard. The value is in knowing when and where to move the mouse and when and what to write on the keyboard.
We are now exposed to companies hyping huge general purpose models with whatever tech is the latest fad, which resonates with the average person who wants to generate memes, etc.
This is impressive only at the surface level. Take a specific application: prompting it to write you an algorithm, outside of any copying-and-pasting from a textbook these models will generate bad/incorrect code and then explain why it works.
It's like having an incompetent junior on your team who has the bravado of a senior 10x-er.
That's not to say "AI" doesn't have a purpose, but currently it seems just hyped up by sales people looking for Series-A funding or an IPO cash-out. I want to see the models developed for specific tasks that will have a big impact, rather than the sleight-of-hand or circus tricks we currently get.
Maybe that time has passed, and general models are the future and we will just have to wait until they're as good as any specific model that was built for any task you can ask of it.
It will be interesting what happens when these "general" models are used without much thought and their unchecked results lead to harm. Will we still find companies culpable?
I agree.
And the worst thing is that the bullshit hype comes round every decade or so, and people run around like headless chickens insisting that "this time its different", and "this time its the REAL THING".
As you say, first(ish) there was ELIZA. Then this, that, and everything else. Then Autonomy and all that dot-com era jazz. Now with compute becoming more powerful and more compact, any man and his dog can stuff some AI bullshit where it doesn't belong.
I have seen comments below on this thread where people talk about "well, it's closing the gap". The thing you have to understand is that the gap will always exist. Ultimately you will always be asking a computer to do something. And computers are dumb. They are and will always be beholden to the humans that program them and the information that you feed them. The human will always have the upper hand at any tasks that require actual intelligence (i.e. thoughtful reasoning, adapting to rapidly changing events etc.).
I'm excited for these emerging technologies, but I don't care about any of the products people want to sell based on them. I've spent the past 27 years developing zero-effort self-filtering against spam and hucksters, so I'm not even aware of any AI startups, just as I can't tell you the names of any Bitcoin exchanges. That's just not in my sphere, and I'm not missing out.
Hunker down and have fun. It's incredibly accessible, and you likely have more than you need to get started making it work for you.
That uncritical handling, along with a growing supply of offerings, can lead to the next big bullshit bubble.
Best you can hope if you're a "Y" person is for the marketers to get bored of the current Y and jump to the next one, leaving yours alone.
We've seen this pattern many times. And there is money to be made, for sure, but the value might not be there yet.
I just use the features on the iPhone where some photos get enhanced or I can detect and copy text from images.
So far it’s going very well.
But I can’t fully get on board with this:
> but how is reinforcement “learning” not exactly the same as reading books to a toddler and pointing at a picture and having them repeat what it is? Start digging into the human with the same engineering view, and suddenly it also just becomes a bunch of parts. Where is the human in the human once all the human parts are explained the way an engineer would explain them?
The parent teaching a toddler bears some vague resemblance to machine learning, but the underlying results of that learning (and the process of learning itself) could not be any more different.
More problematic than this, while you may be correct that we will eventually be able to explain human biology with the precision of an engineer, these recent AI advances have not made meaningful progress towards that goal, and such an achievement is arguably many decades away.
It seems you are concluding that because we might eventually explain human biology, we can draw conclusions now about AI as if such an explanation had already happened.
This seems deeply problematic.
AI is “real” in the sense that we are making good progress on advancing the capabilities of AI software. This does not imply we’ve meaningfully closed the gap with human intelligence.
I don't think there is any because there is no functional model for what organic intelligence is or how it operates. There is a plethora of fascinating attempts / models, but only a subset imply that it is solely "statistical". And even if it were statistical, the implementation of the wet system is absolutely not like a gigantic list of vectorized (stripped of their essence) tokens.
I asked some general questions to ChatGPT, and it gave me pretty coherent answers. But when I asked a really specific question like "How to rewrite Linux kernel in Lisp", it gave me a seemingly gibberish answer.
This was about 2 months ago, BTW. Maybe ChatGPT has already learned more stuff and is smarter. Let's see...
It looks much less likely that the cost of developing and training an AI system will come down any time soon, which keeps it out of reach for most individuals.
When the PC revolution was happening, everyone interested had a good chance of getting in, they just needed some money to buy/rent a computer and learn to use it or program it.
Compared to that, the AI revolution doesn't seem to have the same quality.
The barrier to entry seems much much higher this time.
Time will tell, I certainly can’t predict.
Think about it.
What's the most expressive medium we have which is also absolutely inundated with data?
To broadly be able to predict human speech you need to broadly be able to predict the human mind. To broadly predict a human mind requires you build a model of it, and to have a model of a human mind? Welcome to general intelligence.
We won't realize we've created an AGI until someone makes a text model, starts throwing random problems at it, and discovers that it's able to solve them.
Kind of, it isn't foolproof. I use GPT-3 and ChatGPT (not the same thing) almost daily, and there is quite a bit of error correction that I am doing. Still, it is really helpful.
See Scott Alexander for attempts to explain what are apparently impenetrable papers on it.
I'm actually optimistic about both crypto and AI, but I see the author's point. I really don't think the comparison is hard to spot between the AI hype and, say, the NFT hype from a year ago.
A lot of people are claiming that these technologies will imminently change everything, fundamentally. In reality, both of them are just neat things that give us a glimpse of what the future may hold, and hold a bunch of promise, but aren't really changing anything fundamentally. Not yet, at least.
AI is wide and deep, and its proper uses are so so far removed from mainstream media and the hype-train.
AI still has so many undiscovered areas of usefulness that it will do nothing short of transforming those areas.
But you hear most of the times about Stable Diffusion, see melted faces and weird fingers, and screenshots of ChatGPT.
These, in terms of breadth and depth, are nothing compared to what is possible.
So, no, I am not AI fatigued as I don't pay much attention to these hypes at all.
It's important to note that this is your assumption which I believe to be wrong (for most people here).
LLMs don't have a mechanism to learn from interaction; their models are simply fed more data, and with luck you'll get better results, but you might just as well get worse results if said data isn't well curated.
Co-pilot has been semi-useful. It's faster than searching SO, but like you said, I still have to review all the code and it's often wrong in subtle ways.
Before this point in history we accepted 'I am that I am' because there wasn't any challenger to the title. Now that we are putting this to question we realize our definitions may not work well.
The thing about using them for "creative" work is that they can only parrot things back in the statistically average way, or maybe attempt to echo an existing style.
Copilot cannot use something because it prefers it, or thinks it's better than what's common. It can only repeat what is currently popular (and this will likely be self-reinforced over time).
When you write prose or code you develop preferences and opinions. "Everyone does it this way, but I think X is important."
You can take your learning and create a new language or framework based on your experiences and opinions working in another.
You develop your own writing style.
LLMs cut out this chance to develop.
---
Images, prose, (maybe) code are not the result of computation.
If two different people compute the same thing, they get the same answer. When I ask different people to write the same thing, I get wildly different answers.
Sure ChatGPT may give different answers, but they will always be in the ChatGPT style (or parroting the style of an existing someone).
"ChatGPT will get started and I'll edit my voice into what it generated" is not how writing works.
It's difficult for me to see how a world where people are communicating back and forth in the most statistically likely manner is good.
One simple example that I've had to reject more than once.
- Function 1 does something
- Developer needs something like Function 1 but minor change
- Developer starts typing name of function which has a similar name to Function 1, but again, minor difference
- Copilot helpfully suggests copy-pasting Function 1 but with the small change incorporated
- Developer accepts it, commits and sends the patch my way
Rather than extracting the common behavior into its own function and calling that from both of them, a refactor which Copilot doesn't suggest, the developer is fine with just copy-pasting the function.
Now we have to maintain two full slightly different functions, rather than 1 full function + 2 minor ones.
Obviously a small example, and it wouldn't be worth extracting it the first time it happens or on a smaller scale. But once you have entire teams doing something like this, it becomes a bit harder to justify the copy-paste approach, especially when you want the codebase not to evolve into complete spaghetti.
And finally, I'm not blaming the tool, it's not Copilots fault. But it does seem to have made developers who rely on it think less, compared to the ones that don't.
Where will the AI hype train go? The internet as we know it already has so much SEO engineered content and content producers chasing that sweet, sweet advertising money that they could all be replaced by mediocre, half-true, outdated content created by bots. So do we have to wait until our refrigerators are "AI powered, predicts your groceries for you!" in order to see the usefulness?
If you mean in the next year or two, I hate to disappoint you, but barring some massive leap forward, you are going to be wrong.
If you mean in the next hundred years, or maybe sometime in our lifetimes, sure. The chances it looks anything like chatGPT or GPT3 now though is laughable.
This isn't the future. This is a small glimpse into a potential future down the line, but everyone is talking like developers/designers/creatives/humans are already obsolete.
We don't really understand what intelligence means -- in humans or our creations -- but ChatGPT gives us a little more insight (just like ELIZA, and the psychological research behind it, did).
At the very least, ChatGPT helps us build increasingly better Turing tests.
> These arguments take the form, "I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X."
> [...]
> The criticisms that we are considering here are often disguised forms of the argument from consciousness. Usually if one maintains that a machine can do one of these things, and describes the kind of method that the machine could use, one will not make much of an impression.
Every time "learning machines" are able to do a new thing, there's a "wait, it is just mechanical, _real_ intelligence is the goalpost".
[0] https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...
None of this is new; there's a special magic phrase to attract VCs that changes every few years, and for now AI is it (we've actually been here before; there was a year or so a while back when everything was an "intelligent agent"/chatbot).
Imagine how the HN users who disagree with that feel. It is beyond fatiguing. I’m frequently reminded of the companies who added “blockchain” to their name and saw massive jumps in their stock price, despite having nothing to do with blockchains¹.
¹ https://www.theverge.com/2017/12/21/16805598/companies-block...
If you explained the rules carefully and asked it to respond in paragraphs rather than a grid, it might be able to do it. Can't test since it's down now.
For _what purpose_, tho? It's a good party trick, but its tendency to be confidently wrong makes using it for anything important a bit fraught.
Given we are not talking about state changes in electrons, there is nothing wrong with this description of ChatGPT - it truly does feel like a massive advance to anyone who has even cursorily played with it.
For example, you can ask it questions like "Who was born first, Margaret Thatcher or George Bush?" and "Who was born first, Tony Blair or George Bush?" and in each instance it infers which George Bush you are talking about.
I honestly couldn't imagine something like this being this good only three years ago.
Which it occasionally mistypes. Then you're off to chase a small error in a tub of boilerplate. Great stuff! For an actual example, see [0]
[0] https://blog.ploeh.dk/2022/12/05/github-copilot-preliminary-...
I think it's safe to say your experience is an outlier, just like theirs are.
I'm happy it's working for you, but if you really do use it every day, you surely can understand the points where it doesn't live up to the hype -- or at the very least, how it is not for everyone.
It really isn't. The business use cases even with current tech are pretty obvious. The problem with crypto/blockchain stuff was that it was useless. An emperor with no clothes.
Is there a more legitimate argument for why they're similar other than "hype" or am I missing something?
But it seems like the current trendline for “AI” is going to be worse. Why be excited about building tools that will undermine democracy and cast doubt on the authenticity of every single photo, video, and audio clip? Now it can be done cheaply, by anyone. It will become good enough that we cannot believe any form of media. And also make it impossible to determine if the written word is coming from an actual person. This is going to be weaponized against us.
And at the very least, if you think blogspam sucks now, wait until this becomes 99.9999% of all indexed content. It’s going to jam all of our comms with noise.
But hey it looks great on your resume, right?
Maybe I’m too cynical, would love for someone to change my mind. But you are not alone in your unease.
When asked for references it cannot refer to any. Scientifically useless?
Until AI can filter out fact from fiction, it will continue to frustrate the technical people who rely on absolute truths to keep important systems running smoothly.
That's about the only purpose I've found so far, but it seems a big one?
I'm sorry, what sort of bullshit argument is that ?
Flight and engines are both natural evolution using natural physics and mechanics.
Artificial Intelligence is nothing but a square-peg-round-hole, when you have a sledgehammer everything looks like a nut scenario.
I'll also throw random programming questions into it, and it's been hit and miss. SO is probably still faster, and I like seeing the discussion. The problem with ChatGPT right now is that it gives an answer as if it's certain when it's often wrong.
I can see the benefits of this interaction model (basically summarizing all the things from a search into what feels like a person talking back), but I don't see change the world level hype at the moment.
I also wonder if LLMs will get worse over time through propagation error as content is generated by other LLMs.
People love the optimism and the paranoia and uncertainty.
Intelligence may be a fuzzily defined word in everyday usage, but I don't think it's the mystery you present it to be. Joe public may argue against any and all definitions of the word that they personally disagree with (maybe just dislike), but it's nonetheless quite easy to come up with a straightforward and reductive definition if you actually want to!
It will turn out to be a useful tool for those who know what they’re asking about so they can check the answer quickly; but it will be USED by tons of people who don’t have a way of verifying the answers given.
I can't really imagine asking it a question about anything I cared about and not verifying via a second source, though, given its accuracy issues. This makes it feel a lot less useful.
I think it is incredibly sad that a person can be reduced to believing humans don't have souls. Do something different with yourself so you can discover the miracle of life. If you don't believe there is anything more to people and to the world than mechanical processes, I would challenge you to do a powerful spiritual activity.
    start_value = get_[start_value(user_input)]
    self.log.d[ebug(f'got start_value {start_value}')]

...where the bracketed text is what Copilot would likely suggest for completion. And if it's wrong, you just... keep typing. It's autocomplete, just like IDEs have for other things. I'm kind of astounded that people have such an emotional reaction to an optional, low-key, passive, easily-ignored tool that sometimes saves a bunch of typing. Yes, if you always accept the suggestions you'll have problems. Just like literally every other coding assistance tool.
This. To answer the OPs question, this is what I'm fatigued about.
I'm glad we're making progress. It's a hell of a parlor trick. But the hype around it is astounding considering how often its answers are completely wrong. People think computers are magic boxes, and so we must be just a few lever pulls away from making it correct all the time.
Or maybe my problem is that I've overestimated the average human's intelligence. If you can't tell ChatGPT apart from a good con-man, can we consider the Turing test passed? It's likely time for a redefinition of the Turing test.
Instead of AI making machines smarter, it seems that computers are making humans dumber. Perhaps the AI revolution is about dropping the level of average human intelligence to match the level of a computer. A mental race to the bottom?
I'm reminded of the old Rod Serling quote: We're developing a new citizenry. One that will be very selective about cereals and automobiles, but won't be able to think.
If GPT-3 was listed on Huggingface, its main category listing would be a completion model. Those models tend to be good at generative NLP tasks like creating a Shakespeare sonnet about French fries. But they tend not to be as good at similarity tasks, used by semantic search engines, as models specifically trained for those tasks.
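For contrast, here is a minimal sketch of a similarity-trained model used the way a semantic search engine would use it (this assumes the sentence-transformers library; the model name is just one common choice, not a recommendation):

    # Sketch: rank documents against a query with a model trained for similarity.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # trained for sentence similarity

    docs = [
        "How to reset a forgotten password",
        "A sonnet about French fries",
        "Steps to recover your account credentials",
    ]
    query = "I can't log in to my account"

    doc_emb = model.encode(docs, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)

    # Cosine similarity should rank the two account-related docs above the sonnet.
    print(util.cos_sim(query_emb, doc_emb))

A completion model can be pressed into this role too, but models trained specifically on similarity objectives tend to give cleaner rankings for far less compute.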
I don't deny that LLMs represent a coming revolution in computer interaction. But as someone who's already mastered the command line, programming, etc., I already know how to use computers. LLMs will actually be slower for me for a huge variety of tasks like finding information. English is so clumsy compared to programming languages.
I feel like for nerds like me "user-friendliness" is often just a hindrance. For me this has been the case with GUIs in general, touch GUIs especially, and probably will be for most LLM applications that don't fundamentally do something I cannot (like Stable Diffusion).
Yes, it's overhyped, but it's not useless, it actually does work quite well if you apply it to the right use cases in a correct way.
In terms of accuracy, in ChatGPT the hallucination issue is quite bad, for GPT3 it's a lot less and you can reduce it even further by good prompt writing, fine tuning, and settings.
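For example (a rough sketch against the pre-1.0 openai Python library; the prompt wording and settings are purely illustrative), the "settings" part mostly means things like temperature, and the "prompt writing" part means constraining the model and giving it permission to say it doesn't know:

    # Sketch: a GPT-3 completion call tuned to reduce (not eliminate) hallucination.
    # Assumes the pre-1.0 openai Python package; everything here is illustrative.
    import openai

    openai.api_key = "sk-..."  # your own key

    prompt = (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, reply exactly: I don't know.\n\n"
        "Context: Our return window is 30 days from delivery.\n\n"
        "Question: Can I return an item after six weeks?\nAnswer:"
    )

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,   # deterministic sampling, less inclined to improvise
        max_tokens=60,
    )
    print(resp["choices"][0]["text"].strip())

None of this guarantees a truthful answer; it just narrows the space in which the model can confidently make things up.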
Can we just recognize it for what it is?
It’s also plain that many people are very interested in looking inside the black box and think the contents of the black box are relevant and important. This fact doesn’t change just by your saying so.
Neglecting that (only because it's harder to navigate whether I should expect it to handle state for an extremely finite space; even if it's in a different representation than it's directly used to), I know I saw a post where it failed at rock, paper, scissors. Just found it:
https://www.reddit.com/r/OpenAI/comments/zjld09/chat_gpt_isn...
Respectfully, that reads as needlessly combative within the context. It sounds like the blockchain proponents who say that the only people who are against cryptocurrencies are the ones who are “bitter for having missed the boat”.¹
It is possible and perfectly reasonable to identify problems in ChatGPT and similar technologies without feeling threatened. Simple example: someone who is retired and monetarily well off, whose way of living and sense of self worth are in no way affected by developments in AI, can still be critical and express valid concerns when these models tell you that it’s safe to boil a baby² or give other confident but absurdly wrong answers to important questions.
¹ I’m not saying that’s your intention, but consider that type of rhetoric may be counterproductive if you’re trying to make another understand your point of view.
² I passed by that specific example on Mastodon but I’m not finding it now.
Language is way, way far removed from intelligence. This is well-known in cognitive psychology. You'll find plenty of examples of stroke victims who are still intelligent but have lost the ability to produce coherent sentences, and (though much rarer) examples of people who can produce clear, eloquent prose, yet are so learning and mentally challenged that they can't even tell the difference between fantasy and reality.
As far as creativity goes, human creativity is also a product of life experiences. Artistic styles are always influenced by others, etc.
People are still trying to figure out what the new AIs can and can’t be used for.
Some people will try to build ridiculous products that don’t work, but that’s just part of the learning process and those things will be weeded out over time.
There’s no ‘clean’ path to finding all the useful applications of these new models, so be prepared to be bombarded with AI powered tools for a few more years until the most useful ones have been figured out.
ChatGPT is good at making up stories.
I think it's fair to say this is not one of the use cases where it shines. It's not great at logic, it's also not that smart.
That's exactly what the hype does. Too big claims and then it gets dismissed when it inevitably doesn't live up to the hype.
My point was that “consciousness” and “intelligence” are very different things. One does not imply the other.
Consciousness is about self reflection. Intelligence is about insight and/or problem solving. The two are often correlated, especially in animals, especially in humans, but they’re not the same thing at all.
“Is chatgpt consciousness” is a totally different question than “is chatgpt intelligent”.
We will know chatgpt is intelligent when it passes our tests of intelligence, which are imperfect but at least directionally correct.
I have no idea if/when we will know whether chatgpt is conscious, because we don’t really have good definitions of consciousness, let alone tests, as you note.
Consciousness is a subjective experience (regardless of what you believe/understand to be responsible for that experience), so discussing "consciousness/intelligence" is rather like discussing "cabbages/automobiles".
Personally I enjoy creating language models and agent networks, at work I make predictive models so.. :)
Even if I didn't find the tech fascinating and especially the new emergent features of the big LMs, I would be left in the dust professionally if I ignored it. The tech really works for a lot of stuff.
- Embedding free text data on safety observations, clustering them together, using text completion to automatically label the clusters, and identifying trends (a rough sketch of this one follows after the list)
- Embedding free text data on equipment failures. Some of our equipment failures have been classified manually by humans into various categories. I use the embeddings to train a model to predict those categories for uncategorized failures.
- Analyzing employee development goals and locating common themes. Then using this to identify where there are gaps we can fill in training offerings.
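Here is roughly what that first item (embed, cluster, auto-label) can look like, assuming the pre-1.0 openai Python library and scikit-learn; the example observations, model names, and cluster count are placeholders, not the actual pipeline:

    # Illustrative only: embed free-text safety observations, cluster them,
    # and ask a completion model to name each cluster.
    import openai
    from sklearn.cluster import KMeans

    openai.api_key = "sk-..."  # your own key

    observations = [
        "Worker handling solvent without gloves",
        "Ladder set up on uneven ground near loading dock",
        "Spill in aisle 3 left uncordoned",
        "No eye protection worn while grinding",
    ]

    # 1. Embed the free text.
    emb = openai.Embedding.create(model="text-embedding-ada-002", input=observations)
    vectors = [item["embedding"] for item in emb["data"]]

    # 2. Cluster the embeddings (cluster count chosen by hand here).
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

    # 3. Use text completion to label each cluster from its members.
    for cluster_id in range(km.n_clusters):
        members = [o for o, c in zip(observations, km.labels_) if c == cluster_id]
        prompt = ("Give a two-to-four word category label for these safety observations:\n- "
                  + "\n- ".join(members) + "\nLabel:")
        resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                        max_tokens=10, temperature=0)
        print(cluster_id, resp["choices"][0]["text"].strip())

Identifying trends can then be as simple as counting cluster membership over time.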
It can't play tic tac toe, fine. But I know it gets concepts wrong on things I'm good at. I've seen it generate a lot of sentences that are correct on their own, but when you combine them to form a bigger picture, it paints something fundamentally different than what's going on.
Moreover, I've had terrible results with it as something to generate creative writing; to the extent that it's on par with a lazy secondary school student that only knows a rudimentary outline of what they're writing about. For example, I asked it to generate a debate between Chomsky and Trump and it gives me a basic debate format around a vague outline of their beliefs where they argue respectfully and blandly (both of which Trump is not known for).
It's entirely possible I haven't exercised it enough and that it requires more than the hours I put into it or it just doesn't work for anything I find interesting.
The only thing we can definitely do better than machines is sad, proud sophistry. “Not real understanding, not real intelligence, just a stochastic parrot”. Sure, keep telling yourself that.
Helping write boilerplate is to Copilot what cropping is to Photoshop.
Some of the ways I've found Copilot a powerful tool in my toolbox: Writing missing comments (especially unfamiliar code bases), "translating" parts of unfamiliar code to a more familiar language, suggesting ideas for how to implement a feature (!) in comments.
I cannot but think that this approach of "Strong Opinions, Weakly Held" is a much stronger path forward towards AGI than what we had before.
It is also obvious that we are in the middle of a shift of some kind. Very hard to see from within, but clearly we will look back at 2022 as the beginning of something…
To the general public, ChatGPT and the Image Generators 'just appeared,' and appeared in a very impressive and usable form. Of course there were many waves of ML advances leading up to these models, but for many people these tools are their first opportunity to play with ML models in a meaningful way that is easy to incorporate into daily life and with very little barrier to entry.
While impressive and there are many applications, my questions surrounding the new AI tools relate to the volume of information they are capable of producing and our capacity to consume it. Tools can be used to synthesize the information, tools can act on it, but there is already too much 'noise.' There is a market for entertainment tailored to exact preferences, but it won't provide the shared cultural connection mass media provides. In the workplace, e-mails and documents can be quickly drafted. This is a valuable use case, but it augments and increases productivity. It will lower the bar necessary for certain jobs, and it will increase productivity expectations, but it will become a tool like Excel rather than a replacement like a factory robot (for now).
The Art of Worldly Wisdom #231 - Never show half-finished things to others. <- ChatGPT managed its release perfectly in this regard.
Just because people shift the goalposts doesn't mean that the new position of the goalposts isn't closer to being correct than the old position. You can criticise the people for being inconsistent or failing to anticipate certain developments, but that doesn't tell you anything about where the goalposts should be.
Personally, I care very little about whether the machine is intelligent or not. If it actually happens in my lifetime, I believe it will be unmistakable.
I am interested in how people solve problems. If you built and trained a model that solves a challenging task, THAT is something I find noteworthy and what I want to read about.
Apparently utility is boring, and “just ML” now. There’s tons of academic papers I see fly under the radar probably because they solve specific problems that the average person doesn’t know exists. Much of ML doesn’t foray into “popular science” enough to hold general public interest.
It does seem like on HN, the audience is heavily weighted towards software developers who are not biologists, and often cannot see the forest for the trees. They know enough about AI programming to dismiss the hype, and not enough about biology, and miss that this is pretty amazing.
The understanding of the human ‘parts’ are being chipped away, just as quickly as we have had breakthroughs in AI. These fields are starting to converge and inform each other. I’m saying this is happening fast enough that the end game is in sight, humans are just made of parts, an engineering problem that will be solved.
Free will and consciousness are overrated; we think of ourselves as having some mystically exceptional consciousness, which clouds the credit we give advancements in AI. ‘AI will never be able to equal a human’, when humans just want lunch, and our ‘free will’ is based on how much sleep we got. DNA is a program; it builds the brain that is just responding to inputs. Read some Robert Sapolsky: human reactions are just hormones, chemicals, responding to inputs. We will eventually have an AI that mimics a human because humans aren’t that special. Even if the function of every single molecule in the body, or every equation in AI, isn't yet fully mapped out, enough of it is to stop claiming 'specialness'.
Or who knows, maybe there will be an application for blockchains too.
I think we're already there. A legion of AI based startups seem to be coming out daily (https://www.futuretools.io/) that offer little more than gimmicks.
> Now it can be done cheaply, by anyone. It will become good enough that we cannot believe any form of media. And also make it impossible to determine if the written word is coming from an actual person. This is going to be weaponized against us.
We shouldn't believe any form of media straight away. We only do so because we think faking it is hard, and why would anyone bother? Being able to produce it cheaply could make people more attentive and skeptical of things around them. Blogspam sucks mostly because of consumers' belief that it was written by a person who deeply cares about them. The average internet consumer consumes shitty internet not because he or she is ignorant, but because he or she doesn't know enough to care.
But maybe I'm to optimistic, I just think people are not aware of stuff around them
This is a non sequitur. The human mind does a whole lot more than string words together. Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.
The writing is on the wall. Programming as we know it is going to end. We should be embracing these tools and start moving from software developer roles to software architect roles.
Humans are also regurgitating what they ‘inputted’ into their brains. For programming, isn’t it an old joke that everyone just copy/pastes from Stack Overflow?
Why is it somehow a lesser accomplishment when an AI does it (copy/paste) than when a human does it?
However I'm not advocating using its answers directly, but more as a source of inspiration.
Now everybody is aware of the problem of ChatGPT not "knowing" the difference between facts and opinion. It does, however, seem a less difficult feature to add than what they've already built (and MS already pretends its own version is able to correctly provide sources). The future will tell if I'm wrong..
Being able to define what you want to achieve isn't generally the same as knowing HOW to achieve it (except in this case the definition of intelligence rather does suggest the right path).
(and it's always about business use cases)
??? What are they?
- bad code, with non obvious bugs? I would prefer the original slashdot/GitHub/blog post. Google used to do that.
- chat bots? The customer service will still be shit. Your problem will still not be solved. But I guess some call center staff can be fired. Customers will be very happy to never be able to speak to a human.
- Writing mediocre overlong content for google to place ads in? Just what the internet needs. It’s already day time tv.
Any more?
See also: Gartner hype cycle
I came here to make this comment. Thank you for doing it for me.
I remember feeling shocked when this article appeared in the Atlantic in 2008, "Is Google Making Us Stupid?": https://www.theatlantic.com/magazine/archive/2008/07/is-goog...
The existence of the article broke Betteridge's law for me. The fact that this phenomenon is not more widely discussed illustrates the limits of human intelligence. Which brings me back around to the other side... perhaps we were never as intelligent as we suspected?
(2) Counter. I asked it the other day "how many movies were Tom Hanks and Meg Ryan in together" and the answer ChatGPT gave was 2 ... not only is that wrong, it is astonishingly wrong (IMO). You could be forgiven for forgetting Ithaca from 2015. I could forgive ChatGPT for forgetting that one. But You've Got Mail? That's a very odd omission. So much so that I'm genuinely curious how it could possibly get the answer wrong in that way. And for the record, Google presents the correct answer (4) in a cut-out segment right at the top, a result and presentation very close to what one would expect from ChatGPT.
I don't know about other use cases like generating stories (or tangentially art of any kind) for inspiration, etc. But as a search engine things like ChatGPT NEED to have attributions. If I ask the question "Does a submarine appear in the movie Battlefield Earth?" it will confidently answer "no". I _think_ that answer is right, but I'm not really all that confident it is right. It needs to present the reasons it thinks that is right. Something like "No. I believe this because (1) the keyword submarine doesn't appear in the IMDb keywords (<source>), (2) the word submarine doesn't appear in the wikipedia plot synopsis (<source>), (3) the film takes place in Denver (<source>) which is landlocked making it unlikely a submarine would be found in that location during the course of the film."
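For what it's worth, the shape I have in mind is basically retrieval-augmented prompting: collect candidate sources with a conventional search step first, then force the model to answer only from those sources and cite which one backs each claim. A purely illustrative sketch (the function name, the source format, and the prompt wording are my own invention, not anything any vendor actually ships):

    # Hypothetical sketch: build a prompt that forces source-grounded, cited answers.
    def build_attributed_prompt(question, sources):
        # sources: list of (url, snippet) pairs produced by an ordinary search/index step
        numbered = "\n\n".join(
            f"[{i + 1}] {url}\n{snippet}" for i, (url, snippet) in enumerate(sources)
        )
        return (
            "Answer the question using ONLY the numbered sources below. "
            "After each claim, cite the supporting source number in brackets. "
            "If the sources are insufficient to answer, say so.\n\n"
            f"Sources:\n{numbered}\n\nQuestion: {question}\nAnswer:"
        )

Whether current models actually stay inside the sources they're given is a separate question, but at least the failures would then be checkable.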
The Tom Hanks / Meg Ryan question/answer would at least be more interesting if it explained how it managed to be so uniquely incorrect. That question will haunt me though ... there's some rule about this, right? Asking about something you have above-average knowledge in and watching someone confidently answer it incorrectly. How am I supposed to ever trust ChatGPT again about movie queries?
I agree with this, I feel like I've seen a lot of really cool technology get swept up in a hype storm and get carried away into oblivion.
I wonder what ways there are for the people who put out these innovations to shield them/their products from it?
Luckily I have a lot of faith in the OpenAI people - I hope they're shielding themselves from the technological form of audience capture.
I think "human level intelligence" is an emergent phenomenon arising from a variety of smaller cognitive subsystems working together to solve a problem. It does seem that ChatGPT and similar models have at least partially automated one of the subsystems in this model. Still, it can't reason, doesn't know it's wrong, and can't lie because it doesn't understand what a lie is. So it has a long way to go. But it's still real progress in the sense that it's allowing us to better see the dividing lines between the subsystems that make up general intelligence.
I think that we'll need to build a better systems level model of what general intelligence is and the pieces it's built out of. With a better defined model, we can come up with better tests for each subsystem. These tests will replace the Turing test.
One of the major problems of modern computer-based work is that there are too many people already in those roles, doing work that isn't needed. Case in point: the culling of tens of thousands of software engineers, people who would consider themselves to be doing 'bullshit jobs'.
> I was thinking more like:
That example is straight up from any of those "programming is not bound by typing speed" essays of yore.
> people have such an emotional reaction to an optional, low-key, passive, easily-ignored tool that sometimes saves a bunch of typing.
Maybe because it's not generally advertised by proponents as "an optional, low-key, passive, easily-ignored tool that sometimes saves a bunch of typing"? Just look at the rest of the thread, it's pronounced as a game-changer in productivity.
That reminds me of how, in my youth, many were planning on vacations to Mars resorts and unlimited fusion energy. The stars looked so close, only a matter of time!
It is often frustrating that English has words with such different (but clearly related) definitions, as it can make it far too easy to end up talking past each other.
It's bold (to put it kindly) how lengthy some of these critical comments are from folks who later in the thread admit to not having personally used Copilot (for example) much themselves.
The quality of LLM output can wildly vary based on what prompts (or series of prompts) are used.
Because the kind of 'art' the AI will create will end up in a Canva template; it will be clip art for the modern Powerpoint or Facebook ad. Because corporations like Canva are the only ones that will pay the fees to use these tools at scale. And all they produce is marketing detritus, which is the opposite of art.
Instead of the "Corporate Memphis" art style that's been run into the ground by every big tech company, AI will produce similarly bland, corporate-approved graphics that we'll continue to roll our eyes at.
My last resort is to just remove all AI references from my marketing and just deliver the product.
To drill down a bit, I think the difference is that the child is trying to build a model - their own model - of the world, and how symbols describe or relate to it. Eventually they start to plan their own way through life using that model. Even though we use the term "model", that's not at all what a neural-net/LLM type "AI" is doing. It's just adjusting weights to maximize correlation between outputs and scores. Any internal model is vague at best, and planning (the also-incomplete core of "classical" AI before the winter) is totally absent. That's a huge difference.
ChadGPT is really not much more than ELIZA (1966) on fancy hardware, and it's worth noting that ELIZA was specifically written to illustrate the superficiality of (some) conversation. Its best-known DOCTOR script was intentionally a parody of Rogerian therapy. Plus ça change, plus c'est la même chose.
My concern is with the limitations in the creation of new styles.
I guess my view is that you send 100 people to art school and you get 100 different styles out of it (ok maybe 80).
With AI you've got a handful of dominant models instead of a unique model for each person based on life experience.
Apprentices learn and develop into masters. If that work is all moved to an LLM, where do the new masters come from?
---
I take your point about the technology. I have a hard time saying it's not impressive or similar to how humans learn.
My concern is more with what widespread adoption will mean
I've criticized it whenever it gets brought up as an alternative for academic research, coding, math, other more sophisticated knowledge based stuff. In my experience at least, it falls apart at reliably dealing with these and I haven't gone back.
But man, is it ever revolutionary at actually dealing with language and text.
As an example, I have a bunch of boring drama going on right now with my family, endless fucking emails while I'm trying to work.
I just paste them into ChatGPT and get it to summarize them, and then I get it to write a response. The onerous safeguards make it so I don't have to worry about it being a dick.
Family has texted me about how kind and diplomatic I'm being and I honestly don't even really know what they're squabbling about, it's so nice!
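If anyone wants to automate the same trick, it's only a few lines against the API. A rough sketch, assuming the openai Python package's ChatCompletion interface and the gpt-3.5-turbo model (I just use the web UI myself, so treat the details as illustrative):

    import openai

    openai.api_key = "sk-..."  # your own key

    def summarize_and_draft_reply(email_text):
        # Ask for a short summary plus a kind, diplomatic reply in one shot.
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Summarize the email in three bullet points, "
                            "then draft a kind, diplomatic reply."},
                {"role": "user", "content": email_text},
            ],
        )
        return resp.choices[0].message.content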
I can see how someone who’s always working on sophisticated, mentally challenging code would get less benefit and would see more frequent errors.
The tech industry runs on hype, so much so that analysts are told to evaluate them separately. Growth now, profit later, here's $2bn from Softbank, yada yada yada.
Companies like Theranos specifically positioned themselves as 'tech' so as to escape press scrutiny, particularly in sensitive industries like healthcare.
Emperors with no clothes can get very far; see Brian Armstrong and SBF (pre-collapse, but still not in jail). Can you imagine how far a well-funded AI hustler could get?
In a better world, it’d be possible to occasionally pause, take a breath and think about what the models are actually doing, how they’re doing it, and if that’s what we want something to do. However, it’s hard to find space to do so without getting run over by people “moving fast” and breaking things and feels like doing the hard corrective work is so much less rewarded.
In your opinion, how wide is this gap? To claim that it is closing at a meaningful pace brings the implication that we understand the width. Has anyone made a credible claim that we actually understand the width of the gap?
> The mystery of the human ‘parts’ is being chipped away just as quickly as we have had breakthroughs in AI.
This is a thinking trap. Without an understanding or definition of the breadth of the problem space, both fields could be making perfectly equivalent progress and it would still imply nothing regarding the width of the gap or the progress made closing it.
> These fields are starting to converge and inform each other.
Collaboration does not imply anything more than the existence of cooperation across fields. Do you have specific examples where the science itself is converging?
My understanding is that our ability to comprehend neural processes is still so limited that researchers focus on the brains of worms (e.g. a flatworm’s 53 neurons), and we still don’t understand how they work.
> and at this point it is only a matter of time, the end can be seen
Who is claiming we have any notion of being close enough to see the end? Most experts on the cutting edge cite the enormous distance yet to be covered.
I’m not claiming the progress made isn’t meaningful by itself. I’m struggling with your claim that we have any idea how much further we have to go.
Landing rovers on Mars is a huge achievement, but compared to the array of advancements required to colonize space, it seems like just a small step forward in comparison.
I'm finding the current hype cycle very frustrating from both sides. On one side there is frequent overplaying of current capabilities, with cherry-picked examples given as if they're representative. On the other side there is an over-simplistic "AI is evil" reaction. It's hard to deny that progress in the past few years greatly exceeds expectations and could make a significant improvement to individual creativity and learning, as well as how we cooperate, but so much of the discussion is fear-based.
It will even claim it can generate citations for you too, which is pretty messed up because when I tried it just fabricated them replete with faked DOIs.
Where it shines is at squishy language stuff, like generating the framework of an email, paraphrasing a paragraph for you, or summarizing a news article.
It really is revolutionary at language tasks, but unfortunately the hype machine and these weird "AI sycophants" have caused people to dramatically overestimate its use cases.
Yeah, I think you're right. Intelligence is just something our species has evolved as a strategy for survival. It isn't about intelligence, it's about survival.
The cognitive skills needed to survive/navigate/thrive in the digital era are very different than the cognitive skills required to survive in the pre-digital era.
We're biologically programmed through millions of years of evolution to survive in a world of scarcity. Intelligence used to be about tying together small bits of scarce information to find larger patterns so that we can better predict outcomes.
Those skills are being rendered more and more irrelevant in a world of information abundance. Perhaps the "best fit" humans of the future are those that possess new form of "intelligence", relying less on reason and more on the ability to quickly digest the firehose of data thrown at them 24-7.
If so, then the AI we were trying to build in the 1950s would necessarily be different than the AI that our grandchildren would find helpful.
I'm more concerned about the Twitter hype-men and women adding '.eth' to their name and singing DeFI praises all day long....and then quietly removing it without so much as a word, once the hype is dead and keeping the '.eth' makes you look like a sucker.
BTW a lot of influential people were on that train, current CEO of YCom being one of them.
> Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.
Why? Wouldn't you expect that technique to generally fail if it isn't intelligent enough to know what's happening in the sentence?
Good luck with the drama! Make sure to read a summary for the next family meeting haha.
https://arxiv.org/abs/2210.05189 but all NNs _are_ if statements!
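If the framing in that paper seems abstract, here's a minimal sketch of the idea in my own words (my illustration, not code from the paper): a single ReLU unit is literally a branch, and a network is a pile of them.

    # A one-hidden-unit ReLU "network": y = w2 * max(0, w1*x + b1) + b2
    def relu_net(x, w1=2.0, b1=-1.0, w2=3.0, b2=0.5):
        hidden = w1 * x + b1
        if hidden > 0:        # ReLU written out as an explicit if statement
            return w2 * hidden + b2
        return b2             # hidden unit inactive: only the output bias remains

    print(relu_net(0.2))      # 0.5  (hidden unit off)
    print(relu_net(1.0))      # 3.5  (hidden unit on)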
This seems to have been the rallying cry of AI-ish stuff for the past 30 years, tho. At a certain point you have to ask "but how much time"? Like, a lot of people were confidently predicting speech recognition as good as a human's from the 90s on, for instance. It's 2023, and the state of the art in speech recognition is a fair bit better than Dragon Dictate in the 90s, but you still wouldn't trust it for anything important.
That's not to say AI is useless, but historically there's been a strong tendency to say, of AI-ish things "it's 95% of the way there, how hard could the last 5% be?" The answer appears to be "quite hard, actually", based on the last few decades.
As this AI hype cycle ramps up, we're actually simultaneously in the down ramp of _another_ AI hype cycle; the 5% for self-driving cars is going _very slowly indeed_, and people seem to have largely accepted that, while still predicting that the 5% for generative language models will be easy. It's odd.
(Though, also, I'm not convinced that it _is_ just a case of making a better ChatGPT; you could argue that if you want correct results, a generative language model just isn't the way to go at all, and that the future of these things mostly lies in being more convincingly wrong...)
LLMs are not just generalists, but dilettantes to a degree we'd find extremely tiresome in a human. So of course half the HN commentariat loves them. It's a story more to do with Pygmalion or Narcissus than Prometheus ... and BTW good luck getting Chad or Brad to understand that metaphor.
Let's wait until the end of the year and see how much of this wave will hold up.
I just don't like falling into the other trap of wasting my day to write a complete paper with citations for some loosely defined internet argument on a subject that is already stacked on a pile of controversy and misunderstandings. I think I could easily find a number of citations that have conflated vocabulary, or re-defined vocabulary. This is my opinion, don't think I need to document a cited cross reference list of these re-defined terms to say this.
Probably this is the same problem that exists between a research paper, and a popular science book. Neither is as detailed and exact or also as high level and understandable as everyone desires. So, yes, these are some opinions, just from a certain point of view, my opinions are more correct than others opinions.
Yeah I will be sure to read it before meeting them, would be awkward if they found out I was using it during one of the disputes, which was whether or not to keep resuscitating Grandma.
ChatGPTs stupid ethical filters made it so I actually had to type my response to that one all by myself.
There is a state of emergency presidential address. In Video A, the politician says X Y Z. In Video B, the politician says A B C. Both videos have equal credibility. The videos show no artifacts from tampering. The alteration is undetectable by experts. The broadcast has dire consequences in a divided country.
50% of channels are pushing Video A, 50% of channels are pushing Video B.
We are now in a position where the public actually cannot determine which video is authentic. The politician could broadcast a new statement, to clarify the validity of the first video. But, you could just as easily fake that too, to publish a statement that declares the opposite.
So, then you load up Hacker News or wherever, to determine for yourself what the hell is going on. But someone spins up 1,000 bots to flood the comments in favor of Video A, and someone else spins up 1,000 bots to flood the comments in favor of Video B. These comments are all in natural language, all with their own individual idiosyncrasies. It's impossible to determine if the comment is a bot. And because the cost is essentially free, these bots can be primed for years, making mundane comments on trivial topics to build social credibility. Actual humans only account for maybe 1% of this discourse.
Now imagine: our entire world operates like this, on a daily basis, ranging from the mundane to the dramatic. How about broadcasting a deepfake press statement from a CEO to make a shorted meme stock crash. If there are financial/political incentives to do so, and the technological hurdle is insignificant, these tools will be weaponized.
So how do we "not believe the media", do we all have to be standing in the same room together where something notable happens?
I understand that there could be upsides, the world isn't all doom and gloom. But, I think engineers get myopic, and do not heed the warnings.
People need to get off being 24/7 wired and chill more. The last thing people, society, and the environment needs is the kind of changes the internet brought...
IMO AI has reached this stage of its lifecycle. There have always been, and still are, valid use cases for AI, but I think the GPT-3 inspired applications we've been seeing as of late are no more than impressive tech demos. It's the first time the general public has seen a glimmer of where AI can go, but it really is just a glimmer at this point.
My advice is to keep your head down and try to be selective with the content you engage with on AI. It seems like every feed refresh I have some unknown Twitter Verified account telling me why swaths of the population will be out of a job soon. The best heuristic I have so far is to ignore AI-related posts/reshares from names I haven't heard of before, but of course that has obvious drawbacks.
On your claim that the mind is either metaphysical OR it is a NN: you have to understand that this false dichotomy is quite the stretch itself, as if there are no other possibilities, as if it isn't a range or couldn't be something else entirely. One of the critiques the "old guard" has of NNs is the lack of symbolic intelligence. Claiming you don't need it and that fitting alone is enough is suspect, because even with OpenAI-tier training, only the grammar is there; some of the semantic understanding is lacking. Appealing to the god of the gaps is a fallacy for a reason, although it may in fact turn out to be true that just more training is all that is needed. EDIT: Anyway, the point is that assuming symbolic reasoning is a part of intelligence (hell, it's how we discuss things) doesn't require mysticism; it just is an aspect that NNs currently don't have, or very charitably do not appear to have quite yet.
Regardless, there isn't really evidence that "what brains do is what NNs do" or vice versa. The argument, as many times as it has been pushed, has been driven primarily by analogy. But just because a painting looks like an apple doesn't mean you can eat the canvas. Similarities might betray some underlying relationship (an artist who made the painting took reference from an actual apple you can eat), but assuming an equivalence without evidence is just strange behavior, and I'm not sure for what purpose.
For example, let's say you have a website that sells clothes and you want to make the site's search engine better. Let's also say that a lot of work has been done to make the top 100 queries return relevant results. But the effort required to get the same relevance for the long tail of unique queries (think misspellings and unusual keywords) doesn't make sense. However, you still want to provide a good search experience, so you can turn to ML for that. Even if the model only has 60% accuracy, that's still a lot better than 0% accuracy. So applying ML to queries outside the top 100 should improve the overall search experience.
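A toy sketch of that head/tail split (purely illustrative; the curated table, the catalog, and the stdlib fuzzy matcher stand in for whatever hand-tuning and ML model you'd actually use):

    import difflib

    # Hand-tuned results for the head queries (illustrative)
    CURATED = {
        "red dress": ["red midi dress", "red wrap dress"],
        "mens jeans": ["slim-fit jeans", "relaxed jeans"],
    }

    CATALOG = ["slim-fit jeans", "relaxed jeans", "red midi dress",
               "red wrap dress", "wool sweater", "rain jacket"]

    def search(query):
        q = query.lower().strip()
        if q in CURATED:                 # head: curated, high-precision results
            return CURATED[q]
        # tail: an imperfect fuzzy/ML-style match still beats returning nothing
        return difflib.get_close_matches(q, CATALOG, n=3, cutoff=0.4)

    print(search("red dress"))   # curated path
    print(search("red dres"))    # misspelling handled by the fallback

Even a mediocre fallback like this raises the floor for the long tail without touching the hand-tuned head queries.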
ChatGPT/GPT-3 has increased the number of areas where ML can be used, but it still has plenty of limitations.
Which has happened before. The original semantic/heuristic AI, most notably expert systems, over-promised and ultimately under-delivered. This led directly to the so-called "AI winter" which lasted more than two decades and didn't end until quite recently. It's a very real concern, especially among people who want to push the technology forward and not just profit from it.
I forgot to add something to my original post. >>"I remember feeling shocked when this article appeared in the Atlantic in 2008..."
At the time I was shocked that the question was even being asked!
Or more to the main post, a lot of head-down engineers cranking out solutions do lose sight of how far they are moving.
The expressiveness of language lets this be true of almost everything.
Like maybe the hype is not misplaced. There are grifters, and there are companies with products that are basically "IF" statements, and the hype is pretty nutz.
On other hand, some of this stuff is amazing. Don't let the hype and smarmy sales people take away from the amazing advancements that are happening. Just a few years ago some of this would have been considered impossible, only possible in the province of the 'mystery of the human mind'. And yet, here we are, and what it is to be human is being chipped away more every month, and yeah a lot of people want to profit.
Or more to my main thought, a lot of heads-down engineers who are cranking out solutions do lose sight of how far they are moving. So don't get discouraged by the hype; marketing is in every industry, so why not stay in this cool one that is doing all the amazing things.
Machine Learning research isn't "for us." Let the researchers do what they do, and toil away in boring rooms, and eventually, like the internet slowly did, it will be all around us and will be useful.
Well, I'm no fan of ChatGPT. But it appears most people are worse than ChatGPT, because they just regurgitate what they hear with no thought or contemplation. So you can't really blame average folks who struggle with the concepts of intelligence/understanding that you mention.
> I love the models, the statistics and just the cleverness of everything but I just can't stand the "scene" anymore
This really sums up my feelings too.
Religious texts are something that can be interesting after sensing some spirituality, but probably not before. I don't think anybody who is not spiritual can become so by reading religious texts.
A powerful spiritual practice is something like challenging your own limits and fears to the maximum, or meditation and fasting, or immersing yourself in a completely different environment from what you are familiar with until you know it truly. Or, if these things sound too abstract, taking a strong dose of psychedelics alone or with others.
https://twitter.com/marvinvonhagen/status/162365814434901197...
I think putting AI inside everything will give us the opportunity to experience first-hand what a local extremum of a multidimensional function is and how it differs from the global extremum. Our paper gets eliminated because of some AI-based curriculum vitae review glitch. Our car loses a wheel because computer vision failed (or we lose our heads, like that one Tesla owner)... Most scary for me is that we are starting to build more and more things whose inner workings we aren't able to understand. Hence there might be an intelligence crisis creeping slowly into our civilisation, and then bam... like in Andrzej Zajdel's Van Troff's Cylinder.
We give our minds too much credit, we keep arguing if AI is, or can ever be, conscious, without ever defining what consciousness is. I would say that humans aren't conscious in the way we think we are. There is no free will, we don't decide what we think about, if you think about thinking, where does the first thought come from?
What are the odds you have it figured out? Why can't you try to prove yourself wrong? The odds are you haven't figured it out either, so why have you stopped trying? You say the soul exists, so why is the onus on me to prove it doesn't, while you don't have to prove anything?
But, I will argue that there is physical evidence for the soul and for the spiritual beyond our everyday comprehension. That physical evidence is psychedelics. If you take psychedelics once with a person who is dear to you, I'm certain you will come out on the other side much assured they have a soul, that there is much more to people than what you see in everyday life.
I'm more from a Zen Buddhism background, so I agree about not trusting 'text', and that language is limited for communication. I think a lot of the issues here are just about misinterpreting language.
But for psychedelics, I have always fallen on the side that they can also cause delusion. I guess because they are mind-altering, they are potentially altering perceptions into something even less real than what someone had without them.
The other reason I have not depended on them is that no matter the impressions they leave, however mind-expanding, it is still isolated inside my own head. They don’t provide proof of anything outside myself. The results are still limited to the individual’s point of view. But I also agree that they can be valuable if someone is so buried in dogma that it helps them break out and look around. So, I guess for psychedelics, it depends on where someone is at and what they are trying to achieve.
All addiction dogma aside, and arguments about what is or isn't addictive aside, my struggles to overcome addiction have led me to not trust mind-altering substances. Even if our own unaltered perceptions are an illusion, so is the altered perception. So being in an altered state is not gaining ground on understanding.
On the other hand, psychedelics do help with some addictions, so I guess mileage can vary.