Long before LLMs, I would talk about classes / functions / modules like "it then does this, decides the epsilon is too low, chops it up and adds it to the list".
The difference, I guess, is that it was only to a technical crowd and nobody would mistake this for anything it wasn't. Everybody knew that "it" didn't "decide" anything.
With AI being so mainstream and the math being much more elusive than a simple if..then I guess it's just too easy to take this simple speaking convention at face value.
EDIT: some clarifications / wording
For example, I think "chain of thought" is a good name for what it denotes. It makes the concept easy to understand and discuss, and a non-anthropomorphized name would be unnatural and would unnecessarily complicate things. This doesn't mean that I support companies insisting that LLMs think just like humans or anything like that.
By the way, I would say actually anti-anthropomorphism has been a bigger problem for understanding LLMs than anthropomorphism itself. The main proponents of anti-anthropomorphism (e.g. Bender and the rest of "stochastic parrot" and related paper authors) came up with a lot of predictions about things that LLMs surely couldn't do (on account of just being predictors of the next word, etc.) which turned out to be spectacularly wrong.
To be honest the impression I've gotten is that some people are just very interested in talking about not anthropomorphizing AI, and less interested in talking about AI behaviors, so they see conversations about the latter as a chance to talk about the former.
Tbh I also think your comparison that puts "UI events -> Bits -> Transistor Voltages" as an analogy to "AI thinks -> token de-/encoding + MatMul" is certainly a stretch, as the "Bits -> Transistor Voltages" part applies to both hierarchies as the foundational layer.
"chain of thought" could probably be called "progressive on-track-inference" and nobody would roll an eye.
Maybe it's cog-nition (emphasis on the cog).
Outside the technical world it gets much worse. There are people who killed themselves because of LLMs, people who are in love with them, people who genuinely believe they have “awakened” their own private ChatGPT instance into AGI and are eschewing the real humans in their lives.
I think the above poster gets a little distracted by suggesting the models are creative, which is itself disputed. Perhaps a better term, like above, would be to just use "model". They are models, after all. We don't make up a new portmanteau for submarines. They float, or drive, or submarine around.
So maybe an LLM doesn't "write" a poem, but instead "models a poem", which might indeed take away a little of the sketchy magic and fake humanness they tend to be imbued with.
It's much more interesting when we are talking about... say... an ant... Does it "decide"? That I have no idea about, as it's probably somewhere in between: neither a sentient decision, nor a mathematical one.
If IO can be functional, I don't see why mice can't.
And prediction is already a hyponym of inference. Why not just use inference, then?
That's what they call marketing, propaganda, brainwashing, acculturation, or education, depending on who you ask and at which scale you operate, apparently.
The consensus view is rather that no map fully matches the territory, or, said otherwise, that the territory includes ontological components that exceed even the most sophisticated map that could ever be built.
It's going to take a lot to get him out of that mindset and frankly I'm dreading trying to compare and contrast imperfect human behaviour and friendships with a sycophantic AI.
I think these models do learn similarly. What does it even mean to reason? Your brain knows certain things so it comes to certain conclusions, but it only knows those things because it was "trained" on those things.
I reason that my car will crash if I go 120 mph on the other side of the road because I have previously 'seen' input where a car going 120 mph has a high probability of producing a crash, and similarly have seen input where a car driving on the other side of the road produces a crash. Combining the two tells me the probability is high.
I don't think LLMs are sentient or any bullshit like that, but I do think people are too quick to write them off before really thinking about how a neural network 'knows things' in a way similar to how a human 'knows' things: it is trained and reacts to inputs and outputs. The body is just far more complex.
Why?
A plane is not a fly and does not stay aloft like a fly, yet we describe what it does as flying despite the fact that it does not flap its wings. What are the downsides we encounter that are caused by using the word “fly” to describe a plane travelling through the air?
I suppose this war will be fought until people are out of energy, and if reason has no place, it is reasonable to let others tire themselves out reiterating statements that are not designed to bring anyone closer to the truth.
We're very used to "all models are wrong, some are useful", "the map is not the territory", etc.
None of these target the sufficiently motivated, but rather those who are either ambivalent or as yet unexposed.
This made me think: when will we see LLMs do the same, rereading what they just sent, and editing and correcting their output again :P
Inference would be the part that is deliberately learned, with conclusions drawn from the training set, as in the "classic" sense of statistical learning.
These are very different and knowledge is not intelligence.
The rest of the time it’s generating content.
That said, it's fascinating to me that it works (and empirically, it does work; a reasoning model generating tens of thousands of tokens while working out the problem does produce better results). I wish I knew why. A priori I wouldn't have expected it, since there's no new input. That means it's all "in there" in the weights already. I don't see why it couldn't just one shot it without all the reasoning. And maybe the future will bring us more distilled models that can do that, or they can tease out all that reasoning with more generated training data, to move it from dispersed around the weights -> prompt -> more immediately accessible in the weights. But for now "reasoning" works.
But then, at the back of my mind is the easy answer: maybe you can't optimize it. Maybe the model has to "reason" to "organize its thoughts" and get the best results. After all, if you give me a complicated problem I'll write down hypotheses and outline approaches and double check results for consistency and all that. But now we're getting dangerously close to the "anthropomorphization" that this article is lamenting.
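To make that empirical claim concrete, here is roughly the comparison I have in mind - a sketch only, where complete() is a hypothetical stand-in for whatever LLM API you use and the prompt wording is made up:

  # Same weights, same question; the only difference is whether the model is
  # allowed to spend tokens "working it out" before committing to an answer.
  # complete() is a hypothetical wrapper around an LLM completion API.

  def complete(prompt: str) -> str:
      raise NotImplementedError("wire this up to the LLM API of your choice")

  def final_answer(text: str) -> str:
      # Assumes the model was asked to end with a line "Answer: <result>".
      for line in reversed(text.splitlines()):
          if line.startswith("Answer:"):
              return line.removeprefix("Answer:").strip()
      return text.strip()

  def one_shot(question: str) -> str:
      return final_answer(complete(
          f"{question}\nReply with a single line of the form: Answer: <result>"))

  def with_reasoning(question: str) -> str:
      return final_answer(complete(
          f"{question}\nWork through the problem step by step, "
          "then end with a line of the form: Answer: <result>"))

Empirically the second one wins on hard problems, which is exactly the part I find puzzling, since nothing new enters the context except the model's own output.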
This tickled me. "There ain't nobody here but us chickens".
I have other thoughts which are not quite crystallized, but I think UX might be having an outsized effect here.
Meanwhile, things can happen in the latent representation which aren't reflected in the intermediate outputs. You could, instead of using CoT, say "Write a recipe for a vegetarian chili, along with a lengthy biographical story relating to the recipe. Afterwards, I will ask you again about my original question." And the latents can still help model the primary problem, yielding a better answer than you would have gotten with the short input alone.
Along these lines, I believe there are chain of thought studies which find that the content of the intermediate outputs don't actually matter all that much...
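A sketch of what such an ablation looks like in practice - the toy question and the prompt wording are assumptions for illustration, not from any particular study:

  # Three conditions for probing whether the *content* of the intermediate
  # tokens matters, or mostly the extra computation they buy before answering.
  QUESTION = "Is 391 divisible by 17?"  # toy stand-in for a harder problem

  CONDITIONS = {
      "direct": f"{QUESTION} Answer yes or no.",
      "chain_of_thought": f"{QUESTION} Think step by step, then answer yes or no.",
      "irrelevant_filler": (
          "Write a recipe for a vegetarian chili, along with a lengthy "
          "biographical story relating to the recipe. Afterwards, answer "
          f"yes or no: {QUESTION}"
      ),
  }
  # If "irrelevant_filler" recovers part of the gap between "direct" and
  # "chain_of_thought", the benefit isn't only in the legible reasoning text.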
That reminds me of deep neural networks, where single-layer networks could achieve the same results, but the layer would have to be excessively large. Maybe we're reusing the same kind of improvement, scaling in length instead of width because of our computation limitations?
All imitations require analogous mechanisms, but that is the extent of their similarities: syntax. Thinking requires networks of billions of neurons, and not only that, but words can never exist on a plane because they do not belong to a plane. Words can only be stored on a plane; they are not useful on a plane.
Because of this, LLMs have the potential to discover new aspects and implications of language that will rarely be useful to us, because language is not useful within a computer; it is useful in the world.
It's like seeing loosely related patterns in a picture and continuing to derive from those patterns, which are real, but loosely related.
LLMs are not intelligence, but it's fine that we use that word to describe them.
It would be swimming if it were propelled by drag (well, technically a propeller also uses drag via thrust, but you get the point). Imagine a submarine with a fish tail.
Likewise we can probably find an apt description in our current vocabulary to fittingly describe what LLMs do.
E.g. when I first started learning webdev, I didn’t think about ‘servers’. I just knew that if I uploaded my HTML/PHP files to my shared web host, then they appeared online.
It was only much later that I realized that shared webhosting is ‘just’ an abstraction over Linux/Apache (after all, I first had to learn about those topics).
I personally find that description perfect. If you want it shorter you could say that an LLM generates.
In other words, no, they never accurately describe what the LLM is actually doing. But sometimes drawing an analogy to human behavior is the most effective way to pump others' intuition about a particular LLM behavior. The trick is making sure that your audience understands that this is just an analogy, and that it has its limitations.
And it's not completely wrong. Mimicking human behavior is exactly what they're designed to do. You just need to keep reminding people that it's only doing so in a very superficial and spotty way. There's absolutely no basis for assuming that what's happening on the inside is the same.
It was already common to use a document extender (LLM) against a hidden document, which resembles a movie or theater play where a character named User is interrogating a character named Bot.
Chain-of-thought switches the movie/script style to film noir, where the [Detective] Bot character has additional content which is not actually "spoken" at the User character. The extra words in the script add a certain kind of metaphorical inertia.
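For anyone who hasn't seen the raw form, a minimal sketch of that hidden-document setup might look like this - the script framing and stop strings are illustrative, and complete_document() is a hypothetical raw-completion call, not any particular vendor's API:

  # Chat as document extension: the "conversation" is really one text file
  # that the model keeps continuing, and we cut it off at the next "User:".
  PREAMBLE = (
      "The following is a film script in which a character named User\n"
      "interrogates a helpful character named Bot.\n"
  )

  def complete_document(document: str, stop: str) -> str:
      raise NotImplementedError("wire this up to a raw text-completion API")

  def bot_reply(transcript: str, user_line: str) -> tuple[str, str]:
      doc = transcript + f"\nUser: {user_line}\nBot:"
      spoken = complete_document(doc, stop="\nUser:")
      return doc + spoken, spoken.strip()

  def bot_reply_noir(transcript: str, user_line: str) -> tuple[str, str]:
      # Chain-of-thought variant: the script gains an inner monologue that the
      # User character never "hears"; only the spoken line gets shown.
      doc = transcript + f"\nUser: {user_line}\nBot (inner monologue):"
      thoughts = complete_document(doc, stop="\nBot:")
      doc += thoughts + "\nBot:"
      spoken = complete_document(doc, stop="\nUser:")
      return doc + spoken, spoken.strip()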
I asked Claude to write an E-AC3 audio component so I can play videos with E-AC3 audio in the old version of QuickTime I really like using. Claude's decoder includes the ability to write debug output to a log file, so Claude is studying how QuickTime and the component interact, and it's controlling QuickTime via AppleScript.
Sometimes QuickTime crashes, because this ancient API has its roots in the classic Mac OS days and is not exactly good. Claude reads the crash logs on its own—it knows where they are—and continues on its way. I'm just sitting back and trying to do other things while Claude works, although it's a little distracting that something else is using my computer at the same time.
I really don't want to anthropomorphize these programs, but it's just so hard when it's acting so much like a person...
I’m sure you knew that your code was running on computers somewhere even when you first started and wasn’t running in a literal “cloud”.
It’s about as tiring as people on HN who know just a little about LLMs thinking they sound smart when they say LLMs are just advanced autocomplete. Both responses are equally unproductive.
Meh, I just knew that the browser would display HTML if I wrote it, and that uploading the HTML files made them available on my domain. I didn’t really think about where the files went, specifically.
Try asking an average high school kid how cloud storage works. I doubt you’ll get any further than ‘I make files on my Google Docs and then they are saved there’. This is one step short of ‘well, the files must be on some system in some data center’.
I really disagree that “people who come on HN and say “there is no such thing as serverless and there are servers somewhere” think they are sounding smart when they are adding nothing to the conversation.” On the contrary, it’s an invitation to beginning coders to think about what the ‘serverless’ abstraction actually means.
With such strong wording, it should be rather easy to explain how our thinking differs from what LLMs do. The next step - showing that what LLMs do precludes any kind of sentience - is probably much harder.
Relatedly, the alternative to pragmatism is analysis paralysis.
I hold a deep belief that anthropomorphism is a way the human mind works. If we take for granted the hypothesis of Frans de Waal, that the human mind developed its capabilities due to political games, and then think about how that could later lead to solving engineering and technological problems, then the tendency of people to anthropomorphize becomes obvious. Political games need empathy, or maybe some other kind of -pathy, that allows politicians to guess the motives of others by looking at their behaviors. Political games pushed evolution to develop mental instruments for uncovering causality by watching others and interacting with them. Now, to apply these instruments to the inanimate world, all you need is to anthropomorphize inanimate objects.
Of course, it sometimes leads to the invention of gods, or spirits, or other imaginary intelligences behind things. And sometimes these entities get in the way of revealing the real causes of events. But I believe that to anthropomorphize LLMs (at the current stage of their development) is not just the natural thing for people but a good thing as well. Some behavior of LLMs is easily described in terms of psychology; some cannot be described that way, or at least not so easily. People are seeking ways to do it. Projecting this process into the future, I can imagine a kind of consensual LLM "theory" emerging that explains some traits of LLMs in terms of human psychology and fails to explain other traits, so those are explained in some other terms... And then a revolution happens, when a few bright minds come and say "anthropomorphism is bad, it cannot explain LLMs", and they propose something different.
I'm sure it will happen at some point in the future, but not right now. And it will not happen like that: not just because someone said that anthropomorphism is bad, but because they proposed another way to talk about the reasons behind LLMs' behavior. It is like with scientific theories: they do not fail because they become obviously wrong, but because other, better theories replace them.
That doesn't mean there is no point in fighting anthropomorphism right now, but this fight should be directed at searching for new ways to talk about LLMs, not at pointing out the deficiencies of anthropomorphism. To my mind it makes sense to start not with the deficiencies of anthropomorphism but with its successes. What traits of LLMs does it allow us to capture? Which ideas about LLMs are impossible to put into words without thinking of LLMs as people?
I agree these things don't think like we do, and that they have weird gaps, but to claim they can't reason at all doesn't feel grounded.
In part I agree with the parent.
>> it pointless to *not* anthropomorphize, at least to an extent.
I agree that it is pointless to not anthropomorphize, because we are humans and we will automatically do this, willingly or unwillingly. On the other hand, it generates bias. This bias can lead to errors.
So the real answer is (imo) that it is fine to anthropomorphize, but recognize that while doing so can provide utility and help us understand, it is WRONG. Recognizing that it is not right and cannot be right provides us with a constant reminder to reevaluate. Use it, but double check, and keep checking, making sure you understand the limitations of the analogy: when and where it applies, where it doesn't, and most importantly, where you don't know if it does or does not. The last is most important because it helps us form hypotheses that are likely to be testable (likely, not always. Also, much easier said than done).
So I pick a "grey area". Anthropomorphization is a tool that can be helpful. But like any tool, it isn't universal. There is no "one-size-fits-all" tool. Literally, one of the most important things for any scientist is to become an expert at the tools you use. It's one of the most critical skills of *any expert*. So while I agree with you that we should be careful of anthropomorphization, I disagree that it is useless and can never provide information. But I do agree that quite frequently, the wrong tool is used for the right job. Sometimes, hacking it just isn't good enough.
I would call "fuck around and find out" a rather simple approach. It is why we use it! It is why lots of animals use it. Even very dumb animals use it. Though, we do notice more intelligent animals use more efficient optimization methods. All of this is technically hypothesis testing. Even a naive grid search. But that is still in the class of "fuck around and find out" or "brute force", right?
I should also mention two important things.
1) as a human we are biased to anthropomorphize. We see faces in clouds. We tell stories of mighty beings controlling the world in an effort to explain why things happen. This is anthropomorphization of the universe itself!
2) We design LLMs (and many other large ML systems) to optimize towards human preference. This reinforces an anthropomorphized interpretation.
The reason for doing this (2) is based on a naive assumption[0]: If it looks like a duck, swims like a duck, and quacks like a duck, then it *probably* is a duck. But the duck test doesn't rule out a highly sophisticated animatronic. It's a good rule of thumb, but wouldn't it also be incredibly naive to assume that it *is* a duck? Isn't the duck test itself entirely dependent on our own personal familiarity with ducks? I think this is important to remember and can help combat our own propensity for creating biases.
[0] It is not a bad strategy to build in that direction. When faced with many possible ways to go, this is a very reasonable approach. The naive part is if you assume that it will take you all the way to making a duck. It is also a perilous approach because you are explicitly making it harder for you to evaluate. It is, in the fullest sense of the phrase, "metric hacking."
Flying doesn't mean flapping, and the word has a long history of being used to describe inanimate objects moving through the air.
"A rock flies through the window, shattering it and spilling shards everywhere" - see?
OTOH, we have never used the word "swim" in the same way - "The rock hit the surface and swam to the bottom" is wrong!
AI apps ought to at minimum warn us that their responses are not anyone's (or anything's) real thoughts. But the illusion is so powerful that many people would ignore the warning.
The therapist thing might be correct, though. You can send a well-adjusted person to three renowned therapists and get three different reasons for why they need to continue sessions.
No therapist ever says "Congratulations, you're perfectly normal. Now go away and come back when you have a real problem." Statistically it is vanishingly unlikely that every person who ever visited a therapist is in need of a second (or more) visit.
The main problem with therapy is a lack of objectivity[1]. When people talk about what their sessions resulted in, it's always "My problem is that I'm too perfect". I've known actual bullies whose therapist apparently told them that they are too submissive and need to be more assertive.
The secondary problem is that all diagnosis is based on self-reported metrics of the subject. All improvement is equally based on self-reported metrics. This is no different from prayer.
You don't have a medical practice there; you've got an Imam and a sophisticated but still medically-insured way to plead with thunderstorms[2]. I fail to see how an LLM (or even the Rogerian M-x doctor in Emacs) will do worse on average.
After all, if you're at a therapist and you're doing most of the talking, how would an LLM perform worse than the therapist?
----------------
[1] If I'm at a therapist, and they're asking me to do most of the talking, I would damn well feel that I am not getting my money's worth. I'd be there primarily to learn (and practice a little) whatever tools they can teach me to handle my $PROBLEM. I don't want someone to vent at, I want to learn coping mechanisms and mitigation strategies.
[2] This is not an obscure reference.
We always are speaking to our audience, right? This is also what makes more general/open discussions difficult (e.g. talking on Twitter/Facebook/etc). That there are many ways to interpret anything depending on prior knowledge, cultural biases, etc. But I think it is fair that on HN we can make an assumption that people here are tech savvy and knowledgeable. We'll definitely overstep and understep at times, but shouldn't we also cultivate a culture where it is okay to ask and okay to apologize for making too much of an assumption?
I mean at the end of the day we got to make some assumptions, right? If we assume zero operating knowledge then comments are going to get pretty massive and frankly, not be good at communicating with a niche even if better at communicating with a general audience. But should HN be a place for general people? I think no. I think it should be a place for people interested in computers and programming.
Yes this is still mechanical in a sense, but then I'm not sure what behavior you wouldn't classify as mechanical. It's "responding" to stimuli in logical ways.
But I also don't quite know where I'm going with this. I don't think LLMs are sentient or something, I know they're just math. But it's spooky.
> It wasn't a simple brute force.
I think you misunderstood me. "Simple" is the key word here, right? You agree that it is still under the broad class of "brute force"?
I'm not saying Claude is naively brute forcing. In fact, with the lack of interpretability of these machines it is difficult to say what kind of optimization it is doing and how complex that is (this was a key part, tbh).
My point was to help with this:
> I really don't want to anthropomorphize these programs, but it's just so hard when it's acting so much like a person...
Which requires you to understand how some actions can be mechanical. You admitted to cognitive dissonance (something we all do and I fully agree is hard not to do) and wanting to fight it. We're just trying to find some helpful avenues to do so.
> It's "responding" to stimuli in logical ways.
And so too can a simple program, right? A program can respond to user input and there is certainly a logic path it will follow. Our non-ML program is likely going to have a deterministic path (there is still probabilistic programming...), but that doesn't mean it isn't logic, right?
But the real question here, which you have to ask yourself (constantly), is "how do I differentiate a complex program that I don't understand from a conscious entity?" I guarantee you that you don't have the answer (because no one does). But isn't that a really good reason to be careful about anthropomorphizing it?
That's the duck test.
How do you determine if it is a real duck or a highly sophisticated animatronic?
If you anthropomorphize, you rule out the possibility that it is a highly sophisticated animatronic and you *MUST* make the assumption that you are not only an expert, but a perfect, duck detector. But simultaneously we cannot rule out that it is a duck, right? Because, we aren't a perfect duck detector *AND* we aren't an expert in highly sophisticated animatronics (especially of the duck kind).
Remember, there are not two answers to every True-False question, there are three. Every True-False question either has an answer of "True", "False", or "Indeterminate". So don't naively assume it is binary. We all know the Halting Problem, right? (also see my namesake or quantum physics if you want to see such things pop up outside computing)
Though I agree, it can be very spooky. But that only increases the importance of trying to develop mental models that help us more objectively evaluate things. And that requires "indeterminate" be a possibility. This is probably the best place to start to combat the cognitive dissonance.
It's like we're clinging on to things that make us feel like human cognition is special, so we're saying LLMs aren't "really" doing it, then not defining what it actually is.
Do you believe thinking/reasoning is a binary concept? If not, do you think the current top LLM are before or after the 50% mark? What % do you think they're at? What % range do you think humans exhibit?
If that's how it's phrased, and it's in a spot where that's on-topic, then obviously nobody would mind.
This subthread is talking about cases where there's a technical conversation going on and somebody derails it to argue about terminology.
I am a fan of the "Beat Your Genes" podcast, and while some of the prescriptions can be a bit heavy handed, most feel intuitively right. It’s approaching human problems as intelligent-mammal problems, as opposed to something in a category of its own.
When we need to speak precisely about a model and how it works, we have a formal language (mathematics) which allows us to be absolutely specific. When we need to empirically observe how the model behaves, we have a completely precise method of doing this (running an eval).
Any other time, we use language in a purposefully intuitive and imprecise way, and that is a deliberate tradeoff which sacrifices precision for expressiveness.
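And the "running an eval" half really is that mundane - a toy sketch, where the cases, the exact-match metric, and the model callable are all assumed for illustration:

  # The empirical counterpart to the intuitive language: fixed inputs, a fixed
  # scoring rule, and a single reported number. No mental-state vocabulary
  # required. model() is a hypothetical callable wrapping the system under test.
  CASES = [
      ("What is the capital of France?", "Paris"),
      ("What is 12 + 30?", "42"),
  ]

  def exact_match_accuracy(model) -> float:
      hits = sum(model(q).strip() == expected for q, expected in CASES)
      return hits / len(CASES)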
I’m highly skeptical this will happen with LLMs, though; their output is superficially convincing but lacks depth and creativity.
But the question is: what is special about the human machine? What is special about the animal machine? These are different from all the machines we have built. Is it complexity? Is it indeterminism? Is it something more? Certainly these machines have feelings, and we need to account for them when interacting with them.
Though we're getting well off topic from determining if a duck is a duck or is a machine (you know what I mean by this word and that I don't mean a normal duck)
Ties in with creation-from-many and synthetic/artificial data. I usually instruct my coding models with “synthesize” more than “generate”.
I’d like to think that this forum is also a place for the proverbial high school kid, who’s just learned JavaScript and deployed their first site to Vercel using their school Chromebook, to learn a thing or two from the greybeards.
Joking aside, I read too quickly. I got my wires crossed when I responded, mixing up who was who. My bad >.<
But I do agree with scarface and disagree with you. Let me try to respond to this directly. There's a lot to unpack here, but I do ask that you actually read the whole thing. There is nuance here and I think it is important.
> I’d like to think that this forum is also a place for the proverbial high school kid ... to learn a thing or two from the greybeards.
I agree that HN is *also* this place. But I still do not believe that means we need to assume non-expertise.
Think of HN as a place where "graybeards" (or more accurately, experts, because only a few have gray beards) hang out, but there is no gatekeeping. We don't check your credentials when you come in, nor do we ask you to pass any tests of skill. It's open to all. But this is still the place "experts" hang out. Because we don't check for credentials, we'll treat noobs as peers. Why are you saying this is bad?
Anyone is welcome to sit at the "adult table", but that means having adult conversations. Right? It'd be pretty... childish... for a child to sit at the adult table and expect everyone to start talking about kid stuff.
It's okay if newbies come in and don't understand what is being discussed. In fact, being confused is the very first step to learning! I'll put it this way: the first year (maybe 2) of my PhD I was just reading papers and had no idea what was going on. I had to work and work to understand. Had to ask lots of questions to lots of people (consequently getting over the fear of feeling dumb as well as the fear of asking questions). Then, at some point in time I realized I do know what's going on and being discussed. This is a critical skill to becoming a graybeard. You'll constantly have to wade through waters where you're in well over your head.
It is learning through immersion.
We should help noobs. I frequently say "you can't have wizards without noobs." I don't want to gatekeep, and I do actually think we should help the noobs. I need this to be clear[1].
BUT that doesn't mean we should change our conversations between ourselves. To do so would destroy the very reason we come here. There are so few places on the internet where you can talk and operate under the assumption that the other person is reasonably well informed about tech. Frankly, many of those places get destroyed because they get dominated by noobs who change the average level of conversation. While we don't want to kick out noobs, it is *THEIR PREROGATIVE* to ask for help and ask for people to elaborate. There's no shame in this. It's the exact same thing we expect from another expert! It is treating noobs equally. And frankly, if people do make fun of the noobs or treat them disrespectfully, I'll gladly downvote them, flag them, and likely chastise them. Such a response is rather common around here too (which is what makes it welcoming to noobs).
Ultimately, unless we start credential checking (aka gatekeeping) we have 2 options:
- Treat everyone as experts
- Treat everyone as noobs
If we have to modify our language and explain every subtle nuanced detail, well... why would I come here? I'm already a fairly verbose person, and I don't want to write textbooks. I don't expect people to read textbooks either! I don't come to HN to teach. Nor do I want to come here to be lectured. I would find it insulting if the presumption was that I was a noob.
I come to talk with my peers. Some are direct peers, with expertise in my domain, and some are not. I happily ask questions to those with expertise in other domains and so should noobs. But unless someone makes a pretty egregious assumption (e.g. a very niche subject), then pretty much nobody is going to say something about it. That's perfectly okay. Frankly, being comfortable with not knowing and asking for someone to elaborate is one of the, if not *THE*, most important skill required to become a graybeard. You can't know everything, even about a highly specific domain. There's infinite depth and infinite breadth.
So if you don't know, just ask. It's okay. That's really the only way we can both have expert communities AND not gatekeep.
So I ask you:
Where can g̶r̶a̶y̶b̶e̶a̶r̶d̶s̶ experts go to hang out? Specifically, to hang out with other experts.
TLDR:
If you walk into a biker bar, don't chastise someone who assumes you know something about motorcycles.
[0] >>44492437
[1] I even taught a lot during my PhD and was a rather popular TA. The reason being that I am more than happy to help and even would extend my office hours to make sure students got their questions answered. A class is formed through a partnership, not a dictatorship.
Even with the everyday machines and programs we have, we can make them behave based on random input taken, for example, from physical noise. It doesn't suddenly make them a special or different type of machine.
But that's not what my comment was about.
My comment was about *what the average person interprets*.
You asked why people take offense to being called a machine, and I'm trying to explain that. But to understand this we have to understand that there isn't a singular objective way to interpret statements. We can agree that language is fuzzy, right?
So let me try to translate, again.
You say: "People are machines"
(Many) People hear: "People are mechanical automata, running pre-defined routines"
I hear you, this is not what you are trying to communicate. That's not what you want them to hear. But if you want them to hear what you actually mean it is very helpful to understand that some people will hear something different.
Why do they hear the other thing? Because they don't have intimate familiarity with machines and how general that word is. *You have a better understanding of what a machine is than most people.* That's likely the cause for miscommunication.
When they think of a machine they think of things like a car, a computer, a blender, a TV, an oven, or a multitude of other similar things. Even if some of these use probabilistic programming, the average person is not going to know what probabilistic programming even is. They just see something mechanical. Deterministic.
I'm sure you know this, but it is worth reiterating. Communication has 3 main components: What you intend to communicate, the words/gestures/etc you use to communicate, and what the other person hears. Unfortunately (fortunately?) we can't communicate telepathically, so don't forget that the person you're talking to can have a reasonable interpretation that is significantly different from what you intended to say.
When talking about people who are not mathematicians or computer scientists, on average, yes absolutely they hear something like that when told humans are machines.