Is there doubt as to whether a neuron can be represented computationally?
This kind of thing usually comes through unplanned breakthroughs. You can't discover that the earth revolves around the sun just by paying researchers a lot of money and asking them to figure out astronomy. All that would get you is some extremely sophisticated epicycle-based models.
Personally, I believe that AI is possible (hard AI thesis) and that computationalism with multiple realizability is right, since none of the philosophical arguments against hard AI and computationalism have convinced me so far. But there are as many opinions on that as there are people working on it.
1. You have to solve the interaction problem (how does the mind interact with the physical world?)
2. You need to explain why the world is not physically closed without blatantly violating physical theory / natural laws.
3. From the fact that the mind is nonphysical, it does not follow that computationalism is false. On the contrary, I'd say that computationalism is still the best explanation of how human thinking works even for a dualist. (All the alternatives are quite mystical, except maybe for hypercomputationalism.)
The physics of bridges is well known. That is basically a solved problem. Human consciousness/intelligence is an open problem, and may never be solved.
Computers have not superseded humans in mathematical research. That is way beyond anything that we can program into a computer. Computers are better at computation, which is not the same thing.
Are you leaving the reason unsaid, or am I in fact reading your argument correctly: "We don't understand consciousness, and we don't understand quantum mechanics, therefore consciousness likely relies on quantum effects"? There's already plenty of mystery in an ordinary, deterministic, computation-driven approach to intelligence.
2. If the world is not physically closed then physical theory and natural laws are not violated, since they would not apply to anything beyond the physical world.
3. True, but if the mind can be shown to perform physically uncomputable tasks, then we can infer the mind is not physical. In which case we can also apply Occam's razor and infer the mind is doing something uncomputable as opposed to having access to vast immaterial computational resources.
Finally, calling a position names, such as 'mystical', does nothing to determine the veracity of the position. At best it is counterproductive, distracting from the logic of the argument.
> if the mind can be shown to perform physically uncomputable tasks
That's true. Many people have tried, and many believe they can show it; Roger Penrose, for example. These arguments are usually based on complexity theory or the Halting Problem and involve particular views about what mathematicians can and cannot do. As I've said, I've personally not been convinced by any of them.
Your mileage may vary. Fair enough. Just make sure that you do not "know the answer" already when you start thinking about the problem, because that's what many people seem to do with these kinds of problems, and it's a pity.
> calling a position names, such as 'mystical', does nothing to determine the veracity of the position. At best it is counterproductive, distracting from the logic of the argument.
That wasn't my intention; I use "mystical" in this context in the sense of "does not provide any better understanding or scientifically acceptable explanation." Many of the (modern) arguments in this area are inferences to the best explanation.
By the way, correctly formulated computationalism does not presume physicalism. It is fully compatible with dualism.
> > we have no reason to believe intelligence relies on [as-yet mysterious aspects of quantum physics]
you wrote
> We actually do have reason to believe that ...
and later clarified
> [some true premises], therefore there may be unknown physics involved in consciousness, and those unknown physics may not be computable.
Saying something could be true is different from saying we have reason to believe it is. There may be a soul. Absent convincing evidence of a soul, though, we shouldn't predicate other research on the idea that it exists.
More generally, the fact that currently humans are the only entity observed doing X does not mean you need to understand humans to understand X.
If we do build AI, maybe we'll never know if it's conscious. You can't know whether any other human is conscious, either. But you can know whether they make you laugh, or cry, or learn, or love. The knowable things are good enough.
I know the Lucas-style Gödel incompleteness arguments. Whether or not they succeed, the usual counterarguments are certainly fallacious. E.g., just because a halting problem can be formed for me does not mean I am not a halting oracle for otherwise uncomputable problems.
But I have developed a more empirical approach: something the average person can attempt, rather than a test of whether they can find the Gödel sentence for a logic system.
Also, there is a lot of interesting research showing that humans are very effective at approximating solutions to NP-complete problems, apparently better than the best known algorithms. While not conclusive proof in itself, such results are very surprising if there is nothing super-computational about the human mind, and less so if there is.
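To make the comparison concrete, here is a toy sketch (my own illustration, not the research referred to above) of the gap between a cheap heuristic and the exact answer on an NP-hard problem: nearest-neighbor versus brute force on a tiny random Euclidean TSP instance. Claims that humans "beat the best algorithms" are claims about ratios like the one printed here.

```python
import itertools
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor_tour(points):
    """Greedy heuristic: always walk to the closest unvisited city."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def optimal_tour(points):
    """Exact answer by brute force; only feasible for tiny instances."""
    best = min(itertools.permutations(range(1, len(points))),
               key=lambda rest: tour_length(points, (0,) + rest))
    return [0] + list(best)

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(8)]
nn = tour_length(pts, nearest_neighbor_tour(pts))
opt = tour_length(pts, optimal_tour(pts))
print(f"nearest-neighbor: {nn:.3f}, optimal: {opt:.3f}, ratio: {nn / opt:.3f}")
```

At 8 cities the brute force checks only 5040 tours; the exact approach becomes infeasible quickly, which is exactly why heuristic performance on larger instances is the interesting question.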
At any rate, there are a number of lines of evidence I'm aware of that make the uncomputable mind a much more plausible explanation for what we see humans do, setting aside the whole problem of consciousness. I'm just concerned with empirical results, not philosophy or math. As such, I don't really care what some journal's idea of the burden of proof is; I care about making discoveries and moving our scientific knowledge and technology forward.
Additionally, this is not some academic speculation. If the uncomputable-mind thesis is true, then there are technological gains to be made, such as through human-in-the-loop approaches to computation. Arguably, that is where all the successful AI and ML is heading these days, which serves as yet one more line of evidence for the uncomputable-mind thesis.
If we build AI, we could only know whether it's conscious if we know what consciousness is, and that is something we do not know and perhaps never will. It could be fundamentally beyond our comprehension.
And I don't think we have a completely firm grasp on what is possible computationally with a given amount of physical resources, given the development of quantum computing.
That jumps out at me, because I do a lot of "unconscious thinking" to solve problems, and I feel like I've read other people describing similar experiences.
Besides the cliché of solving problems in your sleep, I sometimes find that consciously focusing on a problem leads to a blind alley, while distracting my conscious mind with something else somehow lets a background task run to "defrag" or something. On the other hand, there is "bad" distraction too; I'm not sure offhand what the difference is.
It's possible that I'm far from typical, but I also suspect people of different types and intellects might process things in very different ways too.
But to me, I definitely have a strong sense much of the time that my conscious mind receives information about something complex while the actual analysis happens somewhere invisible to me in my brain. I'm frequently conscious that I'm figuring something out and yet unaware of the process.
It particularly seems weird to me that other people often seem to be convinced they are conscious of their thought processes, because surely the type of person who is not a knowledge worker isn't? I'm not sure if my way of thinking is the "smart way", the "dumb way", or just weird, but I'm sure that there is significant diversity among people in general.
Sometimes I wonder if the model of AI is the typical mind of a very small subset of humanity that's unlike the rest, kind of like the way psychological experiments have been biased towards college students since that's who they could easily get.
That's not true either.
There are plenty of materialists who think the universe is not computable, thus it's totally possible to believe that the mind is not computable despite being entirely physical.
So, if a macro-level phenomenon, i.e. the human mind, is uncomputable, then it is not emergent from a computable low-level physical substrate.
The hypothesis that the mind is computable but is using heuristics, of various levels of sophistication, explains the data better and is more parsimonious than your hypothesis, because we already have reason to believe that the mind uses heuristics extensively.
Where you see uncomputable oracular insights, others see computable combinations of heuristics. If you introspect deeply enough while problem-solving, you may be able to sense the heuristics working prior to the flash of intuition.
I think I agree that my problem solving is connected with conscious thought, but the heavy lifting is mostly (or at least frequently) done by something that "I" am not aware of in detail.
When someone is explaining something complicated, pretty often, maybe not always, my (conscious) mind is pretty blank. I can say "yeah, I'm following you", but I feel like I'm not. Then when I start working on it, I feel like I am fumbling around for the keys to unlock some background processing that was happening in the meantime.
Also, when I am in a state where I am consciously writing something elaborate, and I feel connected to the complex concepts behind it, sometimes I get stuck in a blind alley. My context seems too narrow, and often I can get unstuck just by doing something unrelated to distract my conscious mind, like browsing news on my phone; then it's as if a stuck process was terminated, and I realize what I need to change at a higher level of abstraction.
It's possible I have some sort of inherent disability that I am compensating for by using a different part of my brain than normal, I suppose.
I don't think your argument will seem compelling to anyone who doesn't already have a strong prior belief that the mind is non-physical.
I'm not familiar with the boolean circuit problem, but I wonder if it's an instance where the NP-hardness comes from specific edge cases, and whether your experiment tested those edge cases. Compare with the fact that the C++ compiler is Turing-complete (via template metaprogramming): its Turing completeness arises from compiling extremely contrived bizarro code that would never come up in practice. So for everyday code, humans can answer the question "Will the C++ compiler enter an infinite loop when it tries to compile this code?" quite easily, just by answering "No" every time. That doesn't mean humans can solve the halting problem, though.
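The distinction between an everyday heuristic and a genuine decider is exactly the classic diagonal argument. A minimal sketch (hypothetical function names, illustration only): given any claimed total halting decider, we can construct a program it must misclassify, so answering "No" every time can be mostly right without being a decider at all.

```python
def paradox_maker(halts):
    """Given any claimed total halting decider `halts(f)` (True iff f()
    would halt), build the program it must misclassify."""
    def diagonal():
        # Do the opposite of whatever the decider predicts about us.
        if halts(diagonal):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops", so halt immediately
    return diagonal

# A heuristic "decider" in the spirit of answering "No, it won't loop"
# for all everyday code: right most of the time, but not a decider.
def always_halts(f):
    return True

d = paradox_maker(always_halts)
# `always_halts` claims d halts, but by construction d would then loop
# forever, so the claim is wrong on this input. The same construction
# defeats every candidate total decider, not just this naive one.
print(always_halts(d))  # True, yet d would in fact loop forever
```

The same worry applies to an empirical test of humans: performing well on typical instances of an undecidable or NP-hard problem is evidence about heuristics, not about oracles, unless the hard diagonal-style instances are in the test set.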
But the bigger point is: why aren't others doing this kind of research? It does not seem beyond the realm of conceptual possibility, since someone like myself came up with a test. And the question is prior to all the big AI projects we currently have going on.
If I use a mechanical grabber aid to reach something, then it isn't figuring out how to do anything. But if I ask Wolfram Alpha the answer to a math problem, it isn't me doing it.
One assigns a prior to a class of hypotheses, and the cardinality of that set does not change the total probability you assign to the entire hypothesis class.
If one instead assigns a constant non-zero prior to each individual hypothesis of an infinite class, a grievous error has been committed and inconsistent and paradoxical beliefs can be the only result.
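A quick numeric check of the point above (my own sketch, assuming a geometric prior as the well-behaved example): a prior that decays like $2^{-(n+1)}$ sums to 1 over infinitely many hypotheses, while any constant nonzero prior has partial sums that grow without bound and so cannot be a probability distribution.

```python
# Geometric prior P(H_n) = 2**-(n + 1) over hypotheses H_0, H_1, ...:
# the partial sums converge to 1, so infinitely many hypotheses can
# coherently share a fixed total probability mass.
geometric_partial = sum(2.0 ** -(n + 1) for n in range(50))

# A constant nonzero prior (0.01 per hypothesis), by contrast, exceeds
# total probability 1 after finitely many hypotheses and keeps growing,
# so it cannot be normalized over an infinite class.
constant_partial = sum(0.01 for _ in range(200))

print(f"geometric partial sum: {geometric_partial}")  # ~1.0, converging
print(f"constant partial sum:  {constant_partial}")   # ~2.0, diverging
```

This is why assigning a fixed prior to the *class* is coherent while assigning a fixed prior to each *member* of an infinite class is not, independent of how large the class is.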
Agreed.
However, when you write:
> the evidence makes the uncomputable partial Oracle the most likely hypothesis, since the space of uncomputable partial oracles is much much larger
you seem to argue that a hypothesis is more likely because it represents a larger (indeed infinite) space of sub-hypotheses. Reasoning from the cardinality of a set of hypotheses to a degree of belief in the set would in general seem to be unsound.