zlacker

[parent] [thread] 58 comments
1. yters+(OP)[view] [source] 2019-12-13 19:06:06
Why is there never any fundamental research into whether human intelligence is even computable? All these huge, expensive projects are based on an untested premise.
replies(5): >>Ididnt+K >>radekl+c1 >>xamuel+34 >>random+i4 >>13415+Nc
2. Ididnt+K[view] [source] 2019-12-13 19:11:51
>>yters+(OP)
I think it's pretty certain that we can improve a lot. Whether that leads to human intelligence or to something else, we don't know. But it's worth working on improving things and trying different approaches even if the final result isn't known.
replies(1): >>yters+Ha
3. radekl+c1[view] [source] 2019-12-13 19:16:05
>>yters+(OP)
Why wouldn't it be? It seems to me that at worst we would have to wait for computers to become as powerful and complex as a human brain, and then simulating human intelligence would be a matter of accurately modelling the connections.

Is there doubt as to whether a neuron can be represented computationally?

replies(2): >>random+T2 >>yters+za
◧◩
4. random+T2[view] [source] [discussion] 2019-12-13 19:28:13
>>radekl+c1
Yes there is doubt. Can you say for sure that we have a complete model of all physics, and that all physics can be represented computationally? We're still discovering new features of neurons at the quantum level. Who knows how far down it goes. There may be some unknown physics at play inside neurons that can not be computed by a Turing machine. https://www.elsevier.com/about/press-releases/research-and-j...
replies(1): >>Jeff_B+H8
5. xamuel+34[view] [source] 2019-12-13 19:36:22
>>yters+(OP)
There has been some philosophical speculation, but that's generally not very actionable, with people clinging to either side of the question. On the practical side, it's the sort of thing you can't just throw money at and make progress on. OK, you have $100mil to research whether human intelligence is computable. What do you do? Hire lots of humans, assign them noncomputable tasks, and tap your foot waiting for one of them to turn out to be the next Oracle of Delphi? That's fantastic if one of them does, but if none of them do, then you've made zero progress: there's no way to know whether you failed because human intelligence is computable, or because you chose the wrong tasks/humans.
replies(1): >>yters+pa
6. random+i4[view] [source] 2019-12-13 19:38:28
>>yters+(OP)
But there is. We have fundamental research into whether physics is computable. We also have fundamental research on the physical structure of human consciousness/intelligence. So first we need to discover the physical model of human intelligence, and then we can determine its computability.
replies(1): >>Jeff_B+S8
◧◩◪
7. Jeff_B+H8[view] [source] [discussion] 2019-12-13 20:06:55
>>random+T2
There are aspects of quantum mechanics that we don't understand, but we have no reason to believe intelligence relies on them, any more than bridges do.
replies(1): >>random+Ve
◧◩
8. Jeff_B+S8[view] [source] [discussion] 2019-12-13 20:08:08
>>random+i4
Intelligence is a human property, yes, but also a Platonic one. We didn't need to understand how humans process math in order to get computers to do it.
replies(1): >>random+2f
◧◩
9. yters+pa[view] [source] [discussion] 2019-12-13 20:17:46
>>xamuel+34
But that's the sort of thing that should be researched: is the question scientifically answerable? The answer is not obviously no. I can think of ways to scientifically test for noncomputability, and if I can, then certainly much smarter and more knowledgeable people can. People like yourself just assume it is not answerable and throw lots of money at an untested assumption. If the assumption is wrong, not only is AGI a dead end, but "human in the loop" computation should be a huge win.
replies(2): >>xamuel+Ib >>drongo+oI
◧◩
10. yters+za[view] [source] [discussion] 2019-12-13 20:18:42
>>radekl+c1
The mind may be nonphysical.
replies(2): >>13415+5e >>Geee+qT
◧◩
11. yters+Ha[view] [source] [discussion] 2019-12-13 20:19:52
>>Ididnt+K
But there might be even better approaches if human intelligence is not computable. E.g., if the mind is a halting oracle, that could get us all kinds of cool things.
replies(1): >>inimin+f21
◧◩◪
12. xamuel+Ib[view] [source] [discussion] 2019-12-13 20:26:54
>>yters+pa
I'm not saying it's not scientifically answerable, just that hiring people specifically to answer it is not practical.

This type of thing usually comes through unplanned breakthroughs. You can't discover that the earth revolves around the sun just by paying tons of money to researchers and asking them to figure out astronomy. All that would get you would be some extremely sophisticated Ptolemaic epicycle-based models.

https://www.smbc-comics.com/comic/2012-08-09

replies(1): >>yters+eT
13. 13415+Nc[view] [source] 2019-12-13 20:33:04
>>yters+(OP)
There is plenty of fundamental research on it, probably a paper about it is published every week or so. The problem is that there is no general solution to the question, and everybody disagrees about how "human intelligence" should be defined in that context. The answers people give depend too much on untestable "philosophical stances."

Personally, I believe that AI is possible (hard AI thesis) and that computationalism with multiple realizability is right, since none of the philosophical arguments against hard AI and computationalism have convinced me so far. But there are as many opinions on that as there are people working on it.

◧◩◪
14. 13415+5e[view] [source] [discussion] 2019-12-13 20:41:26
>>yters+za
That's one position, but there are three problems with it:

1. You have to solve the interaction problem (how does the mind interact with the physical world?)

2. You need to explain why the world is not physically closed without blatantly violating physical theory / natural laws.

3. From the fact that the mind is nonphysical, it does not follow that computationalism is false. On the contrary, I'd say that computationalism is still the best explanation of how human thinking works even for a dualist. (All the alternatives are quite mystical, except maybe for hypercomputationalism.)

replies(1): >>yters+em
◧◩◪◨
15. random+Ve[view] [source] [discussion] 2019-12-13 20:47:40
>>Jeff_B+H8
We actually do have reason to believe that, since our current understanding of consciousness is very incomplete. Human consciousness extends far beyond our current understanding. I am referring to the full extent of the capabilities of the human mind, not some isolated aspects of it.

The physics of bridges is well known. That is basically a solved problem. Human consciousness/intelligence is an open problem, and may never be solved.

replies(1): >>Jeff_B+Zh
◧◩◪
16. random+2f[view] [source] [discussion] 2019-12-13 20:48:06
>>Jeff_B+S8
As stated in my other reply: "Human consciousness extends far beyond our current understanding. I am referring to the full extent of the capabilities of the human mind, not some isolated aspects of it."

Computers have not superseded humans in mathematical research. That is way beyond anything that we can program into a computer. Computers are better at computation, which is not the same thing.

replies(1): >>Jeff_B+BJ
◧◩◪◨⬒
17. Jeff_B+Zh[view] [source] [discussion] 2019-12-13 21:07:50
>>random+Ve
> We actually do have reason to believe that [intelligence relies on quantum properties]

Are you leaving the reason unsaid, or am I in fact reading your argument correctly: "We don't understand consciousness, and we don't understand quantum, therefore it is likely consciousness relies on quantum." There's already plenty of mystery in an ordinary deterministic computation-driven approach to intelligence.

replies(1): >>random+yy
◧◩◪◨
18. yters+em[view] [source] [discussion] 2019-12-13 21:34:12
>>13415+5e
1. No, I don't. I don't have to explain how gravity works to know that it does and to make scientific claims about its operation. Likewise, I can scientifically demonstrate that the mind is nonphysical and interacts with our physical world without explaining how.

2. If the world is not physically closed then physical theory and natural laws are not violated, since they would not apply to anything beyond the physical world.

3. True, but if the mind can be shown to perform physically uncomputable tasks, then we can infer the mind is not physical. In which case we can also apply Occam's razor and infer the mind is doing something uncomputable as opposed to having access to vast immaterial computational resources.

Finally, calling a position names, such as 'mystical', does nothing to determine the veracity of the position. At best it is counterproductive, distracting from the logic of the argument.

replies(2): >>13415+ut >>perl4e+XZ
◧◩◪◨⬒
19. 13415+ut[view] [source] [discussion] 2019-12-13 22:26:46
>>yters+em
I wasn't trying to argue with you, I merely laid out what is commonly thought about the subject matter. Sorry if that sounds patronizing (it's really not meant to). Anyway, if you want to publish a paper defending a dualist position nowadays in any reputable journal, you'll have to address points 1&2 in one way or another, whether you believe you have to or not. It's not as if that problem hadn't been discussed during the past 60 years or so. There are whole journals dedicated to it.

> if the mind can be shown to perform physically uncomputable tasks

That's true. Many people have tried that and many people believe they can show it. Roger Penrose, for example. These arguments are usually based on complexity theory or the Halting Problem and involve certain views about what mathematicians can and cannot do. As I've said, I've personally not been convinced by any of those arguments.

Your mileage may differ. Fair enough. Just make sure that you do not "know the answer" already when you start thinking about the problem, because that's what many people seem to do when they think about these kinds of problems, and it's a pity.

> calling a position names, such as 'mystical', does nothing to determine the veracity of the position. At best it is counter productive by distracting from the logic of the argument.

That wasn't my intention, I use "mystical" in this context in the sense of "does not provide any better understanding or scientifically acceptable explanation." Many of the (modern) arguments in this area are inferences to the best explanation.

By the way, correctly formulated computationalism does not presume physicalism. It is fully compatible with dualism.

replies(1): >>yters+FT
◧◩◪◨⬒⬓
20. random+yy[view] [source] [discussion] 2019-12-13 23:15:54
>>Jeff_B+Zh
No I'm saying: "We don't have a perfectly accurate physical model of consciousness, we know that physics is incomplete, and our current model of neurons extends to the lowest levels of known physics, therefore there may be unknown physics involved in consciousness, and those unknown physics may not be computable."
replies(2): >>Jeff_B+IG >>random+5H
◧◩◪◨⬒⬓⬔
21. Jeff_B+IG[view] [source] [discussion] 2019-12-14 00:49:46
>>random+yy
In response to

> > we have no reason to believe intelligence relies on [as-yet mysterious aspects of quantum physics]

you wrote

> We actually do have reason to believe that ...

and later clarified

> [some true premises], therefore there may be unknown physics involved in consciousness, and those unknown physics may not be computable.

Saying something could be is different from saying we have reason to believe it. There may be a soul. Absent convincing evidence of the soul, though, we shouldn't predicate other research on the idea that it exists.

◧◩◪◨⬒⬓⬔
22. random+5H[view] [source] [discussion] 2019-12-14 00:55:34
>>random+yy
I clarified it in my latest reply above. The original comment asked if there is any doubt as to whether a neuron can be represented computationally. We don't know exactly what a neuron is, we are still discovering subtle new mechanisms in its functioning, and neurons are part of the most complex structure in the known universe. So of course there is doubt.
◧◩◪
23. drongo+oI[view] [source] [discussion] 2019-12-14 01:13:22
>>yters+pa
OK, what experiments would you design to test whether AGI is possible? Given the decades (centuries?) of thought that have gone into the issue, I'm sure a set of experiments would be valuable.
replies(1): >>yters+kT
◧◩◪◨
24. Jeff_B+BJ[view] [source] [discussion] 2019-12-14 01:33:49
>>random+2f
By "math" I mean proving theorems, not doing arithmetic. Yes, we're better at finding useful theorems, but computers can do it.

More generally, the fact that currently humans are the only entity observed doing X does not mean you need to understand humans to understand X.

replies(1): >>random+CM
◧◩◪◨⬒
25. random+CM[view] [source] [discussion] 2019-12-14 02:26:26
>>Jeff_B+BJ
I wrote "computation", not "arithmetic". Human intelligence goes beyond computation / mathematical logic, and you seem to ignore all of that. We haven't got a clue how consciousness works. It's a total mystery.
replies(1): >>Jeff_B+IS
◧◩◪◨⬒⬓
26. Jeff_B+IS[view] [source] [discussion] 2019-12-14 04:14:17
>>random+CM
The aim of AI is intelligence, not human intelligence. It's not to emulate a process; it's to solve problems.

If we do build AI, maybe we'll never know if it's conscious. You can't know whether any other human is conscious, either. But you can know whether they make you laugh, or cry, or learn, or love. The knowable things are good enough.

replies(2): >>yters+KT >>random+nZ
◧◩◪◨
27. yters+eT[view] [source] [discussion] 2019-12-14 04:20:47
>>xamuel+Ib
Bell Labs made a bunch of breakthroughs that way, e.g. information theory.
◧◩◪◨
28. yters+kT[view] [source] [discussion] 2019-12-14 04:21:54
>>drongo+oI
If humans can solve problems that require more computational resources than exist in the universe, then AGI is not possible. I have run one experiment to demonstrate this.
replies(1): >>xamuel+0o1
◧◩◪
29. Geee+qT[view] [source] [discussion] 2019-12-14 04:23:25
>>yters+za
AGI doesn't mean human-like. The idea is not to clone the "nonphysicality" of the human brain but vastly surpass it in raw intelligence.
◧◩◪◨⬒⬓
30. yters+FT[view] [source] [discussion] 2019-12-14 04:28:53
>>13415+ut
Yes, I understand computationalism does not imply physicalism, but physicalism does imply computationalism. Thus, if computationalism is empirically refuted, then physicalism is false.

I know the Lucas-style Gödel incompleteness theorem arguments. Whether successful or not, the counterarguments are certainly fallacious. E.g., just because I can form a halting problem for myself does not mean I am not a halting oracle for uncomputable problems.

But I have developed a more empirical approach, something that can be solved by the average person, rather than dealing with whether they can find the Gödel sentence for a logic system.

Also, there is a lot of interesting research showing that humans are very effective at approximating solutions to NP-complete problems, apparently better than the best known algorithms. While not conclusive proof in itself, such examples are very surprising if there is nothing super-computational about the human mind, and less so if there is.

At any rate, there are a number of lines of evidence I'm aware of that makes the uncomputable mind a much more plausible explanation for what we see humans do, ignoring the whole problem of consciousness. I'm just concerned with empirical results, not philosophy or math. As such, I don't really care what some journal's idea of the burden of proof is. I care about making discoveries and moving our scientific knowledge and technology forward.

Additionally, this is not some academic speculation. If the uncomputable mind thesis is true, then there are technological gains to be made, such as through human in the loop approaches to computation. Arguably, that is where all the successful AI and ML is going these days, so that serves as yet one more line of evidence for the uncomputable mind thesis.

replies(1): >>inimin+721
◧◩◪◨⬒⬓⬔
31. yters+KT[view] [source] [discussion] 2019-12-14 04:31:18
>>Jeff_B+IS
What if the secret sauce that makes intelligence, the kind that invents AI, is consciousness? I, at least, certainly do a lot of conscious thinking when I solve problems, as opposed to unconscious thinking :)
replies(1): >>perl4e+o01
◧◩◪◨⬒⬓⬔
32. random+nZ[view] [source] [discussion] 2019-12-14 06:38:10
>>Jeff_B+IS
But I wasn't talking about Artificial Intelligence or problem solving. I am talking about Actual Intelligence, specifically human level intelligence.

If we build AI, we could only know whether it's conscious if we know what consciousness is, and that is something we do not know, and perhaps never will. It could be fundamentally beyond our comprehension.

◧◩◪◨⬒
33. perl4e+XZ[view] [source] [discussion] 2019-12-14 06:55:54
>>yters+em
Explaining how gravity works doesn't tell you whether gravity itself is a real thing, whether it is metaphysical, or whether it's an epiphenomenon of something else. People talk about it being curvature in spacetime vs. a force, but we're just reifying the math, right?

And I don't think we have a completely firm grasp on what is possible computationally with a given amount of physical resources, given the development of quantum computing.

replies(1): >>yters+FD1
◧◩◪◨⬒⬓⬔⧯
34. perl4e+o01[view] [source] [discussion] 2019-12-14 07:08:59
>>yters+KT
"I, at least, certainly do a lot of conscious thinking when I solve problems"

That jumps out at me, because I do a lot of "unconscious thinking" to solve problems and I feel like I've read where other people describe similar experiences.

Besides the cliche of solving problems in your sleep, I sometimes have an experience where consciously focusing on solving a problem leads to a blind alley, and distracting my conscious mind with something else somehow lets a background task run to "defrag" or something. But on the other hand there is "bad" distraction too - I'm not sure offhand what the difference is.

It's possible that I'm far from typical, but I also suspect people of different types and intellects might process things in very different ways too.

But to me, I definitely have a strong sense much of the time that my conscious mind engages in the receipt of information about something complex and then the actual analysis is happening somewhere invisible to me in my brain. I'm frequently conscious that I'm figuring something out and yet unaware of the process.

It particularly seems weird to me that other people often seem to be convinced they are conscious of their thought processes, because surely the type of person who is not a knowledge worker isn't? I'm not sure if my way of thinking is the "smart way", the "dumb way", or just weird, but I'm sure that there is significant diversity among people in general.

Sometimes I wonder if the model of AI is the typical mind of a very small subset of humanity that's unlike the rest, kind of like the way psychological experiments have been biased towards college students since that's who they could easily get.

replies(1): >>yters+ms1
◧◩◪◨⬒⬓⬔
35. inimin+721[view] [source] [discussion] 2019-12-14 07:43:51
>>yters+FT
> physicalism does imply computationalism

That's not true either.

There are plenty of materialists who think the universe is not computable, thus it's totally possible to believe that the mind is not computable despite being entirely physical.

replies(1): >>yters+4s1
◧◩◪
36. inimin+f21[view] [source] [discussion] 2019-12-14 07:46:58
>>yters+Ha
If the mind were a halting oracle, I don't think most of our open problems in mathematics would still be open.
replies(1): >>yters+8s1
◧◩◪◨⬒
37. xamuel+0o1[view] [source] [discussion] 2019-12-14 14:41:52
>>yters+kT
What was the experiment you ran?
replies(1): >>yters+Vr1
◧◩◪◨⬒⬓
38. yters+Vr1[view] [source] [discussion] 2019-12-14 15:30:37
>>xamuel+0o1
Filling in missing assignments for a Boolean circuit. In general it is an NP-hard problem, and humans appear to do it pretty well at computationally intractable sizes.
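A minimal sketch of the kind of task described (a toy circuit of my own, not the actual experimental setup): given some inputs fixed, find values for the remaining inputs that make the circuit output true. This is circuit-SAT, and the brute-force baseline is exponential in the number of free inputs.

```python
from itertools import product

def circuit(x):
    # A tiny example circuit: (x0 AND NOT x1) OR (x2 AND x3)
    return (x[0] and not x[1]) or (x[2] and x[3])

def complete_assignment(circuit, fixed, n):
    """Brute-force search for values of the free inputs that make the
    circuit output True, given some inputs already fixed.
    Exponential in the number of free inputs."""
    free = [i for i in range(n) if i not in fixed]
    for bits in product([False, True], repeat=len(free)):
        x = dict(fixed)
        x.update(zip(free, bits))
        if circuit([x[i] for i in range(n)]):
            return x
    return None  # no completion makes the circuit output True

# Example: with x0=False and x1=True fixed, only x2=x3=True works.
solution = complete_assignment(circuit, {0: False, 1: True}, 4)
```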
replies(1): >>xamuel+ku3
◧◩◪◨⬒⬓⬔⧯
39. yters+4s1[view] [source] [discussion] 2019-12-14 15:31:52
>>inimin+721
It's possible, so I should qualify it: our current understanding of physics implies computationalism.

So, if a macro-level phenomenon, i.e. the human mind, is uncomputable, then it is not emergent from the lower-level computable physical substrate.

replies(1): >>inimin+u12
◧◩◪◨
40. yters+8s1[view] [source] [discussion] 2019-12-14 15:33:01
>>inimin+f21
It's possible for the mind to solve more halting problems than any finite computer, yet still not be as powerful as a complete halting oracle. Thus, the fact we haven't solved every problem does not count as evidence against the mind being a halting oracle.
replies(1): >>inimin+c32
◧◩◪◨⬒⬓⬔⧯▣
41. yters+ms1[view] [source] [discussion] 2019-12-14 15:35:17
>>perl4e+o01
I've never solved a problem while completely unconscious. I've occasionally had insights while dreaming, and there is some intuitive aspect to thought that is difficult or impossible to explicitly articulate. But, every instance of problem solving I engage in is connected with conscious thought.
replies(1): >>perl4e+Fl2
◧◩◪◨⬒⬓
42. yters+FD1[view] [source] [discussion] 2019-12-14 17:28:12
>>perl4e+XZ
The metaphysics are unimportant. The important question as far as AGI is concerned is whether human intelligence is physically computable. And quantum computation does not exceed Turing computation in the computability sense: a classical Turing machine can simulate any quantum computation, albeit with exponential slowdown. So quantum mechanics does not move the question beyond the Turing-computable.
◧◩◪◨⬒⬓⬔⧯▣
43. inimin+u12[view] [source] [discussion] 2019-12-14 20:50:58
>>yters+4s1
If the mind were found to be uncomputable, I think you'd find vastly more physicists would take that as evidence the universe is uncomputable than that the mind is nonphysical.
replies(1): >>yters+ok2
◧◩◪◨⬒
44. inimin+c32[view] [source] [discussion] 2019-12-14 21:05:54
>>yters+8s1
Actually it does. While it's logically possible, evidence for a hypothesis A is still provided by any data that is more likely under hypothesis A than under hypothesis B.

The hypothesis that the mind is computable but is using heuristics, of various levels of sophistication, explains the data better and is more parsimonious than your hypothesis, because we already have reason to believe that the mind uses heuristics extensively.

Where you see uncomputable oracular insights, others see computable combinations of heuristics. If you introspect deeply enough while problem-solving, you may be able to sense the heuristics working prior to the flash of intuition.
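A toy example of a "computable combination of heuristics" (my own illustration, not anything from the research under discussion): the greedy nearest-neighbor rule for the NP-hard traveling salesman problem is a trivially computable procedure, yet on typical inputs it lands near a good tour.

```python
import math

def nearest_neighbor_tour(points):
    """Greedy nearest-neighbor heuristic for the (NP-hard) TSP:
    from the current city, always visit the closest unvisited city.
    Cheap to compute, and often close to a good tour in practice."""
    unvisited = list(range(1, len(points)))
    tour = [0]  # start at city 0
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

# Four corners of a unit square; the heuristic visits each city once.
points = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = nearest_neighbor_tour(points)
```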

replies(1): >>yters+mv4
◧◩◪◨⬒⬓⬔⧯▣▦
45. yters+ok2[view] [source] [discussion] 2019-12-15 00:02:42
>>inimin+u12
So they may, but that would not follow logically. If the lowest level of physics is all computable, then the higher physical levels must also be computable. Thus, if a higher level is not computable, it is not physical. We have never found anything at the lowest level that is not computable. None of it is even at the level of a Turing machine, unlike human-produced computers.
replies(1): >>inimin+MK2
◧◩◪◨⬒⬓⬔⧯▣▦
46. perl4e+Fl2[view] [source] [discussion] 2019-12-15 00:18:02
>>yters+ms1
I don't know what "completely unconscious" means, but it doesn't sound like what I was describing.

I think I agree that my problem solving is connected with conscious thought, but the heavy lifting is mostly (or at least frequently) done by something that "I" am not aware of in detail.

When someone is explaining something complicated, pretty often, maybe not always, my (conscious) mind is pretty blank. I can say "yeah, I'm following you", but I feel like I'm not. Then when I start working on it, I feel like I am fumbling around for the keys to unlock some background processing that was happening in the meantime.

Also, when I am in a state where I am consciously writing something elaborate, and I feel connected to the complex concepts behind it, sometimes I get stuck in a blind alley. My context seems too narrow, and often I can get unstuck by just doing something unrelated to distract my conscious mind, like browsing news on my phone and then it's like a stuck process was terminated and I realize what I need to change on a higher level of abstraction.

It's possible I have some sort of inherent disability that I am compensating for by using a different part of my brain than normal, I suppose.

replies(1): >>yters+Vv4
◧◩◪◨⬒⬓⬔⧯▣▦▧
47. inimin+MK2[view] [source] [discussion] 2019-12-15 07:22:23
>>yters+ok2
Any chaotic system (highly sensitive to initial conditions) is practically uncomputable for us, because we have neither the computational power nor the ability to measure the initial conditions sufficiently accurately. Whether there is some lowest level at which everything is quantized, or it's real numbers all the way down, is an open question.
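That sensitivity is easy to demonstrate with the standard logistic map (a toy system, not a model of neurons): two trajectories starting one part in ten billion apart end up macroscopically different within a few dozen steps.

```python
def logistic(x, r=4.0):
    # Logistic map; fully chaotic at r = 4
    return r * x * (1.0 - x)

# Two trajectories whose initial conditions differ by 1e-10.
a, b = 0.3, 0.3 + 1e-10
max_gap = 0.0
for _ in range(100):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
# The gap roughly doubles each step, so the 1e-10 difference in
# initial conditions is amplified to a macroscopic disagreement.
```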

I don't think your argument will seem compelling to anyone who doesn't already have a strong prior belief that the mind is non-physical.

replies(1): >>yters+9v4
◧◩◪◨⬒⬓⬔
48. xamuel+ku3[view] [source] [discussion] 2019-12-15 18:11:40
>>yters+Vr1
Did you publish a paper on these experiments?

I'm not familiar with the boolean circuit problem, but I wonder if it's an instance where the NP-hardness comes from specific edge cases, and whether your experiment tested those edge cases. Compare with the fact that the C++ compiler is Turing complete: its Turing completeness arises from compiling extremely contrived bizarro code that would never come up in practice. So for everyday code, humans can answer the question, "Will the C++ compiler enter an infinite loop when it tries to compile this code?", quite easily, just by answering "No." every time. That doesn't mean humans can solve the halting problem, though.

replies(1): >>yters+Pu4
◧◩◪◨⬒⬓⬔⧯
49. yters+Pu4[view] [source] [discussion] 2019-12-16 08:39:08
>>xamuel+ku3
There may be some way the problem set I used is computationally tractable, but I am not aware of such. I have not published the work yet.

But the bigger point is: why are others not doing this kind of research? It does not seem out of the realm of conceptual possibility, since someone such as myself came up with a test. And the question is prior to all the big AI projects we currently have going on.

◧◩◪◨⬒⬓⬔⧯▣▦▧▨
50. yters+9v4[view] [source] [discussion] 2019-12-16 08:43:18
>>inimin+MK2
I would argue it is the other way around. If people were truly unbiased about whether we are computable or not, then they would give my argument consideration. It is those with an a priori computational bias who will not be fazed by what I say.
replies(1): >>inimin+Do5
◧◩◪◨⬒⬓
51. yters+mv4[view] [source] [discussion] 2019-12-16 08:46:12
>>inimin+c32
In that setup, the evidence makes an uncomputable partial oracle the most likely hypothesis, since the space of uncomputable partial oracles is much, much larger (infinitely so) than either the space of computable minds or that of perfect halting oracles.
replies(1): >>inimin+EW6
◧◩◪◨⬒⬓⬔⧯▣▦▧
52. yters+Vv4[view] [source] [discussion] 2019-12-16 08:55:04
>>perl4e+Fl2
Every instance of problem solving I encounter involves conscious intentionality. As an analogy, when I get a drink from the fridge, there is a lot going on in my body to make that happen that I do not consciously control. But, overall it is taking place due to my conscious intentional control. I argue the same is going on in the mind, a lot of subconscious things going on that I do not directly control, but the overall effect is directed by my conscious control.
replies(1): >>perl4e+OO4
◧◩◪◨⬒⬓⬔⧯▣▦▧▨
53. perl4e+OO4[view] [source] [discussion] 2019-12-16 13:39:55
>>yters+Vv4
That doesn't seem like a good analogy to me, because problem solving is intrinsically about something you don't understand in the first place, whereas when reaching for something, you already understand what you are doing.

If I use a mechanical grabber aid to reach something, then it isn't figuring out how to do anything. But if I ask Wolfram Alpha the answer to a math problem, it isn't me doing it.

replies(1): >>yters+ojr
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
54. inimin+Do5[view] [source] [discussion] 2019-12-16 17:48:45
>>yters+9v4
You're right, but people tend to have strong priors one way or the other, often unconsciously. This is one of those classic cases where people with strong, divergent priors will disagree more strongly after seeing the same evidence. So if you want to convince people, you'll have to try harder than most to find common ground.
replies(1): >>yters+FUs
◧◩◪◨⬒⬓⬔
55. inimin+EW6[view] [source] [discussion] 2019-12-17 05:59:51
>>yters+mv4
Well, no. That is the same kind of error as Zeno's paradox.

One assigns a prior to a class of hypotheses, and the cardinality of that set does not change the total probability you assign to the entire hypothesis class.

If one instead assigns a constant non-zero prior to each individual hypothesis of an infinite class, a grievous error has been committed and inconsistent and paradoxical beliefs can be the only result.
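The point can be made concrete with a small sketch: a geometrically decaying prior over a countably infinite hypothesis class sums to a fixed total, while a constant nonzero prior per hypothesis cannot be normalized at all.

```python
# A geometric prior over a countably infinite hypothesis class:
# hypothesis n gets weight 2^-(n+1), so the class's total probability
# is 1 no matter how many hypotheses the class contains.
geometric_prior = [2.0 ** -(n + 1) for n in range(60)]
total = sum(geometric_prior)  # converges to 1

# By contrast, a constant nonzero weight per hypothesis diverges:
# the partial sums over an infinite class grow without bound.
constant_partial = sum(0.01 for _ in range(1000))  # already far above 1
```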

replies(1): >>yters+ijr
◧◩◪◨⬒⬓⬔⧯
56. yters+ijr[view] [source] [discussion] 2019-12-27 05:04:53
>>inimin+EW6
Sounds like then you can just arbitrarily divide up your classes to benefit whatever hypothesis you want, leading to special pleading. I think to remain objective one has to integrate over the entire space of hypothesis instances, using an infinitesimal weighting in the case of infinite spaces.
replies(1): >>inimin+veB
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲
57. yters+ojr[view] [source] [discussion] 2019-12-27 05:07:14
>>perl4e+OO4
Sure, it depends on what level your intentionality is involved. But, my experience is my intentionality is quite intimately involved with my intellectual processes. I cannot just will 'answer my math problem' and my mind pops out the answer. There is a lot of intentional, mental actions that take place to arrive at an answer.
◧◩◪◨⬒⬓⬔⧯▣▦▧▨◲◳
58. yters+FUs[view] [source] [discussion] 2019-12-27 22:54:20
>>inimin+Do5
And that's why I'm not concerned with convincing anyone. The proof is in the pudding. If I'm right, I should be able to get results. If not, then my argument doesn't matter.
◧◩◪◨⬒⬓⬔⧯▣
59. inimin+veB[view] [source] [discussion] 2020-01-01 10:30:08
>>yters+ijr
> integrate over the entire space of hypothesis instances, using an infinitesimal weighting in the case of infinite spaces.

Agreed.

However, when you write:

> the evidence makes the uncomputable partial Oracle the most likely hypothesis, since the space of uncomputable partial oracles is much much larger

you seem to argue that a hypothesis is more likely because it represents a larger (indeed infinite) space of sub-hypotheses. Reasoning from the cardinality of a set of hypotheses to a degree of belief in the set would in general seem to be unsound.

[go to top]