Seems like quantum gravity theory might be missing something, no?
The same may apply to "intelligence" --- aka AGI.
As far as I know, there is no proof that AGI can be produced or simulated by a binary logic algorithm running on a finite computer.
Hence, some people support the idea of "emergence" --- aka alchemy, aka PFM --- Pure Friggin Magic.
Looks like this result says we can't simulate our own plane in a computer. But the stuff in that simulation exists at plane P = n+1. So maybe the conclusion is "you can't simulate plane n from within plane n+1", which means we can't simulate our own plane, let alone our potential parent's, and it doesn't mean we don't have one.
What level of granularity or fidelity are you referring to?
Can we accurately simulate a smaller universe in this universe? If I understand correctly, according to this paper the answer is "no". Except how do we determine the simulation is inaccurate, without either knowing what is accurate (and thus already having a correct simulation) or being able to distinguish the inaccuracy from randomness? (The simulation already won't perfectly predict a small part of the real universe because of such randomness, so you can't just point to a discrepancy.) What does it mean for a simulation to be "inaccurate"?
Also, you don't need to simulate the entire universe to effectively simulate it for one person, e.g. put them in a VR world. From that person's perspective, both scenarios are the same.
These physicists say they have *mathematical* proof that this is not possible.
86 billion neurons, 100 trillion connections, and each connection modulated by dozens of different neurotransmitters and action potential levels and uncounted timing sequences (and that's just what I remember off the top of my head from undergrad neuroscience courses decades ago).
It hasn't even been done for a single pair of neurons because all the variables are not yet understood. All the neural nets use only the most oversimplified version of what a neuron does — merely a binary fire/don't fire algo with training-adjusted weights.
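For reference, the "oversimplified version" amounts to something like this perceptron-style unit (a generic sketch of the textbook model, not any particular framework's code):

    # The oversimplified neuron behind artificial neural nets: a weighted sum
    # of inputs pushed through a threshold. Everything else listed above
    # (neurotransmitters, timing, internal biochemistry) is simply absent.
    def artificial_neuron(inputs, weights, bias):
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if activation > 0 else 0  # fire / don't fire

    print(artificial_neuron([1.0, 0.5], [0.8, -0.2], bias=-0.3))  # prints 1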
Even assuming all the neurotransmitters, action potentials, timing sequences, and internal biochemistry of each neuron type (and all the neuron-supporting cells) were understood and simulate-able, using all 250 million GPUs shipped in 2024 [0] to each simulate a neuron and all its connections, neurotransmitters, and timings, it'd take 344 years of shipments to accumulate the 86 billion needed to simulate one brain.
Even if the average connection between neurons were one foot long, simulating 100 trillion connections means about 19 billion miles of wire. Even if the average connection were only 0.3 mm, that's still about 19 million miles of wire.
I'm not even going to bother back-of-the-envelope calculating the power to run all that.
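For anyone who wants to check the arithmetic, here is the back-of-the-envelope in Python (the shipment figure is from [0]; the rest are the numbers above):

    NEURONS     = 86e9    # neurons in a human brain
    CONNECTIONS = 100e12  # synaptic connections
    GPUS_2024   = 250e6   # GPUs shipped in 2024 [0]

    # One GPU per neuron: years of global shipments needed.
    print(f"{NEURONS / GPUS_2024:.0f} years")  # 344

    # Total wiring if every connection were a physical wire.
    FEET_PER_MILE, MM_PER_MILE = 5280, 1_609_344
    print(f"{CONNECTIONS / FEET_PER_MILE / 1e9:.0f} billion miles at 1 ft each")   # ~19
    print(f"{CONNECTIONS * 0.3 / MM_PER_MILE / 1e6:.0f} million miles at 0.3 mm")  # ~19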
The point is it is not even close to happening until we achieve many orders of magnitude greater computation density.
Will many useful things be achieved before that level of integration? Absolutely; even these oversimplified neural nets are producing useful things.
But just as we can conceptually imagine faster-than-light travel, imagining full-fidelity human brain simulation (which is not the same as good-enough-to-be-useful or good-enough-to-fool-many-people) is only maybe a bit closer to reality.
[0] https://www.tomshardware.com/tech-industry/more-than-251-mil...
To clarify: if the universe can't be simulated from within the universe itself (i.e. if you have to resort to some "outside", higher-fidelity universe), then the word "simulation" becomes meaningless.
We could just as easily refer to the whole thing (the inner "simulation" and the outer "simulation") as just being different "layers of abstraction" of the same universe, and drop the word "simulation" altogether. It would have the same ontology with less baggage.
(1) Person notices that computer simulations are getting increasingly powerful. Maybe one day we will be able to simulate something like the universe, and it will have life in it.
(2) If simulating the universe is so easy and inevitable, what are the odds that we are at the top level?
The idea in the article would refute the inductive step.
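A toy version of the counting behind (2), with made-up numbers (mine, purely illustrative):

    # If every reality runs k child simulations, nested d levels deep, and you
    # are equally likely to be any of the resulting realities, the chance of
    # being the top level is:
    def p_top(k: int, d: int) -> float:
        return 1 / sum(k**i for i in range(d + 1))

    print(p_top(k=1000, d=1))  # ~1e-3 with one layer of simulations
    print(p_top(k=1000, d=2))  # ~1e-6 with two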
Some intuition:
1. If the universe contains an uncomputable thing, then you could use it to build a super-Turing computer (see the sketch after this list). This would only make CS more interesting.
2. If the universe extends beyond the observable universe and is infinite, and on some level all of it exists and "moves forward" in some way (not necessarily in time, since time is uneven), then that is an infinite amount of information, which can never be stepped forward all at once, so it's not computable. The paper itself touches on this, requiring that time not break down. Though that may be the case, perhaps the universe does not "step" infinitely much information.
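As a sketch of point 1 (entirely hypothetical; the oracle is exactly the part no ordinary computer can implement):

    # If some physical process decided halting, wrapping it as `halting_oracle`
    # would give a super-Turing computer. The body below is a placeholder for
    # uncomputable physics; it is provably not implementable in software.
    def halting_oracle(program: str, stdin: str) -> bool:
        raise NotImplementedError("requires uncomputable physics")

    def goldbach_holds() -> bool:
        # With the oracle, open questions reduce to a single halting query on
        # a brute-force counterexample search.
        searcher = "loop over even n >= 4; halt if n is not a sum of two primes"
        return not halting_oracle(searcher, "")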
One quick aside: this paper uses a proof from model theory. I stumbled upon this subfield of mathematics a few weeks ago, and I deeply regret not learning about it during my time studying formal systems/type theory. If you're interested in CS or math, make sure you know the compactness theorem.
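For anyone who hasn't seen it, the statement is short (standard textbook form, my wording, not the paper's):

    % Compactness theorem for first-order logic:
    \[
      T \text{ has a model} \quad\Longleftrightarrow\quad
      \text{every finite } T_0 \subseteq T \text{ has a model.}
    \]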
Paper direct:
https://jhap.du.ac.ir/article_488.html
I enjoyed some commentary here:
https://www.reddit.com/r/badmathematics/comments/1om3u47/pub...
See also:
https://en.wikipedia.org/wiki/Mathematical_universe_hypothes...
The model in question is significantly misaligned with human perception regarding the start and edges of spacetime, so it's completely valid to point out that it's just a model (and that we might be in a simulation).
Then it was spirit
Then it was geometry
Then it was a machine
Then it was an equation
Then it was a network
Now it’s a simulation
They always remake the universe into the fashionable transcendent thing of the era. Human mortality obscures this a little, but if you were around long enough, you'd see this clear repeating pattern of humanity.
As with angels on the head of a pin, the interesting argument is whether the amount of compute is finite or not, not how finite it is.
And, considering the visible universe is also finite, with finite amounts of matter and energy, it would follow that the ultimate compute quantity is also finite, unless there is an argument for compute without energy or matter, and/or unlimited compute being made available from outside the visible universe or our light cone. I don't know of any such valid arguments, but perhaps you can point to some?
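For scale, here is one standard way to put a (very rough) number on that finiteness, using Bremermann's limit; the mass and age figures are common order-of-magnitude estimates, not something from this thread:

    # Ceiling on bit operations the visible universe could have performed.
    BREMERMANN = 1.36e50   # max ops per second per kilogram (~c**2 / h)
    MASS       = 1.5e53    # kg of ordinary matter in the observable universe (rough)
    AGE        = 4.35e17   # seconds since the Big Bang (~13.8 Gyr)

    print(f"~{BREMERMANN * MASS * AGE:.0e} total ops")  # on the order of 1e121

Finite, however you slice it.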
Is the constraint of the "simulation" definition that the thing "simulating" the universe would be a computer less complex than the universe itself?
Consider a game world in a computer. We call it a simulation, and it is, but is it any less real than our reality, when thinking in terms of realities? In other words, we feel like our reality is more real because the game is less complex: we understand it fully, and it runs on mechanisms we know and understand. So what would make us think another reality is more real than our own? If we didn't understand how it works? If its workings and rules were more complex than ours?
Taking a step back, are we as humans even capable of understanding a reality that isn't ours, even as a concept? Things like time, space, and fundamental logic are properties of a reality. I can't imagine a reality without them (at least time and space). We keep thinking in terms of "another place with time and space"; how about a place with just one, or neither? Imagine a computer program trying to understand a reality that isn't memory and clock rate. Memory isn't space as in the space we know, it is capacity. Clock rate isn't time as in the time we know, but it is very similar. In an SMP system, "clock rate" is spread across cores and processors, so it is a concept different from our concept of linear time. If our reality were in an SMP, there would be multiple separate, parallel, but converging timelines. But then again, is déjà vu speculative/preemptive execution?
I know I'm all over the place with this post, but my goal is to question the entire concept of a "simulation". Is it simply a relativistic and human-centric way of expressing our perception of reality relative to other realities? When we dream, is that dream world any less real than ours? Certainly, to us it is no different from any other unreality, but that's only for us.
I'm thinking the whole concept of "simulation" stops making sense if there are multiple realities (which I'm only discussing hypothetically; I don't actually believe that). In terms of a single reality within the same time-space and rules of physics and all that, what does it mean for the universe to be a simulation?
With multiple realities, you have to stop presuming things like time and space as we understand them, just as with time and space in a dream, or in a video game (or any program). Is the world of bits, bytes, processor instructions, and memory addresses any less real, or more of a simulation, than ours would be in a multiple-reality scenario?
Consider the very basic assumption of causality: that things originate from somewhere and somewhen. If the space-time assumption isn't a given, then the very concept of causality might not apply in some realities, and thus in the relationship between realities. The whole concept of our reality being a simulation depends on causality being a thing, because we're saying our reality is caused by another reality. For there to be a causal relationship, not only does space-time need to exist, but both realities need to share the same space-time reference frame for one to cause the other. But again, we can't assume the rules of causality are the same, or that there isn't some other fundamental element of reality that makes it all work, when talking about inter-reality relationships.
I think we are too tethered to things like mass, energy, time, space; 1+1 resulting in two. What I would like to see explored more (by people smarter than me) is the fundamental element of reality that is information. Before all of those things (time, space, mass, energy, rules, etc.) there is information. It's similar to the realities we create in our computers: they need information to exist first and foremost, and then things can be done with that information, and our own little primitive proto-reality is created. All those other things may look different from the perspective of a computer program, but information, however transformed in how it is represented and processed, carries through; at least in our simulations (or proto-realities), the information from our reality is the whole point of that sub-reality's existence.
You can infer things about our reality as a computer program if you focus on the information. But no matter how well it is described to a computer program, it can look at a picture and think "hmm, an apple" yet it simply cannot perceive things as we do. It does not experience time, space, color, or taste like we do.
So, if we consider reality relative to the experience of the observer, as defined by the properties of the world they're in, then the concept of a simulation is entirely relative to our experience in our world and its properties. But if our reality is "caused" by and "executed on" (presumptive concepts) something elsewhere, then we would need to understand that other world's properties and perceive that super-reality, and only then could we experientially claim that our reality is simulated.
It's a bit like motion and relativity, isn't it? If you can't define the frame of reference, you can't measure the motion in any meaningful way. You can't tell how fast a car is going if you can't define what perspective you're measuring from. That sounds silly at first, until you consider that the entire planet is in motion around the sun, and the solar system is hurtling through the galaxy. Not to mention that another car traveling at the same speed would observe no motion relative to itself. We're trying to measure "simulation", but from our own perspective (we're the thing that's "moving", if it were motion); we could potentially measure it from the other reality's perspective, but not without knowing what that reality is.
Can a computer program tell that it is in a computer reality?
Can you write code that can do that? Certainly it can print output claiming as much; we can even simulate an entire computer system within a program running on that system. But it still can't figure out what "space" or "time" mean as we experience them; it can learn about energy, the rules of physics, etc., but it can't experience them. So when it determines that it is in a simulation, its definition of things is still relative to its own experience, so it isn't really determining that it is in a simulation; it is just describing things we told it, via information transfer, about our reality. When you tell that program "we created your reality", its concept of "created" or "originated" is vastly different from ours. So unless it can test for things it can't even conceptualize, how can it truly tell that it is in a simulation?
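A toy illustration of that last point (entirely my own example): the program can claim anything, but every quantity it "experiences" is defined by its own runtime.

    import time

    def introspect() -> str:
        # The only "time" available is the clock the host exposes; the program
        # cannot step outside its execution model to ask what time "really" is.
        t0 = time.monotonic()
        _ = sum(range(10**6))  # some busywork to "experience"
        elapsed = time.monotonic() - t0
        return f"I claim to be simulated; that step took {elapsed:.4f} of my seconds"

    print(introspect())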
Sorry for the really long post! I just wanted to dump my philosophical thoughts on this (and I was bored). I think theoretical physics and philosophy need to work very closely together; questioning philosophical assumptions is important before talking about theoretical physics. The title says "Physicists prove", and that's what I keyed on: you can't prove something whose definition we (at least I?) don't entirely agree on, or haven't resolved. If we can't write a computer program that can prove, on its own, that it is in a computer, how can we prove that we're in a simulation?
On the other hand, looking at the state of the world, some may have their doubts.
Say you want to see what a car is made of. You can take it apart (reduce it) into parts on the workshop floor. Now you know what it's made of.
But you have to put it back together again before you have something you can start and drive away (emergent properties).
At no point does anything magical happen:
parts x organization_of_the_parts <-> the working car
                       reduction <-   -> emergence
We cannot compute exactly what happens because we don't know what it is, and there's randomness. Superdeterminism is a common cop-out here. However, when I talk about whether something is computable, I mean whether that interaction produces a result more complicated than a Turing-complete computer can produce. If it's random, it can't be predicted. So perhaps a more precise statement would be: my default assumption is that "similar" enough realities or sequences of events can be computed, given access to randomness, where "similar" means the simulation cannot be distinguished from reality by any means.
The idea that no computer or system could possibly be powerful enough for the complexities of a simulation is a very simplistic way of looking at things, and it overlooks something that is readily available.
I have long held the theory that the brain is very capable of filling in all the complex details required for a simulation.