I've been studying viruses lately, and have found that the line between virus/exosome/self is much more blurry than I realized. But, given the niche interest in the subject, most articles are not written with an overview in mind.
What sorts of topics make you feel this way?
Why interpretations? There is an experiment you can do whose results are hard to explain: either particles are somehow able to influence each other faster than light (non-locality), or the particle somehow doesn't exist except when interacting with some other particle (non-realism).
Try this video: https://www.youtube.com/watch?v=zcqZHYo7ONs The aha moment comes when you realize you can entangle the light, and that adding a filter to one stream of light somehow causes the other stream of light to be influenced as well.
https://en.wikipedia.org/wiki/Protein#/media/File:Chaperonin...
I'm working on a model of this chaperone complex relative to a folded protein, to get a sense of how it might interact with the amino acid chain before it becomes globular.
Here are a few books you can read on the subject. They do a pretty good job of describing what the issue is and what the interpretations mean:
Max Tegmark - Our Mathematical Universe
Sean M. Carroll - Something Deeply Hidden
Adam Becker - What Is Real?
Here are some things you can google if you want to just skim the subject: Wave–particle duality, The Measurement Problem, Quantum decoherence, Copenhagen interpretation, Bell's theorem, Superdeterminism, Many-worlds interpretation, Ghirardi–Rimini–Weber theory (GRW).
Last but not least, look at the Wolfram Physics Project (https://wolframphysics.org). Its take on quantum mechanics, if you go along with the hypergraph idea, is fascinating (to me).
https://www.youtube.com/watch?v=zcqZHYo7ONs
Basically: particles are the quanta of waves. So it's not really a duality in the end.
1: https://www.lesswrong.com/posts/AnHJX42C6r6deohTG/bell-s-the...
2: https://kantin.sabanciuniv.edu/sites/kantin.sabanciuniv.edu/...
https://www.physics.wisc.edu/undergrads/courses/spring2016/4...
AFAICS it was published in the American Journal of Physics in 1981 but it's addressed to the general reader. It requires no knowledge of quantum physics.
"You are standing in a field looking at the stars. Your arms are resting freely at your side, and you see that the distant stars are not moving. Now start spinning. The stars are whirling around you and your arms are pulled away from your body. Why should your arms be pulled away when the stars are whirling? Why should they be dangling freely when the stars don't move?"
Starts from basic concepts and builds up a nice overview.
Maybe you'll find this paper helpful: https://aapt.scitation.org/doi/10.1119/1.18578
Here is the chapter on Fourier transforms from my linear algebra book that goes into more details: https://minireference.com/static/excerpts/fourier_transforma...
As for the math, there really is no way to convince yourself that sin(x) and sin(2x) are orthogonal with respect to the inner product \int_0^{2\pi} f(x)g(x)\,dx other than to try it out: https://live.sympy.org/?evaluate=integrate(%20sin(x)*sin(2*x... Try also with sin(3x) etc. and cos(nx) etc.
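If you'd rather run the same check locally, here is a minimal sympy sketch (the function pairs are just examples):

    from sympy import symbols, sin, cos, integrate, pi

    x = symbols('x')

    # Orthogonal pairs integrate to zero over a full period:
    print(integrate(sin(x) * sin(2*x), (x, 0, 2*pi)))    # 0
    print(integrate(sin(3*x) * cos(5*x), (x, 0, 2*pi)))  # 0

    # A function is not orthogonal to itself:
    print(integrate(sin(x) * sin(x), (x, 0, 2*pi)))      # pi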
https://www.amazon.com/Quantum-Non-Locality-Relativity-Metap...
If you don't want to read a whole book then I recommend this article:
https://kantin.sabanciuniv.edu/sites/kantin.sabanciuniv.edu/...
but the book will give you a much deeper understanding.
An answer is that the blow-up as d -> 0 presumes a nice, continuous analytic function. If d -> epsilon instead, you can't get to that singularity.
There was an equivalent problem in the E/M space with "The Ultraviolet Catastrophe" [1], which turned out to go away if you assumed quantization.
I'm not going to claim this is a perfect analog to the gravity problem, only that a lot of physics doesn't quite work right when you assume continuity. (The Dirac delta is a humorous exception that proves the rule here, in that doing the mathematically weird thing actually is closer to how physics works, and it required "distribution theory" as a discipline to prove it sound.)
To say anything more concrete requires defining the question much more precisely. I believe there is still some disagreement on the interpretation of Mach's principle in light of general relativity. For example, see https://en.wikipedia.org/wiki/Mach's_principle#Variations_in... (and a couple of sections above, the 1993 poll of physicists asking: "Is general relativity with appropriate boundary conditions of closure of some kind very Machian?").
I hope that is helpful in some way.
the summary being:
- The vertical velocity of the diverted air is proportional to the speed of the wing and the angle of attack.
- The lift is proportional to the amount of air diverted times the vertical velocity of the air
It also debunks the myth of "air flows faster on the top side of the wing, causing lift".
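To get a feel for the two bullet points, here's a back-of-the-envelope sketch in Python. Every input is a made-up ballpark value (not data from the article); the point is only that lift = vertical momentum given to the air per second:

    # Lift as the rate of vertical momentum imparted to the diverted air.
    # All inputs are illustrative assumptions, not measured values.
    rho = 1.2          # air density, kg/m^3
    speed = 70.0       # speed of the wing through the air, m/s
    area = 16.0        # cross-section of air effectively diverted, m^2 (assumed)
    v_vertical = 6.0   # downward velocity imparted to that air, m/s (assumed)

    mass_flow = rho * area * speed   # kg of air diverted per second
    lift = mass_flow * v_vertical    # newtons: force = momentum per second
    print(f"{mass_flow:.0f} kg/s of air diverted -> {lift/1000:.1f} kN of lift")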
For example, see "Some recent developments in bicycle dynamics" (2007). Especially the folklore section:
"The world of bicycle dynamics is filled with folklore. For instance, some publications persist in the necessity of positive trail or gyroscopic effect of the wheels for the existence of a forward speed range with uncontrolled stable operation. Here we will show, by means of a counter example,that this is not necessarily the case.
https://pdfs.semanticscholar.org/bb70/d679c5a2ff67dd2a1a51f2...
It turns out that flat wings work just fine, but the airfoil shape we see on airplanes is more efficient:
2. The tides. The explanation I was given is roughly something like “the tides happen because the moon’s gravity pulls the water toward it, so you have high tide facing the moon. There’s also a high tide on the opposite side of the earth, for subtle reasons that are too complicated for you to understand right now and I don’t have time to get into that.”
The first problem with this explanation is this: gravitational acceleration affects everything equally right? So it’s not just pulling on the water, it’s also pulling on the earth. So why does the water pull away from the earth? Shouldn’t everything be accelerating at the same rate and staying in the same relative positions?
The second problem is that, when viewed correctly, the explanation for why there is a high tide on the opposite side of the earth from the moon is just as simple as the explanation for why there is a high tide on the same side as the moon.
The resolution to both of these problems is this: tides aren't caused by the pull of the moon's gravity per se, but by the difference in the strength of that pull between the near and far sides of the earth, since the strength of the moon's gravitational pull decreases with distance from the moon. The pull on the near water is stronger than the average pull on the earth, which in turn is stronger than the pull on the far water. So everything becomes stretched out along the earth-moon axis.
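Plugging in real numbers makes the size of that differential pull concrete; a quick sketch (the constants are standard rounded values):

    # Difference in the Moon's pull across the Earth -- the "stretch" above.
    G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    M_moon = 7.35e22  # mass of the Moon, kg
    d = 3.84e8        # Earth-Moon distance, m
    R = 6.37e6        # Earth's radius, m

    g_near = G * M_moon / (d - R)**2   # pull on the near-side water
    g_center = G * M_moon / d**2       # average pull on the Earth
    g_far = G * M_moon / (d + R)**2    # pull on the far-side water

    print(f"near - center: {g_near - g_center:+.2e} m/s^2")  # ~ +1.1e-06
    print(f"far  - center: {g_far - g_center:+.2e} m/s^2")   # ~ -1.1e-06

Both residuals point away from the earth's center along the earth-moon axis, which is why there are two bulges.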
3. This one isn’t so much a problem with the explanation itself, more about how it’s framed. I remember hearing about why the sky is blue, and wondering, “ok, more blue light bounces off it than other colours. But isn’t that essentially the same reason why any other blue thing is blue? Why are we making such a big fuss about the sky in particular? ” A much superior motivating question is “why is the sky blue during midday, but red at sunrise / sunset”? I was relieved when I saw this XKCD that I’m not the only one who felt this way:
It's more akin to the direction or axis of the spin being changed, and simply measuring the spin along a certain axis will change it: https://en.wikipedia.org/wiki/Spin_(physics)#Measurement_of_...
1) Compartmentalizing of biological functions. It's why a cell is a fundamental unit of life, and why organelles enable more complex life. Things are physically in closer proximity and in higher concentrations where needed.
2) Multienzyme complexes. Multiple reactions in a pathway have their catalysts physically colocated to allow efficient passing of intermediate compounds from one step to the next.
https://www.tuscany-diet.net/2019/08/16/multienzyme-complexe...
3) Random chance. Stuff jiggles around and bumps into other stuff. Up to a point, higher temperature means more bumping around, meaning these reactions happen faster; the more opportunities these components have to fly together in the right orientation, the more life stuff can happen, more quickly. There's a reason the bread dough that apparently everyone is making now will rise faster after yeast is added if the dough is left at room temperature versus allowed to do a cold rise in the fridge. There are just fewer opportunities for things to fly together the right way at a lower temperature.
3a) For the ultra-complex proteins that bind to DNA, how those often work in reality is that they bind sort of randomly and scan along the DNA for a bit until they find what they're looking for, or fall off. Sometimes they interact with other proteins that bound to the DNA first, which act as recruiters telling the protein where to land.
Kip Thorne, a Nobel prize-winning physicist, worked as the science advisor for Interstellar so the hollywood bs is pretty good!
[1]: https://www.amazon.com/Science-Interstellar-Kip-Thorne/dp/03...
As for the multiverse, I don't know enough to talk about it. I just know it's one of the possible interpretations of quantum mechanics. Note that the various interpretations are generally considered more philosophy than science, and have no (or very little) practical implications. I would suggest ignoring all analogies and not looking too deeply for interpretations, and instead focus on basic concepts like "What is a quantum state?" and "How do I compute measurement outcomes?" which are super well understood and the same under all interpretations.
You can think of the various interpretations of QM as different software methodologies - scrum, agile, waterfall, etc. Just stuff people like to talk about endlessly, but ultimately irrelevant to the code that will run in the end.
I had the same "problem" as you. What finally made me feel I sort of cracked it was those videos. The way I think of it now is: They let you do matrix multiplication. The internal state of the computer is the matrix, and the input is a vector, where each element is represented by a qubit. The elements can have any value 0 to 1, but in the output vector of the multiplication, they are collapsed into 0 or 1. You then run it many times to get statistical data on the output to be able to pinpoint the output values more closely.
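Here is that mental model as a minimal numpy sketch (one qubit, one gate, repeated runs; purely illustrative):

    import numpy as np

    # A qubit register is a state vector; a gate is a unitary matrix.
    state = np.array([1.0, 0.0], dtype=complex)                  # |0>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

    state = H @ state            # the "matrix multiplication" step
    probs = np.abs(state)**2     # Born rule: |amplitude|^2

    # "Run it many times" to collect statistics on the collapsed 0/1 outputs.
    samples = np.random.choice([0, 1], size=10_000, p=probs)
    print(probs, samples.mean())  # [0.5 0.5], mean near 0.5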
I disagree with that. It's pretty easy to prove it in general by calculating \int_0^{2\pi} sin(mx)sin(nx) dx etc. for m ≠ n.
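Spelled out, the product-to-sum identity does the whole proof in one line:

    \int_0^{2\pi} \sin(mx)\sin(nx)\,dx
      = \frac{1}{2}\int_0^{2\pi} \left[\cos((m-n)x) - \cos((m+n)x)\right] dx = 0
      \quad \text{for integers } m \neq n,

since each cosine with nonzero integer frequency integrates to zero over a full period.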
Spin being an intrinsically quantum mechanical concept, I'm afraid the microscopic mechanism by which that transfer occurs will only be explainable in a quantum mechanical context. Here it will appear as a term in the Hamiltonian coupling the spin of an electron to its motion in a potential.
https://en.m.wikipedia.org/wiki/Einstein%E2%80%93de_Haas_eff...
First, two general points.
The most important thing to keep in mind is time. Life has had insane amounts of time. Billions of years. Beyond human comprehension amounts of time.
The second most important thing is that complicated != carefully orchestrated or optimal. Life is pretty cool, but it doesn't hold itself to a very high standard. It's the survival of the good enough, and is full of so many random hacks and poor design choices it's insane. Things get easier to accomplish when you lower the standard.
Now an attempt at an explanation.
Evolution by natural selection works on two principles. First, generation of diversity. Second, selective pressure.
DNA can and does mutate frequently. One important type of mutation is a duplication, since it lets you gain new functionality: you make two copies of the same gene, one keeps its original function, and the other takes on some new function. This theme of repurposing existing things comes up again and again. Take something you have, make another version of it, change it a bit. If you've worked out how to grow a vertebra as a lizard and want to become a snake, turn off legs and make more vertebrae. Use the same genes, and just modify how you control them.

This video (https://youtu.be/ydqReeTV_vk) is actually pretty good at running through the science behind evolutionary development, and how evolution can quickly reuse and modify existing parts. Basically, once smaller features evolve, you start modifying them in a modular way and can start making really big changes really easily. Keep in mind again that you are helped by enormous, mind-boggling amounts of time randomly generating those features originally as well. Here's a summary of how our eyes evolved by repurposing neurons (https://youtu.be/ygdE93SdBCY). Our eye is a good example of how the standard is just "good enough": we have veins and nerves on top of our light-detecting cells instead of behind them, and we just poke a hole to get them through to the other side. Doesn't that leave a blind spot? Yep, and we just hallucinate something to fill in the space.

There are a couple of other major ways we generate more diversity. You have things like viruses transferring DNA, but a really powerful one is sex. Sexual reproduction lets you combine and generate new combinations of genes, speeding up how quickly diversification happens.
For selective pressure, think about it purely statistically. You have sets of arrangements of atoms, some of which are good at making new sets of arrangements of atoms that look like them, and others less so. Each tick of the clock, versions that are able to make more of themselves increase, and versions that aren't decrease. This basically provides a directionality for evolution: whatever is good at replicating is successful. Over time this weeds out mutations that hurt replication while keeping mutations that help. This means the next round of mutations builds on ones that were good enough, and not on ones that weren't. This lets evolution be cumulative rather than a random search.
Neither of those is a complete description by any stretch; I'm just trying to give you a taste of the mechanisms behind it, but it goes a lot deeper. The most important things do just boil back down to what I started with: survival of the good enough (lower your standards), and the sheer amount of time for these things to happen. An evolutionary step with only a one-in-a-million chance of happening in a given year has still happened about 65 times since the dinosaurs went extinct.
Yeah, I'd definitely second that. How could evolution result so quickly in something as "rudimentary" as Chlorella (i.e. the simplest plant)? https://en.wikipedia.org/wiki/Chlorella
Have a look at the simple inference example here: https://en.m.wikipedia.org/wiki/Time_dilation
Time doesn't necessarily slow down the further away you get from a clock. If you and a clock are both stationary (ie you're in the same inertial frame), you will observe it ticking in "normal" time, albeit delayed due to the distance. If the clock is moving relative to you however, you will measure its ticks to be slightly slower.
You may be thinking of general relativistic effects, which are distance-dependent (gravity weakens the farther away you get).
If you carry a clock in your rocket, you will (in the rocket) measure it to tick once a second. When you get back to Earth, you'll find that it's lagged behind a clock that was started at the same time but was left on Earth.
Maybe have a look at simple wiki too https://simple.m.wikipedia.org/wiki/Special_relativity though it doesn't actually derive the Lorentz transforms unfortunately.
Ignore the gravity bit for now, that's general relativity and it's more complicated to explain.
It's by Andy Matuschak and Michael Nielsen, and it is excellent. Have fun!
Entropy is usually poorly taught; there are really three entropies that get conflated. There's the statistical mechanics entropy, which is the math describing the random distribution of ideal particles. There's Shannon's entropy, which describes randomness in strings of characters. And there's classical thermodynamic entropy, which describes the fraction of irreversible/unrecoverable losses to heat as a system transfers potential energy to other kinds of energy on its way toward equilibrium with its surroundings or the "dead state" (a reference state of absolute entropy).
These are all named with the same word, and while they have some relation with each other, they are each different enough that there should be unique names for all three, IMO.
It's used when discussing propulsive efficiency, as it's a proxy measurement for how much "work" each blade is doing. Because propeller/rotor blades are just high-aspect wings, if you have high disc loading your blades are at a high lift coefficient which means they'll be incurring lots of lift-induced drag which increases your power requirements.
Solidity in the same context refers to the amount of volume within an actuator disk that's occupied by actual solid material. If you have a 4-bladed rotor and you move to a 5-bladed rotor, all else equal, you've increased your solidity.
There are many, many equations, and as with most things in fluid mechanics you can get as deep into the weeds as you want. As a starting point, have a look at the wiki article for Blade Momentum Theory[0].
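One of the first results in momentum theory ties the disc-loading story together. For a hovering rotor, the actuator-disk model gives the induced velocity and ideal induced power as

    v_i = \sqrt{\frac{T}{2 \rho A}}, \qquad
    P_{ideal} = T\,v_i = \frac{T^{3/2}}{\sqrt{2 \rho A}}

so for a fixed thrust T, shrinking the disk area A (i.e. raising the disc loading T/A) directly drives up the power required, which is the efficiency point made above.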
This comes close -- It shows the jittery thermal motion of this tiny machinery, instead of nice smooth glides.
https://www.scientificamerican.com/article/how-was-avogadros...
PS: The speed of sound is 343 m/s and the diameter of a cell nucleus is ~0.000006 m, to give an idea.
I'm not totally sure what you mean by a higher dimension. The properties of the emitter (which is, e.g. a laser cavity) aren't affected by the gravitational wave because the emitter is a rigid body, which doesn't get stretched. (It's the same thing as described here: https://news.ycombinator.com/item?id=22990753 ) So it puts out light of a given frequency.
By contrast, LIGO is not a rigid body, because the mirrors at the ends of the arms hang freely, hence allowing gravitational waves to change the distance between them.
> What's baffling to me is everyone who has tried to explain the LIGO detector doesn't even realize this question exists. I've independently thought this question and when people start explaining LIGO to me, and I take the time to spell out the question, they realize they don't understand LIGO either.
Yup, it generally is the case in physics that over 95% of people who claim they can explain any given thing don't actually understand it! But the professionals are aware. I even know a LIGO guy who goes to popular talks armed with a pile of copies of the paper I linked.
Water vapor around the LCL starts condensing and turning from a gas into liquid cloud droplets. This process happens considerably faster once it begins for a variety of reasons, so once you can have cloud droplets, you get a ton of cloud droplets - not a gradual transition from water vapor to cloud. It's almost like a light switch.
Most air masses are relatively homogeneous anyway, so unless there are underlying processes causing things like undulatus asperatus, it will certainly appear very, very flat over a large area.
Entropy is the amount of information it takes to describe a system. That is, how many bits it takes to "encode" all possible states of the system.
For example, say I had to communicate the result of 100 (fair) coin flips to you. This requires 100 bits of information, as each of the 2^100 possible 100-bit vectors is equally likely.
If I were to complicate things by adding in a coin that was unfair, I would need less than 100 bits as the unfair coin would not be equally distributed. In the extreme case where 1 of the 100 coins is completely unfair and always turns up heads, for example, then I only need to send 99 bits as we both know the result of flipping the one unfair coin.
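A minimal numeric version of the coin example (the 0.9 bias at the end is just an arbitrary illustration):

    import math

    def entropy_bits(p_heads):
        """Shannon entropy of a single coin flip, in bits."""
        return -sum(p * math.log2(p) for p in (p_heads, 1 - p_heads) if p > 0)

    print(100 * entropy_bits(0.5))                     # 100 fair coins: 100.0 bits
    print(99 * entropy_bits(0.5) + entropy_bits(1.0))  # one always-heads coin: 99.0 bits
    print(entropy_bits(0.9))                           # a biased coin: ~0.469 bits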
The shorthand of calling it a "measure of randomness" probably comes from the problem setup. For the 100 coin case, we could say (in my opinion, incorrectly) that flipping 100 fair coins is "more random" than flipping 99 fair coins with one bad penny that always comes up heads.
Shannon's original paper is extremely accessible and I encourage everyone to read it [1]. If you'll permit self-promotion, I made a condensed blog post about the derivations that you can also read, though it's really Shannon's paper without most of the text [2].
[1] http://people.math.harvard.edu/~ctm/home/text/others/shannon...
The basic gist I get is that quantum computing, for a very specific set of problems, like optimization, lets you search the space more efficiently. With quantum mechanics you can associate computations with positive or negative probability amplitudes. With the right design, you cause multiple paths to incorrect answers to have opposite amplitudes, so that interference causes them to cancel out and never actually happen to begin with. That's just my reading of the comic over and over, though.
Imagine two circles in 2D that repel each other more strongly the closer you get them together, like magnets do. In 2D it would look like they're interacting at a distance, but maybe in 3D they're two cylinders that are a bit flexible, that are actually touching at the ends, but not in the 2D plane you're observing. The interaction is "properly physical" in 3D but in the 2D plane it seems magical.
That's a way that I imagine it in 2D vs 3D, so this might be similar in 3D vs ND, where N > 3. Of course this is all baseless speculation, but it seems kinda plausible in my head.
Edit: bad drawing of what I meant: https://imgur.com/362tcHg
However, even with understanding how a Quantum Computer works at its most basic level I still have difficulty understanding the more useful Quantum Algorithms:
1. The magical orthogonal basis functions: complex sinusoids. Shifting of a time signal just multiplies the Fourier counterpart by a new phase (relative to its represented frequency). Thus transforming to the Fourier basis enables an alternate method of implementing a lot of linear operations (like convolution, i.e. filtering).
2. The magic of the fast implementation of the Discrete Fourier Transform (DFT) as the Fast Fourier Transform (FFT) makes the above alternate method faster. It can be most easily understood by a programmer as a clever reuse of intermediate results from inner loops. The FFT is O(N log N), a direct DFT transform would be O(N^2)
A mathy demonstration of this at https://sourceforge.net/projects/kissfft/
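Points 1 and 2 together are why "filtering = multiplication in the Fourier basis" matters so much in practice; a small numpy check (signal lengths and contents are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 256
    f = rng.standard_normal(N)
    g = rng.standard_normal(N)

    # Direct circular convolution: O(N^2).
    direct = np.array([sum(f[k] * g[(n - k) % N] for k in range(N))
                       for n in range(N)])

    # Same operation via the FFT: transform, multiply, invert. O(N log N).
    via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

    print(np.allclose(direct, via_fft))  # True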
That's from 10 years ago, so you might be able to find video of a more recent version; try to find a year when Wagner taught, he's great.
The reason some people regard Faraday's original explanation of the eponymous law (it is worth noting that at the time it was widely regarded as inadequate and handwavy) as illuminating is because Faraday visualized his "lines of force" as literal chains of polarized particles in a dielectric medium, thereby providing a seemingly mechanistic local explanation of the observed phenomena. Not much of this mindset survived Maxwell's theoretical program and it has very little to do with how we regard magnetism today. Instead, the unification of electricity and magnetism naturally arises from special relativity, whereas the microscopic basis of magnetism requires quantum mechanics. There isn't really any place for naive contact mechanics in the modern picture of physics, so in that sense I would regard Faraday's view as misleading.
Finally, I can't end any "explanation" of magnetism without linking the famous Feynman interview snippet [1] where he's specifically asked about magnetism. It doesn't answer your question directly, but it's worth watching all the more because of it.
It’s an animated series that takes place inside the human body. I’ve been meaning to watch it myself. It’s supposed to be pretty accurate.
There are attempts to rigorously define it. I'm currently reading this paper, but not really convinced: https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...
The amount of complexity is just absolutely insane. My favourite example: DNA is read in triplets. So, for example, "CAG" adds one Glutamine to the protein it's building[1].
There are bacteria that have optimised their DNA in such a way that you can start at a one-letter offset, and it encodes a second, completely different, but still functional protein.
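A toy illustration of offset reading frames (the nine-letter sequence is contrived for the example; the two codon meanings are from the standard genetic code):

    codon_table = {"CAG": "Gln", "AGC": "Ser"}  # tiny fragment of the code

    dna = "CAGCAGCAG"

    def translate(seq, offset):
        """Read triplets starting at `offset`; ignore a trailing partial codon."""
        return [codon_table[seq[i:i + 3]] for i in range(offset, len(seq) - 2, 3)]

    print(translate(dna, 0))  # ['Gln', 'Gln', 'Gln']
    print(translate(dna, 1))  # ['Ser', 'Ser'] -- same DNA, different protein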
I found the single cell to be the most interesting subject. But of course it's a wild ride from top to bottom. The distance from brain to leg is too long, for example, to accurately control motion from "central command". That's why you have rhythm generators in your spine that are modulated from up high (and also by feedback).
Every human sensory organ activates logarithmically: Your eye works with sunlight (half a billion photons/sec) but can detect a single photon. If you manage to build a light sensor with those specs, you'll get a Nobel Prize and probably half of Apple...
Proof? Just look at all the replies you got: each one is dozens of pages of complex (imaginary) math, control theory, and statistics.
The hardest part of QC is exactly what you described: how to extract the answer. There is no algorithm, per se. You build the system to solve the problem.
This is why QC is not a general purpose strategy: a quantum computer won't run Ubuntu, but it will be one superfast prime factoring coprocessor, for example (or pathfinder, or root solver). You literally have to build an entire machine to solve just one problem, like factoring.
Look at Shor's algorithm: it has a classical algorithm and then a QC "coprocessor" part (think of that like an FPU looking up a transcendental from a ROM: it appears the FPU is computing sin(), but it is not, it is doing a lookup... just an analogy). The entire QC side is custom built just to do this one task:
https://en.wikipedia.org/wiki/Shor%27s_algorithm
In this example he factors 15 into 5x3, and the QC part requires FFTs and Tensor math. Oy!
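For a sense of how the pieces fit, here is just the classical scaffolding of Shor's algorithm in Python, with the order-finding step (the part the quantum Fourier transform accelerates) done by brute force instead of by a QC:

    from math import gcd

    def shor_classical_part(N, a):
        """Classical shell of Shor's algorithm. Finding the order r of a mod N
        is the step the QC does fast; here we brute-force it."""
        assert gcd(a, N) == 1
        r = 1
        while pow(a, r, N) != 1:
            r += 1
        if r % 2 == 1:
            return None  # odd order: retry with a different a
        x = pow(a, r // 2, N)
        return gcd(x - 1, N), gcd(x + 1, N)

    print(shor_classical_part(15, 7))  # order of 7 mod 15 is 4 -> (3, 5)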
Like I said, it will take decades for this to become easier to explain.
For fun, look at the gates we're dealing with, like "square root of not": https://en.wikipedia.org/wiki/Quantum_logic_gate
The other thing you can do is think about what it means for particular types of categories. For a posetal category, it says that an element of a poset is uniquely determined by the set of all elements that come before it in the ordering. For a group, it says that every element is uniquely determined by its action on the group. (This is basically Cayley’s theorem.) See this MSE post for more intuition: https://math.stackexchange.com/questions/37165/can-someone-e...
There are people who put limit orders on the exchanges. Say that the price of TSLA is $500. I think it's overpriced and likely to go down, but then grow in the future. I can say, "I'm willing to buy 100 shares of TSLA at $420." Someone else holds TSLA and thinks it's likely to go up, but not hold its value, so they say, "I'm willing to sell 100 shares of TSLA at $690." The sum of all of these limit orders forms the market depth chart.
The more common way to interact with the market is to say, "I want to buy a share of TSLA at the current market price." In the above example, the only option is to buy TSLA for $690, even though the last transaction was at $500! This is an example with very little market depth. In the normal case, you'd buy your share for $500.02 or something like that. (Same, but reversed, for selling at market price.)
For more information, but with a crypto focus, see https://hackernoon.com/depth-chart-and-its-significance-in-t...
For your example, you would put in a market order, and buy the stock at the lowest price that someone was willing to sell it at. If the last price was $4, but the lowest limit order that currently existed was for $100, and you bought it for $100, then yes, the price would go up to $100. (In real life, those sharp upticks don't happen much. It's more likely that a sharp downtick happens, where suddenly everyone wants to sell oil futures at the same time, but almost no one is willing to buy them, so the price ends up negative.)
Note that whenever people defend high-frequency trading for "providing liquidity to the market," this action of setting buy and sell limit orders that are close to each other is what they are talking about. There are algorithms that will see TSLA at $500, and offer to sell TSLA at $500.02 and buy TSLA at 499.98. If both orders go through, they make $0.04. If you operate fast enough to get out ahead of any big market moves, you can make a lot of money. But if you ever accidentally buy a bunch of TSLA for $499.98 right before the price plummets to $420, then you just lost a lot of money. This is why HFT and other trades with similar risk profiles are sometimes referred to as "picking up nickels in front of a steamroller."
I should add: As a human being, it is probably impossible to separate the scientist from the philosophy in which they explore, proceed with, and promote their work. In some cases, it might not be something they are even aware of. Instead, the scientific system (as a sort of world institution) should itself be designed to always seek out and protect truth, regardless of prevailing contemporary knowledge.
My favorite illustration was a video of simulated icosahedral viral capsid assembly. The triangular panels were tethered together to keep them close enough to keep slamming into each other. Even then, the randomness and struggle was visceral. Lots of hopeless slamming; tragic almost-catches that failed to hold; being smashed apart again; misassembling. It was clear that without the tethers forcing proximity, there'd be no chance of successful assembly.
Nice video... it's on someone's disk somewhere, but seemingly not on the web. The usual. :/
> yeast
Nice example. For a temperature/jiggle story, I usually pair refrigerating food to slow the bacterial jiggle of life, with heating food to jiggle apart their protein origami string machines of life. With video like https://www.youtube.com/watch?v=k4qVs9cNF24 .
> Compartmentalizing
I've been told the upcoming new edition of "Physical Biology of the Cell" will have better coverage of compartmentalization. So there's at least some hope for near-term increasing emphasis in introductory content.
The basic answer is that the extra energy that goes to the rocket comes from harvesting the kinetic energy that the fuel itself had by virtue of being in the moving rocket.
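One line of kinetic-energy bookkeeping makes that concrete (this is the Oberth effect):

    \Delta KE = \tfrac{1}{2} m (v + \Delta v)^2 - \tfrac{1}{2} m v^2
              = m\,v\,\Delta v + \tfrac{1}{2} m\,\Delta v^2

For the same burn (same \Delta v), the kinetic energy gained grows with the speed v the rocket already has; the balance is supplied by the kinetic energy the fuel carried by virtue of moving along with the rocket.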
The site most MDs use is here: https://www.uptodate.com/home
(They use some other stuff, but you get the idea)
You can back out the Avogadro constant starting with this experiment.
It also goes on to explain the delayed choice quantum eraser experiment but I don't think that's quite convincing.
Tangential, and not an answer to your question, but if you're like me, you will be fascinated to learn that there is a drug (MPPP, a synthetic opiate) that if cooked incorrectly yields "MPTP"[1], which will give you Parkinson's. As in, forever. You take this drug (at any age) and then you have Parkinson's for the rest of your life.
If you understand Turing Machines, you probably also understand other automata. So you probably understand nondeterministic automata [1].
A quantum computer is like a very restricted nondeterministic automaton, except that the "do several things at once" is implemented in physics. That means that, just like an NFA can be exponentially faster than a DFA, a QC can be exponentially faster than a normal computer. But the restrictions on QCs make that a lot harder to achieve, and so far it only works for some algorithms.
As to why quantum physics allows some kind of nondeterminism: if you look at particles as waves, instead of a single location you get a probability function that tells you "where the particle is". So a particle can be "in several places at once". In the same way, a qubit can have "several states at once".
> What I don't understand is how a programmer is supposed to determine the correct solution when their computer is out in some crazy multiverse.
Because one way to explain quantum physics is to say that the waveform can "collapse" [2] and produce a single result, at least as far as the observers are concerned. There are other interpretations of this effect, and this effect is what makes quantum physics counterintuitive and hard to understand.
[1] https://en.wikipedia.org/wiki/Nondeterministic_finite_automa...
I've started but haven't finished the physics lectures by Carl Bender on mathematical physics, where he features perturbation theory prominently [2].
If someone could chime in on this, I would be appreciative. Likewise if someone has better resources for learning about perturbation theory.
"Science", as it is represented in the media, and in turn repeated and enforced (not unlike religion, interestingly) on social media and in social circles.
As opposed, of course, to actual science.
"Perception is reality." - Lee Atwater, Republican political strategist.
https://www.cbs46.com/news/perception-is-reality/article_835...
https://en.wikipedia.org/wiki/Lee_Atwater
"Sauron, enemy of the free peoples of Middle-Earth, was defeated. The Ring passed to Isildur, who had this one chance to destroy evil forever, but the hearts of men are easily corrupted. And the ring of power has a will of its own. It betrayed Isildur, to his death."
"And some things that should not have been forgotten were lost. History became legend. Legend became myth. And for two and a half thousand years, the ring passed out of all knowledge."
https://www.edgestudio.com/node/86110
Threads like this one, and many others like it, demonstrate well the precarious situation we are in at this level. Imagine the state of affairs around the average dinner table. Although, it's not too infrequent to hear the common man admit (which is preceded by realization) that they don't know something. As one moves up the modern-day general intelligence curve, this capability seems to diminish. What the exact cause of this is remains a bit of a mystery (24-hour cable propaganda and the complex dynamics of social media is my best guess) - hopefully someone has noticed it and is doing some research, although I've yet to hear it mentioned anywhere. Rather, it seems we are all content to attribute any misunderstanding that exists in modern society to Fox News, Russia, QAnon, or the alt-right. I'm a bit concerned that this approach may not be the wisest, but I imagine we will find out who's right soon enough.
It explains things in terms a computer scientist can understand. As in: it sets out a computational model and explores it, regardless of whether we can physically realize that machine.
Hope this helps!
For recent times, you can also compare C14 dates against other methods, like counting tree rings or the date of a total eclipse, to check the calibration.
2) You are almost right. The tides are not produced by the gravity of the Moon directly, but by the difference between the Moon's gravity on the nearby water and its average pull on the Earth.
You forgot to include the centrifugal force [when you are in the non-inertial frame that rotates with the Earth-Moon system: https://xkcd.com/123/ ]. The centrifugal force is bigger in the water that is farther from the Moon, and again the difference creates the other tide.
3) The sky is blue because single molecules in the air scatter blue/violet light more than the other colors (Rayleigh scattering, whose strength grows like 1/wavelength^4). There are many ways to produce colors; in this case the light is scattered by the whole molecule.
A different method to produce blue is using a CD to produce a rainbow and then using a slit to block the other colors. Some birds and butterflies use a somewhat similar method. [Not very similar, but closer to the CD method than to the air method.]
The blue in a dye for cloth uses another method. You make a long chain of conjugated chemical bonds C-C=C-C=C-C=C-C, and pick the length and atoms so the electrons absorb the colors you don't like and turn that energy into heat.
I'm probably forgetting a few more methods (there are many of them), so it's interesting to understand which of them makes the sky blue.
*) These are good questions. My explanations are not 100% complete (and probably not 100% accurate) but I hope you can fix the holes.
Trust (knowing the chemist directly, indirectly, ...) in specific individuals > a largely unknown (but known to be imperfect) system, for many people anyways. Obviously this isn't practical for the not well connected, but it's all we got for now.
But as for your question, I've seen little to suggest it's anything more than war on drugs propaganda and hearsay.
Since the outbreak of COVID-19, demand for the kinds of services offered by those 3 Internet businesses has in fact skyrocketed. Increasing demand implies those businesses still have room to grow revenue. Shopify [1], for instance, is now seeing huge Black Friday-like traffic during shelter-in-place, and a lot of these small businesses are first-timers on their platform who will likely stick around after the pandemic.
1: https://mobile.twitter.com/jmwind/status/1250816681024331777
I understand flight from a mathematical point of view. I've actually read a few books on the subject, and I could explain how flight works to someone. However, I'm still fishing for an explanation that "feels" more satisfying. Per the question, I still want it explained better.
EDIT: There's already a thread about flight. I asked the same question there, but phrased a bit differently: https://news.ycombinator.com/item?id=22993460
Spinors are difficult to describe in an HN post since they require a good amount of linear algebra, but my favorite explanation is probably here: http://www.weylmann.com/spinor.pdf
Sure, some ("plenty", in absolute numbers) will tell you this, but I don't recall being in many forums where that attitude doesn't get significant pushback (as opposed to the anti-drug community). The modern "pro drug" community has a fairly significant culture of safety within it, unlike back in the sixties.
> The truth is almost impossible to find.
There is plentiful anecdotal evidence online. Any clinical evidence, if they ever get around to producing it in any significant volume, will be utterly minuscule compared to the massive volume of trip reports and Q&A available online (and I highly doubt more trustworthy, considering what you're working with and the size of the tests that will be done), much of it from people who know very well what they're talking about, not unlike enthusiasts in any domain.
> Now from what I gathered about LSD (and psychedelics in general): these are very random.
Depends on one's definition of random.
> If you take a reasonable dose, you are most likely going to have a nice, fun trip and nothing more.
Effects vary by dose of course, but I've seen little anecdotal evidence that suggests high doses have a different outcome, and plenty that suggests the opposite.
> But it can also fuck you up for years, or maybe bring significant improvement in your life.
See: https://rationalwiki.org/wiki/Balance_fallacy
> The big problem is that there is no way to tell how it will go for you. There are ways to improve your chances, but it will always be random.
I believe this to be true, but don't forget the fallacy noted above.
That said, these things are not toys - extreme caution is warranted.
Note that the 4th edition is (sort of) freely available at the NIH website. The way to navigate through that book is bizarre though, as the only way to access its content is by searching.
I think this was a pretty neat explanation:
https://sites.google.com/site/butwhymath/m/convolution
The problem with convolutions, like many things in science, is that how you learn them depends on what you're studying. Same theory, but with N different explanations, which can cause confusion if some of them are very different and tough to connect (e.g. learning convolutions in a physics class vs. learning them in a statistics class).
I built this at school, using the same principle: https://no.wikipedia.org/wiki/Sivilingeni%C3%B8r#/media/Fil:...
https://www.youtube.com/watch?v=B_zD3NxSsD8&t=3m17s
The artistic director has a ted talk where he talks about how beautiful biological processes are, and it's like no, man, you made it look that way.
If you want a really fantastic video that captures just how messy and random it is I recommend the wehi videos, like the one on apoptosis, where the proteins look way more derpy than the secret life of the cell: https://www.youtube.com/watch?v=DR80Huxp4y8 There's a couple of places where they have a hexameric protein where things magically snap into place, but I give them a pass because the kinetics on that are atrociously slow. Let's just say for the sake of a short video the cameraman happened to be at the right place at the right time.
https://www.youtube.com/watch?v=DR80Huxp4y8
here's the artistic director for the inner life of the cell (the worse one) going on and on about how "beautiful" the science of biology is:
https://www.ted.com/talks/david_bolinsky_visualizing_the_won...
http://www.righto.com/2011/07/cells-are-very-fast-and-crowde...
But in a nutshell, the animations are heavily idealized: showing the process when it succeeds, slowing it way, way down, and totally ignoring 90% of the other nearby material so you can see what's going on. Then you remember that you have a bajillion cells within you, all containing this incredibly complex machinery and... it's really kind of humbling just how little we actually know about any of it. Not to discredit the biologists and scientists for whom this is their life's work; we've made incredible amounts of progress over the last century. It's just... we're peeking at molecular machinery that is so very small, and moves so quickly, that it's nigh impossible to observe in realtime.
https://www.nand2tetris.org/ may also be insightful, but I did not look further into it.
> Permanent schizophrenic zombie, maybe a bit extreme, but severe and traumatic long-lasting psychological damage is a not-uncommon phenomenon.
https://english.stackexchange.com/questions/6124/does-not-un...
https://towardsdatascience.com/an-introduction-to-multivaria...
HOW PSYCHEDELICS REVEALS HOW LITTLE WE KNOW ABOUT ANYTHING - Jordan Peterson | London Real --> https://www.youtube.com/watch?v=UaY0H9DBokA
Jordan Peterson - The Mystery of DMT and Psilocybin --> https://www.youtube.com/watch?v=Gol5sPM073k
> LSD being the particular substance has nothing to do with it, in my opinion. I was young, dumb, reckless, and played with fire then got burned. It could have happened with any of the other dozen psychedelics I took, but it just so happened to be LSD the one time that it did.
https://en.wikipedia.org/wiki/Hallucinogen_persisting_percep...
I have a close friend who had the same experience with excessive use of marijuana, but my money would be on psychedelics being far more likely to produce the outcome you unfortunately experienced. He's much better today, but not entirely "ok".
> But I want to add, that while giving me the most nightmarish, traumatizing experience of my life, the best/most positively-profound experience has also been on the same substance. I grew up in a pretty abusive household and didn't do well forming relationships growing up, and had a lot of anger and resentment in my worldview. After taking psychedelics (LSD, 2C-B, Shrooms) and MDMA with the right group of people a few times, my entire perspective shifted. For the first time in my life, it felt like I understand how it felt to be loved, and what "love" was, and how we're "all in this together" so we may as well be good to each other while we're here.
This sounds rather similar to my friend's story.
Can Taking Ecstasy (MDMA) Once Damage Your Memory?
https://www.sciencedaily.com/releases/2008/10/081009072714.h...
According to Professor Laws from the University’s School of Psychology, taking the drug just once can damage memory. In a talk entitled "Can taking ecstasy once damage your memory?", he will reveal that ecstasy users show significantly impaired memory when compared to non-ecstasy users and that the amount of ecstasy consumed is largely irrelevant. Indeed, taking the drug even just once may cause significant short and long-term memory loss. Professor Laws findings are based on the largest analysis of memory data derived from 26 studies of 600 ecstasy users.
> (from your comment below) I took 300ug of LSD recklessly on a particularly bad day for me, in a particularly uncomfortable setting.
https://www.trippingly.net/lsd/2018/5/3/phases-of-an-lsd-tri...
Lots of details, plus a dosage guide (25 ug and up) & typical experiences.
https://www.reddit.com/r/LSD/comments/34acza/do_you_guys_bel...
imo 300ug is the point where you need to have some serious experience with tripping to be able to handle yourself. because if you're coming up, the acid is already circulating your bloodstream, and you get that horrible sinking sensation of thinking you've taken too much... you're in for a really bad time if you don't know how to control the trip.
I think it's difficult to say how big a dose really is until you've had a bad trip on it. only then can you see how insidious everything can get and as such just how intense 300ug can be. the reason people say not to start on doses like that is so they will AVOID those horrible experiences. so yeah, 300ug is a large dose, just because if shit goes wrong on it then you're fucked.
The basic idea is that by making the amplitudes of the qubits destructively interfere with each other in certain ways, you can eliminate all of the wrong answers to the question you're trying to answer.
https://smartairfilters.com/en/blog/n95-mask-surgical-preven... https://smartairfilters.com/en/blog/coronavirus-pollution-ma...
Long answer: You need to understand how the Limit Order Book works. I wrote up something about this here [1]. It also goes into different definitions of price.
> If a very low-volume stock is listed at $4, and then I offer to buy a share for $100, does the NYSE suddenly start listing its price at $100?
If your trade actually absorbs the order book and pushes the asks to $100, then yes, that could be the case depending on the exchange, but I'm not sure about NYSE specifically. Most likely that could never happen, though, due to various hidden order types and HFT market makers.
[1] https://www.tradientblog.com/2020/03/understanding-the-limit...
Assume there are no other orders in the order book.
Scenario 1: Seller submits a limit sell order for $90. Since there are no buyers, this order goes into the book. Then a buyer submits a limit buy order for $100. The order would be filled at $90 (the best ask) and the buyer only pays $90. Here, the seller is the maker and the buyer is the taker.
Scenario 2: Buyer submits a limit buy order for $100. Since there are no sellers, this order goes into the book. Then a seller submits a limit sell order for $90. The order will be filled at $100 (the best bid) and the seller gets $100. Here, the buyer is the maker and the seller is the taker.
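A tiny matching sketch that reproduces both scenarios (this is just the maker-price rule, not any real exchange's engine):

    # Resting orders are makers; an incoming order that crosses is the taker
    # and trades at the maker's resting price.
    def submit(book, side, price):
        for i, (maker_side, maker_price) in enumerate(book):
            crosses = ((side == "buy" and maker_side == "sell" and price >= maker_price)
                       or (side == "sell" and maker_side == "buy" and price <= maker_price))
            if crosses:
                del book[i]
                return maker_price
        book.append((side, price))  # no match: the order rests in the book
        return None

    book = []
    submit(book, "sell", 90)         # scenario 1: sell rests as maker
    print(submit(book, "buy", 100))  # buyer is taker, fills at 90

    book = []
    submit(book, "buy", 100)         # scenario 2: buy rests as maker
    print(submit(book, "sell", 90))  # seller is taker, fills at 100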
Market makers are responsible for setting prices and providing liquidity. If you want to understand this in more detail, check out this post [1] I wrote up a while ago.
[1] https://www.tradientblog.com/2020/03/understanding-the-limit...
I gifted myself The Vital Question in December 2015. While Lane writes effectively without any mind-numbing jargon, the book still has quite a bit of technical chemistry (understandably). After the excellent first 80 pages, it took me a lot more willpower to plough through. (I paused at page 112 to get back to it later.)
Once, when I was reading the book on a plane, a seasoned biologist happened to be sitting next to me. When I told him that it was the first book of Nick Lane's I had picked up, he said: "I'd rather suggest you pick up Lane's other book, Life Ascending, and only then get back to The Vital Question."
PS: FWIW, I've previously mentioned the above in an older thread, where an ex-biochemist chimed in to confirm the above advice: https://news.ycombinator.com/item?id=18714115
https://www.amazon.com/Trading-Exchanges-Market-Microstructu...
I don't know if this was it, but an explanation nonetheless https://medium.com/@omaraflak/automatic-differentiation-4d26...
What that image drove home for me is:
1) that DNA transcription isn't something that happens rarely, or one-at-a-time. DNA is constantly being transcribed; proteins are constantly being built. The scale and rate isn't something I'd ever been taught.
2) However RNA polymerase works, it must take into account a hell of a lot of congestion. Polymerase molecules must constantly be bumping into each other.
3) How the picture would make no sense whatsoever unless you already know what the mechanism is.
I think it does make sense to start with the idealised process, as long as you follow up with messy reality.
It's true, but you need to realize that you're qualified enough only when you understand that you shouldn't roll your own crypto.
In my opinion, the only person who has credibly demonstrated being able to roll his own crypto is djb (http://cr.yp.to/)
> but isn’t all security obscuring something,
Keeping a secret isn't "obscuring" something, it's hiding it entirely. Security through obscurity is bad because it relies on attackers being dumb. The smartest person in the world cannot be expected to guess a well chosen and kept secret.
https://metacpan.org/pod/Quantum::Superpositions
As far as I can tell this one still outperforms all existing "hardware implementations".
Edit - I just lost 20 mins reading the start of https://en.wikipedia.org/wiki/RNA_world which is interesting on that stuff
If you or a friend wants a crash course on econ, check it out.
Visuals help: [1] https://aviationphoto.org/wp-content/uploads/2016/11/Paul-Bo... [2] https://www.popphoto.com/sites/popphoto.com/files/import/201... [3] https://imgur.com/gallery/EHW7D [4] https://www.youtube.com/watch?v=dfY5ZQDzC5s&t=192
https://python.readthedocs.io/en/latest/library/asyncore.htm...
'There are only two ways to have a program on a single processor do “more than one thing at a time.” Multi-threaded programming is the simplest and most popular way to do it, but there is another very different technique, that lets you have nearly all the advantages of multi-threading, without actually using multiple threads. It’s really only practical if your program is largely I/O bound. If your program is processor bound, then pre-emptive scheduled threads are probably what you really need. Network servers are rarely processor bound, however.'
'If your operating system supports the select() system call in its I/O library (and nearly all do), then you can use it to juggle multiple communication channels at once; doing other work while your I/O is taking place in the “background.” ...'
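The asyncore module those docs describe has since been deprecated and removed from modern Python, but the same select()-style single-threaded loop is easy to write with the stdlib selectors module. A minimal echo-server sketch (the port number is arbitrary):

    import selectors
    import socket

    # One thread, many connections: block until some socket is ready, then act.
    sel = selectors.DefaultSelector()

    server = socket.socket()
    server.bind(("localhost", 9000))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ)

    def accept(sock):
        conn, _ = sock.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ)

    def echo(conn):
        data = conn.recv(1024)
        if data:
            conn.sendall(data)  # echo back; real code would buffer writes
        else:
            sel.unregister(conn)
            conn.close()

    while True:
        for key, _ in sel.select():  # the select() call the quote refers to
            if key.fileobj is server:
                accept(server)
            else:
                echo(key.fileobj)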
> In physics, the twin paradox is a thought experiment in special relativity involving identical twins, one of whom makes a journey into space in a high-speed rocket and returns home to find that the twin who remained on Earth has aged more. This result appears puzzling because each twin sees the other twin as moving, and so, according to an incorrect[1][2] and naive[3][4] application of time dilation and the principle of relativity, each should paradoxically find the other to have aged less. However, this scenario can be resolved within the standard framework of special relativity: the travelling twin's trajectory involves two different inertial frames, one for the outbound journey and one for the inbound journey, and so there is no symmetry between the spacetime paths of the twins. Therefore, the twin paradox is not a paradox in the sense of a logical contradiction.
There's multiple explanations included to resolve the "paradox" from different lines of argument; I particularly like this one: https://en.wikipedia.org/wiki/Twin_paradox#A_non_space-time_...
Yeah. One might, for example, reduce reinforcement of the big-empty-cell misconception by briefly showing more realistically dense packing, e.g. [1], before fading out most of it to what can be easily rendered and seen. But that would be less "pretty". Prioritizing "pretty" over learning outcomes... is perhaps suboptimal for educational content.
> better
But still painful. Consider those quiet molecules in proteins, compared with surrounding motion. A metal nanoparticle might be that rigid, but not a protein.
One widespread issue with educational graphics, is mixing aspects done with great care for correctness, with aspects that are artistic license and utter bogosity. Where the student or viewer has no idea which aspects are which. "Just take away the learning objectives, and forget the rest" doesn't happen. More like "you are now unsalvageably soaked in a stew of misconceptions, toxic to transferable understanding and intuition - too bad, so sad".
So in what ways can samplings of a protein's configuration space be shown? And how can the surround and dynamics be shown, to avoid misrepresenting that sampling by implication?
It can be fun to picture what better might look like. After an expertise-and-resource intensive iterative process of "ok, what misconceptions will this cause? What can we show to inoculate against them? Repeat...". Perhaps implausibly intensive. I don't know of any group with that focus.
My goal was to explain quantum computing in a way that is mathematically precise but doesn't require one to learn linear algebra first. To do this, I implemented a quantum computer simulator in Javascript that runs in the web browser. Conceptually (in mathematical language), in each simulation I present, I've started by enumerating the computational basis of the Hilbert space (all possible states the qubits could be in) and represented the computational state by putting an arrow beside each of them, which really is a complex number. (This is similar to how Feynman explains things in his book QED.) The magnitude of the complex number is the length of the arrow, and its phase is the direction it points (encoded redundantly by its color).

I've filled out the amplitude symbol with a square so that at any given point, the probability of a measurement resulting in that outcome is proportional to the area of that square. Essentially, in this language, making a measurement makes the experimenter color-blind: only the relative areas of the amplitudes matter, and there is no way to learn phase information directly without doing a different experiment.
I could make a further document explaining along these lines if people are interested. The source is on github too: https://github.com/garrison/jsqis
Read "Communicating Sequential Processes" by Tony Hoare https://www.cs.cmu.edu/~crary/819-f09/Hoare78.pdf
There's also a book: http://www.usingcsp.com/
[1] https://cdn.rcsb.org/pdb101/molecular-machinery/ [2] http://pdb101.rcsb.org/sci-art/goodsell-gallery [3] http://pdb101.rcsb.org/motm/motm-by-date
The core of the star is the hottest and most dense part. Greater heat and density make it easier for fusion reactions to run. If suddenly the core is made mostly of iron, then the amount of energy it produces rapidly drops. Even if there are nice, easily fusible hydrogen atoms farther out from the core, they will not be fusing at a very high rate, because the temperature and pressure are lower where they are. Also, the more easily fusible atoms remaining outside the core can't diffuse into the core fast enough to refuel it. The only possible outcome is collapse.
In some sense "dark matter" and "dark energy" are just placeholder words for "whatever thing is causing all this weird stuff to happen". This is actually very analogous to how "the ether" was a placeholder term for "whatever thing that radio waves are waves in". (Now we refer to it as "the electromagnetic field". The "ether" terminology was associated with some incorrect assumptions, such as a privileged reference frame, which is why people sometimes say it was an incorrect hypothesis. But the electromagnetic field is certainly real, it just didn't turn out to work like some people thought it did.) Scientists have observed so far the dark matter seems to behave pretty much like ordinary matter, except that it just happens to ignore the electromagnetic and strong nuclear forces. Not only does it hold galaxies together, but its gravity also bends the paths of light rays, just as we expect of anything massive. So calling it "matter" isn't too much of a stretch. It's still very mysterious, though.
Radiation pressure actually does limit the mass of stars, to something on the order of 100 to 200 solar masses, see this stack exchange question: https://astronomy.stackexchange.com/questions/328/is-there-a... That doesn't stop smaller clouds of gas from collapsing to form smaller stars, though.
* The reflection and refraction angle laws follow from conservation laws; see https://en.wikipedia.org/wiki/Snell%27s_law (the formula is written out below).
* For a pure colour, the colour is simply the energy of the photons. Atoms have discrete stable electron orbits, and electrons moving between these levels will absorb or emit discrete levels of energy in the form of photons, which is why we have spectral lines. Reality is more complicated because part of the energy may be converted to vibrations of the atom itself (phonons).
* Another factor is the perception of colour. In physics, to characterize light one measures its spectrum: the intensity of the light versus its wavelength (wavelength = speed of light in vacuum / frequency). The perceived colour of these distributions isn't always what one would expect.
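For reference, the two standard relations invoked above:

    n_1 \sin\theta_1 = n_2 \sin\theta_2 \quad \text{(Snell's law)}
    \qquad
    E = h\nu = \frac{hc}{\lambda} \quad \text{(photon energy)}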
This project involves a minisatellite (capable of generating entangled photons in space) to establish a space platform with a long-distance satellite-to-ground quantum channel, and to carry out a series of tests of fundamental quantum principles and protocols at large, space-based scales.
No idea, sorry.
> favorite books on how things work at that scale
I've found the bionumbers database[1] very helpful. Google scholar and sci-hub for primary and secondary literature. But books... I'd welcome suggestions. I'm afraid I mostly look at related books to be inspired by things taught badly.
The bionumbers folks did a "Cell Biology by the Numbers" book... the draft is online[2].
Ha, they've done a Covid-19 by the numbers flyer[3].
If you ever encounter something nice -- paper, video, text, or whatever, or even discussion of what that might look like -- I'd love to hear of it. Sorry I can't be of more help.
[1] https://bionumbers.hms.harvard.edu/search.aspx [2] http://book.bionumbers.org/ [3] http://book.bionumbers.org/wp-content/uploads/2020/04/SARS-C...
You might find this useful. Along with the author's write-up:
https://medium.com/@stew_rtsmith/quantum-javascript-d1effb84...