I've been studying viruses lately, and have found that the line between virus/exosome/self is much more blurry than I realized. But, given the niche interest in the subject, most articles are not written with an overview in mind.
What sorts of topics make you feel this way?
And adding on to that: Will light inside a box redshift? If I weigh the box (i.e. weigh the light inside the box), then wait a bit for the light to redshift, then weigh the box again, will it weigh less?
Why interpretations: There is an experiment you can do that is hard to explain: Either particles are able to somehow influence each other faster than light (non-local), or the particle somehow doesn't exist except when interacting with some other particle (non-real).
Try this video: https://www.youtube.com/watch?v=zcqZHYo7ONs -- the AHA moment in the video comes when you realize you can entangle the light, and that adding a filter to one stream of light somehow causes the other stream of light to also be influenced.
How immune system and medications work.
Why some plastics are recyclable and others are not.
The underlying reason for this is that Noether's theorem tells us that every physical symmetry implies a conservation law for some physical quantity. Conservation of energy and momentum comes from the fact that the physical laws are the same throughout time and space. However, cosmological expansion violates that assumption, so there is no reason that energy and momentum should still be conserved. [2]
[1]: One side note here is that relativistically, energy and momentum are not really separate physical quantities, but instead two components of the same underlying physical quantity. Unfortunately, this quantity does not really have a good name (despite Taylor & Wheeler's attempt to call it "momenergy"). It ends up being called the momentum 4-vector, but the temporal component of this 4-vector is energy.
[2]: This is only true globally. Locally, the laws are approximately the same from one moment to the next, so conservation of energy and momentum hold for small distances and short times.
I believe even folks at NASA have said it helped cement their mathematical knowledge with a better intuitive understanding.
1. Subatomic matter is by default both mass and a wave, but when "observed" it becomes a particle as we know it, i.e. with mass.
2. Atomic bonds are formed due to electrons (waves) being shared between adjacent atoms.
Hope I have some parts correct. Perhaps someone can shed some photons.
https://en.wikipedia.org/wiki/Protein#/media/File:Chaperonin...
I'm working on making a model of this chaperone complex relative to a folded protein, to get a sense of how it might be interacting with the amino acid chain before it becomes globular.
ZK proofs have a number of good explainers, mostly using graph colorings. Non-interactive versions, however, require quite a bit more than that explanation allows - and despite asking experts, I still haven't found a good, basic explanation.
The StackExchange sites have less coverage and answers tend to be more technical.
University websites return reliable answers, but often neither short nor accessible.
Here are a few books you can read on the subject. They do a pretty good job of describing what the issue is and what the interpretations mean:
Max Tegmark - Our Mathematical Universe
Sean M. Carroll - Something Deeply Hidden
Adam Becker - What is real?
Here are some things you can google if you want to just skim the subject: Wave–particle duality, The Measurement Problem, Quantum decoherence, Copenhagen interpretation, Bell's theorem, Superdeterminism, Many-worlds interpretation, Ghirardi–Rimini–Weber theory (GRW).
Last but not least, look at the Wolfram Physics Project (https://wolframphysics.org). The take on quantum mechanics, if you go along with the hypergraph idea, is fascinating (to me).
The full 3D Coriolis force is more complicated than that (e.g. accounting for the Eötvös effect): the spinning disk example only gets you to the -2vω term (where v denotes radial velocity and ω angular velocity).
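For reference, the general rotating-frame formula (a standard result, stated here for completeness): in a frame rotating with angular velocity vector Ω, a body moving with velocity v feels

    a_Coriolis = -2 Ω × v

which reduces to the magnitude 2vω only when v is perpendicular to the rotation axis, as for radial motion on the disk. An eastward velocity on a rotating planet instead picks up a vertical component, which is the Eötvös effect.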
@GP: I'd recommend trying to read and understand the proof for N=3. (And why that approach does not extend to bigger N.) It requires only undergraduate-level math and it is much, much, much easier. It uses very different tools, so it will give you very little insight into the general proof, but it will give you some taste of the problems of the proof.
Fermat's Last Theorem (book) by Simon Singh is the source to check out if you're interested in the details of how it eluded mathematicians and a general idea of how the problem was solved, without getting too technical. It's a great story well told.
https://www.youtube.com/watch?v=zcqZHYo7ONs
But why does that solve the problem? Because it connects two branches of mathematics (modular forms and elliptic curves) in a way that proves that solutions to equations of that form cannot exist (where the exponent is > 2).
Though there probably is an easier way of explaining it, it is strongly suspected that Fermat got the wrong idea there.
In some experiments the weird mathematical thing can be approximated as an almost classical particle. That approximation simplifies the calculation a lot, and sometimes you can get some results intuitively. But it is never true; it is only a very good approximation.
In some experiments the weird mathematical thing can be approximated as an almost classical wave. That approximation simplifies the calculation a lot, and sometimes you can get some results intuitively. But it is never true; it is only a very good approximation.
Try to read again everything you have read about the subject, but every time the text says "here light is a wave/particle" use a red marker to rewrite that sentence as "here light can be approximated as a wave/particle".
Basically: particles are the quanta of waves. So it's not really a duality in the end.
1: https://www.lesswrong.com/posts/AnHJX42C6r6deohTG/bell-s-the...
2: https://kantin.sabanciuniv.edu/sites/kantin.sabanciuniv.edu/...
https://www.physics.wisc.edu/undergrads/courses/spring2016/4...
AFAICS it was published in the American Journal of Physics in 1981 but it's addressed to the general reader. It requires no knowledge of quantum physics.
The measurements were finally shown to be effects of the immediate environment on the measurement apparatus.
That detectors used in labs may vary with time by >0.1%, unknown to their users, seems pretty important. How did everybody involved not know?
To wit, the idea is that you cannot distinguish whether you are in an accelerated frame or in a gravitational field; alternatively stated, if you’re floating around in an elevator you don’t know whether you’re freefalling to your doom or in deep sidereal space far from any gravitational source (though of course, since you’re in an elevator car and apparently freefalling... I think we’d all agree on what’s most likely, but I digress).
Anyway, what irks me is that this is most definitely not true at the “thought experiment” level of theoretical thinking: if you had two baseballs with you in that freefalling lift, you could suspend them in front of you. If you were in deep space, they’d stay equidistant; if you were freefalling down a shaft, you’d see them move closer because of tidal effects dictated by the fact that they’re each falling towards the earth’s centre of gravity, and therefore at (very slightly) different angles.
Of course, they’d be moving slightly toward each other in both cases (because they attract gravitationally), but the tidal effect is additional and present in only one scenario, allowing one to (theoretically) distinguish, apparently violating the bedrock Equivalence Principle.
I never see this point raised anywhere and I find it quite distressing, because I’m sure there’s a very simple explanation and that General Relativity is sound under such trivial constructions, but I haven’t been able to find a decent explanation.
I also like that FLT follows easily from the Beal conjecture, which seems overlooked. Maybe it's overlooked because it's closely related to some other (harder to understand) conjectures.
The well-known example: that if you travel into space you'd gain, let's say, 5 years while people on earth gain 25 in the same time, or so.
I just don't get it and I can't find any logical explanation.
For instance: two twins who come into existence at exactly the same moment in the year 2000, and both die on their 75th birthday at the same time. One travels into space, the other stays on earth. Earth-brother dies in earth-year 2075; space-brother dies in earth-year 3050 or so...
I know it's Einstein's point, but that just doesn't instantly make it correct to me.
"You are standing in a field looking at the stars. Your arms are resting freely at your side, and you see that the distant stars are not moving. Now start spinning. The stars are whirling around you and your arms are pulled away from your body. Why should your arms be pulled away when the stars are whirling? Why should they be dangling freely when the stars don't move?"
The first part of the argument is that for single point particles falling, the effect of gravity is the same for all particles. This suggests that we should model gravity as something intrinsic to spacetime itself, rather than as a field living on top of spacetime, which could couple to different particles with different strengths.
The second part of the argument, which is what you point out, is that gravity can have nontrivial tidal effects. (This had better be true, because if all gravitational effects were just equivalent to a trivial uniform acceleration, then it would be so boring that we wouldn't need a theory of gravity at all!) This suggests that whatever property of spacetime we use to model gravity, it should reduce in the Newtonian limit to something that looks like a tidal effect, i.e. a gradient of the Newtonian gravitational field. That leads directly to the idea of describing gravity as the curvature of spacetime.
So both parts of the argument give important information (both historically and pedagogically). Both parts are typically presented in good courses, but only the first half makes it into the popular explanations, probably for the sake of simplicity.
Starts from basic concepts and builds up a nice overview.
This point isn't raised anywhere because it's mostly a pedantic point that has nothing to do with the thought experiment. You shouldn't try and decompose thought experiments literally, otherwise you'll get caught up in unimportant details like this. Just assume the elevator is close enough to the earth such that the field lines are effectively parallel, or better yet, just pretend the elevator is in an infinite plate field.
Well, I'll bite. I'm a physicist and I understand LIGO. What's your alternative explanation?
The real principle of relativity is a bit more subtle (sometimes called the strong principle): that the effects of gravity can be explained entirely at the level of local geometry, without any need for non-local interaction from the distant body that is generating the gravitational field. To describe the geometry of non-uniform fields, we need more sophisticated mathematical machinery than what is implied by the elevator car thought experiment, but nonetheless, the elevator example is a useful launching point for that type of inquiry.
The other effect is that time in a strong gravitational field runs more slowly.
The situation is somewhat similar to a classical spinning charged sphere, although this similarity easily breaks down.
Clearly it will fail given a big enough lift to experiment in, since a big enough lift would essentially include whatever object is creating that gravitational pull (or enough of it to let you conclude its existence from other phenomena). However, these effects are nonlocal: you need two different points of reference for them to work (like your two baseballs). In fact, most tidal forces are almost by definition nonlocal.
The precise definition involves describing curved spacetime and geodesics, but that one is really hard to visualize as a thought experiment. The thought experiment does offer insight though, as it is possible to imagine that, absent significant local variations in gravity, you cannot distinguish between free-fall and a (classical) inertial frame of reference without gravity. This insight provides the missing link that allows you to combine gravity with the laws of special relativity and therefore electromagnetism, including the way light bends around heavy objects, which provided one of the first confirmations of this theory.
Maybe you'll find this paper helpful: https://aapt.scitation.org/doi/10.1119/1.18578
But, we specifically have no way of proving that theory. So now we're back to the essence of the original question: if these things seem random, how do we know that they're in fact deterministic without any hidden variables?
Which can contain more information: a 1.44 MB floppy disk or a 1 TB hard disk?
Which password is more random (i.e. harder to guess): one that can be stored in only 1 byte of memory, or one that is stored in 64 bytes?
Information theory deals with determining exactly how many bits it would take to encode a given problem.
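To make the password question concrete, here's a minimal counting sketch in Python (the byte sizes are the ones from the question; "at most" because real passwords are rarely uniformly random):

    # An n-byte string has 256**n possible values, so it can carry at most
    # log2(256**n) = 8*n bits of information.
    def max_entropy_bits(num_bytes):
        return 8 * num_bytes

    print(max_entropy_bits(1))   # 8 bits: at best 256 equally likely passwords
    print(max_entropy_bits(64))  # 512 bits: astronomically harder to guess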
Myopia is definitely hereditary, especially the pathologic variants that can lead to retinal tears and the like.
That being said, there is the process of refractive development that occurs early on in life. The eye develops at a frighteningly fast pace, and you achieve near-adult globe size after about 18 months. The genes that drive this refractive development could be hereditary, if that is what you're trying to figure out.
Now, we can make a claim that this adaptation during infancy could eventually affect our genome, but I have not delved into the epigenetic literature to determine if that has been borne out or not.
If space is expanding why aren’t the radii of fundamental particles and their orbits and molecules also expanding? And if that were the case we couldn’t notice space expanding.
> Does space only expand somewhere else? Only between me and the Andromeda galaxy, and not _within_ me and the Andromeda galaxy? How would it know to do that?
If you start with expanding space in general relativity, and then carefully take the limit where you get back to Newtonian gravity, then it just corresponds to a classical force, specifically a very tiny force that weakly pulls everything apart, growing with distance.
This doesn't expand small objects, because they're rigid. It's the same reason that I can't make my laptop get bigger by gently pulling on the ends. On the other hand, it would pull apart two independent, noninteracting objects (such as the Milky Way and Andromeda).
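To put a formula on that (the standard Newtonian-limit result; treat the exact form as a sketch): two free test objects separated by distance d pick up a relative acceleration

    d'' = (a''/a) d

where a(t) is the cosmological scale factor. The effect grows with separation, which is why it matters between galaxies but is utterly negligible, and easily resisted by interatomic forces, inside a laptop or a person.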
Temperature. Sometimes you read that it's a measure of warmth; sometimes cold. Aren't hot and cold opposites?
Yes, hot and cold are opposites, but in a way they give the same kind of information. That's also true for information and randomness. Specifically, little randomness means more (certain) information.
Correct.
> But why do we call it spin?
Because it is a physical quantity whose units are those of angular momentum, and we have to call it something.
> What are the possible values?
+/- h/4pi where h is Planck's constant. (It is usually written as h-bar/2 where h-bar is h/2pi.)
> Is it a magnitude or a vector?
It's a vector that always points in a direction corresponding to the orientation of the apparatus you use to measure it.
> Is there a reason we call it "spin" instead of "taste" or some other arbitrary name?
Yes. See above.
> How do you change it?
You can change an electron spin by measuring it along a different axis than the last time you measured it. The result you get will be one of two possible values. You can't control which one you get.
> What happens to it when particles interact?
Their spins become entangled.
Here is the chapter on Fourier transforms from my linear algebra book that goes into more details: https://minireference.com/static/excerpts/fourier_transforma...
As for the math, there really is no other way to convince yourself that sin(x) and sin(2x) are orthogonal with respect to the product int(f,g,[0,2pi]) other than to try it out https://live.sympy.org/?evaluate=integrate(%20sin(x)*sin(2*x... Try also with sin(3x) etc. and cos(n*x) etc.
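If you want it as a standalone script rather than the live link, roughly the same check in SymPy:

    from sympy import symbols, sin, cos, integrate, pi

    x = symbols('x')

    # <f, g> = integral of f*g over [0, 2*pi]; orthogonal means this is 0.
    print(integrate(sin(x) * sin(2*x), (x, 0, 2*pi)))    # 0
    print(integrate(sin(3*x) * cos(5*x), (x, 0, 2*pi)))  # 0
    # A function is not orthogonal to itself: you get its (nonzero) norm squared.
    print(integrate(sin(x) * sin(x), (x, 0, 2*pi)))      # pi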
https://www.amazon.com/Quantum-Non-Locality-Relativity-Metap...
If you don't want to read a whole book then I recommend this article:
https://kantin.sabanciuniv.edu/sites/kantin.sabanciuniv.edu/...
but the book will give you a much deeper understanding.
It's the same reason the universe has an average rest frame (unlike what you might expect from special relativity), although it is unclear if this is true for the entire universe or just the portion we can see. We can measure how fast we're moving w.r.t. the cosmic microwave background radiation though (it is red-/blue-shifted in a particular direction).
Basically, we just declare that we have no idea what is going on at such short distances, and put in some regulator by hand to get rid of the infinity. One very crude regulator (which nobody uses, but which is suitable for demonstration) would just be to say that particles are simply not allowed to get any closer than some fixed tiny distance.
But what about the effects that occur when particles actually do get that close? Well, in most theories, whatever is happening can be parametrized in terms of a few numbers (e.g. it could shift the observed mass of the particles, or their charge, etc.). Our ignorance of what is actually happening prevents us from computing these numbers from first principles. But we can still make scientific progress, because we can treat them as free parameters and measure them -- and after that measurement, we can use the values to crank out perfectly well-defined predictions.
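A cartoon of that crude regulator in Python (an illustrative toy of my own, not a real QFT computation):

    # Toy "self-energy": E(eps) = integral of 1/r^2 from eps to 1, which
    # equals 1/eps - 1 and blows up as the cutoff eps -> 0.
    def self_energy(eps):
        return 1.0 / eps - 1.0

    for eps in (1e-1, 1e-3, 1e-6):
        print(eps, self_energy(eps))   # grows without bound as eps shrinks

    # The renormalization move: never compute the cutoff-dependent constant,
    # absorb it into one measured parameter. Differences between observable
    # configurations are then finite and cutoff-independent:
    def observable_difference(r1, r2):
        return 1.0 / r1 - 1.0 / r2     # the 1/eps pieces cancel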
Repeating this process through several layers was crucial to building the Standard Model, which currently has about 20 free parameters.
A function is like a vector, but instead of having two or three dimensions you have a continuous number of them. Adding functions component-wise works just like adding vectors.
Just like regular vectors, you can choose to represent functions in a different basis. So you choose a family of other functions (call it a basis) that's big enough to represent any other you want. For a lot of reasons [1, 2], a very good choice is the set of complex exponentials g_w(x) = exp(2πiwx), for every real w. It's an infinite family, but that's what you need to deal with the diversity of functions that exist.
So you try to find the linear combination of exponentials that sum to your original function. You need a coefficient for each w, so call it c(w) for simplicity. After fixing the basis, the coefficients really have all the information to describe your function. They're an important object, and we call c(w) the Fourier transform.
How do you find the coefficients? Just project your original function onto a particular exp(2πiwx), that is, take the inner product. Usually the inner product is the sum of the products of coefficients. Since functions are continuously-valued, you use an integral instead of a sum. This is your formula for the Fourier transform.
I know there are technical conditions I am glossing over, but this is the intuition of it for me.
[1] There is an intuition for these exponentials. Complex exponentials are periodic functions, so you are decomposing a function in its constituent frequencies. You could also separate the exponential into a sin and cos, and will obtain other common formulas for the Fourier transform.
[2] Exponentials are like "eigenvectors" to the derivative operation (taking the derivative is just multiplying by a constant), so they're really useful in differential equations as well.
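To make the "project onto exp(2πiwx)" step concrete, a small numerical sketch (my assumptions: the function lives on [0, 1), and the integral is approximated by an average over samples):

    import numpy as np

    N = 1000
    x = np.arange(N) / N
    f = 3 * np.sin(2 * np.pi * 5 * x)      # a pure tone at frequency w = 5

    def fourier_coefficient(f, w):
        # inner product with exp(2*pi*i*w*x): the integral becomes a mean
        return np.mean(f * np.exp(-2j * np.pi * w * x))

    print(abs(fourier_coefficient(f, 5)))  # ~1.5: the tone shows up at w = 5
    print(abs(fourier_coefficient(f, 7)))  # ~0:   nothing at other frequencies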
Plus, I'd rather expect that at least one professional (QED) physicist exists who is able to explain it, and he isn't one. Mermin is, but the explanation is decidedly less clear.
BTW I came here to say Bell's inequality as well. For me it's as baffling as science could ever be.
I'm sure you or another physicist could point out the flaws in my mental model.
To see it, imagine you have a struct with a data member for each local variable of your function, and replace your function with a member function that has no local variables, but uses "this" to get at what was local data.
Add one more data member, a number that is set differently right before each place the function returns.
Finally, insert some code at the start of the function that, according to the number, jumps to just after the last return statement executed.
Then, each time you call the function, what happens depends on what happened last time.
There are more details, but that is the gist.
You can write that yourself in C++98, with the body of the function inside a switch statement. Getting it past code review would be the real challenge.
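The same transformation sketched in Python rather than C++98 (generators do this for you automatically, which is rather the point; pretend they don't):

    # Hand-compiled "coroutine": locals become fields, each suspension point
    # gets a number, and a dispatch at the top resumes after the last return.
    class Counter:
        def __init__(self):
            self.a = 0        # former local variable
            self.state = 0    # which return we took last time

        def resume(self):
            if self.state == 1:   # jump to just after the last return
                self.a += 1
            self.state = 1        # record the suspension point
            return self.a

    c = Counter()
    print(c.resume(), c.resume(), c.resume())  # 0 1 2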
For every authoritative-sounding, in-depth explanation, there is an equally plausible, yet conflicting and contradictory alternative.
An answer is that the divergence as d -> 0 presumes a nice, continuous analytic function. If d can only shrink to some epsilon, you can't get to that singularity.
There was an equivalent problem in the E/M space with "The Ultraviolet Catastrophe" [1], which turned out to go away if you assumed quantization.
I'm not going to claim this is a perfect analog to the gravity problem, only that a lot of physics doesn't quite work right when you assume continuity. (The Dirac delta is a humorous exception that proves the rule here, in that doing the mathematically weird thing actually is closer to how physics works, and it required "distribution theory" as a discipline to prove it sound.)
1. These molecules are moving around a lot. The kinetic energy of molecules at room or body temperature gives them impressive velocity relative to their scale, and they're also rotating altogether and internally.
2. Compatible molecules are like magnetic keys and locks. They attract each other and the forces align with meeting points. The same way that proteins fold spontaneously.
So the remaining part is getting concentrations appropriate for what you want to happen - and that's a matter of signaling molecules and "automatic" cell responses to changes in equilibrium. It's a really chaotic system and it's a wonder it works at all.
I imagine that's also one reason life is imprecise, i.e. no two individuals are alike even with identical genes. There's a lot of extra "entropy" introduced by that mess of a soup.
To say anything more concrete requires defining the question much more precisely. I believe there is still some disagreement on the interpretation of Mach's principle in light of general relativity. For example, see https://en.wikipedia.org/wiki/Mach's_principle#Variations_in... (and a couple sections above, the 1993 poll of physicists asking: "Is general relativity with appropriate boundary conditions of closure of some kind very Machian?").
I hope that is helpful in some way.
I think one frequent source of confusion is the difference between "randomness" and "uncertainty" in colloquial versus formal usage. Entropy and randomness in the formal sense don't have a strong connotation that the uncertainty is intrinsic and irreducible. In the colloquial sense, I feel like there's often an implication that the uncertainty can't be avoided.
If you move away from a clock, time seems to slow down: as your distance to the clock gets larger, each change on the clock takes longer to reach you. But if you carry a clock in your rocket it will just tick at the same pace as on earth (minus the gravitational impact, which is measured, but why does gravity have an impact on time...?)
the summary being:
- The vertical velocity of the diverted air is proportional to the speed of the wing and the angle of attack.
- The lift is proportional to the amount of air diverted times the vertical velocity of the air
It also debunks the myth of "air flows faster on the top side of the wing, causing lift".
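Putting those two bullets together (loosely, with all constants dropped): if the mass of air diverted per second scales like ρAv and the vertical velocity it's given scales like vα, then

    lift ∝ (ρ A v)(v α) = ρ A v² α

which is the familiar scaling behind the standard lift equation, with a lift coefficient roughly proportional to angle of attack at small angles.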
If you are properly amazed by it, rejecting MWI or any crazy-ish borderline-conspiracy theory seems suddenly a lot harder.
I feel like the whole of Yudkowsky's QM series in fact served to deliver that one post.
For example, see "Some recent developments in bicycle dynamics" (2007). Especially the folklore section:
"The world of bicycle dynamics is filled with folklore. For instance, some publications persist in the necessity of positive trail or gyroscopic effect of the wheels for the existence of a forward speed range with uncontrolled stable operation. Here we will show, by means of a counter example,that this is not necessarily the case.
https://pdfs.semanticscholar.org/bb70/d679c5a2ff67dd2a1a51f2...
I never really understood what really happened when the guy fell inside it in Interstellar and how come he started seeing all those photos. I just accepted it as Hollywood bs.
I know my question is based on a movie, but I would still like to know what someone would witness (assuming of course they somehow live).
Let's use an analogy of a remote observation post and with a soldier sending hourly reports:
0 ≝ we're not being attacked
1 ≝ we're being attacked!
Instead of thinking of a particular message x, you have to think of the distribution of messages this soldier sends, which we can model as a random variable X. For example, in peaceful times the message will be 0 99.99% of the time, while in wartime it could be 50-50 in case of active conflict. The entropy, denoted H(X), measures how uncertain the central command post is about the message before receiving it, or equivalently, the information they gain after receiving it. The peacetime messages contain virtually no information (very low entropy), while wartime 50-50-probability messages contain H(X)=1 bit each.
Another useful way to think about information is to ask "how easy would it be to guess the message" instead of receiving it? In peacetime you could just assume the message is 0 and you'd be right 99.99% of the time. In wartime it would be much harder to guess---hence the intuitive notion that wartime messages contain information.
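The formula behind those numbers, as a couple of lines of Python (standard Shannon entropy; the probabilities are the ones from the example):

    import math

    def entropy(probs):
        # H(X) = -sum of p * log2(p), in bits
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy([0.9999, 0.0001]))  # ~0.0015 bits: peacetime reports say almost nothing
    print(entropy([0.5, 0.5]))        # 1.0 bit: wartime reports are maximally informative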
If you think it's sneaky to "implicitly" assume they're in the same direction, I would point out that this is no different from assuming they have the same magnitude. It would be kinda dumb to say "well this 1m/s^2 acceleration can't possibly be equivalent to gravity because gravity is 9.8m/s^2, so the statement is obviously wrong and they're trying to trick me!!"... same thing for direction.
Do you know an example of a process that moves angular momentum from one kind of spin to the other?
I believe the poster's general premise to be false. While renormalization may be useful in resolving infinities in general, I don't think it's necessary for this one.
You can't commute the dp*dx of a Hamiltonian to be zero in a quantized world, so if gravity has quantum properties, you don't need to worry about what happens when d -> 0. There is no "0" distance.
It is technically a two-component spinor, which is why the direction of the spin 'moves' if you measure it along different x, y, z axes. It is also quantized, unlike a normal vector: all fermions have quantized half-integer spin magnitudes and all bosons have integer magnitudes.
Magnetic fields can be used to change the spin.
When particles interact, opposing spins tend to pair up in each electron orbital which cancels the magnetic field. This is why permanent magnets must have unpaired electron orbitals.
It turns out that flat wings work just fine, but the airfoil shape we see on airplanes is more efficient:
The total velocity of our <x, y, z, t> vector will always be equal to the speed of light constant, c. You can think of something that has no physical movements as moving forward in time at the speed of light. As x, y, or z increases the magnitude of t will decrease so that the speed of light constant is always achieved.
Why this link has to hold is more complex and I cannot explain it well, but hopefully this gives some insight into time slowing as velocity increases.
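The link, for what it's worth, is just the standard special-relativity identity behind that picture (stated here without derivation):

    (c dτ/dt)² + vx² + vy² + vz² = c²

so at zero spatial speed all of the "budget" goes into the proper-time rate dτ/dt = 1, and as spatial speed grows, dτ/dt = sqrt(1 - v²/c²) must shrink: a fast-moving clock accumulates less time.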
I have trouble with this too. I think it's actually incorrect, or at least misleading. I think what it's _trying_ to say is that just because an entity can perform a complex task doesn't mean it can understand that task.
I think the more important result of this argument is that certain complex tasks can be "pre-baked" into rulesets _by an existing intelligence_. To me this just means that intelligent entities can sort of copy parts of their intelligence into other entities which are not intelligent i.e. computer programming.
I think with this argument they're trying to say "a series of sufficiently complex if statements isn't necessarily intelligent" by choosing something we know computers are good at (string manipulation) and applying it to something we consider intelligence (language translation).
The argument holds that the computer is obviously not intelligent because it's just a function that takes a character and outputs another character.
But it needs to be a convincing translation, right? The computer would then be able to spit out not just accurate translations but also properly converted cultural idioms and new combinations of words where one didn't exist in the other language. That requires context of surrounding characters, memory of common language use, statistical analysis and creativity.
One implication that arises from this argument is actually about humans. How do we know that we aren't all just incredibly detailed rulesets ourselves without any actual understanding?
Well, first off - we technically can't prove it for anyone other than ourselves. More pragmatically, it's obvious that we, unlike the computer translator, can probe ourselves and be probed by others on whether or not we understand the subject. It's not like we're a bunch of Boltzmann's Brains that just happened into existence. We evolved intelligence in order to survive, not to "trick" other intelligent beings into thinking we're more intelligent than we are. There's no need for that. There's no one smarter around that we need to "trick".
2. The tides. The explanation I was given is roughly something like “the tides happen because the moon’s gravity pulls the water toward it, so you have high tide facing the moon. There’s also a high tide on the opposite side of the earth, for subtle reasons that are too complicated for you to understand right now and I don’t have time to get into that.”
The first problem with this explanation is this: gravitational acceleration affects everything equally right? So it’s not just pulling on the water, it’s also pulling on the earth. So why does the water pull away from the earth? Shouldn’t everything be accelerating at the same rate and staying in the same relative positions?
The second problem is that, when viewed correctly, the explanation for why there is a high tide on the opposite side of the earth from the moon is just as simple as the explanation for why there is a high tide on the same side as the moon.
The resolution to both these problems is this: tides aren’t actually caused by the pull of the moon’s gravity per se, but by the difference in the strength of that pull between the near and far sides of the earth, since the strength of the moon’s gravitational pull decreases with distance from the moon. The pull on the near water is stronger than the average pull on the earth, which in turn is stronger than the pull on the far water. So everything becomes stretched out along the earth-moon axis.
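Rough numbers for that "difference in pull" argument, in Python (textbook constants; an order-of-magnitude sketch only):

    G = 6.674e-11        # m^3 kg^-1 s^-2
    M_moon = 7.35e22     # kg
    d = 3.84e8           # Earth-Moon distance, m
    R_earth = 6.37e6     # m

    def moon_pull(r):
        return G * M_moon / r**2   # acceleration toward the Moon

    near, center, far = moon_pull(d - R_earth), moon_pull(d), moon_pull(d + R_earth)

    print(center)         # ~3.3e-5 m/s^2: the shared pull, felt by everything
    print(near - center)  # ~1.1e-6 m/s^2: near-side water, relative to the earth
    print(center - far)   # ~1.1e-6 m/s^2: the earth, relative to far-side water

The two relative numbers come out nearly equal, which is exactly why the two bulges are equally simple to explain.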
3. This one isn’t so much a problem with the explanation itself, more about how it’s framed. I remember hearing about why the sky is blue, and wondering, “ok, more blue light bounces off it than other colours. But isn’t that essentially the same reason why any other blue thing is blue? Why are we making such a big fuss about the sky in particular? ” A much superior motivating question is “why is the sky blue during midday, but red at sunrise / sunset”? I was relieved when I saw this XKCD that I’m not the only one who felt this way:
It's more akin to the direction or axis of the spin being changed, and simply measuring the spin along a certain axis will change it: https://en.wikipedia.org/wiki/Spin_(physics)#Measurement_of_...
Once they are in close enough proximity to bump into each other, intermolecular forces can come into play to get the "docking process" done.
For something like transcription, once they are "docked", think of it like a molecular machine - the process by which the polymerase moves down the strands is non-random.
There are also several ways to move things around in a more coordinated fashion. Often you have gradients of ion concentration, and molecules that want to move a certain direction within that gradient. You also have microtubules and molecular machinery that moves along them to ferry things to where they need to be. You can also just ensure a high concentration of some molecule in a specific place by building it there.
If this helps, then it can also help with understanding other projections, such as the Laplace transform (a dot product against the complex signal space).
While this analogy has helped me, I still have no clue why real valued signals result in an even FT.
edit: grammar
1) Compartmentalizing of biological functions. It's why a cell is a fundamental unit of life, and why organelles enable more complex life. Things are physically in closer proximity and in higher concentrations where needed.
2) Multienzyme complexes. Multiple reactions in a pathway have their catalysts physically colocated to allow efficient passing of intermediate compounds from one step to the next.
https://www.tuscany-diet.net/2019/08/16/multienzyme-complexe...
3) Random chance. Stuff jiggles around and bumps into other stuff. Up until a point, higher temperature means more bumping around, meaning these reactions happen faster, and the more opportunities there are for these components to fly together in the right orientation, the more life stuff can happen more quickly. There's a reason the bread dough that apparently everyone is making now will rise faster after yeast is added if the dough is left at room temp versus allowed to do a cold rise in the fridge. There are just fewer opportunities for things to fly together the right way at a lower temperature.
3a) For the ultra complex protein binding to DNA, how those often work in reality is that they bind sort of randomly and scan along the DNA for a bit until they find what they're looking for or fall off. Other proteins sometimes interact with proteins that are bound to the DNA first, which act as recruiters telling the protein where to land.
Weights on neural networks don't have to be independent functions.
Independence gives you a set of mathematical guarantees that ensure you fully cover the space you're representing. For example, that given a 2-dimensional space, X and Y are pointing in different directions. If they pointed in the same direction, you could not fully decompose all vectors on the plane into two coefficients of X and Y.
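A tiny numerical version of that failure mode (the example vectors are mine):

    import numpy as np

    target = np.array([3.0, 4.0])

    # Independent columns: every vector in the plane has unique coefficients.
    B_good = np.array([[1.0, 1.0],
                       [0.0, 1.0]])
    print(np.linalg.solve(B_good, target))   # [-1.  4.]

    # Dependent columns (same direction): the matrix is singular, and most
    # of the plane is unreachable.
    B_bad = np.array([[1.0, 2.0],
                      [1.0, 2.0]])
    try:
        np.linalg.solve(B_bad, target)
    except np.linalg.LinAlgError as e:
        print("decomposition fails:", e)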
Kip Thorne, a Nobel prize-winning physicist, worked as the science advisor for Interstellar so the hollywood bs is pretty good!
[1]: https://www.amazon.com/Science-Interstellar-Kip-Thorne/dp/03...
As for the multiverse, I don't know enough to talk about it. I just know it's one of the possible interpretations of quantum mechanics. Note that the various interpretations are generally considered more philosophy than science, and have no (or very little) practical implications. I would suggest ignoring all analogies and not looking too deeply for interpretations, and instead focus on basic concepts like "What is a quantum state?" and "How do I compute measurement outcomes?" which are super well understood and the same under all interpretations.
You can think of the various interpretations of QM as different software methodologies, scrum, agile, waterfall, etc.: just stuff people like to talk about endlessly, but ultimately irrelevant to the code that will run in the end.
100% completely false.
Imagine you have two particles of air, and they are immediately adjacent to each other. Suppose now that one goes above the wing, and one goes underneath. In your example, the particle going over the top travels further in the same amount of time.
But ask yourself this: Why do the particles of air have to arrive at the same time? What mechanism from physics requires that they meet up again at the far end of the wing?
Then ask yourself this: If what you described is true, then how do aircraft fly upside down?
Part of the popular confusion around how LIGO works is the freedom in coordinates: there are different, perfectly good definitions of space and time you can use, and the explanation sounds different in each one. So people can get them mixed up. For example, my previous paragraph makes sense in "transverse traceless gauge", but not in others.
I'm not sure what GP was referring to with "higher spatial dimensions".
But the fact remains that it is impossible to prove and it is conveniently well equipped to handle this situation. I'd prefer an argument that presupposes the Copenhagen interpretation as that is when my intuition fails.
I had the same "problem" as you. What finally made me feel I sort of cracked it was those videos. The way I think of it now is: They let you do matrix multiplication. The internal state of the computer is the matrix, and the input is a vector, where each element is represented by a qubit. The elements can have any value 0 to 1, but in the output vector of the multiplication, they are collapsed into 0 or 1. You then run it many times to get statistical data on the output to be able to pinpoint the output values more closely.
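That mental model, as a toy one-qubit simulation (plain linear algebra; nothing here is specific to real hardware):

    import numpy as np

    rng = np.random.default_rng(0)

    state = np.array([1.0, 0.0], dtype=complex)   # qubit starts as |0>

    # "The internal state of the computer is the matrix": apply a unitary.
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)          # Hadamard gate
    state = H @ state                             # now (|0> + |1>) / sqrt(2)

    # Measurement collapses to 0 or 1 with probability |amplitude|^2, so you
    # run many shots and read off the statistics.
    probs = np.abs(state) ** 2
    shots = rng.choice([0, 1], size=10_000, p=probs)
    print(probs)               # [0.5 0.5]
    print(np.bincount(shots))  # roughly [5000 5000]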
Coupled with people saying "but they had scientists on staff! They talked with scientists, that makes it so cool and accurate, let's ignore that other part".
What a lot of people don't know is that the wings are actually installed on a small upward incline, relative to the longitudinal axis of the body. Think of holding your hand out the window of a moving car, and then tilting your hand to catch air under your palm. In aerospace we call this the Angle of Incidence, and most aircraft have a small amount, usually in the 1-5 degree range. So while you might be walking on a perfectly horizontal path as you go to the bathroom over the Atlantic on your way to Paris, the wings keeping you aloft are actually angled such that the leading edge is higher than the trailing edge by a small amount.
Now google any picture of an airfoil and notice that many of them are slightly concave on the underside. This is called Camber, and in a nutshell it creates a "cupping" effect under the wing that intensifies the high-pressure area under the wing and correspondingly increases the amount of air deflected downward. Additionally, the teardrop shape reduces the tendency of air to billow off the trailing edge of the wing in favour of kinda sticking to the wing's surface and following its curvature. This also causes downwash off the trailing edge (i.e. more air going downward, which is a good thing).
That's really all there is to it, from a high level. The wings deflect air downward such that the total momentum change causes an upward force that is exactly equal to the aircraft's weight, and that equilibrium of forces keeps the aircraft aloft.
Obviously it gets more complex than that, because guys spend entire PhD careers researching edge cases, but there's no magic involved.
Note that wings don't have to be of the classic teardrop shape. There are plenty of research papers about lift forces on flat plates. In fact that's classic fodder for an undergraduate assignment. The airfoil shape is beneficial in several ways, some of them quite subtle, but you can think of the airfoil as being the most efficient cross-section for a wing known to science, whereas a flat plate is much less efficient (though it still works).
>I even heard we don't completely understand why it works (?!?).
I don't think that's true. For a while there was the meme about "science says bumblebees shouldn't be able to fly" but that was a clickbait headline because we didn't know enough about the structure and motion of bumblebee wings. That's about all I can think of.
There are certainly areas of ongoing research and exploration (I'm thinking hypersonic flight, novel means of propulsion, aeroelastic structures, etc.) but in general, the physics behind conventional aircraft are quite well-understood.
I've often heard it said that Quantum Computers can crack cryptographic keys by trying all the possible inputs for a hashing algorithm or something handwavey like that. Are they just spitting out "probable" solutions then? Do you still have to try a handful of the solutions manually just to see which one works?
I had to apologize and say that the explanation was over simplified and really it would work, say, only for some creatures living exactly on the floor of the elevator.
One of the two, at a challenging high school, made Valedictorian (surprise to her parents who didn't know she had long been first in her class) then in college PBK, got her law degree at Harvard, started at Cravath-Swain, went for an MD, and now is practicing medicine. Bright niece.
If any of this is true, are there any sources aside from "my friend's friend's brother took too much and now he is....", and what is the scientific explanation and do we know enough about the mind at all?
I feel like LSD has a lot of contradictory information out there, and the proponents feel the need to hand-wave concerns away because it is 'completely harmless and leaves your system in 10 hours'. But when nobody knows what they're actually getting because it doesn't exist in a legal framework, then it muddies the whole experience.
People say certain doses can't have more effect than lower doses after a certain threshold. It seems like the same people then say "omg man 1000ug you are going to fry your brain!"
What is the truth? If it "just" had an FDA warning like "people with a family history of schizophrenia should not take it", that would be wildly better than what we have today.
Please no explanations about shrooms. Just LSD, and the 'research chems' distributed as LSD.
When the wheel pitches to either side, the road under the bicycle pushes the wheel back to center.
When the bicycle leans to a side, the wheel pitches as well. Now the road under the bicycle pushes the wheel back towards center, but the angle of the tire to the road is also skewed, so some of the force also gets translated into pushing the bike upright.
It's a little reminiscent of the self-centering action you get when you have a double-cone (or cylinder tapered on both ends) rolling down two rails.
I think if you fixed the front tire so there was no steering, you wouldn't get any stability from speed.
I disagree with that. It's pretty easy to prove it in general by calculating \int_0^{2\pi} sin(mx)sin(nx) dx etc. for m ≠ n.
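For integer m ≠ n, the product-to-sum identity finishes it in one line:

    \int_0^{2\pi} sin(mx)sin(nx) dx = (1/2) \int_0^{2\pi} [cos((m-n)x) - cos((m+n)x)] dx = 0

since cos(kx) integrates to zero over a full period for any nonzero integer k.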
Spin being an intrinsically quantum mechanical concept, I'm afraid the microscopic mechanism by which that transfer occurs will only be explainable in a quantum mechanical context. Here it will appear as a term in the Hamiltonian coupling the spin of an electron to its motion in a potential.
https://en.m.wikipedia.org/wiki/Einstein%E2%80%93de_Haas_eff...
First, two general points.
The most important thing to keep in mind is time. Life has had insane amounts of time. Billions of years. Beyond human comprehension amounts of time.
The second most important thing is that complicated != carefully orchestrated or optimal. Life is pretty cool, but it doesn't hold itself to a very high standard. It's the survival of the good enough, and is full of so many random hacks and poor design choices it's insane. Things get easier to accomplish when you lower the standard.
Now an attempt at an explanation.
Evolution by natural selection works on two principles. First, generation of diversity. Second, selective pressure.
DNA can and does mutate frequently. One important type of mutation is a duplication, since it lets you gain new functionality: you make two versions of the same gene, one keeps its original function, and the other does some new function. This theme of repurposing existing things comes up again and again. Take something you have, make another version of it, change it a bit. If you've worked out how to grow a vertebra as a lizard and want to become a snake, turn off legs and make more vertebrae. Use the same genes, and just modify how you control them. This video (https://youtu.be/ydqReeTV_vk) is actually pretty good at running through the science behind evolutionary development, and how evolution can quickly reuse and modify existing parts. Basically, once smaller features evolve, you start modifying them in a modular way and can start making really big changes really easily. Keep in mind again that you are helped by enormous, mind-boggling amounts of time randomly generating these features originally as well. Here's a summary of how our eyes evolved by repurposing neurons (https://youtu.be/ygdE93SdBCY). Our eye is a good example of how the standard is just "good enough": we have veins and nerves on top of our light-detecting cells instead of behind them, and just poke a hole to get them through to the other side. Doesn't that leave a blind spot? Yep, and we just hallucinate something to fill in the space. There are a couple of other major ways we generate more diversity. You have things like viruses transferring DNA, but a really powerful one is sex. Sexual reproduction lets you combine and generate new combinations of genes to speed up how quickly diversification happens.
For selective pressure, think about it purely statistically. You have sets of arrangements of atoms, some of which are good at making new sets of arrangements of atoms that look like them, and others less so. Each tick of the clock, versions that are able to make more increase, and versions that aren't decrease. This basically provides a directionality for evolution: whatever is good at replicating is successful. Over time this weeds out mutations that hurt replication while keeping mutations that help. This means the next round of mutations is building on ones that were good enough, and not on ones that weren't. This lets evolution be more cumulative than a random search.
Neither of those is a complete description by any stretch; I'm just trying to give you a taste of the mechanisms behind it, but it goes a lot deeper. The most important things do just boil back down to what I started with: survival of the good enough (lower your standards), and the sheer amount of time there has been for all of this to happen. An evolutionary step only has a 1 in a million chance of happening in a given year? Then it's happened about 65 times since the dinosaurs went extinct.
Yeah, I'd definitely second that. How could evolution result so quickly in something as "rudimentary" as Chlorella (i.e. the simplest plant). https://en.wikipedia.org/wiki/Chlorella
Have a look at the simple inference example here: https://en.m.wikipedia.org/wiki/Time_dilation
Time doesn't necessarily slow down the further away you get from a clock. If you and a clock are both stationary (i.e. you're in the same inertial frame), you will observe it ticking in "normal" time, albeit delayed due to the distance. If the clock is moving relative to you, however, you will measure its ticks to be slightly slower.
You may be confusing general relativistic effects which are distance dependent (as gravity weakens the further away you get).
If you carry a clock in your rocket, you will (in the rocket) measure it to tick once a second. When you get back to Earth, you'll find that it's lagged behind a clock that was started at the same time but was left on Earth.
Maybe have a look at simple wiki too https://simple.m.wikipedia.org/wiki/Special_relativity though it doesn't actually derive the Lorentz transforms unfortunately.
Ignore the gravity bit for now, that's general relativity and it's more complicated to explain.
It's by Andy Matuschak and Michael Nielsen, and it is excellent. Have fun!
Entropy is usually poorly taught; there are really three entropies that get conflated. There's the statistical mechanics entropy, which is the math describing the random distribution of ideal particles. There's Shannon's entropy, which is for describing randomness in strings of characters. And there's classical entropy, which describes the fraction of irreversible/unrecoverable losses to heat as a system transfers potential energy to other kinds of energy on its way towards equilibrium with its surroundings or the "dead state" (which is a reference state of absolute entropy).
These are all named with the same word, and while they have some relation to each other, they are each different enough that there should be unique names for all three, IMO.
By chance, in the last few years I've started reading more and more comments debunking this absurd explanation. Not that I understand perfectly now, but at least I know I'm not crazy.
Both sexes have haploid gametes, which form a zygote when combined. I think this can steer your research when you look into what gametes contain and chromosome combination.
The "no other way..." was referring to me not having an intuitive explanation to offer about why an sin(x) and sin(2x) are orthogonal.
But of course anyone who's seen snow billowing off the back of a car knows that air doesn't just close up behind the object like a ziplock bag: it's messy and turbulent and gets all over your windows while you're tailgating.
If light is emitted at a constant wavelength independent of the stretching of the universe, doesn't that imply light is traveling through a higher spatial dimension, otherwise the emitter itself would be stretched with the universe and we'd never be able to observe differences in the speed of light? If I understand this paper, once light is emitted, it's "stuck" to space and will stretch along with it. But if the emitter wavelength stays constant doesn't that imply it's waving through a higher dimension?
When you produce sex cells, your body splits the code randomly. A cell might contain the first copy for hair color, but the second copy for eye color. This happens in both the sperm and the eggs.
Then when they combine you have a full set of genetic data again, but it's a random selection of 50% of your DNA.
The fun part is that the code itself determines which copy is dominant. Your offspring has a copy of your eye color data, and your wife's.
On top of that, the two copies combine to produce the outcome. Depending on the code, the dad's copy might dominate, or the mother's, or both of them can produce yet a third outcome.
Then we choose some settings and press GO and record whatever number pops up. We do this many times so we each have a nice frequency chart. Now, Bell proved that if you live in a local hidden variable universe, the correlations between these numbers are upper bounded, no matter how you choose settings on the boxes. Then he also gave a prescription for choosing the settings such that, if you live in a quantum universe, the correlations between these numbers will be higher than that upper bound.
The rest is mathematics, which cannot really be simplified without leaving the reader unsatisfied.
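Still, the headline numbers are easy to reproduce (this is the standard CHSH setup; the quantum correlation E(x,y) = -cos(x-y) for the singlet state is taken as given):

    import numpy as np

    # Local hidden variables bound |S| <= 2, where
    # S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
    def E(x, y):
        return -np.cos(x - y)   # singlet-state correlation at settings x, y

    # Bell-style prescription: settings spaced 45 degrees apart.
    a, a2, b, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))   # 2.828... = 2*sqrt(2), above the classical bound of 2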
What sets science apart from most other methods of seeking answers is its focus on disproof. Your goal as a scientist is to devise experiments that can disprove a claim about the natural world.
This misconception rears its head most prominently in discussions at the intersection between science and public policy. Climate change. How to handle a pandemic. Evolution. Abortion. But I've even talked to scientists themselves who from time to time get confused about what science can and can't do.
The problem with believing that science proves things is that it blinds its adherents to new evidence paving the way to better explanations. It also leads to the absurd conclusion that a scientific question can ever really be "settled."
It's used when discussing propulsive efficiency, as it's a proxy measurement for how much "work" each blade is doing. Because propeller/rotor blades are just high-aspect wings, if you have high disc loading your blades are at a high lift coefficient which means they'll be incurring lots of lift-induced drag which increases your power requirements.
Solidity in the same context refers to the amount of volume within an actuator disk that's occupied by actual solid material. If you have a 4-bladed rotor and you move to a 5-bladed rotor, all else equal, you've increased your solidity.
There are many, many equations, and as with most things in fluid mechanics you can get as deep into the weeds as you want. As a starting point, have a look at the wiki article for Blade Momentum Theory[0]
This comes close -- it shows the jittery thermal motion of this tiny machinery, instead of nice smooth glides.
Theoreticians choose very different mindsets about the same equations, which (they say) somehow gives them grounds to form various new hypotheses. As far as I know, neither approach has been very fruitful so far in terms of new science, so people try a multitude of others.
What I meant to say above is that I have much trouble using Copenhagen to understand Bell's experiment. MWI fits the bill here for me.
https://www.scientificamerican.com/article/how-was-avogadros...
When I took physics they basically said "at first scientists were disturbed by the fact that magnets imply that two objects are interacting without any physical contact, but then Faraday came along and said 'the magnets are actually connected by invisible magnetic field lines' and that resolved everything."
How does saying "but what if there's invisible lines connecting them" resolve anything? To be clear, I'm not objecting to any of the actual electromagnetic laws or using field lines to visualize magnetic fields. It's just that I don't get how invoking invisible lines actually explains anything about how objects are able to react without physical contact.
(Also, it is not lost on me that this question boils down to "fraking magnets, how do they work?")
Let's take a contrived example. I have a 4 engine high wing airplane with 2 bladed 55 inch props on each engine with 100kg force (1 kg-f is equal to 9.8 N) thrust per engine.
Now, I need to make that a low wing airplane so need to change to 22 inch blades with the same thrust and I don't want to change engines. So I want to add more prop blades. How many more blades do I need to add?
PS: Speed of sound is 343 m/s, diameter of a cell nucleus is ~ 0.000006m to give an idea.
I'm not totally sure what you mean by a higher dimension. The properties of the emitter (which is, e.g. a laser cavity) aren't affected by the gravitational wave because the emitter is a rigid body, which doesn't get stretched. (It's the same thing as described here: https://news.ycombinator.com/item?id=22990753 ) So it puts out light of a given frequency.
By contrast, LIGO is not a rigid body, because the mirrors at the ends of the arms hang freely, hence allowing gravitational waves to change the distance between them.
> What's baffling to me is everyone who has tried to explain the LIGO detector doesn't even realize this question exists. I've independently thought this question and when people start explaining LIGO to me, and I take the time to spell out the question, they realize they don't understand LIGO either.
Yup, it generally is the case in physics that over 95% of people who claim they can explain any given thing don't actually understand it! But the professionals are aware. I even know a LIGO guy who goes to popular talks armed with a pile of copies of the paper I linked.
I thought the music, acting, and triumph of humanity were pretty inspiring much like Star Trek can be despite the fact that most of the technology violates the laws of physics. You may have thought it was a terrible movie which is fine. I thought Star Wars Rogue One was one of the most boring films I've seen in the last decade, but a LOT of people loved that film.
"Air flows faster on top" is the Bernoulli explanation. The Bernoulli principle tells us that fast air means low pressure, and low pressure sucks the plane up.
The Newton explanation is the idea that the wing pushes the air down and, by reaction, the air pushes the plane up. Based on Newton's third law.
In reality, both are correct. The Bernoulli explanation is more specific and the Newton one is more generic. But if you want the whole picture, you need the Navier Stokes equations. Unfortunately, these are very hard to solve, so even engineers have to use simplified models.
I personally prefer the Newton explanation. It explains less, but the Bernoulli one is confusing and results in many misunderstandings. For example, that air takes the same time to follow the top side and bottom side of the wing, which is completely wrong.
The common depiction also tends to hide the fact that the trailing edge of the wing points at a downward angle, even though it is the most important part. Nice profiles make wings more efficient, but the real thing that makes planes fly is that angle, called the angle of attack.
Focusing on the profile rather than on the angle of attack leads to questions like "How can planes fly upside down?" (the answer is "by pointing the nose up", and that should be obvious). If you are just trying to understand how planes fly, forget about wing profile, it is just optimization.
1) 100kg is a measure of mass, not force. Thrust is a force, not a mass. But let's say you have 100N of thrust per engine.
2) Why would changing from a high-wing to a low wing monoplane require you to add prop blades?
Water vapor around the LCL starts condensing and turning from a gas into liquid cloud droplets. This process happens considerably faster once it begins for a variety of reasons, so once you can have cloud droplets, you get a ton of cloud droplets - not a gradual transition from water vapor to cloud. It's almost like a light switch.
Most air masses are relatively homogeneous anyway, so unless there are underlying processes causing things like undulatus asperatus, the cloud base will appear very, very flat over a large area.
So estimate 1 minor good random mutation per 10,000 population. Assume 1 major mutation per 1000 years, per 10,000 population.
Entropy is the amount of information it takes to describe a system. That is, how many bits does it take to "encode" all possible states of the system.
For example, say I had to communicate the result of 100 (fair) coin flips to you. This requires 100 bits of information, as each of the 2^100 possible 100-bit outcomes is equally likely.
If I were to complicate things by adding in a coin that was unfair, I would need fewer than 100 bits, as the outcomes of the unfair coin would not be equally distributed. In the extreme case where 1 of the 100 coins is completely unfair and always turns up heads, for example, then I only need to send 99 bits, as we both know the result of flipping the one unfair coin.
The shorthand of calling it a "measure of randomness" probably comes from the problem setup. For the 100 coin case, we could say (in my opinion, incorrectly) that flipping 100 fair coins is "more random" than flipping 99 fair coins with one bad penny that always comes up heads.
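A quick sketch of that arithmetic in Python (my own toy example, not from the post above): a fair coin carries 1 bit per flip, an always-heads coin carries 0, and independent flips just add.

```python
# Shannon entropy of n coin flips, fair vs. one fully biased coin.
import math

def entropy_bits(p_heads: float) -> float:
    """Entropy of a single coin flip in bits; 0 for a fully biased coin."""
    if p_heads in (0.0, 1.0):
        return 0.0
    p, q = p_heads, 1.0 - p_heads
    return -(p * math.log2(p) + q * math.log2(q))

# 100 fair coins: 100 * 1 bit = 100 bits to encode all outcomes.
print(100 * entropy_bits(0.5))                      # 100.0
# 99 fair coins plus one always-heads coin: only 99 bits needed.
print(99 * entropy_bits(0.5) + entropy_bits(1.0))   # 99.0
```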
Shannon's original paper is extremely accessible and I encourage everyone to read it [1]. If you'll permit self-promotion, I made a condensed blog post about the derivations that you can also read, though it's really Shannon's paper without most of the text [2].
[1] http://people.math.harvard.edu/~ctm/home/text/others/shannon...
Spin is not an object really 'spinning', but in fact, neither is angular momentum, and momentum isn't really an object 'moving'. Let's be clear about what momentum is in quantum mechanics: there is, say, an electron field, and along a particular path it oscillates as `e^i(px - Et)/hbar`. The momentum determines how rapidly the wave oscillates as you move in space; the energy determines how rapidly it oscillates in time. (The E and p operators pull the E and p scalars out of this, and the Schrodinger equation says that E^2 - p^2 = m^2; sorta -- we take the positive solution of a low-momentum expansion.)
Anyway, the point is, momentum means "the wave function oscillates as you change the position". Angular momentum means "the wave function oscillates as you change the angle". The base orbital angular momentum state looks like `e^i m φ`, and the operator that extracts `m` is `-i d_φ`. Etc.
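To make that concrete, here is a tiny sympy check (my own sketch, using the standard sign convention `e^i(px - Et)/hbar`): the momentum and energy operators applied to a plane wave pull out the scalars p and E.

```python
import sympy as sp

x, t, p, E, hbar = sp.symbols('x t p E hbar', positive=True)
psi = sp.exp(sp.I * (p * x - E * t) / hbar)   # plane wave e^{i(px - Et)/hbar}

# -i*hbar*d/dx extracts p; i*hbar*d/dt extracts E
print(sp.simplify(-sp.I * hbar * sp.diff(psi, x) / psi))  # p
print(sp.simplify(sp.I * hbar * sp.diff(psi, t) / psi))   # E
```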
The other thing about wave functions is that they are continuous everywhere. The presence of a particle is something like "having a knot" in the field -- there is a region where there must be some net object, because if you integrate around the region you see non-zero net flux. That kind of thing. So to have "intrinsic angular momentum of 1/2" means that, if you integrate around a region where a fermion is, you'll see a net rotation of the wave function by half a phase.
Now, that seems nonsensical, because if you integrate around a point, you should get back to the value you started at. And in fact, you do, but the two cases are distinguishable: the reconciliation is related to the fact that SO(3) is not simply connected. If you produce a path of rotations that takes every vector back to where it started (such as XYZXYZ, where X rotates around the X axis), a path that performs one full loop cannot be deformed to the identity path, but one that performs two loops can -- which makes these states physically different. So if you gave me a wave function, I could bucketize all of the points in, say, the 'z' direction into ones that are in the 'identity' class (relative to a point of my choosing) vs the 'anti-identity' class. These have opposite spins.
I am still working on tightening up the model, and I haven't quite figured out how this causes the magnetic field to send such a particle in a different direction. But the rest feels close, to me, and doesn't rely on any hand-waving statements like "because it seems to work this way".
The basic gist I get is that quantum computing, for a very specific set of problems, like optimization, lets you search the space more efficiently. With quantum mechanics you can associate computations with positive or negative probability amplitudes. With the right design, you cause multiple paths to incorrect answers to have opposite amplitudes, so that interference causes them to cancel out and not actually happen to begin with. That's just my reading of the comic over and over, though.
The difference between thermoset and thermoplastic polymers has to do with irreversible chemical bonding during curing. With thermosets, you have chemical bonds between molecules preventing deformation, whereas with thermoplastics, you just have a viscous friction between molecules that varies with temperature. If you heat up a thermoplastic, that viscous friction goes away and the plastic can be remolded.
A guy was walking along the beach and found a lamp. Of course he rubs the lamp, and sure enough a genie appears.
Genie: master of the lamp I can grant you a wish, you may wish for anything.
Guy: Wait, isn't it supposed to be 3 wishes?
Genie: One or nothing, and do not wish for more wishes.
Guy thinks for a while ....
Guy: You know, I have pretty much everything I need. But I have always wanted to travel to Hawaii. But I get sea sick and am afraid to fly.
Genie: Very well, I will take you there.
Guy: No no, if you take me there, I won't be able to come back. And what about next year? Since I only get one wish, I want a bridge built to Hawaii.
Genie: That does not make any sense. Please make a different wish. One that does not involve so much construction.
Guy: Hmmm... you know what, can you explain women?
Genie: So do you want the bridge to be a suspension bridge or truss? and how many lanes ....
Hawking radiation does not discriminate between matter and anti-matter. Either form of any type of particle can be emitted.
It seems conceptually simple, except the requirement that the energy of a photon exactly match some required energy in order to be absorbed seems really unlikely, since photon energy is not a discrete quantity, and varies according to Doppler effects and other things.
It seems like the vast majority of photons would just fly through the universe without interacting with anything, unless there are other ways for photons to interact with matter besides being absorbed. (If there are other ways, they are seemingly never mentioned as a potential alternative fate for the photon).
So, if we don't have the notion of fields, then we have the puzzle of how object A knows about remote object B. Like how does one object know about the motions of literally every other object in the Universe? Perplexing.
Once you come up with the idea of a field, okay you have to at some level accept that there are fields that permeate all of space. But what this intellectual cost buys you is that now an object only has to sense the field local to it to respond to all objects in the universe.
Think of objects bobbing on the ocean. One way to conceptualize that is that any object anywhere could cause this object here to bob in some way. How does this object know about all the other objects? Instead we could say that there is ocean everywhere. Locally, objects bobbing put ripples into the ocean. Locally, ripples cause objects to bob. Each object no longer needs to "know about" every other object it just needs to react to the ripples at its location, and the ripples get sent out from its location.
Does this help?
* Galois Theory - I have a basic understanding of abstract algebra but for some reason Galois theory confounds me, especially as it relates to the inability of radical solutions to fifth and higher degree polynomials
* "State-of-the-art" Quantum Entanglement experiments and their purported success in closing all loopholes
* Babai's proof on graph isomorphism being (almost/effectively) in P - specifically how it might relate to other areas of group actions etc.
* Low density parity checks and other algorithms for reaching the Shannon entropy limit for communication over noisy channels
* Hash functions and their success as one-way(ish)/trapdoor(ish) functions - is SHA-2 believed to be secure because a lot of people threw stuff at the wall to see what stuck, or is there a theoretical underpinning that allows people to design these hashes with some degree of certainty that they are irreversible?
Imagine two circles in 2D that repel each other more strongly the closer you bring them together, like magnets do. In 2D it would look like they're interacting at a distance, but maybe in 3D they're two slightly flexible cylinders that are actually touching at the ends, just not in the 2D plane you're observing. The interaction is "properly physical" in 3D but in the 2D plane it seems magical.
That's a way that I imagine it in 2D vs 3D, so this might be similar in 3D vs ND, where N > 3. Of course this is all baseless speculation, but it seems kinda plausible in my head.
Edit: bad drawing of what I meant: https://imgur.com/362tcHg
Everything is being jostled around randomly. The molecules don't have brains or seeker warheads. They can't "decide" to home in on a target.
The only mechanisms for guidance are: diffusion due to concentration gradients, movement of charged molecules due to electric fields, and molecules actually grabbing other molecules.
It's all probabilities. This conformation makes it more likely that this thing will stick to this other thing. You may have heard that genes can be turned on or off. How? DNA is literally wound on molecular spools in your cell nuclei. When the DNA is loosely wound other molecules can bump into it and transcribe it -- the gene is ON. When the DNA is tightly spooled, other molecules can't get in there and the gene is OFF for transcription. There's no binary switch, just likelihoods.
Everything is probabilistic, but the probabilities have been tuned by evolution through natural selection to deliver a system that works well enough.
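As a toy illustration of that point (my own sketch, not the commenter's): a "molecule" doing a pure random walk, with no guidance at all, still reaches a small target eventually. The target location, step size, and tolerance below are arbitrary.

```python
import random

def steps_to_target(target=(10.0, 10.0), tolerance=2.0, max_steps=10_000_000):
    """Random 2D jiggling until we bump into the target region."""
    x = y = 0.0
    for step in range(1, max_steps + 1):
        x += random.gauss(0, 1)   # random thermal jostling in x
        y += random.gauss(0, 1)   # ... and in y
        if (x - target[0]) ** 2 + (y - target[1]) ** 2 < tolerance ** 2:
            return step
    return None

print(steps_to_target())  # varies wildly run to run; it's all probabilities
```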
But to get to the heart of your question: You want to reduce the prop diameter (ground clearance perhaps? The engineer in me asks why you don't just make the landing gear taller) and not change the engine. You don't actually have to add blades (maybe). You could also just make the existing blades fatter, or change the airfoil, or increase the propeller RPM. Lots of ways to attack that.
But, playing along that adding blades is the only way:
1) Take your existing high-wing plane and calculate power required for all phases of flight: Take-off, climb, cruise, etc.
2) Take your new low-wing plane with its smaller prop diameter and work backwards to ensure you can actually meet the power requirements to stay aloft through your envelope. Adding blades reduces efficiency because the blades' wakes interfere with each other so you'll have to dig into some experimental data based on the prop of your choice. Much depends on blade pitch and washout.
Very likely you will need to increase the RPM (if your engine can deliver enough power) or change engines to a more powerful model because your props are now less efficient.
Such is the nature of aircraft design - almost nothing can be changed without affecting something else.
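To put rough numbers on why the smaller prop hurts (my own back-of-envelope, using ideal actuator-disk momentum theory, which ignores blade count and all real-world losses): the ideal induced power for static thrust is P = sqrt(T^3 / (2 * rho * A)), so shrinking the disk raises the power needed for the same thrust no matter how many blades you add.

```python
import math

RHO = 1.225                 # sea-level air density, kg/m^3
T = 100 * 9.8               # 100 kgf of thrust per engine, in newtons

def ideal_static_power(diameter_m: float, thrust_n: float) -> float:
    """Ideal induced power to produce static thrust with a given disk."""
    area = math.pi * (diameter_m / 2) ** 2      # actuator disk area
    return math.sqrt(thrust_n ** 3 / (2 * RHO * area))

for inches in (55, 22):
    d = inches * 0.0254
    print(f'{inches} inch prop: ~{ideal_static_power(d, T) / 1000:.1f} kW ideal')
# The 22" disk needs ~2.5x the power (the ratio is just d1/d2),
# before blade-interference losses make it even worse.
```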
How is it possible that the thread is up 5 hours and ctrl-f consciousness returns nothing?
However, even with understanding how a Quantum Computer works at its most basic level I still have difficulty understanding the more useful Quantum Algorithms:
To get an intuitive idea of why this necessarily results in symmetrical time dilation, imagine two people walking along non-parallel paths at a constant rate on a 2D surface. From either person's point of view, the other person has a one-dimensional relative velocity, either towards or away from the observer, and that relative velocity depends on the angle between their paths. One-dimensional acceleration is just rotation in the 2D space. Now, what happens if you project one person's path onto the other person's 2-velocity? The projection will be shorter! And remember, the direction of your velocity is the direction of forward time from your perspective. So, from your perspective, the other person has traveled less distance along the time direction than you have, because some of their constant-velocity path was used up traveling in space instead. I.e., from your perspective, time has slowed down for them. But, projecting your path onto their velocity vector also results in a shorter path--so the effect is 100% symmetrical!
Now, this analogy fails in two ways, because the real universe doesn't have any meta-time that you can use to observe where the other guy is "right now", and because spacetime rotations are hyperbolic rather than Euclidean, but those two sources of error happen to cancel out nicely and you get the correct result that moving objects appear to move through time slower.
1. The magical orthogonal basis functions: complex sinusoids. Shifting of a time signal just multiplies the Fourier counterpart by a new phase (relative to its represented frequency). Thus transforming to the Fourier basis enables an alternate method of implementing a lot of linear operations (like convolution, i.e. filtering).
2. The magic of the fast implementation of the Discrete Fourier Transform (DFT) as the Fast Fourier Transform (FFT) makes the above alternate method faster. It can be most easily understood by a programmer as a clever reuse of intermediate results from inner loops. The FFT is O(N log N); a direct DFT would be O(N^2).
A mathy demonstration of this at https://sourceforge.net/projects/kissfft/
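A small numpy sketch of both points (mine, not part of the kissfft link): the O(N^2) double-loop DFT agrees with the FFT, and a circular time shift only multiplies each Fourier coefficient by a phase.

```python
import numpy as np

N = 64
x = np.random.default_rng(0).standard_normal(N)

# Direct DFT: X[k] = sum_n x[n] * e^{-2*pi*i*k*n/N}  (O(N^2) work)
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)
X_direct = W @ x

print(np.allclose(X_direct, np.fft.fft(x)))   # True

# Circular shift by s <=> multiply spectrum by e^{-2*pi*i*k*s/N}
s = 5
X_shifted = np.fft.fft(np.roll(x, s))
print(np.allclose(X_shifted, np.fft.fft(x) * np.exp(-2j * np.pi * n * s / N)))  # True
```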
I had a fling with psychedelics in my teens, and everything was great until the one time it wasn't. I was taking psychedelics pretty much every weekend, and by my count have tried over a dozen of them.
Had an experience with LSD which completely shook me to my core and gave me such severe PTSD and trauma that every night I started to have massive panic attacks and needed medical help. My entire worldview and perception of reality was shattered, I wasn't able to "anchor" myself anymore and it all felt like a sham. I was completely dissociated. I also got HPPD: to this day, everything has a sharpened oil-painting type texture to it that increases based on my anxiety level, and I'm sensitive to visual + aural stimuli (loud, brightly-colored places are unpleasant). If I get too anxious, I start to dissociate.
It took ~2 years for the PTSD to subside for the most part, but still if I am under a lot of stress I am liable to have a panic attack and get flashbacks and need to go find somewhere quiet to sit somewhere alone to try to work through it.
LSD being the particular substance has nothing to do with it, in my opinion. I was young, dumb, reckless, and played with fire then got burned. It could have happened with any of the other dozen psychedelics I took, but it just so happened to be LSD the one time that it did.
But I want to add, that while giving me the most nightmarish, traumatizing experience of my life, the best/most positively-profound experience has also been on the same substance. I grew up in a pretty abusive household and didn't do well forming relationships growing up, and had a lot of anger and resentment in my worldview. After taking psychedelics (LSD, 2C-B, Shrooms) and MDMA with the right group of people a few times, my entire perspective shifted. For the first time in my life, it felt like I understand how it felt to be loved, and what "love" was, and how we're "all in this together" so we may as well be good to each other while we're here.
It's been a long time since I've touched any of that stuff and I'm not sure I ever will again, but I don't think it's inherently bad or good. Psychedelics are like knives, they're neutral - can be used as a tool or cut the hell out of you if you're reckless.
---
Footnote: For context, this was probably due to life circumstances/psyche at the time. I was in a relationship with a pretty toxic partner, and my mental state wasn't the greatest. In hindsight, it seems like I was almost begging for a "slap in the face" if you will.
It should be easy to see that a non-accelerating object (or person) will trace out a straight worldline. If you ever change your velocity, though, either through smooth acceleration or instantaneous rotation of your velocity vector, you will trace out some non-straight curve in spacetime. If you leave your friend behind and then, at some later time, meet back up again, if you did not undergo exactly the same amount of acceleration throughout your journeys (i.e., because one of you stayed behind and hardly accelerated at all, tracing out a boring straight line path), then you will have different world-line lengths (different "path integrals") between the starting and ending points, and thus will have experienced different amounts of subjective time.
Now, in a Euclidean spacetime, the traveling twin would end up older, because a straight line is the shortest distance between two points. But our spacetime is not Euclidean--it is a Minkowski space, in which acceleration is equivalent to a hyperbolic rather than Euclidean rotation of your velocity vector, so it turns out that straight line is actually the longest distance between any two points, and the twin who leaves and comes back will have a shorter worldline, and thus will have aged less.
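For a concrete feel (my own numerical sketch, in units where c = 1): integrate the proper time d(tau) = sqrt(1 - v^2) dt along the two twins' worldlines between the same start and end events. The speeds and durations below are made up.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 100_001)      # coordinate time, "years"
dt = t[1] - t[0]

v_home = np.zeros_like(t)                 # stays put: straight worldline
v_trav = np.where(t < 5.0, 0.8, -0.8)     # out at 0.8c, then back at 0.8c

tau_home = np.sum(np.sqrt(1 - v_home ** 2)) * dt   # ~10.0 years
tau_trav = np.sum(np.sqrt(1 - v_trav ** 2)) * dt   # ~6.0 years
print(tau_home, tau_trav)   # the straight worldline is the *longest* path
```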
For some intuition, consider music, especially on a violin. A Fourier series applies to a periodic function (wave), and represents the whole wave as sine waves that fit the one period exactly. So you get sine waves at 1, 2, 3, ... times the fundamental frequency of the period. In music, these waves are called overtones.
Playing a violin, the overtones are fully real and even important! E.g., get a tuning fork and tune the A string (second from the right as the violinist sees them) to 440 cycles per second (440 Hertz, 440 Hz). The D string, the next one to the left, is supposed to have a frequency 2/3rds that of the A string. So, bow the two strings together and listen for the pitch 880 Hz, that is, 3 times the desired frequency of the D string and twice that of the A string. You are listening to the second overtone of the D string and the first overtone of the A string; you are hearing the third Fourier series term of the D string and the second Fourier series term of the A string. Adjust the tuning peg of the D string until you don't hear beats. If the D string's overtone is at, say, 881 Hz instead of 880, you will get 1 beat a second -- so this is an accurate method of tuning. Similarly for tuning the E string from the A string and the G string from the D string -- on a violin, the frequencies of adjacent strings are in the ratio 3:2, that is, a perfect fifth. That's how violinists tune their violins -- which is needed often, since violins are just wood and glue and less stable than, say, the cast iron frame of a piano.
For one more, hold a finger lightly against a string at 1/2 the length of the string and you hear a note one octave (twice the frequency) higher. That's often done in music, e.g., playing harmonics. And it's a good way to get the left hand where it belongs at the start of the famous Bach Preludio in E major, which starts on the E halfway up the E string. Touch lightly one third of the way up the string and you get three times the fundamental frequency, sometimes done in music to give a special tone color. Net, Fourier series, harmonics, and overtones are everyday realities for violinists.
E.g., on a piano, hold down a key, then play and release the key one octave lower, and notice that the strings of the held key are still vibrating. They were stimulated by the first overtone of the key that was struck and released.
The Fourier integral applies to functions on the whole real line. Very careful math is in Rudin, Real and Complex Analysis.
Yes, Fourier series and integrals can be looked at as all about perpendicular projections of rank 1 as emphasized in Halmos, Finite Dimensional Vector Spaces, written in 1942 when Halmos was an assistant to John von Neumann at the Institute for Advanced Study. That Halmos book is a finite dimensional (linear algebra) introduction to Hilbert space apparently at least partly due to von Neumann. So, right, Fourier theory can be done in Hilbert space.
Fourier integrals and series are very close both intuitively and mathematically, one often an approximation to the other. E.g., if you multiply in one domain (time or frequency), then you convolve in the other (frequency or time). E.g., take a function on the whole real line, call it a box, that is 0 everywhere but 1 on, say, [-1,1]. The Fourier transform of the box is a wave, largest near 0 and decaying away from it. A convolution is just a moving weighted average, usually a smoothing. Then, given a function on the whole real line, regard that line as the time domain and multiply the function by the box. Now you can regard the result as one period, under the box, of a periodic function to which you can apply Fourier series. And in the frequency domain, the Fourier transform of the product is the Fourier transform of the function smoothed by the Fourier transform of the box, which then approximates the Fourier series coefficients of a periodic function whose one period is what sits under the box. Nice. That is partly why the fast Fourier transform algorithm is presented as applying both to Fourier series and the Fourier transform.
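A quick numpy check of that multiply-convolve duality (my own sketch, stated for the discrete case): pointwise multiplication of spectra equals circular convolution of the signals.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(32)
h = rng.standard_normal(32)
N = len(x)

# Direct circular convolution: y[n] = sum_m x[m] * h[(n - m) mod N]
y_direct = np.array([sum(x[m] * h[(n - m) % N] for m in range(N))
                     for n in range(N)])

# Multiply in the frequency domain instead
y_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real
print(np.allclose(y_direct, y_fft))   # True
```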
Mostly Fourier theory is done with an L^2, that is, finite square integral, assumption, but somewhere in my grad school notes I have some of the theory with just an L^1 assumption. Nice notes!
Essentially the Fourier transform of a Gaussian bell curve is a Gaussian bell curve -- if the curve is wide in one domain, then it is narrow in the other.
The uncertainty principle in quantum mechanics is essentially a theorem of Fourier theory (its usual proof leans on Plancherel's theorem): a function and its Fourier transform cannot both be sharply concentrated.
You can do a lot with Fourier theory just with little pictures, such as for that box -- you can get a lot of intuition for what is actually correct.
I got all wound up with this Fourier stuff when working on US Navy sonar signal processing.
Then at one point I moved on to power spectral analysis of wave forms, signals, sample paths of stochastic processes, as in
Blackman and Tukey, The Measurement of Power Spectra: From the Point of View of Communications Engineering.
Can get more on the relevant wave forms, signals, stochastic processes from
Athanasios Papoulis, Probability, Random Variables, and Stochastic Processes, ISBN 07-048448-1.
with more on the math of the relevant stochastic processes in a chapter of
J. L. Doob, Stochastic Processes.
Doob was long a leader in stochastic processes in the US and the professor of Halmos.
At one time a hot area for applications of Fourier theory and the fast Fourier transform was looking for oil, that is, mapping underground layers, as in
Enders A. Robinson, Multichannel Time Series Analysis with Digital Computer Programs.
Antenna theory also depends deeply on Fourier theory, which is how you get beam forming, etc.
See also
Ron Bracewell, The Fourier Transform and its Applications.
Of course, one application is to holography. So, that's why you can cut a hologram in half and still get the whole image, except with less resolution: the cutting in half is like applying that box, and the resulting Fourier transform is just the same as before except smoothed some by the Fourier transform of the box.
As I recall, in
David R. Brillinger, Time Series Analysis: Data Analysis and Theory, Expanded Edition, ISBN 0-8162-1150-7,
every time-invariant linear system (maybe with some meager additional assumptions) has sine waves as eigenvectors. That is, feed in a sine wave and you will get out a sine wave with the same frequency, but maybe with amplitude and phase adjusted.
So, in a concert hall, the orchestra plays, and up in the cheap seats what you hear is the wave filtered by a convolution, that is, with the amplitudes and phases of the Fourier transform of the signal adjusted by the characteristics of the concert hall.
In particular, the usual audio tone controls are essentially just such adjustments of Fourier transform amplitudes and phases.
Since there are a lot of systems that are time-invariant and linear, or nearly so, there is no shortage of applications of Fourier theory.
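Here is a small numpy check of that eigenvector claim (my own sketch): push a sine wave through a simple LTI system, a moving-average filter, and fit the output against sine and cosine at the input frequency. The residual is essentially zero, i.e., only the amplitude and phase changed.

```python
import numpy as np

fs, f = 1000, 50                         # sample rate and tone frequency, Hz
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * f * t)

kernel = np.ones(9) / 9                  # the LTI system: a 9-tap moving average
y = np.convolve(x, kernel, mode='same')

# Fit y = A*sin + B*cos at the input frequency; a tiny residual means the
# output is still a 50 Hz sinusoid, just rescaled and phase-shifted.
basis = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
resid = y - basis @ coef
print(coef, np.max(np.abs(resid[50:-50])))   # residual ~0 away from the edges
```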
That's from 10 years ago, so you might be able to find video of a more recent version; try to find a year when Wagner taught, he's great.
Black holes were sometimes called "frozen stars", because time slows down at the event horizon to a stop. If light could escape from black holes and somehow the material continued to emit/reflect light despite time being stopped for it, then as outside observers, what we would see is the star at the moment it collapsed to the size of its event horizon. Though if you entered the event horizon, from your point of view you would see time progressing again inside.
An object falling into a black hole would often go through the process of "spaghettification", being pulled apart because the gravity from the black hole on the near side of the object would be immensely stronger than the gravity on the far side of the object. Though it's possible for a black hole to have an event horizon large enough that the point where spaghettification happens lies beyond it; I think Interstellar had some line implying that about their black hole.
>I never really understood what happened really when the guy fell inside it in Interstellar and how come he started seeing all those photos. I just accepted it as Hollywood bs.
In the movie, it was supposed to be that the same aliens (well, future humans) that had constructed the original wormhole had also connected another wormhole inside the blackhole. On the inside of the second wormhole, they brought him into a constructed environment to use him as a tool to encode a message to the past to himself that would be able to get him to arrive here in the first place. I think the vague idea of the future-humans was that they used the spacetime-warping nature of the black hole to somehow transcend space and time in our future, but as a nearly one-way trip: they could only interact with our universe / the past through warping gravity (vibrations or making wormholes), and inside the black hole, they had more freedom to do this, and they used that to make the interface for the main character. (Story-wise, I think it seems like good contrivance to allow the mysterious benefactors that ability to give some help, but without being able to do everything and still allow the relatable modern human characters to do all the interesting detail work.)
So unless you have a good reason to do something else, and the budget to pay experienced people to bash their heads against it, you should stick to an implementation that has had this effort expended on it.
If you want an intro about common problems in custom cryptosystems, go look at cryptopals or something, but don't get too cocky that you know everything.
I think it is a bit too reductive to say they're neutral, just yet, but I am willing to say they can be used responsibly if the right information actually existed - but like with any science I am open to changing that if the conclusions were found to be different. Again let's just stick with acid instead of all psychedelics.
When talking about light or radio waves, you're probably talking about Classical Electrodynamics, which does not include the notion of a photon. In Classical EM, light is an EM wave in the sense that if you look at the electric field E or magnetic field B (of a plane wave) at a fixed point in space it is varying sinusoidally. So the waviness is in the amplitude of the E and B fields.
Once you talk about photons, you're in the realm of Quantum Mechanics (QM), and yes things are harder to understand.
It's actually all just fields according to the Standard Model (particle physics), a quantum field theory (QFT).
In QFT there's a field for each fundamental particle that permeates the whole universe. E.g. an electron field, a photon field, etc. Disturbances in these fields are what one would call particles in non-relativistic QM.
So Classical -> QM (quantum system, classical observer/apparatus) -> QFT (quantum everything)
In classical EM, light is a wave. In QM, light is particles. In QFT particles are just disturbances in the all-pervading fields.
Binney has said that QM is just measurement for grownups, or some such. What is a measurement? It's when the system you're observing becomes entangled with the measuring device. We don't know the exact state of every atom in our measuring device, but these could all perturb the system we're measuring. So QM is a hack where you treat the system as quantum but the observer/measuring device as classical which is why you need this confusing wave-function collapse. It was a conscious choice in the development of the theory. This last bit might give some insight into why trying to sense the photon at one of the double-slits ruins the interference pattern.
Flat hand, you feel a pressure at the front of your hand. At the back, you should notice it is a bit dry.
The pressure at the front is dynamic pressure. The gas piles up as your hand plows into it at speed. The pressure you feel is the mass of air you're picking up and carrying with you. The dryness at the back (you don't feel it per se, but you can notice it) is the resulting area of low pressure created by you plowing through the air. This is drag.
Now. Tilt your hand in the stream, and up your hand will go! The ways you can break this down/visualize it are varied, but in reality are all manifestations of the same phenomena.
Newtonian/Conservation of Energy: each particle of air impacting the bottom of your tilted hand is +1, each impacting the top is -1. The +1s and -1s don't neutralize, so up you go.
The vacuum visualization: imagine a density visualization overlaid on the situation. There's a vacuum bubble over the top of your the hand. Nature hates a vacuum, so everything tries to fill it. The end result of that filling, is that air particles that would otherwise be slamming into the top of your hand get "sucked" into the bubble instead. This is important, because without this understanding, you can't account for things like dumping energy into the flow stream via a spinning shaft or the infamous UFO X-plane, where all the engine power was devoted to keeping air flowing faster over the top surface, allowing the darn thing to get lifted by the relatively unaccelerated air beneath even at 0 velocity of the machine relative to the environment. The key to all lift is making that asymmetry in airflow.
Symmetric airfoils can create lift at Angle of Attack, because while they are symmetric at 0 degrees, they aren't at angles offset from dead on.
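To put rough numbers on the angle-of-attack point (my own sketch; thin-airfoil theory is an idealization that ignores stall and 3D effects): a symmetric airfoil's lift coefficient is approximately Cl = 2*pi*alpha, so lift is zero at zero angle and flips sign when inverted. The speed and wing area below are invented small-plane values.

```python
import math

def lift_newtons(alpha_deg, v_ms, wing_area_m2, rho=1.225):
    """Lift from thin-airfoil theory: L = 0.5*rho*v^2*S*Cl, Cl = 2*pi*alpha."""
    cl = 2 * math.pi * math.radians(alpha_deg)   # valid only for small angles
    return 0.5 * rho * v_ms ** 2 * wing_area_m2 * cl

print(lift_newtons(5, 60, 16))    # ~19 kN up at +5 degrees
print(lift_newtons(-5, 60, 16))   # same magnitude, pointing down
```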
There are also some weird degeneracies that you can take advantage of, like using a spinning cylinder and flat strip of material just barely offset from it to create lift with near zero relative forward speed of the apparatus to the surrounding space. (This is a function of viscosity, and the energy of the spinning rod is basically picking up the fluid and accelerating it, it separates from the cylinder and follows the strip of material creating a pressure differential, ergo lift).
Then there is the whole bit about vortex circulation etc.; the main thing to remember, though, is that air that is trying to fill a void created by an object moving through the air is too busy doing that to neutralize the energy gained by air transferring energy to the bottom of the lifting surface. Ergo, lift. Further, the more useful "lift" you make, the more "drag" you'll create as well, because in order to maintain that vacuum you're coaxing all that air on the topside of the lifting surface to head into, you have to account for the energy expended in 'picking up and carrying' that air/fluid with you.
Fluid dynamics is weird, complicated, and seems like black magic, but at the end of the day it's all about what you convince the fluid to do instead of smacking into you.
There are gobs of seriously bloody weird equations all around it, but they are mostly useless in terms of being able to visualize what is going on.
Imagining a bubble sucking the lifting surface upward, and the airflow on the bottom pushing the lifting surface upward like a stone skipping on water on the other hand? Gets you good mileage on being able to imagine things.
The vacuum visualization is even more relevant at supersonic speeds, as at that point your "flight regime" becomes "exotic chemistry occurring in a compressed flow", with an ever increasing column of air getting picked up and carried along with you as you rip a gigantic hole in the atmosphere; aircraft operation turns into a balancing act between skipping off the atmosphere correctly and not becoming part of the exotic chemistry you're causing.
Once you get the rudiments down, though, everything becomes averageable vectors, which makes stuff like KSP with FAR a fun thing to mess with.
The reason some people regard Faraday's original explanation of the eponymous law (it is worth noting that at the time it was widely regarded as inadequate and handwavy) as illuminating is because Faraday visualized his "lines of force" as literal chains of polarized particles in a dielectric medium, thereby providing a seemingly mechanistic local explanation of the observed phenomena. Not much of this mindset survived Maxwell's theoretical program and it has very little to do with how we regard magnetism today. Instead, the unification of electricity and magnetism naturally arises from special relativity, whereas the microscopic basis of magnetism requires quantum mechanics. There isn't really any place for naive contact mechanics in the modern picture of physics, so in that sense I would regard Faraday's view as misleading.
Finally, I can't end any "explanation" of magnetism without linking the famous Feynman interview snippet [1] where he's specifically asked about magnetism. It doesn't answer your question directly, but it's worth watching all the more because of it.
"lasting permanent changes, obviously"
"I’ve personally seen several people experience total amnesia after tripping on high doses." No further information.
"Not lasting permanent changes"
what.
Names, sources, medical records, news reports, court cases, there has got to be something out there!
In the context of Covid19, I see so many people wearing PPE but failing to act as though they understand that the actual goal is to prevent this tiny virion dust from entering your orifices. Like wearing gloves and a mask, but then picking up an unclean item in a store and using the now-unclean gloves to adjust the mask, making it unclean too.
People seem to think of things as having essences or talismanic effects. Like gloves give you +2 against covid and a mask gives you +5 when it's really all about preventing those virus things from bumping into your cell things.
I had done it probably ~20-25 times by that point, along with a bunch of other stuff.
LSD
Mushrooms
2C-B
2C-C
2C-I
DMT
4-AcO-DMT
5-MeO-DMT
5-MeO-MiPT
DOM
There might be some others I've forgotten, it's been a long time.
> Again let's just stick with acid instead of all psychedelics.
What you won't find in academia or textbooks is that, at a high enough dose, all psychedelics feel the same. You reach a point where it's indistinguishable and the unique properties vanish. It's hard to describe if you don't have experience with a bunch of them, but there's this "peak psychedelic state" where they all sort of converge, which I can only assume is the result of your serotonin receptors getting completely bombed/saturated.
Personally, I was much more of a fan of phenethylamine psychedelics (particularly the 2C series), they're more clear-headed and "light"/enjoyable. The time dilation from psychedelics makes the 12-16 hours from LSD feel like days, and by the end of it, generally the last 4-6 hours you just want to be finished with it already.
It's really difficult to make a blanket statement like "can be used responsibly" about psychedelics, because it's a dice roll. No matter how cautious you are, there's always the possibility that this time, things go sideways. Most people (when I was in that scene as a teen) couldn't really empathize after my bad trip because they'd never had one; it's a rare occurrence. Maybe I was psychologically predisposed, who knows.
But I do think that people stand to gain a lot from having a psychedelic experience in their life, and from having an experience taking MDMA and talking with someone they love.
On one hand you have anti-drug people, usually backed by the authorities. Listen to them and all drugs will make your body rot, give you hallucinations like datura, and for some reason cause complete addiction after a single dose.
Drug users, on the other hand, will tell you that it is not as bad as alcohol/tobacco/coffee/..., that concerns are unfounded, that the police are the only risk, etc...
The truth is almost impossible to find. Even peer reviewed research is lacking. I guess there are several reasons for that. Availability of controlled substances. Ethical concerns regarding experimentation. Issues with neutrality.
Now from what I gathered about LSD (and psychedelics in general): these are very random. If you take a reasonable dose, you are most likely going to have a nice, fun trip and nothing more. But it can also fuck you up for years, or maybe bring significant improvement in your life. High doses increase the chance of extreme effects and nasty bad trips, but it shouldn't kill you unless you are dealing with industrial quantities. The substance itself is not addictive, but the social context may be. The big problem is that there is no way to tell how it will go for you. There are ways to improve your chances, but it will always be random.
As for fake LSD, there are cheap reagent tests for that. They are not 100% reliable but that's better than nothing. You can also send your sample anonymously to a lab that will do a much more accurate GC/MS analysis for you.
It’s an animated series that takes place inside the human body. I’ve been meaning to watch it myself. It’s supposed to be pretty accurate.
There are attempts to rigorously define it. I'm currently reading this paper, but not really convinced: https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...
The singularity is a sign that General Relativity breaks down -- an artifact of the theory, not necessarily a physical thing.
1) news articles/lay press - basically terrible and typically get things wrong
2) scientific lay press (scientific american, discover, science news) - get things right, but generally no data/citations or nuance
3) journal summaries - get things right, with citations and data for everything. Good summary of the latest scientific thought on a topic. Tend to push a point of view, which generally will be right, but that educated people can debate. Don't always show the data, but at least refer to it. These help you get up to speed with the primary experiments that were used to establish current thinking.
4) first source articles - typically make claims too broad for the actual results. But have all the data. Oftentimes the claims don't follow from the data at all. Generally you have to work in the field to understand the strengths and weaknesses of the methods, and you can't just take the conclusions at face value.
As a PhD student, I used #3 a lot to get centered on a space. To understand 4, I typically had to learn directly from my research advisor or other grad students that specialized in an area.
My point here is that you can find these summary articles in journals (microbiology, immunology, virology, etc.). They are published infrequently so can be hard to find, but they exist and you should look for them.
I also wonder why we cannot reproduce evolution's effectiveness computationally. Genetic algorithms and the like are not very good, nowhere near capable of matching biological systems, even though we can match evolution's timescales and populations with our computing power today.
As far as I can tell, we have absolutely no idea why and how evolution works, let alone works so well.
To make things more specific, those labs had uncertainty budgets with something like 20 terms for the things they measured. Each of those terms had an associated probability distribution, etc. They had uncertainty budgets for all the methods they used, and some of those were probably dated, done by someone else, etc. Who checks that? Is the check rigorous enough? Are some assumptions made that don't hold up to scrutiny?
So it is actually very easy for error to creep in, I would say actually very likely.
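A minimal sketch of how such a budget is typically combined (my own invented numbers; I'm assuming the standard GUM-style root-sum-of-squares): one stale or underestimated term quietly skews the total, and nothing in the arithmetic flags it.

```python
import math

# (term name, standard uncertainty) -- illustrative values only
budget = [
    ("reference standard", 0.010),
    ("temperature drift",  0.004),
    ("repeatability",      0.006),
    ("resolution",         0.002),
]

def combined(budget):
    """Combine independent terms in quadrature (root sum of squares)."""
    return math.sqrt(sum(u ** 2 for _, u in budget))

print(f"combined standard uncertainty: {combined(budget):.4f}")

# If "temperature drift" was characterized years ago and is really 3x larger,
# the combined figure changes and nobody notices unless someone rechecks it:
budget[1] = ("temperature drift", 0.012)
print(f"with the stale term corrected: {combined(budget):.4f}")
```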
Because it's wrong. It's a quantum of the electromagnetic field. It's neither a wave nor a particle. It just happens to have some properties of both.
Now imagine horizontal as space, and vertical as time. In this case a 2D spacetime, but we can't really visualize 4D.
The reason they always talk about space-time in relativity is that you can't separate the two. If you want to travel faster through time, you have to travel slower through space. If you want to travel faster through space, you have to travel slower through time. There's an invariant like the length of that rotating clock arm called the "spacetime interval" that remains constant under the transformations that you have to do to go from one observer's perspective to another observer's perspective.
Problem is, it's in 4D, so it's hard to visualize. There is a mathematical framework that can explain all of the transformations leading to length contraction and time dilation as simple rotations in a 4D spacetime (3 space + 1 time). It requires a bit more math, but then unifies things in a conceptually simple way.
But maybe just remember: "If you go faster through space, you go slower through time" "If you go faster through time, you go slower through space"
Your maximum speed in space is the speed of light, at which others will observe you as having no time passing.
Your maximum speed through time is one second per second, at which others will observe you as being stationary relative to them. Look up Alex Flournoy's YouTube video lectures. I'll edit this and link the specific one here later, if I can find it.
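A numeric check of that invariant (my own sketch, in units where c = 1): a Lorentz boost is a hyperbolic rotation of the (t, x) plane, and it preserves the interval t^2 - x^2 the way an ordinary rotation preserves x^2 + y^2.

```python
import numpy as np

v = 0.6                                   # boost speed, fraction of c
g = 1 / np.sqrt(1 - v ** 2)               # Lorentz gamma factor
boost = np.array([[g, -g * v],
                  [-g * v, g]])           # acts on coordinates (t, x)

event = np.array([5.0, 3.0])              # some event (t, x)
t2, x2 = boost @ event

print(event[0] ** 2 - event[1] ** 2)      # 16.0
print(t2 ** 2 - x2 ** 2)                  # 16.0 -- same interval, new observer
```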
If none of these hold up, it could be that dark matter just doesn't annihilate. That's not weird or anything -- I mean, your room is full of matter right now, and it's not annihilating itself either.
Ultimately, though, you are right that you have to understand it all, once, even if you can't remember it all a month later. The explanations I find online are not good at presenting just the details you need when you need them, and building up to the full picture.
- It sounds like (based on the answer you linked) the "expansion" of the universe is a lie in the sense that the fabric of space is not actually expanding, things are just moving farther away from each other via motion. So it's not points on a balloon being blown up (in which case the points themselves are also growing in size), but a force pushing things apart
- If that's true, then I don't understand why the paper talks about light waves expanding with the cosmological expansion, implying that the fabric of space itself is indeed expanding, and talks about how that makes the doppler effect make no sense since the light wavelength expands with the universe. It sounds like there's a fundamental incompatibility with your explanation ("this doesn't expand small objects") and the paper ("the wave itself expands with the expanding space in which it travels, so that its wavelength grows with the cosmic scale factor"), which implies the fabric of space, eg all particles, is expanding
- It sounds like light "sticks" to space as space expands, but new light emitted after an expansion will still have some constant wavelength. So in this way in an expanding universe, light which has a constant wavelength will have further to go between particles, so light will appear to slow down as the universe expands
- If light does stick to space, and the fabric of space is expanding, then I never realized that the doppler effect makes no sense for measuring cosmological expansion, because we wouldn't be able to see it (hinted at in the paper)
- Maybe I don't fully understand why LIGO needs two arms. If you had a clock that could accurately measure light wave crests, could you do it with only one arm? I'll take a leap of faith in believing that a gravitational wave compresses in one dimension and expands another (maybe not if the wave hits it exactly at 45 degrees?). Maybe the two arms are just for convenience, to get the phase difference for free?
- I think what I'm missing still is what is actually being measured and how it happens. Space expands, the wave gets longer in one direction, so it has further to go (only for a fraction of time), and it will take longer for the next crest to get to the detector (I guess the crest itself is still moving at c, but through a farther distance?), so for a tiny blip of time there will be a phase difference, not for all the light in the arm but just for the one or few crests that make it back along the longer length until the wave resets the overall distance?
- Does space compressing and expanding prove that its compressing and expanding through a higher dimension? Especially if new light emitted is at some constant wavelength independent of the stretching of the spacetime it enters? Does that also imply this constant wavelength is happening independent of our (3d) space stretch, so it's a constant through some higher dimension?
It is simply wrong to think that scientific questions can never be definitively settled. Clearly there are some hypotheses that have been difficult (and may be impossible) to prove, for example, Darwin's idea that natural selection is the basis of evolution. There's ample correlative evidence in support of natural selection, but little of the causal data necessary for "proof" (until perhaps recently). In the case of evolution the experiments required to prove that natural selection could lead to systematic genetic change were technically challenging for a variety of reasons.
In the case of climate change, the problem again is that the evidence is correlative and not causal. Demonstrating a causal link between human behavior or CO2 levels and climate change (the gold standard for "proof") is technically challenging, so we are forced to rely on correlations, which is the next best thing. But, you are right, it is not "proof".
Establishing causality can be difficult but not impossible - the standard is "necessary and sufficient". You must show necessity: CO2 increase (for example) is necessary for global warming; if CO2 remains constant, no matter what else happens to the system global temperatures remain constant. And you must also demonstrate sufficiency: temperatures will increase if you increase CO2 while holding everything else constant. Those are experiments that can't be done. As a result, we are forced to rely on correlation - the historical correlation between CO2 and temperature change is compelling evidence that CO2 increases cause global warming, but it is not proof. It then becomes a statistical argument, giving room for some to argue the question remains "unsettled".
My point is that there are plenty of examples in science where things have been proven -- DNA carries genetic information, DNA (usually) has a double stranded helical structure, V=IR, F=Ma, etc. And there are things that are highly likely, but not "proven", e.g., human activity causes of climate change.
While some of the issues you bring up remain unproven, what's really absurd is to think that no scientific questions can be settled.
Even in each of those, there are two "levels" of implementation: specifying an exact algorithm that implements a solution to problem x, and actually producing the code that implements the algorithm.
At some level, there is no ready-made solution to every problem. Even if the foundations are implemented by "somebody else", the line's blurry. At which level of (lack of) expertise and which level of "lowness" of the implementation should I start to worry?
>The immune system: The most awesome thing ever.
It's actually two systems. One, called the innate immune system, we have in common with most forms of complex life; the other, the adaptive immune system, is something we've only seen manifested in jawed vertebrates.
The innate immune system is a set of cellular signals/behaviors that are triggered by cells being exposed to damage and or stress.
These responses are generalized. Just about anything odd can invoke them, so they are typically the first line of defense. These include things like alteration of permeability of the local extracellular matrix (swelling), formation of impermeable tissue barriers to isolate damage (cysting, compartmentalization), setting up signaling molecule gradients that attract phagocytic/cytotoxic cells to terminate anomalous cellular activity and clean up the place (macrophage attraction), and alteration of metabolic activity to generate thermal stress (fever).
The issue with the general immune system, though, is that its non-specificity and versatility make it a bit like a sledgehammer in the context of a complex organism. It can do as much or more damage than good, and it isn't that good at eradicating only the exact thing causing the issue without excessive collateral damage.
Enter the adaptive immune system. The adaptive immune system is composed of various cell lines, and organ systems all specialized into dealing with specific facets of an immune response, and mediated through a set of special cellular surface receptors.
The facets of the adaptive immune response are: antigen recognition, coordination, moderation, and memory.
The major cell lines are T and B cells. T cells are further broken down into cytotoxic T, and helper T cells.
The adaptive immune system starts with naive lymphocytes. These cells rapidly multiply, randomizing the ever-loving crap out of the region of the genome dedicated to the antigen receptor. By doing this, they cause the receptor to fold in ways that will allow it to bind with certain types of antigen (think of it as the antigen's key fitting the receptor's lock). This process of new receptor generation is mediated by the thymus. The thymus tests every new variant to see whether there is any sensitivity to proteins that may be expressed in other parts of the body. If it finds that to be the case, it induces that particular cell to suicide, to prevent the proliferation of immune cell lines with a high chance of being prone to autoimmunity. Those that survive are allowed to move out into the lymphatic and circulatory systems to patrol for their particular antigen. Upon meeting it, a few things happen. First off, the immune cell can help kick off or amplify a general immune response. Secondly, signaling proteins are released to attract more leukocytes to the area. Third, an antigen-bearing cell will migrate toward the thymus to recruit more immune cells. Once an antigen-presenting helper T cell binds with a compatible B or cytotoxic T cell line, that cell line undergoes massive replication without further modifying its receptor, and the helper T cell does likewise.
B cells will create and secrete antibodies: small snippets of protein that will bind to and foul up the workings of the antigen to which they are sensitive.
Cytotoxic T cells will patrol for and engulf antigen they encounter, either breaking it down with a burst of oxidative substances or, if the antigen is detected being presented on a cellular membrane protein and a helper T cell is nearby to enable the response, inducing cell death of the antigen-presenting cell, with much greater specificity and in greater numbers than the mechanism used by macrophages. Once the cell death takes place, the cell will either clean up the remains or attract macrophages to do so while it heads off for the next target.
The cytotoxic T cells are handicapped in their destructive potential by the need for a nearby helper T cell. B cells just shotgun antibodies into the bloodstream.
>Medications: oh dear God, ask a pharmacologist. The closest I have committed to memory is that most pharmaceuticals are very specifically formulated chemicals intended to be absorbed without difficulty, capable of making their way to a target area in the body, and modified to an active form by enzymes in the body to do their thing before eventually getting degraded and excreted by more of the same.
Plastics: no comment.
The other great thing about Lie groups is you can discover new and valuable groups just from pretty basic topology. Like the Spin group, which you know has to be out there as soon as you know the fundamental group of SO_n, but otherwise would be very hard to think of.
The fancypants but I think most intuitive way to think about Galois theory is also with topology. It's an algebraic version of a much more geometric, visible story, the correspondence between {subgroups of the fundamental group} and {normal covering spaces}.
Yeah this is another thing I've seen.
Online there are lots of stories of "bad trips", like this one.
In person its "what happened? I've never had a bad trip [so what's wrong with you]". It is very unscientific, and for the people that do empathize, it is very reductive to "bad trip". No discussion about PTSD. And then you can't talk to anybody else about it because they are illicit substances.
The amount of complexity is just absolutely insane. My favourite example: DNA is read in triplets. So, for example, "CAG" adds one Glutamine to the protein it's building[1].
There are bacteria that have optimised their DNA in such a way that you can start at a one-letter offset, and it encodes a second, completely different, but still functional protein.
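A toy illustration of overlapping reading frames (my own sketch; the tiny codon table below is a real but very incomplete subset of the standard 64-codon table): shift the start by one letter and the same DNA spells a completely different peptide.

```python
# Minimal codon -> amino acid table (real assignments, tiny subset)
CODONS = {"CAG": "Gln", "AGC": "Ser", "GCA": "Ala", "CAA": "Gln",
          "AAG": "Lys", "AGA": "Arg", "GAC": "Asp", "ACA": "Thr"}

def translate(dna: str, offset: int):
    """Read triplets starting at `offset`; '?' marks codons not in our table."""
    return [CODONS.get(dna[i:i + 3], "?") for i in range(offset, len(dna) - 2, 3)]

dna = "CAGCAAGACA"
print(translate(dna, 0))   # frame 0: CAG CAA GAC -> ['Gln', 'Gln', 'Asp']
print(translate(dna, 1))   # frame 1: AGC AAG ACA -> ['Ser', 'Lys', 'Thr']
```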
I found the single cell to be the most interesting subject. But of course it's a wild ride from top to bottom. The distance from brain to leg is too long, for example, to accurately control motion from "central command". That's why you have rhythm generators in your spine that are modulated from up high (and also by feedback).
Every human sensory organ activates logarithmically: Your eye works with sunlight (half a billion photons/sec) but can detect a single photon. If you manage to build a light sensor with those specs, you'll get a Nobel Prize and probably half of Apple...
Maybe someone who actually knows will chime in, but afaik:
- Most light doesn't have a fixed frequency. If it did, it would have a fixed momentum, but then you would have no idea where it is! Instead it is some superposition of many frequencies. That could be part of the story.
- Light could be stopped by something other than absorption into an individual atom. Metals don't have discrete spectra.
Science also doesn't only seek disproof. It uses both examples and counterexamples to confirm or deny, or to strengthen how much one confirms or denies.
When most everything is unmoving, it's "obvious"... well no, not to students, but... there's no pretense of doing anything other than stitching together an extremely selective set of "snapshots", to tell a completely bogus narrative of smooth motion.
Here it seems something like a Maya "jiggle all the things" option has been turned on, making it sort of kind of look like you're being shown more realistic motion. But you're so not. It's the same bogus smooth narrative, now with a bit of utterly bogus jiggle. Those kinesin legs still aren't flailing around randomly, nor only probabilistically making forward progress. And the thing it's towing still isn't randomly exploring the entire bloody space it can reach given the tether, between each and every "step". It still looks like a donkey towing a barge, rather than a frog clinging to a rope holding a balloon in a hurricane.
And given that the big vacuole or whatever should be flailing at the timescale defined by the kinesin feet, consider all those many much smaller proteins scattered about, just hanging out, in place, with a tiny bit of jiggle. Wow - you can't even rationalize that as being selective in "snapshots" - those proteins should just be blurs and gone.
And that's just the bogosity of motions, there's also... Oh well.
So compared with older renders, these new jiggles made it even harder to recognize that all the motion shown is bogus. And not satisfied with the old bogus motion, we've added even more. Which I suggest is dreadful from the standpoint of creating and reinforcing widespread student misconceptions. Sigh.
Every article on "Where do stock prices come from?" seems to just talk at a high level about supply and demand.
But where does the price come from at a nitty-gritty level? Is it an average of all existing offers or something?
Do different exchanges and stock-ticker websites have different formulas for calculating share price?
If a very low-volume stock is listed at $4, and then I offer to buy a share for $100, does the NYSE suddenly start listing its price at $100?
Proof? Just look at all the replies you got: each one is dozens of pages of complex (imaginary) math, control theory, and statistics.
The hardest part of QC is exactly what you described: how to extract the answer. There is no algorithm, per se. You build the system to solve the problem.
This is why QC is not a general purpose strategy: a quantum computer won't run Ubuntu, but it will be one superfast prime factoring coprocessor, for example (or pathfinder, or root solver). You literally have to build an entire machine to solve just one problem, like factoring.
Look at Shor's algorithm: it has a classical algorithm and then a QC "coprocessor" part (think of that like an FPU looking up a transcendental from a ROM: it appears the FPU is computing sin(), but it is not, it is doing a lookup... just an analogy). The entire QC side is custom built just to do this one task:
https://en.wikipedia.org/wiki/Shor%27s_algorithm
In this example he factors 15 into 5x3, and the QC part requires FFTs and Tensor math. Oy!
Like I said, it will take decades for this to become easier to explain.
For fun, look at the gates we're dealing with, like "square root of not": https://en.wikipedia.org/wiki/Quantum_logic_gate
I'm going to throw out an analogy that gets at what's observed and why it's surprising, but doesn't relate to the physics of spin, momentum, position or anything that's actually under observation in these experiments.
It's as if we have a pair of dice, and I throw my die and you throw your die many times. In a classical world, if I throw a three, it has no influence on what you throw; you're equally likely to throw 1-6. But in the quantum world it's as if when I throw a one, your die still has the expected uniform distribution, but when I happen to throw a three, you're a little bit more likely to throw a three. Your die is fair if I happen to roll a one, but it's weighted if I happen to throw a three.
Back in the real world, this is the strange behavior that is observed in experiment. Schroedinger's equation predicts the probabilities perfectly. But Bell shows that it's far from intuitive.
Can you please explain to me how you went from "looks like a tidal effect in the Newtonian limit" to "a gradient of the Newtonian gravitational field"?
The other thing you can do is think about what it means for particular types of categories. For a posetal category, it says that an element of a poset is uniquely determined by the set of all elements that come before it in the ordering. For a group, it says that every element is uniquely determined by its action on the group. (This is basically Cayley’s theorem.) See this MSE post for more intuition: https://math.stackexchange.com/questions/37165/can-someone-e...
There are people who put limit orders on the exchanges. Say that the price of TSLA is $500. I think it's overpriced and likely to go down, but then grow in the future. I can say, "I'm willing to buy 100 shares of TSLA at $420." Someone else holds TSLA and thinks it's likely to go up, but not hold its value, so they say, "I'm willing to sell 100 shares of TSLA at $690." The sum of all of these limit orders forms the market depth chart.
The more common way to interact with the market is to say, "I want to buy a share of TSLA at the current market price." In the above example, the only option is to buy TSLA for $690, even though the last transaction was at $500! This is an example with very little market depth. In the normal case, you'd buy your share for $500.02 or something like that. (Same, but reversed, for selling at market price.)
For more information, but with a crypto focus, see https://hackernoon.com/depth-chart-and-its-significance-in-t...
For your example, you would put in a market order, and buy the stock at the lowest price that someone was willing to sell it at. If the last price was $4, but the lowest limit order that currently existed was for $100, and you bought it for $100, then yes, the price would go up to $100. (In real life, those sharp upticks don't happen much. It's more likely that a sharp downtick happens, where suddenly everyone wants to sell oil futures at the same time, but almost no one is willing to buy them, so the price ends up negative.)
Note that whenever people defend high-frequency trading for "providing liquidity to the market," this action of setting buy and sell limit orders that are close to each other is what they are talking about. There are algorithms that will see TSLA at $500, and offer to sell TSLA at $500.02 and buy TSLA at 499.98. If both orders go through, they make $0.04. If you operate fast enough to get out ahead of any big market moves, you can make a lot of money. But if you ever accidentally buy a bunch of TSLA for $499.98 right before the price plummets to $420, then you just lost a lot of money. This is why HFT and other trades with similar risk profiles are sometimes referred to as "picking up nickels in front of a steamroller."
Now that can either mean that someone bought a share that someone else was selling or that someone was selling a share to someone who was offering to buy.
The shares are listed as a series of buy and sell orders in what's called an order book.
If the price a share was last sold at was $100 and you think it will go a bit lower, you could place a buy order at $90. Should enough people sell shares to reach your price and order, your order will be filled and you will own the share at $90.
If someone wants a million shares at $91, you may not get your single share at $90.
To go back to your example, if you were to place a buy order at $100 for a $4-priced share, how much the price moves depends on how many sell orders are in place from $4 to $100 and how much you are buying.
If it's only for one share, your order will probably get filled at something like $4.01 if the spread is low (the spread being the difference between the highest buy order and the lowest sell order).
If you're buying 1000 shares and it's a low-volume stock with a "thin" order book, maybe it could go up a few dollars instead, but for it to go up to $100 you would have to buy every single share on offer between $4 and $100.
Imagine explorers on Mars find the ruins of an ancient alien civilization. In those ruins they find several small devices that have three buttons. Beside each button are two colored lights: red and blue. Above the buttons is a display. The linguistics team figured out enough alien writing to tell that the buttons are labeled with the aliens' equivalent of A, B, and C, and that the display is a numerical display that goes from 0 to 38413, displayed in base 14 (which fits with other evidence found that the aliens have two hands with 7 fingers).
There is also some kind of docking station, which can hold two of the devices, and has a single button.
If two of the devices are placed in the docking station and its button is pressed, all the lights briefly flash on the devices, and the counter resets to 0. The lights stay on until the device is removed from the dock. Nothing happens if only one device is placed in the dock.
To try to figure out what these devices do, pairs are placed in the dock, reset, and then given to a couple of people, who go off, press the device buttons, and record what happens.
Here is what those people observe.
1. If they press one of the buttons (A, B, or C), exactly one of the two lights next to that button comes on. When the button is released, the light goes out, and the counter goes up by 1, until it reaches 38413. After the next press/release, the counter goes blank and the device is unresponsive until reset again in the dock.
2. As far as anyone can tell, there is no pattern to which light lights. It acts as if pressing a button consults a perfect true unbiased uniformly distributed random bit generator to decide between red and blue.
3. When they compare their results with those of the person whose box was their box's dock mate for the reset, they find that, on each person's n'th press:
-- if they both pressed A, or both pressed B, or both pressed C, they got the same color light.
-- if one of them pressed B, and the other pressed either A or C, they got the same color light 85.36% of the time.
-- if one of them pressed A and the other pressed C, they got the same color light 50% of the time.
4. These results do not depend on the timing between the two people's presses. Those correlations are the same if the people happen to make their n'th press at the same time, or at wildly different times. Even if one person goes through all their presses before the other even starts, their n'th presses exhibit the above correlations.
5. These results do not depend on the distance between the boxes. If a box pair is split up, with one person taking theirs back to Earth while the other remains on Mars, and the two then run through all their presses at nearly the same time, completing quickly enough that there can be no communication between the two boxes during the run due to speed of light limits, they still exhibit the correlations.
Challenge: try to figure out how such boxes could be built without using quantum entanglement. Assume the aliens have nearly unlimited storage technology, so you can include ridiculously large tables if you want, so you can even propose solutions that involve the dock preloading the responses for every possible sequence of presses (all 3^38414 of them). Anything goes as long as it produces the right correlations, and does not involve quantum entanglement.
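For what it's worth, the stated numbers line up with the standard quantum prediction for entangled photon polarization measurements, if you imagine buttons A, B, and C as analyzer angles 0°, 22.5°, and 45°, with same-color probability cos²(Δθ). A quick Python check of that assumed mapping:

    import math

    # Hypothetical button-to-angle mapping (degrees).
    angles = {"A": 0.0, "B": 22.5, "C": 45.0}

    def p_same(b1, b2):
        """Quantum prediction: P(same color) = cos^2(angle difference)."""
        delta = math.radians(angles[b1] - angles[b2])
        return math.cos(delta) ** 2

    for pair in [("A", "A"), ("A", "B"), ("B", "C"), ("A", "C")]:
        print(pair, round(p_same(*pair), 4))
    # ('A', 'A') 1.0     ('A', 'B') 0.8536
    # ('B', 'C') 0.8536  ('A', 'C') 0.5

Bell's theorem is the result that no locally pre-computed table, however enormous, reproduces all of these correlations at once.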
I should add: As a human being, it is probably impossible to separate the scientist from the philosophy in which they explore, proceed with, and promote their work. In some cases, it might not be something they are even aware of. Instead, the scientific system (as a sort of world institution) should itself be designed to always seek out and protect truth, regardless of prevailing contemporary knowledge.
How do I know this?
I have mental illness in my family and have spent considerable amounts of time at those facilities.
My favorite illustration was a video of simulated icosahedral viral capsid assembly. The triangular panels were tethered together to keep them slamming into each other. Even then, the randomness and struggle were visceral. Lots of hopeless slamming; almost, but tragically failing, to catch; being smashed apart again; misassembling. It was clear that without the tethers forcing proximity, there'd be no chance of successful assembly.
Nice video... it's on someone's disk somewhere, but seemingly not on the web. The usual. :/
> yeast
Nice example. For a temperature/jiggle story, I usually pair refrigerating food to slow the bacterial jiggle of life, with heating food to jiggle apart their protein origami string machines of life. With video like https://www.youtube.com/watch?v=k4qVs9cNF24 .
> Compartmentalizing
I've been told the upcoming new edition of "Physical Biology of the Cell" will have better coverage of compartmentalization. So there's at least some hope for near-term increasing emphasis in introductory content.
Do you have any tips for how to quickly find these #3 materials in other spaces?
For something between 2 and 4, the best I can come up with would be textbooks or seminars, both being extremely spotty in terms of quality and understandability.
In any case, a big problem you get is a cliff of information content going from 4 down to whatever the next step is. The incentive structure substantially motivates putting out new material, which must have some novel concepts. The focus on novelty and accomplishment leads to quite a mess. People put out half-baked work to be the first to write on a particular subject, which gets citations, which means the next round, also half-baked, is built on a half-baked foundation. When what's most needed in almost all cases is to parse the last generation of literature into something coherent, real, and replicable.
Basically you get an educated layperson asking a veteran criminal lawyer questions, usually around the First Amendment, always related to current events. The lawyer (Ken White) explains in practical terms what is likely to happen and why.
This is not mutually exclusive with being against the attacks on science. Just because we shouldn't treat things as proven doesn't mean we can't come to a general consensus on a topic and act as if it were true. Climate change is real. Evolution is real. Don't inject yourself with bleach. Having a small number of quacks say "it's just a hypothesis, and actually God is responsible for climate change and evolution" without any evidence doesn't change the general consensus, and doesn't mean we have to stop everything until we prove the negative.
Ultimately, I think most of us agree in principle. Most of what we're discussing here is minor semantic differences in vocabulary.
One idea is known as the Copenhagen interpretation.
It basically says that the wave-like effects we associate with matter are merely waves of probability. Or, in terms of the double-slit experiment and in other words: light behaves like a particle, but the wave-like pattern you see is just the result of the probabilities of where the particles end up. Dark areas are areas of low probability, and lighter areas of high probability.
One might imagine the light particles streaming through the slit end up having slight variations in trajectory from one particle to another (for various reasons, such as interference with other particles), which results in areas where most particles end up and others where few do... representing a wave.
As a dancer, I have been fascinated by that fact. It means that dancers do not dance to the beat as they hear it - it takes too much time for the sound to be transformed by the ear/brain into an electrical pulse that reaches your leg. Instead, all dancers have a mental model of the music they dance to that is learnt by practice/repetition.
Dancing is just synchronizing that mental model to the actual rhythm that is heard. When I explained that to a bellydancer friend, she finally understood the switch that she had made from being a beginning dancer to an experienced dancer who 'dances in their head'.
But for the duality, there's something bigger that the responses always seem to blow past. Is the wave-like nature there to explain behavior (the wavy double-slit intensity pattern), or is it just a mathematical device that maps onto measured probabilities?
Quantum stories always seem so backwards. The root phenomenon is some sort of irreducible probability. But then the mechanical part (interference in the double-slit) goes in a totally different direction. Instead of just turning the situation into a probability of one-or-the-other slit, it STAYS a wave.
Okay, now you have a new hole in the story. If the photon refuses to choose just 1 slit to go through, why does it choose 1 spot on the photo paper to land on?
Why do we not still have to consider interference in outcomes after the photon makes its mark on the paper? Why does there appear to be like a limit on entanglement, such that it goes away beyond a certain scale? Why are quantum computers hard?
The basic answer is that the extra energy that goes to the rocket comes from harvesting the kinetic energy that the fuel itself had by virtue of being in the moving rocket.
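A back-of-the-envelope Python sketch of that bookkeeping, for a single small burn in the ground frame (all numbers invented, Newtonian physics only):

    # One small burn: rocket of mass M moving at v ejects dm at exhaust speed ve.
    M, dm, ve, v = 1000.0, 1.0, 3000.0, 8000.0  # kg, kg, m/s, m/s

    dv = dm * ve / M              # rocket speed gain (momentum conservation)
    v_rocket = v + dv
    v_exhaust = v_rocket - ve     # exhaust velocity in the ground frame

    released = (0.5 * (M - dm) * v_rocket**2 + 0.5 * dm * v_exhaust**2
                - 0.5 * M * v**2)                        # chemical energy of the burn
    rocket_gain = 0.5 * (M - dm) * (v_rocket**2 - v**2)  # KE gained by the rocket
    fuel_loss = 0.5 * dm * (v**2 - v_exhaust**2)         # KE given up by the fuel

    print(released, rocket_gain, fuel_loss)
    # ~4.5e6 J released, but the rocket gains ~2.4e7 J;
    # the extra ~1.9e7 J is exactly the KE the ejected fuel lost.

The faster the rocket is already moving, the more of the fuel's own kinetic energy gets harvested; this is the Oberth effect.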
So many questions.
Where is the medical journal article that supports your conclusion that the "vast majority of mental health patients have a self-reported history of drug use of these specific drugs"? I guess it can't exist, because it would be people in crisis self-reporting a variety of substances, when even the user would have no idea what they were actually given.
In simple terms, when you want to buy 100 shares of a stock at no more than $4, you place a limit order into the exchange’s order book for that stock. Other buyers do the same, as do sellers. The order book is a sorted structure with the orders and their sizes on each side. It may look like this:
Sell 100 @ $7
Sell 300 @ $6
Sell 300 @ $5
Buy 100 @ $4 (your order)
Buy 200 @ $3
Buy 100 @ $2
Notice the gap between the highest buy (“bid”) and the lowest sell (“offer“ or “ask”). This is called the ”bid/ask spread.” Whether we’re talking stocks or eBay or a local outdoor market, buyers always want to pay less, sellers always want to earn more, and there is always a bid/ask spread.
If instead of sticking to your $4 limit, you said “forget it, I just want the stock” you would enter a market order instead of a limit order. In doing so you’d “cross the spread” and pay $5 per share. For a trade to happen, someone has to cross the spread.
If you entered a buy order with a limit of $100 in this example, you’d still buy at $5. If you ordered 400 shares at $100, you’d buy 300 at $5 and 100 at $6. The $5 offer would come out of the order book and the $6 offer would be reduced in size.
When you think of the market as all of this upward and downward price pressure focusing around a spread, you can see that the price the market values the stock at is conceptually the midpoint between the highest bid and the lowest offer, also known as the “mid.”
As prices change, the spread’s price level moves up and down, it narrows and widens, but the price you see always at least indirectly reflects that midpoint of price interest between all buyers and all sellers. There will always be intricacies in price reporting (based on the price feed, the price you see is the last trade made, the mid, or something more complex), but if you understand the order book, you’ll have the basic idea and can build from there.
If you’re really interested, you can google how and when the various exchanges calculate and report their prices, who they make them available to directly, what vendors provide raw and aggregate views of those prices, and more. There are many flavors varying from real-time tick-by-tick reporting to end of day feeds and more.
All of them ultimately begin with what you can now visualize as an order book.
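If it helps, the core matching logic is tiny. A toy Python sketch (invented numbers, asks sorted best-first, ignoring real-world details like time priority and odd lots):

    # Resting sell orders (price, shares), lowest price first.
    asks = [(5.0, 300), (6.0, 300), (7.0, 100)]

    def market_buy(asks, wanted):
        """Fill a market buy by walking up the ask side of the book."""
        fills, book = [], list(asks)
        while wanted > 0 and book:
            price, size = book[0]
            take = min(size, wanted)
            fills.append((price, take))
            wanted -= take
            if take == size:
                book.pop(0)               # price level fully consumed
            else:
                book[0] = (price, size - take)
        return fills, book

    fills, book = market_buy(asks, 400)
    print(fills)  # [(5.0, 300), (6.0, 100)] -- crossed the spread, walked one level
    print(book)   # [(6.0, 200), (7.0, 100)]

This is the 400-share scenario from above, minus the limit handling: the order eats the whole $5 level and part of the $6 level, and the book shrinks accordingly.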
Cynical answer: the culture of medicine was set during a 2000+-year period when physicians harmed more than helped; the way they made a living was by appearing authoritative.
Each exchange is basically its own world, with the exception of Reg NMS, which I'll get to in a sec.
Let's talk about order books. Each stock has its own order book. This might be an example of the book for AAPL:
* SELLING 100 shares @ $10.02
* SELLING 200 shares @ $10.01
* SELLING 100 shares @ $10.00
* BUYING 100 shares @ $9.99
* BUYING 200 shares @ $9.98
* BUYING 100 shares @ $9.97
So if you want to buy some AAPL, you will want to go grab the cheapest shares you can see, which here is the fellow selling 100 shares at $10.00. You submit a limit order to buy at 10.00 and are matched with that guy. The book now looks like this:
* SELLING 100 shares @ $10.02
* SELLING 200 shares @ $10.01
* BUYING 100 shares @ $9.99
* BUYING 200 shares @ $9.98
* BUYING 100 shares @ $9.97
Now let's suppose a market maker decides the price is going to move up, so they go and fill in the hole by submitting an order to BUY 100 shares at $10.00. There are no longer any shares for sale at $10.00, so their order rests on the book.
Now we have:
* SELLING 100 shares @ $10.02
* SELLING 200 shares @ $10.01
* SELLING 100 shares @ $10.00
* BUYING 100 shares @ $10.00
* BUYING 100 shares @ $9.99
* BUYING 200 shares @ $9.98
* BUYING 100 shares @ $9.97
Now that we've played out this scenario, let's go back to your original question. What is the price of AAPL at any point in here? Well, it depends. At the start, if you wanted to buy, you could say the price is $10.00. But if you wanted to sell, the best you'd get is 9.99. So, hard to say.
It's worth noting that the prices you see in the book are only there because people aren't agreeing on the prices. If they did agree, a trade would happen, and the prices wouldn't be on the book. So, with that in mind, you could say that really, the price of a stock is the last price people agreed at: the last trade price. That's better, we're at least down to just one price to think about.
That could be quite different from what the best bid/offer are right now, though (some stocks don't trade very often) so even if (let's say) you last saw AAPL trade at 9.50 before our example, obviously that price is long gone. So even the last trade price is potentially not "the price of the stock".
So, in short, there's really no such thing as "the price of a stock". It'll all depend on how sophisticated you want to be about the price at which you buy your shares.
When people talk generally about the price of a stock, it's usually just whatever site they happen to be looking at. Markets are usually liquid enough, and trade often enough, that all the kinds of prices we just talked about differ by only a penny or so. So when people are at the watercooler saying "Did you see the price of AAPL?", they don't care about the pennies, and by the time they've managed to ask the question, the price has moved anyway, probably many times. It all gets a little hand-wave-y.
I want to mention two other things that might interest you. Reg NMS is what ties all the exchanges together, so to speak. Let's say you want to buy AAPL and NYSE has shares selling at $10.00 each, but NASDAQ has them for $9.99 each. It's actually illegal (against Reg NMS) to trade with that guy at $10.00 at NYSE, because NASDAQ has the "NBBO" (national best bid/offer) right now. Extra caveat: if you send a special order to NYSE that says "I promise you, I've also sent an order to NASDAQ to buy the shares for $9.99 and I've determined you're the next best price at $10.00, let me buy them", it'll let you. It's called an ISO (Intermarket Sweep Order), and if you lie about one, even by mistake, you get fined. A lot.
The other interesting thing: Your last question was "If a very low-volume stock is listed at $4, and then I offer to buy a share for $100, does the NYSE suddenly start listing its price at $100?" There's actually a lot to unpack here. Let's go through it.
If you're a registered broker-dealer and are connected directly to NYSE, and you send a limit order for XYZ @ $100/share, what's going to happen is you're going to get "price improvement" and you'll end up getting the shares at $4. If you send an order for LOTS of shares at $100, you'll clear out a bunch of price levels in one go. Ex:
Let's say this is the book for XYZ:
* [...]
* SELLING 200 shares @ $110.00
* SELLING 1000 shares @ $5.00
* SELLING 200 shares @ $4.02
* SELLING 100 shares @ $4.01
* SELLING 500 shares @ $4.00
* BUYING 100 shares @ $3.99
* [...]
Usually when you get away from the middle of the book, liquidity dries up fast and the prices get further apart. So let's say you send an order for 10000 shares at $100. You're going to get 500 at $4, 100 at $4.01, 200 at $4.02, 1000 at $5.00. Now the next price is 110, but your limit price is 100. So your order will now actually rest partially-filled on the book. So now this is the book:
* [...]
* SELLING 200 shares @ $110.00
* BUYING 8200 shares @ $100.00
* BUYING 100 shares @ $3.99
* [...]
Neat, huh? That was a lot of price movement. So yes, if you can send for enough shares and are willing to pay through a lot of price levels, you can move the price of the stock. Remember Reg NMS though - if more stock exchanges existed in our example, you'd also likely need to go get shares at them if they have a better price than the exchange you just moved the price at.
But let's now suppose you're NOT a registered broker-dealer, but are instead Joe A. Schmoe, a client of Charles Schwab Brokerage. You enter your order in your web browser and hit trade. Schwab has a legal obligation to fill your order, if possible, only at the NBBO. They could route your order right to an exchange, but instead, they will send your order to their friend, Citadel, who will have the opportunity to trade against your flow before it gets routed to the stock exchanges. Generally, this is good for you: they might decide your order represents good information and they want your shares. They could decide to fill your order themselves and sell you all 10000 shares you want. They're constrained by the NBBO though, so you get all 10000 shares at $4. For being the source of this order, Citadel pays Schwab some money. Usually practically a pittance, pennies, if that. Order flow is dirt cheap nowadays.
This is called "selling order flow" and lots of people find it scary, because it's not really super intuitive why someone would want to buy or sell the actual flow of orders. But it's actually pretty boring and more about high-level statistics than anything actually interesting to Joe Schmoe, who would get bored when he realized he's not really getting ripped off.
Sorry, I got a bit off-topic. But I love finance, so please forgive me.
How does large-scale randomness result in such complicated and intelligent systems, while after decades of research and all the computing power we have today, we still struggle to model and reproduce the intelligence of an insect?
If you see any signal, it can be represented as a value at each time: x(0) = 1, x(1) = 2, ..., x(100) = 5, etc. We can visualize this as you shouting 1 at time 0, 2 at time 1, and 5 at time 100. Alternatively, we can do the same with a larger number of persons.
Representation using dirac delta
--------------------------------------
Let's say that you have 100 persons at your disposal. You ask the first person to shout 1 at time 0, the second person to shout 2 at time 1, and the last person to shout 5 at time 100. At other times they stay silent. So with these 100 persons you can represent the signal X. We call each of these persons a basis. Mathematically, they are delta functions of time, i.e. they get activated only at their specified time; at other times they are silent, i.e. 0. The advantage of this representation is that you have fine control over the signal: if you want to modify the value at time = 5, you can just inform the 5th guy.
Introduction to bases
--------------------------
The Dirac delta is not the only basis. You can ask multiple guys to shout at multiple times; they can even shout negative numbers. All you have to ensure is that their shouts add up to the value of X at each time, and that the guys can produce any value that can appear in X. This property we name "span".
Instead of 100 guys, we could have 200 guys, i.e. 2 guys for each time, each telling half of the original value. However, this is wasteful, since you have to pay for extra guys with no use. Hence we say that the bases should be orthogonal, i.e. no basis should be correlated with the others in the group. Once we have uncorrelated, spanning guys, we can represent any signal using them.
Fourier transform
--------------------------
In the case of the Fourier transform, each guy shouts according to a sinusoidal wave, let's say a sine wave: the first guy shouts the value of sin(f0 t) at each time t, the second guy shouts the value of sin(f1 t), and so on. The f0, f1, etc. are the frequencies for each guy. It turns out that these guys are orthogonal to each other, and they span all signals. Thus we have the Fourier transform: instead of representing the signal as a value at each time step, we can represent it as a value at each frequency.
Why Fourier transform
-------------------------
We have seen that as long as the bases span and are orthogonal, they define a transformation. But why is the Fourier transform so famous? This comes from the systems we use. The most common systems are LTI (linear time-invariant) systems, and a property of these systems is how simply they act on sinusoidal waves: if a sinusoidal wave of frequency f is passed through an LTI system, all the system can do is multiply it by a scalar. Any other wave is affected in a more complex way. Hence, if we can represent signals as sums of sinusoids, we can represent our system as just an amplifier at each frequency. This turns the whole of system analysis into a set of linear equations, which we are good at solving. So we love the Fourier transform.
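You can see that "amplifier at each frequency" picture directly in code. A small NumPy sketch (the filter H is an arbitrary made-up LTI frequency response):

    import numpy as np

    # An LTI system acts on each frequency independently: y = ifft(H * fft(x)).
    n = 256
    t = np.arange(n)
    x = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 40 * t / n)

    freqs = np.fft.fftfreq(n)
    H = 1.0 / (1.0 + (freqs / 0.05) ** 2)   # made-up low-pass response

    y = np.fft.ifft(H * np.fft.fft(x)).real

    # Each input sinusoid comes out as the same sinusoid, just rescaled:
    X, Y = np.fft.fft(x), np.fft.fft(y)
    print(abs(Y[5]) / abs(X[5]))    # gain at the 5-cycle component (~0.87)
    print(abs(Y[40]) / abs(X[40]))  # gain at the 40-cycle component (~0.09)

Two sine waves go in, and the same two sine waves come out, each just multiplied by the filter's gain at its own frequency. That per-frequency bookkeeping is exactly what the Fourier transform buys you.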
Masks are for keeping your own particles from spreading far AND for lowering the probability of virions found in the environment from entering your respiratory system.
Masks lower the probability when all other variables are held constant. If someone thinks wearing a mask grants invincibility and in turn chooses to increase their exposure to high viral load individuals or environments, they're putting themselves at risk.
But it could have some bleeding edge new applications from the TCP/IP space for urgent point, new methods for cryptography, or speeding up algorithms for searching. ¯\_(ツ)_/¯
I am not a quantum person, but I once saw a geometric explanation for Grover's algorithm which kind of made it all make sense to me. (Grover's algorithm is the quantum algorithm you use for problems where you don't know any approach better than brute force. It can brute-force stuff in O(sqrt(n)) guesses instead of O(n) like a normal computer.) Basically, the geometric version is that you start with all possibilities at equal probability (i.e. an even superposition of all possible states), negate the amplitude of the correct answer, then reflect the amplitudes around the mean of the amplitudes, and repeat that sqrt(n) times. The end result is that the correct answer has a higher probability than the other answers. I unfortunately can't find the thing where I originally saw this, but they visualized it basically as a bar graph (of the amplitudes of the possible states), and it seemed much clearer to me than other explanations I have come across.
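Here's that bar-graph picture replayed numerically, a sketch of the amplitude arithmetic only (not a simulation of real quantum hardware):

    import math

    # Grover's iteration: flip the marked answer's sign, then reflect about the mean.
    n = 64                          # number of possible answers
    target = 42                     # the marked "correct" answer
    amps = [1 / math.sqrt(n)] * n   # start in an even superposition

    for _ in range(round(math.pi / 4 * math.sqrt(n))):   # ~ (pi/4)*sqrt(n) rounds
        amps[target] = -amps[target]             # negate the correct answer
        mean = sum(amps) / n
        amps = [2 * mean - a for a in amps]      # inversion about the mean

    print(amps[target] ** 2)   # probability of the target: ~0.997
    print(amps[0] ** 2)        # probability of any single wrong answer: ~5e-5

Each round moves a little amplitude from the wrong answers onto the right one, which is why about sqrt(n) rounds are enough.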
For the other poster one nice source is of course The Annual Review journals. Arxiv of course too. The bibliographies in undergraduate/beginning graduate textbooks or syllabi are good sources too.
The site most MDs use is here: https://www.uptodate.com/home
By attacking crypto--a lot. And submitting your crypto to be attacked by others--a lot. It's the only way to develop the requisite level of humility to design good crypto.
Generally quantum computers are good for three things
* factoring numbers (and other highly related order-finding problems). RIP RSA, but not that applicable outside of crypto.
* unstructured search (brute-forcing a problem in only O(sqrt(n)) guesses instead of an average of n/2 guesses). Certainly useful... but it's not a big enough speedup to be earth-shattering.
* simulating various quantum systems (so scientists can do experiments easier). Probably by far the most useful/practical application in the near/medium term.
There's not a whole lot else they are good for (that we know of, yet)
(They use some other stuff, but you get the idea)
You can back out the Avogadro constant starting from this experiment.
I feel as though it's simply Occam's Razor to assume that evolutionary complexity is the result of randomness because I know of no better explanation. Is there a self-reinforcing process at play? (Natural selection partially counts as reinforcing, I just feel like randomness is still the engine that powers it).
It's difficult to relate the two together and even after hearing every heuristic and every cutesy analogy, I still can't quite wrap my head around what happens to one when I increase the other (and so on).
From the book:
> The evolution of the capacity to simulate seems to have culminated in subjective consciousness. Why this should have happened is, to me, the most profound mystery facing modern biology. There is no reason to suppose that electronic computers are conscious when they simulate, although we have to admit that in the future they may become so. Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself.
It also goes on to explain the delayed choice quantum eraser experiment but I don't think that's quite convincing.
No. What is the basis for these claims?
They're both wrong.
It's not true that CO2 increase is necessary for global warming. If the sun got a lot hotter, global temperatures would rise. If non-CO2 GHGs increased, global temperatures would rise. If the overall albedo of the planet changes, global temperatures can rise. There are literally thousands of things that could cause the temperature to rise.
It's also not true that a CO2 increase, holding everything else constant, would lead to long-term or even medium-term warming. We have no idea what the ecosystem will do for any given change in CO2 levels, since there are countless species that are net producers and net consumers of atmospheric CO2, all of them with exponential growth and feedback loops.
Even so, despite both of those claims being wrong, a CO2 increase may still cause global warming.
Furthermore, the things you claim are proven are not proven; they are true by definition. All molecules carry information, and the fact that DNA carries genetic information is a direct consequence of the fact that it is DNA. V=IR by definition. F=ma by definition. There's no such thing as a "force" or "mass" or "acceleration" entity per se; these are metrics that are by definition equal in a given physical framework.
There is no way to 'technically' prove anything in science, and the reasons are simple:
(1) The past is gone - you can't access it
(2) You can't see the future
(3) Your knowledge of the present is extremely limited and inaccurate
These are the limitations of the real world, and science does its best to provide utility within that. It only focuses on making future predictions using the observed past as evidence, because you only can do that. You can't check your model in the present, because you can't instantaneously observe anywhere you aren't already observing. Checking your model on the past relies on what you think happened, i.e. what allegedly happened, but there is absolutely no way to truly know.
You can't even really prove anything 'novel' in mathematics, which is the only place where you can actually prove anything at all. Even there, all proofs are effectively just reframings of something that was already implied axiomatically, in a way that allows our limited human minds to see the relevant and useful patterns that aren't immediately obvious to us.
My point is, acting as though you can truly prove anything in science,
> what's really absurd is to think that no scientific questions can be settled
is not only wrong, but in my opinion is a distraction from what science is actually for. It's not about settling questions. Science is never settled, and that's part of what's beautiful about it. It's about reducing our own ignorance and proving our past selves wrong, discovering patterns and models that equip us with the knowledge to build a better world for ourselves and the rest of humanity.
Why lie about being a great soccer player when you're already great at basketball? Let's focus on the beauty of science as a great journey of growth and exploration that accelerates the progress of humanity, instead of trying to make it do something that isn't possible in the real world.
That said, it is indeed annoying when people who don't understand science interpret "open for disproof" to mean "it's easy to disprove." Quantum mechanics and the second law of thermodynamics could in principle be disproven, but the evidentiary burden would be extremely high. (Insert obligatory Carl Sagan quote here.)
Tangential, and not an answer to your question, but if you're like me, you will be fascinated to learn that there is a drug (MPPP, a synthetic opioid) that, if cooked incorrectly, yields "MPTP"[1], which will give you Parkinson's. As in, forever. You take this drug (at any age) and then you have Parkinson's for the rest of your life.
If you understand Turing Machines, you probably also understand other automata. So you probably understand nondeterministic automata [1].
A quantum computer is like a very restricted nondeterministic automaton, except that the "do several things at once" is implemented in physics. That means that just like an NFA can be exponentially faster than a DFA, a QC can be exponentially faster than a normal computer. But the restrictions on QCs make that a lot harder to achieve, and so far it only works for some algorithms.
As to why quantum physics allows some kind of nondeterminism: if you look at particles as waves, instead of a single location you get a probability function that tells you "where the particle is". So a particle can be "in several places at once". In the same way, a qubit can have "several states at once".
> What I don't understand is how a programmer is supposed to determine the correct solution when their computer is out in some crazy multiverse.
Because one way to explain quantum physics is to say that the waveform can "collapse" [2] and produce a single result, at least as far as the observers are concerned. There are other interpretations of this effect, and this effect is what makes quantum physics counterintuitive and hard to understand.
[1] https://en.wikipedia.org/wiki/Nondeterministic_finite_automa...
Maxwell picked up this idea and ran with it, developing a mathematical theory for the dynamics of the electromagnetic field. Instead of one object somehow magically interacting at a distance, interactions between objects resulted from changes in the electromagnetic field that propagated through space.
The final paragraphs of Maxwell's "Treatise on Electricity and Magnetism" are somewhat relevant.
This is 30-40 years after Faraday first wrote about lines of force, and there still wasn't really consensus about how to explain electromagnetic phenomena.
[emphasis added by me]
> Chapter XXII: Theories of Action at a Distance
> ...
> There appears to be in the minds of these eminent men, some prejudice, or a priori objection, against the hypothesis of a medium in which the phenomena of radiation of light and heat, and the electric actions at a distance take place. It is true that at one time those who speculated as to the causes of physical phenomena, were in the habit of accounting for each kind of action at a distance by means of a special aethereal fluid, whose function and property it was to produce these actions. They filled all space three and four times over with aethers of different kinds, the properties of which were invented merely to 'save appearances,' so that more rational enquirers were willing rather to accept not only Newton's definite law of attraction at a distance, but even the dogma of Cotes, that action at a distance is one of the primary properties of matter, and that no explanation can be more intelligible than this fact. Hence the undulatory theory of light has met with much opposition, directed not against its failure to explain the phenomena, but against its assumption of the existence of a medium in which light is propagated.
> We have seen that the mathematical expressions for electrodynamic action led, in the mind of Gauss, to the conviction that a theory of the propagation of electric action would be found to be the very key-stone of electrodynamics. Now we are unable to conceive of propagation in time, except either as the flight of a material substance through space, or as the propagation of a condition of motion or stress in a medium already existing in space.
> Hence all these theories lead to the conception of a medium in which the propagation takes place, and if we admit this medium as a hypothesis, I think it ought to occupy a prominent place in our investigations, and that we ought to endeavour to construct a mental representation of all the details of its actions, and this has been my constant aim in this treatise.
Edit: I should add that even if you are an expert in cryptanalysis, you still shouldn’t just roll your own crypto. It’s the analysis of the entire community, not the credentials of the author, that makes modern cryptography so strong.
I've started but haven't finished the physics lectures by Carl Bender on mathematical physics, where he features perturbation theory prominently [2].
If someone could chime in on this, I would be appreciative. And if someone has better resources for learning about perturbation theory, I would appreciate that too.
Also, are these charges responsible for some weather effects, such as the jet stream? In a tornado, is the negative charge on the dry side of the dry line interacting with the moist air on the wet side really just a local, intense acceleration of the dry air trying to "get to" the oppositely charged moist air?
Is the rotation of low- and high-pressure systems basically due to the same condition? Is lightning also basically just a flood of those charges?
More generally, security is like any other field. You have to evaluate arguments based on the logic and evidence given. The main difference is that with crypto, it is much easier to shoot yourself in the foot and have catastrophic failure, since you have to be perfect and the attackers just have to be right once to totally own you. Thus the industry has standardized on a few solutions that have been checked really really well.
More generally, if you are interested, I would say read the actual papers. The papers on bcrypt, argon2, etc. explain what problems they are trying to solve, usually by contrasting with previous solutions that have failed in some fashion. That doesn't mean reading the paper will explain everything, or make you an expert, or qualify you to roll your own crypto. Nor should you believe that something is a good idea just because a paper author says so. It will, however, explain why slow hash functions like bcrypt/argon2/scrypt were created and why they are better choices than previous solutions in the domain, like md5.
It sounds to me like the grandparent is 100% correct.
> It is simply wrong to think that scientific questions can never be definitively settled.
They made no such claim, speaking of intuition.
> Clearly there are some hypotheses that have been difficult (and may be impossible) to prove, for example, Darwin's idea that natural selection is the basis of evolution
I've seen very little evidence in online discussions (Reddit for example) among armchair scientists that the theory of evolution is anything short of cold, hard, scientific fact.
> In the case of climate change, the problem again is that the evidence is correlative and not causal. Demonstrating a causal link between human behavior or CO2 levels and climate change (the gold standard for "proof") is technically challenging, so we are forced to rely on correlations, which is the next best thing. But, you are right, it is not "proof".
Is this (it is not proven) the message they're sending when they say things like "The science is in", just as one example?
> Establishing causality can be difficult but not impossible - the standard is "necessary and sufficient". You must show necessity: CO2 increase (for example) is necessary for global warming; if CO2 remains constant, no matter what else happens to the system global temperatures remain constant. And you must also demonstrate sufficiency: temperatures will increase if you increase CO2 while holding everything else constant. Those are experiments that can't be done. As a result, we are forced to rely on correlation - the historical correlation between CO2 and temperature change is compelling evidence that CO2 increases cause global warming, but it is not proof. It then becomes a statistical argument, giving room for some to argue the question remains "unsettled".
This is not the message I've heard, at all, from any mainstream news source, and it's certainly not the understanding of 95% of "right minded" people I've ever encountered.
> While some of the issues you bring are remain unproven, what's really absurd is to think that no scientific questions can be settled.
What's even more absurd, to me, is how you managed to find a way to interpret his text in that manner. And you're obviously (based on what you've written here), a genuinely intelligent person. Now, imagine how the average person consumes and processes the endless stream of almost pure propaganda, from both "sides" on this topic and many others.
The unnecessarily dishonest manner in which the government and media have chosen to represent (frame) reality to the general public has left an absolutely massive number of easily exploitable attack vectors for "conspiracy theorists" to exploit. And if you are of the opinion that all conspiracy theorists are idiots so you have nothing to worry about, consider the possibility that this too has been similarly misrepresented to you.
If a society chooses to largely abandon things like logic and epistemology in the education of its citizens, thinking propaganda is a suitable replacement, don't be surprised when things don't work out in your favor. If we can barely manage such things here, why should we expect Joe and Jane six-pack to somehow pull it off?
So it can be written as a vector? No?
I assume there's probably many more complex computational problems outside of my domain that QC can help with. Does anybody know of any?
Placebo. There are biological bases for it (I don't believe in a soul). Find these bases, study them, make a model of them. Then use proxy variables to measure it, instead of trying to eliminate it statistically. Predict it in studies to avoid the need for placebo groups (and possibly for double-blind methodology). Also, once it is completely measurable and its mechanisms are understood, if (very hypothetically) it has a really substantial effect, just use it to help treat patients.
"Science", as it is represented in the media, and in turn repeated and enforced (not unlike religion, interestingly) on social media and in social circles.
As opposed, of course, to actual science.
"Perception is reality." - Lee Atwater, Republican political strategist.
https://www.cbs46.com/news/perception-is-reality/article_835...
https://en.wikipedia.org/wiki/Lee_Atwater
"Sauron, enemy of the free peoples of Middle-Earth, was defeated. The Ring passed to Isildur, who had this one chance to destroy evil forever, but the hearts of men are easily corrupted. And the ring of power has a will of its own. It betrayed Isildur, to his death."
"And some things that should not have been forgotten were lost. History became legend. Legend became myth. And for two and a half thousand years, the ring passed out of all knowledge."
https://www.edgestudio.com/node/86110
Threads like this one, and many others like it, demonstrate well the precarious situation we are in at this level. Imagine the state of affairs around the average dinner table. Although, it's not too infrequent to hear the common man admit (preceded by the realization) that they don't know something. As one moves up the modern-day general intelligence curve, this capability seems to diminish. The exact cause is a bit of a mystery (24-hour cable propaganda and the complex dynamics of social media are my best guesses); hopefully someone has noticed it and is doing some research, although I've yet to hear it mentioned anywhere. Rather, it seems we are all content to attribute any misunderstanding in modern society to Fox News, Russia, QAnon, or the alt-right. I'm a bit concerned that this approach may not be the wisest, but I imagine we will find out who's right soon enough.
- What does it mean the universe is expanding
- Bayesian statistics
- How information is stored in magnetic tapes
It explains things in terms a computer scientist can understand. As in: it sets out a computational model and explores it, regardless of whether we can physically realize that machine.
Hope this helps!
What's the playing field look like for proto-life? How "smart" are the simplest molecular interactions? What does almost-replication look like? Could we use a computational model for this?
Not sure how much of this is known, but I'd love to hear an expert paint a picture of their mental model of the subject.
Well, that night went bad. Really, really, life-alteringly bad. For the first time, I had a bad trip. And not like, some mildly uncomfortable thoughts. I got a bad feeling in my stomach from the moment I dosed, and I knew something was going to be different this time.
As I started to come up, the bad feeling and a dark presence grew, and I pulled out my phone. I started a timer, and I watched as the time slowed to a point where it completely stopped. I started looping, I would get up off the couch, walk a few feet, and be teleported back. Over and over.
I realized that I had gotten so high, that time was no longer moving. And if time was not moving, I could maybe never come down. I was stuck here forever. And then the hellish nightmare started.
I felt like I was losing control of myself, like something else was trying to take over, and whichever won the battle, that is the consciousness that would exist. The more I fought, the more painful things got. Pain the likes of which no one can physically imagine.
Went upstairs and laid down in my bed, began going out of body. I started dying over and over in unimaginable ways in my head, trapped in loops. Pain beyond anything I've ever felt in reality, there was no limit. It was tied to my breath, I realized that it had been so long since I had breathed, I kept forgetting who I was and what was going on, and then I would catch a slight glimpse and remember and fight so hard to take another breath. And there was so much pain in fighting to "survive" and hold on to who I was.
Eventually, the pain/struggle became too much, and I "gave in" and said "okay, I give up, you win, I can't take it anymore, I'd rather die." And that's when it stopped. There appeared this giant shape of light/energy that was every color at once, and colors we don't have words for, and it "touched me" (it could have been me moving towards it, or it towards me; there wasn't really a concept of this).
When it "touched" me, what it "showed" me was something I later learned is called an "Ouroboros", the snake eating it's tail. It showed me what "infinity" really meant, and that was too much to handle and shattered my psyche.
In that moment my body/mind/soul felt like it was obliterated to pieces by some energy beam in the most excruciating, searing pain, and I woke up in my bed having just pissed myself.
It took a long time to piece myself back together after that one.
---
There are a lot of details I've omitted for brevity's sake, but this captures the gist of it.
The majority of my trauma has to do with anything related to loops: think Nietzsche's Eternal Return, general time-loops, fear of time-stopping, etc.
When I have panic attacks I have to stop myself from starting a stopwatch on my phone to make sure time is still moving because it'll cause a feedback loop and ratchet-up the panic, causing the time-dilation to increase in a vicious cycle.
Now I try to always use limit orders. I put in a sell order for Boeing at $150 with the "good till cancelled" option. One morning I woke up to see it had been filled. Woohoo! But by then the price had dropped down to $140. So I cashed in on the spike.
The market is crazy. I still don't understand it. P/E ratios for some companies are through the roof (100+); why are people still investing in them like crazy? We don't have a SARS-CoV-2 vaccine, millions of people don't have jobs, so why has the market recovered half its losses already? Shopify, AMZN, Zoom. WTF! Their charts seem hyped. Or maybe I'm just plain wrong and don't understand the fundamentals.
1. Search X and sort by citation count. High quality review papers get cited A LOT, typically in introduction sections of primary research papers. Alternatively, google "[X] review" or "best review paper on X".
2. Look for review journals. Many fields will have journals who only publish reviews. Nature has several such publications for example.
3. Look for the top journals in the space (start by sorting by impact factor) and see if they have review sections. If they do, try to search those sections. Most journals will reach out to top labs in a space and request that they write a review on a subject if the journal editors feel one is needed.
4. Ask someone in the field. Any researcher should be able to immediately point you to canonical reviews in their space.
My understanding is that I buy a contract for 100 shares at a future date of my choice.
So let's say stock XYZ is currently trading at $20, and I buy a futures contract for $100 for Jan 1st 2021.
I don't have enough money sitting around to buy those 100 shares. So let's say XYZ is trading at $200 by Jan 1st, 2021. That means I have a contract where I can buy 100 of them at $100 and immediately sell them for $200, so in theory it's a $10k profit... but because I don't have enough money to actually do that, do I just sell my future for something close to 100 * $200 (because someone with enough money will buy it and do the actual trade)?
What happens if it's already well over $100 long before Jan 1st? Can I just set a price and sell it whenever I want, like I can with a regular share?
For recent times, you can also compare the C14 dates with other methods, like counting tree rings or the date of a total eclipse, and check the calibration.
2) You are almost right. The tides are not produced by the gravity of the Moon per se, but by the difference between the Moon's gravity on the water nearby and its average pull on the Earth as a whole.
You forgot to include the centrifugal force [when you are in the non-inertial frame that rotates with the Earth-Moon system: https://xkcd.com/123/ ]. The centrifugal force is bigger in the water that is farther from the Moon, and again the difference creates the other tide.
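A back-of-the-envelope check of that difference in Python (rounded constants):

    # Tides come from the *difference* in the Moon's pull across the Earth.
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_moon = 7.35e22   # kg
    d = 3.84e8         # Earth-Moon distance, m
    R = 6.37e6         # Earth radius, m

    g_center = G * M_moon / d**2        # pull at the Earth's center
    g_near = G * M_moon / (d - R)**2    # pull on the ocean facing the Moon

    print(g_center)                     # ~3.3e-5 m/s^2
    print(g_near - g_center)            # ~1.1e-6 m/s^2, the tide-raising part
    print(2 * G * M_moon * R / d**3)    # ~1.1e-6, the standard 2GMR/d^3 estimate

The Moon's overall pull is only about a 300,000th of Earth's own gravity, and the tide-raising difference is about a thirtieth of that again; it's tiny, but it acts coherently on whole oceans.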
3) The sky is blue because the individual molecules in the air scatter blue/violet light more than the other colors. There are many ways to produce colors; in this case the light is scattered by the whole molecule.
A different method to produce blue is using a CD to produce a rainbow and then using a slit to block the other colors. Some birds and butterflies use a somewhat similar method. [Not very similar, but closer to the CD method than to the air method.]
The blue in dye for cloth uses another method: you make a long chain of conjugated chemical bonds, C-C=C-C=C-C=C-C, and pick the length and atoms so the electrons absorb the colors you don't like and turn that energy into heat.
I'm probably forgetting a few more methods; there are many of them, so it's interesting to understand which of them makes the sky blue.
*) These are good questions. My explanations are not 100% complete (and probably not 100% accurate) but I hope you can fix the holes.
One is gyroscopic forces. Ever picked up a spinning hard drive? Notice that it feels strangely hard to turn in some directions? Same idea.
The other is the feedback loop consisting of the bicycle and its rider.
If the bike is stationary, it's hard to keep it upright because you have no assistance from gyroscopic forces. At low speeds, you have some assistance but not enough. At higher speeds, the bike wants to maintain its current orientation, and it's easy to feed in the slight corrective forces needed to keep it that way. Hop off the bike and it will keep going until something causes it to veer off course.
You can throw a ton of math at it, as in the paper mentioned elsewhere in the thread, but at the end of the day, gyroscopic forces and negative feedback are all that's necessary. The Schwab paper appears to show that the gyroscopic forces aren't necessary, but no bicycle in the real world is ever going to work that way except in rare corner cases, e.g., if you're one of those riders who can stay upright at a standstill.
• Qualia. What is this subjective experience that I know as consciousness? I've gone through Wiki, SEP, and a fair number of books on philosophy and a few on neuroscience, but I still don't understand what it is that I experience as the color "red" when in reality it's just a bunch of electromagnetic fields (photons). Why can't I get the same experience — i.e., color — when I look at UV or IR photons? These too are the very same electromagnetic fields as the red, blue, and green I see all the time.
• Photographic composition. I'm a designer. I know the rules. I use them. But only empirically. I just do not understand them at a neuroscientific level. Why does rule-of-thirds feel pleasing? Is the golden ratio bullshit? My gut says yes but I'm unable to come up with a watertight rebuttal. Why do anamorphic ultra-widescreen shots feel so dramatic/cinematic? Yet to see an online exposition on the fundamental reasons underlying the experience. Any questions to artists are deflected with the standard "It's art, not science" reply.
• Wave-Particle duality. "It's a probability wave that determines when a particle will pop into existence out of nothingness." okay, where exactly does this particle come from? If enough energy accumulates in a region of empty space, a particle pops into existence? What is this "energy"? What is it made of? What even is an electron, really? I've followed quite a few rabbit holes and come out none the wiser for it.
• Convolution. It's disappointing how little I understand it given how wide its applications are. Convolution of two gaussians is a gaussian? Convolution in time domain is multiplication in frequency domain and vice-versa? How do these come out of the definition which is "convolution is sliding a flipped kernel over a signal"?
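For the convolution theorem at least, a quick numerical check can make the identity feel less arbitrary. A minimal sketch (NumPy; circular convolution, for which the DFT identity is exact):

    import numpy as np

    # Check: convolution in the time domain == multiplication in the
    # frequency domain (exact for circular convolution with the DFT).
    rng = np.random.default_rng(0)
    x = rng.normal(size=64)
    k = rng.normal(size=64)

    # Circular convolution straight from the definition:
    # (x * k)[n] = sum_m x[m] k[(n - m) mod N]
    direct = np.array([sum(x[m] * k[(n - m) % 64] for m in range(64))
                       for n in range(64)])

    # The same thing via the FFT route.
    via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

    print(np.allclose(direct, via_fft))  # True

It doesn't explain the "why", but seeing the two routes agree on random inputs is a decent starting point for connecting the sliding-kernel definition to the frequency-domain picture.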
Both of you may be correct. I think the person you responded to may not have been precise in their framing.
I suspect that you had N95 masks in mind when you wrote masks, which doesn’t negate the point of the person you responded to, if they had surgical masks in mind when they wrote masks. Surgical masks are far more common than N95 masks since they are cheaper and do not provide protection against viral particles for the wearer.
If you want to feel the truth of evolution in your bones, you really do need to be familiar with biology on both the molecular and cellular level. You can get a feel for it with less, but it won't ever be obvious how and why it works unless you know it at that level. I don't mean to sound exclusionary - it really just does require a ton of background knowledge.
The idea is that you structure the QC system such that the computation is done using entangled states, but when it comes to measuring the qubits (to get the result of the computation) the state is such that you'll get meaningful results. This means the quantum state at the end of the calculation would ideally be along whatever axes you're measuring, so you get the same answer 100% of the time.
Trust (knowing the chemist directly, indirectly, ...) in specific individuals > a largely unknown (but known to be imperfect) system, for many people anyways. Obviously this isn't practical for the not well connected, but it's all we got for now.
But as for your question, I've seen little to suggest it's anything more than war on drugs propaganda and hearsay.
More precisely, given f: 2^n -> {0,1} which is guaranteed to hit 1 exactly once, Grover finds the one input which hits 1, and it does so using about 2^{n/2} queries of f; but the constants happen to line up so that when n=2, exactly one query is required.
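To make the n=2 case concrete, here's a minimal statevector sketch in plain NumPy (the marked input is an arbitrary choice): one oracle query plus one diffusion step puts all the probability on the marked item.

    import numpy as np

    n = 2
    N = 2 ** n
    marked = 3  # arbitrary: the single input where f = 1

    # Start in the uniform superposition over all 2^n inputs.
    state = np.full(N, 1 / np.sqrt(N))

    # One Grover iteration = one oracle query + one diffusion step.
    state[marked] *= -1               # oracle: flip the marked amplitude's sign
    state = 2 * state.mean() - state  # diffusion: reflect amplitudes about their mean

    print(np.abs(state) ** 2)         # [0. 0. 0. 1.] -- found with certainty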
Some fundamental concepts that might fly over someone without a biology degree include the absolutely fundamental requirement of protein binding for any biological process (akin to a transistor's function in a computing device).
I'm thinking that step one of communicating the entire immune system's complexity is probably omission of anything that's not absolutely required for comprehension of the basic concept. In that regard I'm not convinced there's any need for bringing up the innate system first (there's a reason we discovered it quite recently, perhaps?). Other details can also be similarly "omitted" for simplicity. What are your thoughts?
I've been watching a lot of documentaries lately, and I can't figure out how a star that _radiates_ light collapses such that suddenly light can't escape. Doesn't that mean the black hole has more mass/gravity than the star that created it?
Since the outbreak of COVID-19, demand for the kind of services offered by those 3 Internet businesses has in fact skyrocketed. Increasing demand implies those businesses still have room to grow revenue. Shopify [1] for instance is now seeing huge Black Friday-like traffic during the shelter-in-place, and a lot of these small businesses are first-timers on their platform who will likely stick around after the pandemic.
1: https://mobile.twitter.com/jmwind/status/1250816681024331777
I understand flight from a mathematical point of view. I've actually read a few books on the subject, and I could explain how flight works to someone. However, I'm still fishing for an explanation that "feels" more satisfying. Per the question, I still want it explained better.
EDIT: There's already a thread about flight. I asked the same question there, but phrased a bit differently: https://news.ycombinator.com/item?id=22993460
Spinors are difficult to describe in an HN post since they require a good amount of linear algebra, but my favorite explanation is probably here: http://www.weylmann.com/spinor.pdf
Sure, some ("plenty", in absolute numbers) will tell you this, but I don't recall being in many forums where that attitude doesn't get significant pushback (as opposed to the anti-drug community). The modern "pro drug" community has a fairly significant culture of safety within it, unlike back in the sixties.
> The truth is almost impossible to find.
There is plentiful anecdotal evidence online. Any clinical evidence, if they ever get around to producing it in any significant volume, will be utterly minuscule compared to the massive volume of trip reports and Q&A available online (and I highly doubt more trustworthy, considering what you're working with and the size of the tests that will be done), much of it from people who know very well what they're talking about, not unlike enthusiasts in any domain.
> Now from what I gathered about LSD (and psychedelics in general): these are very random.
Depends on one's definition of random.
> If you take a reasonable dose, you are most likely going to have a nice, fun trip and nothing more.
Effects vary by dose of course, but I've seen little anecdotal evidence suggesting high doses have a different outcome, and plenty that suggests the opposite.
> But it can also fuck you up for years, or maybe bring significant improvement in your life.
See: https://rationalwiki.org/wiki/Balance_fallacy
> The big problem is that there is no way to tell how it will go for you. There are ways to improve your chances, but it will always be random.
I believe this to be true, but don't forget the fallacy noted above.
That said, these things are not toys - extreme caution is warranted.
Note that the 4th edition is (sort of) freely available at the NIH website. The way to navigate through that book is bizarre though, as the only way to access its content is by searching.
I think this was a pretty neat explanation:
https://sites.google.com/site/butwhymath/m/convolution
The problem with convolutions, like many things in science, is that how you learn them depends on what you're studying. Same theory, but with N different explanations, which can cause confusion if some of them are very different and tough to connect (e.g. learning convolutions in a physics class vs. learning them in a statistics class).
Put another way, weight pulls it down, thrust moves it forward, the resultant lift keeps it up, and drag limits its speed. Only rocketeers and fighter/aerobatic pilots need to really worry about the thrust to weight ratio as a constraining factor, because the vertical flight regime matters to them. From your average bugsmasher to your commercial airliner, it's not a factor (to the disappointment of pilots everywhere).
Consider that a Cessna 172 has a glide ratio of about 9:1, so it can go 9 units forward for every 1 unit of altitude it gives up. If that's hard to intuitively grasp, consider that it's traveling through a fluid. Surfing, even. The interaction with that fluid is why it works.
That any more satisfying?
What they don't tell you is that once inside the sphere the force decreases linearly, because the planetary mass ahead of you is partially balanced by the mass behind you.
With this you can see that the gravitational pull of a planet/star/spheroid is largest at its surface. So, if something happens to make a star shrink by some factor, the gravitational pull at its surface increases by the square of that factor, even if the mass of the star remains the same.
Actually, I believe stars eject a lot of mass when they become black holes, and this is just a Newtonian argument for an intrinsically relativistic phenomenon, but I hope you get the gist of it.
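To see the scaling, a quick Newtonian sketch (Sun-like numbers, purely illustrative):

    # Newtonian surface gravity: g = G*M / r^2. Shrink the radius by a
    # factor k at constant mass and the surface gravity grows by k^2.
    G = 6.674e-11  # m^3 kg^-1 s^-2

    def surface_gravity(mass_kg, radius_m):
        return G * mass_kg / radius_m ** 2

    M_sun, R_sun = 1.989e30, 6.957e8
    print(surface_gravity(M_sun, R_sun))        # ~274 m/s^2
    print(surface_gravity(M_sun, R_sun / 100))  # 100^2 = 10,000x stronger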
I want to go up. I want to use the thrust I have available to achieve that. Would not the most efficient use of the thrust available be the direct and naive approach, of pointing the engine straight up/down? Nope.
Instead, we point the engine horizontally; literally orthogonal to our desired goal. Then we use these "wing" things - they're not complicated, they're just rigid bodies with a shape, which honestly isn't even that unusual of a shape. Now we're not only able to go up (we finally achieve our goal), but we get to go fast in some horizontal direction as well.
I haven't found an explanation for this that feels satisfying to me.
I found an explanation of Shor's algorithm by my colleagues quite helpful. In my experience, math seems to be more useful here than computer science.
I built this at school, using the same principle: https://no.wikipedia.org/wiki/Sivilingeni%C3%B8r#/media/Fil:...
https://www.youtube.com/watch?v=B_zD3NxSsD8&t=3m17s
The artistic director has a TED talk where he talks about how beautiful biological processes are, and it's like: no, man, you made it look that way.
If you want a really fantastic video that captures just how messy and random it is, I recommend the WEHI videos, like the one on apoptosis, where the proteins look way more derpy than the secret life of the cell: https://www.youtube.com/watch?v=DR80Huxp4y8 There's a couple of places where they have a hexameric protein where things magically snap into place, but I give them a pass because the kinetics on that are atrociously slow. Let's just say for the sake of a short video the cameraman happened to be at the right place at the right time.
So if I have a futures contract for 1000 barrels of oil at $10 apiece, when that expires, no matter what the price of oil, $10,000 will be taken out of my account and someone will contact me to come pick up those barrels (there's some nuance here, but let's ignore that) (funny story about this at the bottom). If I have 10 options contracts for 1000 shares of stock XYZ at $10 a share and XYZ is at $9 a share, I can just let the contracts expire worthless.
From here on out, I'm going to talk about options, because it's closer to what you are asking.
> I don't have enough money sitting around to buy those 100 shares. So let's say XYZ is trading at $200 by Jan 1st 2021: that means I have a contract where I can buy 100 of them at $100 and immediately sell them for $200, so in theory it's a $10k profit... but because I don't have enough money to actually do that, do I just sell my future for something close to 100 * $200 (because someone with enough money will buy it and do the actual trade)?
Contracts are exercised after hours. If you don't have the money you will usually collect the shares and be put in a margin call (you owe your broker money) and then the shares will be sold first thing the next day to cover the margin call. Some brokers will try and sell your contract for you before the end of the day the contracts expire if you don't have the money. So you may only get $9950 or something for your contract and it will be sold to someone else a few hours before the market closes.
> What happens if it's already well over $100 long before Jan 1? Can I just set a price and sell it whenever I want, like I can with a regular share?
Yes, the contract itself costs money and can be bought or sold.
This isn't a perfect analogy, but think of the contract like a coupon. If I have a coupon to buy a TV for $100 and the cheapest price anywhere for that TV is $200, that coupon has an intrinsic value of $100.
The analogy breaks down because, in the real world, you probably couldn't get $100 for that coupon; you'd probably get slightly less than the price difference. With options, on the other hand, you usually pay slightly more than the price difference, because of the volatility of the underlying. Basically, someone might pay you $105 for that coupon because they think the TV price will go up and they can sell it to someone next week for $110.
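The coupon arithmetic in code (numbers from the example above; a sketch that ignores time value, fees, and assignment mechanics):

    # Intrinsic value of a call option: what exercising it is worth right now.
    def call_intrinsic_value(spot, strike, shares=100):
        # A call can be left to expire worthless, so the value is never negative.
        return max(spot - strike, 0) * shares

    print(call_intrinsic_value(200, 100))  # 10000 -- the $10k from the example
    print(call_intrinsic_value(90, 100))   # 0 -- let it expire worthless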
========
* As a side note, when people were saying oil prices went negative last week, it's because the May futures contracts were about to expire. Basically, it was the second to last day people could trade the contracts before they had to take delivery of the oil and, since so few people are using oil right now, many of the places where people would normally store oil are full. Since you can't just dump oil down the drain, people holding oil contracts were willing to pay other people to take over the contracts so they didn't have to take delivery of the oil.
At the very lowest level, it could be gut feelings from a potential buyer. They see electric cars more frequently, combustion engines going out of fashion, and simply wonder "Hey, why does that [electric car company] trade so low, when they'll probably be market leaders in 5/10/15 years?", or conversely, "Hey, why does that [petroleum] company trade so high, oil prices are shot, and the industry will lose relevance in 10/20/30 years"
On a higher level, some potential buyer will look at the company's financial statements and figure out whether the share price is too high or low for how the company is performing, from a financial standpoint. This is called "fundamental analysis", and you can easily find step-by-step analysis reports of this kind on various companies.
But the market is one big hodgepodge of beliefs, with probably thousands of different rationales behind their prices, and motives for sales / purchases.
https://www.youtube.com/watch?v=DR80Huxp4y8
Here's the artistic director for The Inner Life of the Cell (the worse one) going on and on about how "beautiful" the science of biology is:
https://www.ted.com/talks/david_bolinsky_visualizing_the_won...
A "stall" happens when the wing is no longer directing air downwards (and thus not providing lift), and is instead just chopping up in the air into turbulent chaos without any consistent direction.
Tidal forces occur much more due to the difference in the direction of gravity than due to the difference in magnitude.
I've been having conversations about viruses recently and in those conversations / thought experiments I keep coming back to a point someone made to me.
Someone this person knows, with extensive medical expertise, explained that the "membrane" of the cell contains a ridiculously large number of unique types of proteins.
Understanding, in vague terms, how viruses penetrate cells, the question I pose is: is this true because each of those proteins has a unique and distinct function in the cell membrane? Or is it more a matter of scale and utility? In other words, does the observation simply indicate that our bodies are not as perfect as we'd like to think, and that the body's process for creating and repairing cells is more utilitarian, with the "rules" of cell construction flexible enough that these molecules are constructed in various ways from whatever materials are available to our cells at the time?
If this is the case, it starts to make a lot of sense to me at a molecular level why certain people tend to be more susceptible to contracting certain diseases. Could a lot of it really just come down to diet, along with probably a hint (or more) of DNA's interaction with those proteins we're providing to our bodies? And to what extent do each of those, the DNA and the proteins, play a role?
For example, gyroscopic-forces-as-stabiliser don't need the Schwab paper's "ton of math" to be undermined. A simple counter-rotating wheel was used empirically at Cambridge to show as much, alongside notes that gyroscopic forces are relevant to the dynamics of a loaded bicycle, but misconstrued; far from assisting to hold it upright whilst ridden, they induce instability at the beginning of a change in direction, and more so at speed, a phenomenon (counter-steering) familiar to cyclists and relied upon by motorcyclists.
Then there's a simpler observation that can be made: people have to learn to ride a bicycle. The fact it stays upright when rolling unloaded, but not when loaded, is indicative of how small the gyroscopic effect is, not how significant it is. Ergo, that argument would suggest, it is tiny shifts in body position that contribute all of the stability.
And then others, proposing further explanations, etc etc ad nauseam.
I have come around to the view that in fact they don't stay upright, and they are almost always falling over, but in a many-branched configuration of the universe our observer effect sends us preferentially down the vanishingly unlikely path where they didn't, and there are uncountably many alternatives of Me that have nothing but knee scars to show for it.
How does this work if a company doesn't pay out dividends? There's no investment to return unless someone buys from you at the same or higher price... right?
I understand the physical properties of the coin make each flip an independent event, but if I were to run the experiment multiple times, the number of times it would come up heads after 5 heads would not be an even probability; it would be unlikely, since 6 heads in a row is a rare event.
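A quick simulation reconciles the two intuitions: runs of 5 heads are rare, but among the runs where they do happen, the 6th flip is still 50/50.

    import random

    random.seed(0)
    sixth_flips = []
    for _ in range(1_000_000):
        flips = [random.random() < 0.5 for _ in range(6)]
        if all(flips[:5]):             # keep only runs starting with 5 heads
            sixth_flips.append(flips[5])

    print(len(sixth_flips))                      # ~31,250 -- rare, as expected
    print(sum(sixth_flips) / len(sixth_flips))   # ~0.5 -- but the 6th flip is fair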
The universe is actually made of quantized fields. Both particles and waves are imprecise models/approximations. There's no such thing as a particle, instead there are just excitations of this field which we cannot measure with complete accuracy.
Note that matrix multiplication takes O(n^2) time with a quantum computer, but O(n^2.807) time on a classical computer.
http://www.righto.com/2011/07/cells-are-very-fast-and-crowde...
But in a nutshell, the animations are heavily idealized, showing the process when it succeeds, slowing it way, way down, and totally ignoring 90% of the other nearby material so you can see what's going on. Then you remember that you have a bajillion cells within you, all containing this incredibly complex machinery and... it's really kind of humbling just how little we actually know about any of it. Not to discredit the biologists and scientists for whom this is their life's work; we've made incredible amounts of progress over the last century. It's just... we're peeking at molecular machinery that is so very small, and moves so quickly, that it's nigh impossible to observe in realtime.
I can't say this will necessarily assuage your curiosity about consciousness, but I mostly stopped being overly curious about this once I realized that it's likely only a manifestation of the aggregation of all of the individual sensory experiences our bodies have.
In other words, think of planet-scale phenomena such as how humans more or less all feel "connected" and non-hostile because civilization (in the most advanced countries) has reached a point where hostility is no longer essential for survival. That "experience", for each of us, is ours alone, but it seems so ubiquitous that we can't take credit for the experience or insight as individuals. It leads me to believe a large part of our conscious experience of the world is shared and independent of the brain's capacity. More precisely, humans are (universally) experiencing phenomena that are independent of our brain's capacity to process and understand them.
My good friend took "something" once (hard to tell what the dealer is selling you) and ended up in a mental institution, and is now in fact officially mentally disabled and on drugs for life. The drugs keep him stable enough that he's able to work, although he's still just a shadow of his former self.
A tRNA molecule at body temperature travels at roughly 10 m/s. Assuming a point-sized tRNA and a stationary ribosome of radius 125 * 10^-10 m, the ray cast by the moving tRNA will collide with the ribosome when their centers are within 125 * 10^-10 m of each other. The path of the tRNA sweeps a "collidable" circle of radius 125 * 10^-10 m, for a cross-sectional area of 5 * 10^-16 m^2. Multiplied by the tRNA velocity, the tRNA sweeps a volume of 5 * 10^-15 m^3 per second. Constrained inside an ordinary animal cell of volume 10^-15 m^3, the tRNA would have swept the entire volume of the cell five times over in a single second. Obviously the collision path would have significant self-overlap, but at this rate it's quite likely for the two to collide at least once in any given second.
Now, consider that this analysis was only for a single ribosome/tRNA pair. A single ribosome will experience this collision rate multiplied by the total number of tRNA in the cell, on the order of thousands to millions. If a ribosome is bombarded by tens of thousands of tRNA in a single second, it's very likely one of those tRNA will (1) be charged with an amino acid, (2) be the correct tRNA for the current 3-nucleotide sequence, and (3) collide specifically with the binding site on the ribosome in the correct orientation. In actuality, a ribosome synthesizes a protein at a rate of ~10 amino acid residues per second.
Any given molecule in the cell will experience millions to billions of collisions per second. The fact that molecules move so fast relative to their size is what allows these reactions to happen on reasonable timescales.
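Spelled out, the estimate above is:

    import math

    # Numbers from the estimate above.
    v_trna = 10.0          # m/s, tRNA speed at body temperature
    r_ribosome = 125e-10   # m, ribosome radius (tRNA treated as a point)
    v_cell = 1e-15         # m^3, volume of an ordinary animal cell

    cross_section = math.pi * r_ribosome ** 2   # ~5e-16 m^2 "collidable" circle
    swept_per_second = cross_section * v_trna   # ~5e-15 m^3 swept per second

    print(swept_per_second / v_cell)            # ~5 cell volumes per second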
Every time I read an introductory QM book / article, the complex numbers just come out of nowhere and no one bothers to explain how that makes any kind of physical sense.
Then it's just a "simple" matter of stacking up a billion of the things to get them to do complex programs.
https://www.nand2tetris.org/ may also be insightful, but I did not look into it further.
> Permanent schizophrenic zombie, maybe a bit extreme, but severe and traumatic long-lasting psychological damage is a not-uncommon phenomena.
https://english.stackexchange.com/questions/6124/does-not-un...
https://towardsdatascience.com/an-introduction-to-multivaria...
HOW PSYCHEDELICS REVEALS HOW LITTLE WE KNOW ABOUT ANYTHING - Jordan Peterson | London Real --> https://www.youtube.com/watch?v=UaY0H9DBokA
Jordan Peterson - The Mystery of DMT and Psilocybin --> https://www.youtube.com/watch?v=Gol5sPM073k
> LSD being the particular substance has nothing to do with it, in my opinion. I was young, dumb, reckless, and played with fire then got burned. It could have happened with any of the other dozen psychedelics I took, but it just so happened to be LSD the one time that it did.
https://en.wikipedia.org/wiki/Hallucinogen_persisting_percep...
I have a close friend who had the same experience with excessive use of marijuana, but my money would be on psychedelics being far more likely to produce the outcome you unfortunately experienced. He's much better today, but not entirely "ok".
> But I want to add, that while giving me the most nightmarish, traumatizing experience of my life, the best/most positively-profound experience has also been on the same substance. I grew up in a pretty abusive household and didn't do well forming relationships growing up, and had a lot of anger and resentment in my worldview. After taking psychedelics (LSD, 2C-B, Shrooms) and MDMA with the right group of people a few times, my entire perspective shifted. For the first time in my life, it felt like I understand how it felt to be loved, and what "love" was, and how we're "all in this together" so we may as well be good to each other while we're here.
This sounds rather similar to my friend's story.
Can Taking Ecstasy (MDMA) Once Damage Your Memory?
https://www.sciencedaily.com/releases/2008/10/081009072714.h...
According to Professor Laws from the University's School of Psychology, taking the drug just once can damage memory. In a talk entitled "Can taking ecstasy once damage your memory?", he will reveal that ecstasy users show significantly impaired memory when compared to non-ecstasy users and that the amount of ecstasy consumed is largely irrelevant. Indeed, taking the drug even just once may cause significant short and long-term memory loss. Professor Laws' findings are based on the largest analysis of memory data, derived from 26 studies of 600 ecstasy users.
> (from your comment below) I took 300ug of LSD recklessly on a particularly bad day for me, in a particularly uncomfortable setting.
https://www.trippingly.net/lsd/2018/5/3/phases-of-an-lsd-tri...
Lots of details, plus a dosage guide (25 ug and up) & typical experiences
https://www.reddit.com/r/LSD/comments/34acza/do_you_guys_bel...
imo 300ug is the point where you need to have some serious experience with tripping to be able to handle yourself. because if you're coming up, the acid is already circulating your bloodstream, and you get that horrible sinking sensation of thinking you've taken too much... you're in for a really bad time if you don't know how to control the trip.
I think it's difficult to say how big a dose really is until you've had a bad trip on it. only then can you see how insidious everything can get and as such just how intense 300ug can be. the reason people say not to start on doses like that is so they will AVOID those horrible experiences. so yeah, 300ug is a large dose, just because if shit goes wrong on it then you're fucked.
I very much dislike this phrasing, because it suggests that it is just us that are not capable of building an apparatus to enable us to do so.
Imagine a gear transmission or a lever: you can transform distance into force and vice versa. It is up to you whether you get more speed or more force, by changing the point along the lever where the transmission happens. It is not possible to build a transmission which gives you the most distance and the most force simultaneously. In this system of transmission, one is the other, just seen from a different perspective.
And it is the same with the position and momentum of a quantum particle. You can choose to have more information in the form of position or more in the form of momentum by changing your measurement (like changing the point along the lever). But you can't have both, because there is only a constant amount of information, which is shared between position and momentum.
Actually, the uncertainty part of Heisenberg's uncertainty principle is a purely mathematical limitation (called the Gabor limit); only the Planck constant makes it physical.
Gabor limit (duration and bandwidth of any signal): Δt · Δf ≥ 1/(4π)
Heisenberg's uncertainty principle: Δx · Δp ≥ h/(4π)
So the Planck constant is kind of the maximal sampling resolution of the fields / signals in our universe.
I've found that to be the clearest way of understanding what qcs do.
I have arthritic knees, and I'd like a better understanding of how joints work, and where the various clicks, pops and swellings come from.
It's easy to find really simple things, but harder to understand "how things go wrong".
If you actually meant your original wording, theoretically whichever order came second would fill at the best price.
If company shares are cheaper than other sources of money, the company can buy them back, or another potential owner can buy them. It also sends a strong signal to owners and investors about the future of the company.
If company shares cost more than other sources of money, the company can sell more of them to pay back the costly money or to expand the business.
For example, say the bank lending rate is 10% per year. The company is worth $1 million and has created 1 million shares, so 1 share represents $1 of company worth. The company's net profit is 10% per year, and it needs $1 million in circulation to operate.
If the company borrows the money from the bank, it will have 10% - 10% = 0% profit. If it sells 100% of its shares, it will have 10% - 0% = 10% profit, but this profit will go to someone else. If it sells 50% of the shares and borrows $0.5 million from the bank, it will have 10% - 5% = 5% profit: 2.5% goes to the company, 2.5% goes to the other owners. By reinvesting this profit, the company can pay down the bank loan or expand to be worth more, increasing its share of the profit.
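Here is that arithmetic as a sketch (all numbers hypothetical, taken from the example above):

    worth = 1_000_000    # company worth; it needs this much in circulation
    profit_rate = 0.10   # the company earns 10% per year on that capital
    bank_rate = 0.10     # the bank lends at 10% per year

    def outcome(share_fraction_sold):
        raised = worth * share_fraction_sold
        borrowed = worth - raised          # borrow whatever shares don't cover
        net = worth * profit_rate - borrowed * bank_rate
        to_new_owners = net * share_fraction_sold
        return net, net - to_new_owners, to_new_owners

    print(outcome(0.0))  # all bank-financed: (0, 0, 0) -- no profit at all
    print(outcome(1.0))  # all equity: (100000, 0, 100000) -- profit goes to others
    print(outcome(0.5))  # mixed: (50000, 25000, 25000) -- split down the middle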
Optimizing matrix multiplication for classical computers is an open research problem, and according to Wikipedia there are algorithms with O(n^2.37) running time. Also according to Wikipedia, it is not proven that matrix multiplication can't be done in O(n^2).
The photon (as a field excitation) goes through both slits, but is quantized so only has enough energy to trigger a mark at 1 spot on the photo paper.
> Why do we not still have to consider interference in outcomes after the photon makes its mark on the paper?
If we want to be completely accurate, we should. However, so many interactions happen so quickly that the law of large numbers quickly takes over and obfuscates the quantum reality. The technical term for this is decoherence.
> Why are quantum computers hard?
Exactly because of this decoherence. It is very difficult to keep the qubit state isolated from the environment throughout the computation.
That's another really interesting axis of this topic -- why was early Earth special?
There may be no environments on Earth today that are like early Earth, but we could probably recreate them. Wouldn't we then be able to witness abiogenesis?
Or, early Earth wasn't special and abiogenesis happens today. If so, where do we look?
It renders an interactive model of the human body and you can toggle different layers, from bones to nerves, to various layers of muscles and ligaments. It also contains animations of treatment exercises/stretches, surgeries and highly detailed models of various biological components.
It helped me understand my injury and why certain exercises help. It's paid software but it comes with a free trial.
The basic idea is that by making the amplitudes of the qubits destructively interfere with each other in certain ways, you can eliminate all of the wrong answers to the question you're trying to answer.
https://smartairfilters.com/en/blog/n95-mask-surgical-preven... https://smartairfilters.com/en/blog/coronavirus-pollution-ma...
What? How is it clear? As you wrote yourself, correlation is not causation.
Long answer: You need to understand how the Limit Order Book works. I wrote up something about this here [1]. It also goes into different definitions of price.
> If a very low-volume stock is listed at $4, and then I offer to buy a share for $100, does the NYSE suddenly start listing its price at $100?
If your trade actually absorbs the order book and pushes the asks to $100, then yes, that could be the case depending on the exchange, but I'm not sure about NYSE specifically. Most likely that could never happen, though, due to various hidden order types and HFT market makers.
[1] https://www.tradientblog.com/2020/03/understanding-the-limit...
Assume there are no other orders in the order book.
Scenario 1: Seller submits a limit sell order for $90. Since there are no buyers, this order goes into the book. Then a buyer submits a limit buy order for $100. The order would be filled at $90 (the best ask) and the buyer only pays $90. Here, the seller is the maker and the buyer is the taker.
Scenario 2: Buyer submits a limit buy order for $100. Since there are no sellers, this order goes into the book. Then a seller submits a limit sell order for $90. The order will be filled at $100 (the best bid) and the seller gets $100. Here, the buyer is the maker and the seller is the taker.
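A toy sketch of the matching rule in those two scenarios (single-quantity orders, no time priority or partial fills):

    bids, asks = [], []   # resting (maker) orders, stored as prices

    def limit_order(side, price):
        opposite = asks if side == "buy" else bids
        # Find resting orders the incoming (taker) order can cross.
        crossing = [p for p in opposite
                    if (p <= price if side == "buy" else p >= price)]
        if crossing:
            # Fill at the best resting price: lowest ask / highest bid.
            best = min(crossing) if side == "buy" else max(crossing)
            opposite.remove(best)
            return f"filled at {best}"     # the maker's price, not the taker's
        (bids if side == "buy" else asks).append(price)
        return "resting in book"

    # Scenario 1: sell 90 rests, then buy 100 arrives -> fills at 90.
    print(limit_order("sell", 90), limit_order("buy", 100))
    # Scenario 2: buy 100 rests, then sell 90 arrives -> fills at 100.
    print(limit_order("buy", 100), limit_order("sell", 90))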
Market makers are responsible for setting prices and providing liquidity. If you want to understand this in more detail, check out this post [1] I wrote up a while ago.
[1] https://www.tradientblog.com/2020/03/understanding-the-limit...
When you learn to ride a bike, you're simply training the feedback mechanism. Much like a PID controller, your brain has to keep track of the amount of error and null it out with proportional and integral terms (at least). Once those constants are dialed in, it's "just like riding a bike" -- they're yours for life.
Then there's the matter of learning which way to lean so that the gyroscopic instability inherent in turning doesn't send you into the nearest ditch...
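A toy version of that controller, with invented gains (the real "tuning" is what practice does):

    # Toy proportional-integral controller nulling out a lean-angle error.
    kp, ki = 2.0, 0.5   # invented gains; learning to ride = tuning these
    dt = 0.1
    integral = 0.0
    lean = 0.3          # initial lean error, radians

    for step in range(5):
        integral += lean * dt
        correction = kp * lean + ki * integral
        lean -= correction * dt       # toy plant: correction reduces the error
        print(round(lean, 4))         # error shrinks toward zero each step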
I gifted myself The Vital Question in December 2015. While Lane writes effectively without any mind-numbing jargon, the book still has quite a bit of technical chemistry (understandably). After the excellent first 80 pages, it took a lot more willpower to plough through. (I paused at page 112 to get back to it later.)
Once when I was reading the book on a plane, a seasoned biologist happened to be sitting next to me. When I told him it was the first book of Nick Lane's that I had picked up, he said: "I'd rather suggest you pick up Lane's other book, Life Ascending, and only then get back to The Vital Question."
PS: FWIW, I've previously mentioned the above in an older thread, where an ex-biochemist chimed in to confirm the above advice: https://news.ycombinator.com/item?id=18714115
To pick a couple of them at random, understanding amyotrophic lateral sclerosis or Alzheimer's would be a terrific start.
You really didn't have to apologize! I thoroughly enjoyed your explanation. I can't recall how many times I've searched Google for "How are stock prices determined?" and come back with nothing. Your answer was better than 100% of everything else out there.
I'd love to learn more about this. Are there any books, blogs, etc. that you could recommend? Also, YOU should really consider blogging about this!
The idea is, you can transform a normal [0] wormhole that isn’t a time machine into one which is by:
1) accelerating one end to high speed relative to the other
2) keeping one end in a lower gravitational potential than the other
Why are either of these considered meaningful statements, never mind correct?
In the case of 2 in particular, isn’t GR supposed to require smooth values? So any time dilation effect would be almost identical on a pair of points +δ and -δ from the throat? Making it similar to the case of a gravitational potential without a wormhole?
And in the case of 1, the more I think about it, the less I understand the concept. What is being moved? An imaginary clock that would’ve been in the part of the wormhole at the far end? The apparent speed as measured going through the throat will be zero regardless of the apparent speed of the same as measured when going the long way around.
[0] yes, I know
1) The state of an n-qubit system is a 2^n dimensional vector of length 1. You can assume that all coordinates are real numbers, because going to complex numbers doesn't give more computational power.
2) You can initialize the vector by taking an n-bit string, interpreting it as a number k, and setting the k'th coordinate of the vector to 1 and the rest to 0.
3) You cannot read from the vector, but exactly once (destroying the vector in the process) you can use it to obtain an n-bit string. For all k, the probability of getting a string that encodes k is the square of the k'th coordinate of the vector. Since the vector has length 1, all probabilities sum to 1.
4) Between the write and the read, you can apply certain orthogonal matrices to the vector. Namely, if we interpret the 2^n dimensional space as a tensor product of n 2-dimensional spaces, then we'll count as an O(1) operation any orthogonal matrix that acts nontrivially on only O(1) of those spaces, and identity on the rest. (This is analogous to classical operations that act nontrivially on only a few bits, and identity on the rest.)
The computational power comes from the huge size of matrices described in (4). For example, if a matrix acts nontrivially on one space in the tensor product and as identity on nine others, then mathematically it's a 1024x1024 matrix consisting of 512 identical 2x2 blocks - but physically it's a simple device acting on one qubit in constant time and not even touching the other nine.
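A sketch of that last point in NumPy: a 2x2 orthogonal gate on one qubit of a 10-qubit state is mathematically a 1024x1024 matrix, but you never have to build it.

    import numpy as np

    n = 10
    state = np.zeros(2 ** n)
    state[0] = 1.0                    # (2): initialize with the string 00...0

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # a 2x2 orthogonal gate

    def apply_one_qubit_gate(state, gate, qubit):
        # View the 2^n vector as an n-fold tensor product of 2-dim spaces,
        # contract the gate against just one factor, and flatten back.
        psi = state.reshape([2] * n)
        psi = np.tensordot(gate, psi, axes=([1], [qubit]))
        psi = np.moveaxis(psi, 0, qubit)
        return psi.reshape(-1)

    state = apply_one_qubit_gate(state, H, 0)
    probs = state ** 2                # (3): squared coordinates are probabilities
    print(probs.sum())                # 1.0 -- the vector still has length 1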
Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc..) and why not others (e.g. stocks or put money into startups)? And then, what happens if a debtor pays back its debt? Is that money consequently getting "erased" again (just like it's been created)? What happens if a debtor defaults on its debt? Does that money then just stay in the economy, impossible to drain out? What is the general expectation of the central banks? What percentage of the debt is expected to default and how much is expected to be paid back?
And specifically in the case of central banks buying govt. debt: Are central banks considered "easier" creditors than the public? What would happen if a country defaults on a loan given by a central bank? Would the central bank then go ahead and seize and liquidate assets of the country under a bankruptcy procedure to pay off the debt (like it would be standard procedure for individuals and companies)?
The force that would be exerted from acceleration versus gravity is different. The force we think of as gravity comes from a center point, so its direction changes with your position, while acceleration comes from a uniform direction without regard to your position.
https://www.amazon.com/Trading-Exchanges-Market-Microstructu...
I don't know if this was it, but an explanation nonetheless https://medium.com/@omaraflak/automatic-differentiation-4d26...
> Macroeconomics. Central banks are "creating" a trillion here, a trillion there, like nobody's business. But what are the consequences?
Nobody quite knows; it's still hotly contested between left wing lovers of Keynes and right wing believers in austerity.
> What is the thought process that central bankers have gone through to make these decisions?
Probably largely a political one. Central banks may be trying to fulfill a remit set by law (e.g. Bank of England: keep inflation below x%) and are trying to deliver on that. (Why? Too much or too little inflation both cause problems; I guess we somehow reached consensus on a "sane" amount that keeps pace with genuine growth of wealth within the economy.)
> Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc..) and why not others (e.g. stocks or put money into startups)?
I think this is about distributed decision making. The central bank does not have the expertise to decide which stocks or startups represent the best investments. The examples here involve lending money to government, presumably the idea being the latter is better placed to decide what to do with the money. Another example is buying assets from other banks, which are again better placed to decide which businesses/homeowners/etc represent a more sound investment as they do it on a daily basis (from a profit/loss point of view ... of course we debate whether or not that's the case on a societal level).
> What would happen if a country defaults on a loan given by a central bank?
Internally it would depend on laws and the balance of political power within the country. Between countries, depending on the currency, the country could do crazy stuff like print excessive amounts of money to repay the loan (Germany did this in the early 1920s, leading to hyperinflation) or it could just, as you say, default. The country's credit rating would then be downgraded, making it harder for it to raise credit in future.
> Would the central bank then go ahead and seize and liquidate assets of the country under a bankruptcy procedure to pay off the debt (like it would be standard procedure for individuals and companies)?
Not the bank, but the country making the loan, may first negotiate some debt relief with strings attached e.g. preferential trade agreements. Beyond that, I have no idea what precedent exists.
Quantum computing is all about finding ways to hack the interference process to compute more than you otherwise would have.
The three-reference-frame example is the easiest, because you can start with a frame where two events, A and B, happen simultaneously. A reference frame (say, a spaceship) flying along a line in the A-to-B direction will observe B happen, then A. A ship flying the opposite direction will observe the opposite, A then B.
So whose observations were correct? All of them are perfectly valid. The problem is if we allow A to cause B, in which case the frame that sees B before A has the effect happen before the cause.
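You can check this with the Lorentz transformation directly (a sketch in units where c = 1, events placed at x = -1 and x = +1 light-seconds, simultaneous at t = 0 in the original frame):

    def t_prime(t, x, v):
        # Lorentz transformation of the time coordinate, with c = 1.
        gamma = 1 / (1 - v ** 2) ** 0.5
        return gamma * (t - v * x)

    xA, xB = -1.0, +1.0
    for v in (+0.5, -0.5):            # ship flying A-to-B, then B-to-A
        tA, tB = t_prime(0, xA, v), t_prime(0, xB, v)
        print(v, "B first" if tB < tA else "A first")
    # +0.5 -> B first, -0.5 -> A first: the order depends on the frame.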
It's like how the source code to `ls` is simple because it's one of the most basic Unix programs, or something like that.
What that image drove home for me is:
1) DNA transcription isn't something that happens rarely, or one at a time. DNA is constantly being transcribed; proteins are constantly being built. The scale and rate aren't something I'd ever been taught.
2) However RNA polymerase works, it must be coping with a hell of a lot of congestion. Polymerase molecules must constantly be bumping into each other.
3) How the picture would make no sense whatsoever unless you already know what the mechanism is.
I think it does make sense to start with the idealised process, as long as you follow up with messy reality.
It's true, but you need to realize that you're qualified enough only when you understand that you shouldn't roll your own crypto.
In my opinion, the only person who has credibly demonstrated being able to roll his own crypto is djb (http://cr.yp.to/)
> but isn’t all security obscuring something,
Keeping a secret isn't "obscuring" something, it's hiding it entirely. Security through obscurity is bad because it relies on attackers being dumb. The smartest person in the world cannot be expected to guess a well chosen and kept secret.
Here's what I would have thought happens: after the first filter, you get polarized light, 90° offset from the last filter, so no light passes. Then you introduce a 3rd filter in the middle, 45° offset. This could alter the polarization (maybe it widens the band, or introduces some greater variance, or shifts it, who knows), and this is why some light will now pass through number 3. No need to create any light.
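That intuition is close, and Malus's law makes it quantitative: each polarizer transmits cos²(θ) of the incoming intensity, where θ is the angle between the light's polarization and the filter's axis. A quick sketch:

    import math

    def through(intensity, delta_deg):
        # Malus's law: transmitted intensity falls off as cos^2 of the
        # angle between the light's polarization and the filter axis.
        return intensity * math.cos(math.radians(delta_deg)) ** 2

    I = 1.0                              # after the first filter (0 degrees)
    print(through(I, 90))                # 0.0 -- crossed pair blocks everything
    print(through(through(I, 45), 45))   # 0.25 -- a 45-degree middle filter
                                         # lets a quarter of the light through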
The idea that some compounds might just contain some fire was still common when the first list of elements was put together. A big leap forward was realizing air had two principal components, burn-y air and not-burn-y air.
They figured out water wasn't an element when the burn-y air and some mystery gas burned to make it.
Basically, everything was maybe an element until they either broke it into pieces, or made it out of other stuff.
> Macroeconomics. Central banks are "creating" a trillion here, a trillion there, like nobody's business. What is the thought process that central bankers have gone through to make these decisions?
The general consensus is that central banks should stay passive and keep prices stable. However, in periods of crisis, like the one we live in, the central bank can support the economy. In ordinary times, creating trillions would lead to inflation. But here the idea is to save the economy in the short term, because that's always cheaper than repairing it. Central bankers agreed to create trillions so that banks do not go bankrupt like they did in 1929. By creating trillions, they also keep interest rates low for governments, so that they can still borrow.
> But what are the consequences?
Some inflation. Another consequence is that investors will invest in riskier assets afterward to keep hitting their profitability targets. (Again, because lending trillions lowers interest rates.)
> Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc..)?
They usually buy low-risk, highly liquid assets. Putting trillions into startups is infinitely more complicated for a central bank because it implies high monitoring costs, and it also takes a lot of time to create those kinds of contracts. Remember that the goal is to provide a lot of liquidity to the economy as fast as possible. There is also an academic debate about giving money directly to the general public (known as "helicopter money"), though it has received little attention from central bankers.
> And then, what happens if a debtor pays back its debt? Is that money consequently getting "erased" again ?
Yep, pretty much... Apart from fiat money, money is constantly created and destroyed. It is mostly created by private banks when they grant loans, and it is destroyed when you repay them. Of course banks cannot do whatever they like and create money at will, but remember that deposits DO NOT make the credits (though in some way they define how much could be created).
> What percentage of the debt is expected to default and how much is expected to be paid back? What would happen if a country defaults on a loan given by a central bank?
Central banks buy bonds, and bonds are pretty much always paid back. And if not, the central bank will not suffer much. Cases of countries not repaying are very scarce and exceptional (I can only think of Argentina). Anyway, a country CANNOT go bankrupt like a person, and in general, comparing countries with individuals or companies is not a good idea. Countries are here pretty much forever (in a financial sense); you are not. Countries can levy taxes; individuals cannot.
A lot of these things are kind of unknowable because they depend on future human behaviour in ways you can't really predict. A lot of George Soros's theory of reflexivity is along those lines: people think they are calculating on the basis of fundamentals, but the things that look like fundamentals are actually functions of human behaviour, so the system is inherently unstable. He's made a few bob from that.
At the subatomic level, we observe that electrons have some extra angular momentum, beyond what we'd expect from their "orbits". We call that spin, because it's intrinsic, like the spinning of a macroscale object.
https://metacpan.org/pod/Quantum::Superpositions
As far as I can tell this one still outperforms all existing "hardware implementations".
Electricity is always explained by its effects, but never by its actual nature. I'd like a better explanation :-)
>Yep, pretty much...
I would add that if the system is fractional reserve, then repayment increases the proportion of the bank's reserve, allowing more money to be created. So while it's technically true that the money is destroyed, you could see the next loan as its reincarnation, no..?
I didn't go here in my response above because my vague understanding is that we're not strictly a fractional reserve system any more, though I don't understand how.
Edit - I just lost 20 mins reading the start of https://en.wikipedia.org/wiki/RNA_world which is interesting on that stuff
You can build logic gates out of 2 or 3 transistors, and combine those logic gates into more complex gates until you've got a computer.
But how does a transistor work? Basically, you've got semiconducting materials of two types (phosphorus- or boron-doped silicon): one has a few electrons to spare, and the other wants a few more electrons to conduct. If you stick the two types next to each other, the electron-wanting (P) one snatches up electrons from the electron-offering (N) one, and you get an electric field going from the N side to the P side. Now, that alone makes a diode. Already cool, nonlinear electronics. But what if we go N-P-N? Now we've got two electric fields, going opposite directions. With three leads, you can adjust the strength of those electric fields with one of them, creating a variable resistor, a transistor.
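And to sketch the "combine gates until you've got a computer" step from above: everything below is built from a single NAND primitive (a toy logical model that ignores the electrical details entirely):

    def NAND(a, b): return not (a and b)

    def NOT(a):    return NAND(a, a)
    def AND(a, b): return NOT(NAND(a, b))
    def OR(a, b):  return NAND(NOT(a), NOT(b))
    def XOR(a, b): return AND(OR(a, b), NAND(a, b))

    def half_adder(a, b):
        # One bit of binary addition: (sum, carry).
        return XOR(a, b), AND(a, b)

    print(half_adder(True, True))   # (False, True) -- 1 + 1 = 10 in binary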
I'd love to see this properly explained, because it definitely has a counter intuitive ring to me.
I'm pretty ignorant in this field, and usually I've been a day or so behind the posts (missing the window to press for more information), but I feel like there's definitely some contention there.
But there is a tradeoff between force and displacement. Larger force = smaller displacement.
Same with a wing. The thrust force is lower than the lift force, but the horizontal displacement (velocity) of the wing is much greater than the vertical displacement (velocity) of the deflected air.
i.e. small force x large velocity = large force x small velocity
A better but harder resource is, I believe, by Piketty (the inequality guy): from balance-sheet recessions there are usually only a few ways out. He and his co-authors go through every single known recession in every single country (obviously biased towards recent and Western ones). What I took away from it is that without population growth, the US needs hyperinflation, to default on its debt, or to increase tax revenues sharply and/or decrease entitlements sharply. It's up to you to guess which one of those three is most likely.
But the budgetary situation is not tenable.
If you or a friend wants a crash course on econ, check it out.
At least for non-private companies: asset prices go up... auctions clear at values with maximum leverage... recession... monetary stimulus... repeat.
If Central Banks can create money without negative effects, then
- why tax people?
- why even work? Can't we just print enough money for everyone and live happily ever after?
I realize these questions are quite provocative, and answering them only explains whether it will work, not how or when it will fail.
The idea is that it is in this waveform until you do something to observe it. Observation requires an exchange of energy, i.e., an interaction. This is why there is always uncertainty, because in order to observe something, that which observes has now intimately interacted with the waveform in order to cause its "collapse" into what we consider to be a particle. A particle very well may just be a highly localized energy that we perceive to be "solid"
You can't, for example, try to get a measurement of the location of a photon without putting a measuring device which absorbs the energy of the photon, modifying its wave function and thus the probability of where it will decide to reveal itself at that point in spacetime.
Note: I consider energy, fundamentally, to simply be the consequence of fluctuating. The fluctuation of one thing can interact with the fluctuation of another and, minding conservation, transfer "fluctuation". The direction of that transfer of energy may be due to the fact that entropy always increases along the arrow of time, i.e., energy likes to spread itself out just as heat goes from high concentration to low concentration.
Any hypothesis that I invent at this very moment is, from this perspective, in the best position a hypothesis can ever be. There is no disproof. There is not even a coherent argument against it, because I literally just made it up this second, so no one has had enough time to think about it and notice even the obvious flaws. This is the best moment for a hypothesis... and it can only get worse.
I understand that there is always a chance that the new hypothesis could be correct. Whether for good reasons, or even completely accidentally. (Thousand monkeys with typewriters could come up with the correct Theory of Everything.) Yes, it is possible. But...
Imagine that there are two competing hypotheses, let's call them H1 and H2.
Hypothesis H1 was, a hundred years ago, just one of many competing options. But as experiment after experiment was done, the competing hypotheses were disproved, and only this one remained. For the following few decades, new experiments were designed specifically with the goal of finding a flaw in H1, but the experimental results were always as H1 had predicted.
Hypothesis H2 is something I just made up at this very moment. There was not enough time for anyone to even consider it.
A strawman zealot of simplified Popperism could argue that a true scientist should see H1 and H2 as perfectly equal. Neither was disproved yet; and that is all that a true scientist is allowed to say. Maybe later, if one of them is disproved in a proper scientific experiment, the scientist is allowed to praise the remaining one as the only one that wasn't disproved yet. To express any other opinion would be a mockery of science.
Of course, there always is a microscopic chance that H1 might get disproved tomorrow, and that H2 might resist the attempts at falsification. But until that happens, treating both hypotheses as equal is definitely NOT how actual science works. And it is good that it does not.
In actual science, there is something positive you are allowed to say about H1. Something that would make the strawman zealot of simplified Popperism (e.g. an average teenager debating philosophy of science online) scream about "no proof ever, only disproof". H1 is definitely not an absolute certainty. But there is something admirable about having faced many attempts at falsification and survived them.
The feedback mechanism you’ve described is likely correct, and also a complete furphy, since the central nervous system is not part of the bicycle.
All of which is par for the course and rather confirms the point, viz. that people will happily hold forth on any explanation they care to latch on to, secure in the knowledge that the total absence of consensus makes it impossible to say, definitively, “that is wrong”
That's right. It's not a vector because it doesn't "transform" like a vector.
If you take a vector and rotate about an axis by 360 degrees, you get the same vector.
If you take a spinor and rotate it by 360 degrees you get a spinor which is "flipped". You have to rotate the spinor by 720 degrees to get back to the same spinor.
This is intrinsically weird, but that's QM.
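You can watch that sign flip numerically; a minimal sketch using the spin-1/2 rotation about z, exp(-iθσ_z/2):

    import numpy as np

    def rotate_z(theta):
        # Spin-1/2 rotation about z: exp(-i*theta*sigma_z/2).
        return np.array([[np.exp(-1j * theta / 2), 0],
                         [0, np.exp(1j * theta / 2)]])

    spinor = np.array([1, 0], dtype=complex)
    print(rotate_z(2 * np.pi) @ spinor)   # [-1, 0]: 360 degrees flips the sign
    print(rotate_z(4 * np.pi) @ spinor)   # [ 1, 0]: 720 degrees to get it back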
The main purpose of a central bank imo is to keep money creation at arms length from government, so a rubbish government can't fiddle with the financial system too much.
With the addition of async to Django core, I felt it's time to finally learn the concept. I first took interest in async early last year when I re-read a Medium post on Japronto, an async Python web framework that claims to be faster than Go and Node.
Since then, I've been on the lookout for introductory posts about async, but all I see is snippets from the docs with little or no modification and a lame (or maybe I'm too dumb) attempt at explaining it.
I picked up multi-threaded programming a few weeks ago and I understand (correct me if I'm wrong) it does have similarities with asynchronous programming, but I just don't see where async fits in the puzzle.
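For what it's worth, the smallest sketch that distinguishes async from threading for me is below: one thread, one event loop, and tasks that voluntarily yield at each await rather than being preempted by the OS.

    import asyncio

    async def fetch(name, delay):
        # Pretend this is a slow network call; await yields the event loop.
        await asyncio.sleep(delay)
        return f"{name} done after {delay}s"

    async def main():
        # Both "requests" wait concurrently: total time ~2s, not 3s.
        print(await asyncio.gather(fetch("a", 1), fetch("b", 2)))

    asyncio.run(main())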
I would say that some mathematicians went into economics to win Nobel prizes (which they did win), and I guess they would probably be quick to point this out as well.
Suppose I'm a bank, and I lend you $10 to buy apple tree seedlings. You spend all $10 on seedlings as promised.
The person who sold you the seedlings has $10. You have the seedlings. I have an expectation of getting $10 in the future, presumably from your sales of apples.
Because most people repay their loans, I'm confident I'll get the $10 back, and being a bank, my business is lending money. I might treat the $10 loan as $7 on my balance sheet when I decide how much money is safe to lend out.
Then the price of apples crashes. You come to me and say, 'look, there's no way I'll make $10 selling apples in the time I promised to repay you. Best I can do is deliver you the seedlings or sell them to my neighbor for $3 and give you that'. I grumble a little, but take your deal.
The person you bought the seedlings from still has $10. Your neighbor now has the seedlings and $3 less. I now have 3 real dollars instead of 7 hypothetical dollars. In other words, 4 hypothetical dollars disappeared. When I decide how much to lend out, I'll be basing that on $3 I know I have, instead of the $10 I thought I'd probably get back. I don't lend as much money to aspiring orchardists (orchardeers?), and the price of apples rises.
Edit: This fragility is probably a major factor in why some people are so against fractional reserve banking (my counting hypothetical dollars as having value), but without that hack there's no saying I could have lent you the original $10, so it's a bit of a double-edged sword.
Visuals help: [1] https://aviationphoto.org/wp-content/uploads/2016/11/Paul-Bo... [2] https://www.popphoto.com/sites/popphoto.com/files/import/201... [3] https://imgur.com/gallery/EHW7D [4] https://www.youtube.com/watch?v=dfY5ZQDzC5s&t=192
Now our Bank loans out 50,000 of those hackerbucks to Customer B. It does this by crediting her account with 50,000 hackerbucks, but notice that Customer A still has 1 million in his account - so now there's 1,050,000 hackerbucks in apparent existence - we've created 50,000 hackerbucks from thin air. If Customer B withdraws the loan money to go spend it, the Bank will have 950,000 in reserves and an asset worth 50,000 (the loan). Customer B will have 50,000 in cash.
What we've actually done is increase the "M2", one of the measures of how much money is in the economy.
If Customer B either repays the loan or defaults on it, that new money disappears. In the loan repayment case, the Bank goes back to having 1,000,000 in reserves, and in the loan default case the loan asset becomes worthless and it is left with only 950,000 in reserves (the other 50,000 is out there with wherever Customer B spent it).
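A toy Python sketch of that bookkeeping, purely illustrative (real bank accounting is far more involved):

    deposits = {"A": 1_000_000}     # what the Bank owes its depositors
    loans = {}                      # loan assets the Bank holds

    def money_supply():
        # Deposits are spendable money; call this our toy "M2".
        return sum(deposits.values())

    loans["B"] = 50_000             # Bank lends to Customer B...
    deposits["B"] = 50_000          # ...by crediting her account
    print(money_supply())           # 1,050,000 -- 50,000 created from thin air

    deposits.pop("B")               # repayment (or default) unwinds it
    loans.pop("B")
    print(money_supply())           # back to 1,000,000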
So that common services (eg. healthcare in Canada, education everywhere, roads) can be collectively paid for.
- why even work? Can't we just print enough money for everyone and live happily ever after?
Because there wouldn't be enough resources. (Fiat) currency is just a medium of exchange for real stuff. Instead of growing wheat and trading it for your wooden furniture, the state provides a medium of exchange so we can both just transact in money. eg. when I don't need more furniture but you still need wheat.
Money =/= wealth
Printing money is actually more or less equivalent to a tax, because it reduces the value of the existing money supply.
> - Can't we just print enough money for everyone and live happily ever after?
No, because printing money redistributes wealth, it doesn't create it.
Legacy V-cycles (needs - spec - code - test - integration - product) were such that everything was written down and planned in advance, for months or years. So if the customer had made an error, or their needs had changed... you were basically screwed.
Agile advocates for short V cycles while getting frequent user feedback. But it's still a V cycle:
- PO speaks to the customer = get needs
- PO writes tickets, UX designs something = specification
- And then it follows the classical cycle: develop, test, integrate, deliver.
What remains around agile (ceremonies & co) feels much more like bullshit to me, and people follow it religiously without understanding the core idea of agile, as they think "V cycle" is an insult.
I just can't grok it.
I can't understand how time would flow differently depending on your speed.
I don't get why c is a constant no matter the reference frame, when for any other object the speed is relative to your reference frame. I just don't see how those two are compatible.
The liquidity injected is supposed to be taken out later, thus removing the inflationary distortion. Whether it will or not is anyone's guess. 2008's injections have yet to be taken out.
Central banks are easier creditors because, while autonomous, they are the same country as the government! So it's technically like owing yourself money. A central bank that cooperates with the debtor country (itself) would never force a default, and is thus never an acute problem. Of course, infinite money printing should lead to dangerous inflation.
I don’t know anything about Feynman beyond vaguely associating his name with science, but watching this makes me want to seek out more from him.
The default is a side effect of that outcome, not its cause.
Can the properties of the elements be computed from the first principles of particle physics, or do you need to observe the atoms in real life to figure them out? For example, some isotopes are stable and others have a finite half-life. Can you know beforehand, or do you have to observe the decay? Can you compute exactly the mass of each atom without measuring it? Can you compute its electronegativity? Etc.
not having a reliable way to know exactly what you took can amplify the anxiety. when your brain starts filling up with serotonin and whites everything out just like people on their deathbeds report, are you supposed to let go? when your sense of self has been obliterated and the next moment you are in the body of another mammal, lost and confused in the forest for an entire lifetime, before being transported back into your body and only a minute has gone by - but your trip is to last another 9 hours, should you fight it? Distinct neural networks in your mind that never communicate are now connected, vestigial components of the mind are now being expressed; are you being replaced in a firmware dump and flash?
a lot of people have a friend with them to guide them through an acid trip because trips can be steered with sounds and words, simple chimes, melodies.
would it have helped? very hard to say. but as the author wrote, the bad day and uncomfortable setting did not help. It is similar to a dream state (just radically more intense), where the things on your mind and also happening around you can affect the direction of your dreams.
I don't think I've ever seen a mainstream economic prediction that was actually correct.
It's not hard to understand why. Reducing all the sectors of a complex economy to crayon-drawn measures like "inflation" and "unemployment" - which aren't even measured with any consistency - is like trying to predict the weather in the Bay Area using a single weather station for the entire continental US, which is conveniently located on the wall of a cow shed in Kansas.
The famous twin thought experiment where one gets in a spaceship, accelerates away from the planet, turns around, and comes back.
The twin that stayed on earth is old and the traveling twin is young still.
On one hand, I know that time will "pass differently" for each twin....but why is it the twin in the spaceship that ages less? Why isn't it true that the entire universe accelerated away from the spaceship and then returned, leaving the entire earth young?
(Also it's strange that sometimes there is disagreement about the mechanics between actual practitioners, see the recent confusion about whether fractional reserve banking is true.)
Let's say someone owns shares in a very low-volume stock — one that gets a couple trades a day, at most. Could they artificially increase the share price by offering their shares at a high price, then using a second account under their control to immediately buy them at the inflated price?
So sure, the loaned money might still be in the system in some naive sense, but value has been destroyed in the asset price? Suddenly a lot less money buys a lot more asset and that's where we find the deflation.
If I borrow 1M for an asset in good times and can't pay it back, the creditor gets the asset and probably gets a good portion of that 1M back. If that same scenario plays out in bad times and my whole street defaults on the same asset at once, there's a resulting fire sale and far more value is destroyed (including being wiped off neighbouring, non-creditor-owned assets of the same type) than money added by leaving the loan sloshing around somewhere else in the economy.
even in this very thread there is someone who has been in the mental hospitals and seen problems "with their own two eyes", but is unwilling to name names, as part of a code to avoid any social/legal/professional consequences for themselves or the "crazy" people there
My current understanding of colour is that the colour of an object is defined by the ability of the electrons in the compound to jump between different energy levels. I don't know if that in itself is enough to account for all the colour we see.
My current understanding of reflection is that, because of the waviness of light, when lots of light gets absorbed (to my understanding, a single photon exciting a single electron to jump some amount) and re-emitted (the electron falling back down), together the light ends up forming that angle pattern. Under that understanding, single photons don't bounce in the same way rays of light do?
I don't know how correct either of those understandings is, but my understanding has been put together from so many places, and I've never heard any source explain either like that, so I don't trust that they are correct.
> when your sense of self has been obliterated and the next moment you are in the body of another mammal lost and confused in the forest for an entire lifetime before being transported back into your body and only a minute has gone by - but your trip is to last another 9 hours, should you fight it?
There was a lot of this, during that out-of-body-period. I existed in multiple places/points in time at once as different people of various ages/genders/nationalities and then occasionally as animals, and lived entire simultaneous lifetimes. At one "time", in places + times A, B, C, D as different living things. Really does a number on your sense of self for a bit, heh.
https://python.readthedocs.io/en/latest/library/asyncore.htm...
'There are only two ways to have a program on a single processor do “more than one thing at a time.” Multi-threaded programming is the simplest and most popular way to do it, but there is another very different technique, that lets you have nearly all the advantages of multi-threading, without actually using multiple threads. It’s really only practical if your program is largely I/O bound. If your program is processor bound, then pre-emptive scheduled threads are probably what you really need. Network servers are rarely processor bound, however.'
'If your operating system supports the select() system call in its I/O library (and nearly all do), then you can use it to juggle multiple communication channels at once; doing other work while your I/O is taking place in the “background.” ...'
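For what it's worth, the modern Python equivalent of that select() juggling is the selectors module. A minimal echo-server sketch (details like the port and buffer size are arbitrary):

    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def accept(sock):
        conn, _ = sock.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, echo)

    def echo(conn):
        data = conn.recv(1024)
        if data:
            conn.sendall(data)          # echo back (ignoring partial sends)
        else:
            sel.unregister(conn)
            conn.close()

    listener = socket.socket()
    listener.bind(("localhost", 12345))
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ, accept)

    while True:
        # Block until some channel is ready, then dispatch its callback.
        for key, _ in sel.select():
            key.data(key.fileobj)

One thread, many connections: all the "waiting" happens inside the single select() call.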
> In physics, the twin paradox is a thought experiment in special relativity involving identical twins, one of whom makes a journey into space in a high-speed rocket and returns home to find that the twin who remained on Earth has aged more. This result appears puzzling because each twin sees the other twin as moving, and so, according to an incorrect[1][2] and naive[3][4] application of time dilation and the principle of relativity, each should paradoxically find the other to have aged less. However, this scenario can be resolved within the standard framework of special relativity: the travelling twin's trajectory involves two different inertial frames, one for the outbound journey and one for the inbound journey, and so there is no symmetry between the spacetime paths of the twins. Therefore, the twin paradox is not a paradox in the sense of a logical contradiction.
There's multiple explanations included to resolve the "paradox" from different lines of argument; I particularly like this one: https://en.wikipedia.org/wiki/Twin_paradox#A_non_space-time_...
So the bank's speculative asset loses value: I struggle to see this as money being destroyed as it wasn't actually money, it was an asset with a price attached to it which has now changed. In contrast to money sat in your bank account, the price was never redeemable (you couldn't go spend it on beer) unless you used that asset to get the debtor to pay you back (or convinced someone else it was worth buying from you as a speculative asset). You might as well say money is destroyed when share prices tumble. Maybe this is the point of such arguments, to make the case that money is no different to any other asset, but we don't tend to treat it like that in reality. Or do we?
a) The ability to observe an instance of a multidimensional universe, each instance a mapping of a point moving along a continuous trajectory whose direction is constrained by the arrow of time (i.e., increasing entropy)
b) the ability to impart a force to change the direction that trajectory follows
Now why not use a propeller pointing directly straight down? Well, you just made a helicopter. Helicopters are great, but they are not as fast as airplanes; the main reason is that as a helicopter goes forward, one part of the rotor is advancing and the other is retreating, which causes a whole lot of difficulties that don't appear when the propeller is mounted sideways.
Now propellers aren't the only way of producing thrust. There are jet engines, but these require significant airspeed in order to be efficient, and you usually have much more airspeed horizontally than vertically.
You can have rocket engines, which are great if you want to get really high, really fast, but they have to carry their own reaction mass, which is impractical in most situations.
Also you can use buoyancy as a form of "thrust", you now have an airship. Efficiency-wise, it is unbeatable. Unfortunately airships are big and slow and not very suited to modern requirements.
As you can see, there is absolutely nothing preventing us from thrusting downwards, it is just that airfoils are very efficient.
Back to your first question: how can 100 pounds of thrust keep a 1000 pound aircraft in the air. Without going into details, it is the same idea as a lever or gearbox (mechanical advantage). We rarely think of it this way for the wings of an airplane, but for propellers, it is a more apt comparison. A variable pitch on a propeller is like a gearbox for your car, and as seen earlier, propellers work exactly like wings.
As for what "thrust" is, it is really just a force, often shown together with drag, lift and weight, and it is provided by the engine. But in the end, there is nothing special about thrust; you can reorganize your forces any way you want using simple vector math. For example, gliders don't have thrust, and they still fly, taking advantage of updrafts.
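To put rough numbers on the 100-pounds-of-thrust point (a back-of-envelope sketch; the lift-to-drag ratio of 10 is just a plausible ballpark I'm assuming for a small airplane):

    weight_lb = 1000.0
    lift_to_drag = 10.0                   # assumed L/D ratio

    lift_lb = weight_lb                   # steady level flight: lift = weight
    thrust_lb = weight_lb / lift_to_drag  # thrust only has to cancel drag

    print(lift_lb, thrust_lb)             # 1000.0 100.0

So the wing's "gearing" is what lets 100 pounds of thrust hold up a 1000 pound aircraft.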
The way I think about it, is asynchronous programming gives you the tools to write programs that don't stop doing useful work while they're waiting for something to happen. If parallelism gives you more effective use of your CPU, asynchronous programming gives you more effective use of your time. Let's presume you have a program that does some things, makes several requests to the network or requests several things from the file system, collects the results and carries on.
In a synchronous program, you would make each request, wait for it to come back (the program would block at this point), then when it's complete, proceed with the next request. If each request takes ~2 seconds to complete, and you've got 15 to make, you've spent most of that 30 seconds just idling, not actually doing anything.
In an asynchronous program, you could submit those requests all at once, and then process them as they came back, which means you only spend about ~2 seconds waiting before you start doing useful work processing the results. Even if your program is single threaded and you can only actually process one item at a time, you've made more efficient use of your time.
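A minimal asyncio sketch of that comparison, with fake 2-second requests standing in for the network (all names here are made up for illustration):

    import asyncio

    async def fake_request(i):
        await asyncio.sleep(2)           # stand-in for network/disk latency
        return f"result {i}"

    async def main():
        # Submit all 15 requests at once, process them as they complete.
        tasks = [asyncio.create_task(fake_request(i)) for i in range(15)]
        for done in asyncio.as_completed(tasks):
            print(await done)

    asyncio.run(main())                  # finishes in ~2s, not ~30s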
Some murkiness comes in the intersection of the 2 and how it's implemented in various languages. For example, you could also dispatch each of those requests out to a thread, and if you returned all the results to the main thread before processing them you'd have the same result and near the same performance as the async example (+- thread dispatch overhead etc etc). The power and advantage comes when you can use both to their advantage: you can't necessarily dispatch threads forever, because the overhead will impact you, and you can saturate your CPU. On the flip side, making something asynchronous that actually requires CPU work won't net any benefits because the work still has to be done at some point. Asynchronous programming gives you a way to move things around to maximise your efficiency, it doesn't actually make you go faster.
JS and Python are single threaded with event loops. Rust organises chains/graphs of async code into state machines at compile time and then lets the user decide exactly how they should be run (I'm fairly sure this is correct, but if I'm wrong someone let me know). Dotnet, to the best of my knowledge, lets you write "tasks" which are usually threads behind the scenes (someone please correct me here). I don't know what Java uses, but I imagine there's a few options to choose from. Haskell performs magic as far as I can tell. I don't know how its model works, but I did once come across a library that appeared to let you write code and it would automatically figure out when something could be async, rearrange calls to make use of batching, automatically cache and reuse similar requests, and just generally perform all kinds of Haskell wizardry.
In the end though I reckon the most obvious reason is that speed is a property that directly corresponds to energy, therefore for each region of space to have a well defined energy (which is required for e.g. general relativity) every region of space needs to have a well defined distribution of speeds.
I suppose this does leave open a small loophole, as you can easily correlate speed with position in order to get a distribution that is uniform in both (but correlated). But this goes against our assumption that the universe is uniform everywhere (which might turn out to be false, but so far it's holding up well).
Japronto does not seem to be under active development any more, but async programming is definitely the way to go in order to squeeze the most performance out of the hardware at one's disposal.
I've put down some thoughts around this that track my own journey to understanding the concept (sorry if this is too basic for you, take it or leave it, and please note that I'm not an expert by a long shot).
It's not guaranteed that you have the same way of picturing things, but here goes: programs normally run in one direction, executing one line at the time from top to bottom (vertically). But one or more of those 'vertical' commands may send the computer off in a horizontal direction too (async calls), that have a 'horizontal' chain of commands.
The problem that I (and I think many with me) have had a hard time grokking at first is that the 'vertical' flows continue immediately after having issued a 'horizontal'(async) call. The computer doesn't wait for the async call to come back. To do something after the async call has finished you have to tack a new call onto the result of the async call in the 'horizontal' chain of events, previously often leading to what was called 'callback hell' in Nodejs programming.
Not sure about PHP but one may get round the problem of callback hell in the JavaScript world by using async/await and promises which mimics synchronous programming, i.e. program flow in the 'vertical' direction is actually halted until the async calls return a result. Personally I find that this adds another level of abstraction that sometimes may make things even more difficult to understand and debug. I prefer wrapping async calls in some queue construct instead (which takes care of the chaining of consecutive async calls), works for me.
In short, synchronous commands are automatically 'chained' top to bottom in code; asynchronous commands have to be chained manually after the completion of each async block of code. I believe multi-threaded process programming is just a more advanced case of async calls that often need to be 'orchestrated', i.e. coordinated in a way that simple async calls usually don't need. But all types of async programming come with some special issues, of which race conditions are maybe the most common, i.e. when several async processes are trying to change the value of a shared asset in an ad-hoc manner.
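To make the 'vertical' vs 'horizontal' picture concrete, here's a small Python sketch contrasting the two styles (using asyncio just because it's handy; the same shape appears in Nodejs):

    import asyncio

    async def fetch(x):
        await asyncio.sleep(0.1)         # stand-in for an async call
        return x + 1

    def callback_style():
        # 'Horizontal' chaining: follow-up work is tacked onto the result.
        task = asyncio.ensure_future(fetch(1))
        task.add_done_callback(lambda t: print("callback got", t.result()))

    async def await_style():
        # async/await: the 'vertical' flow pauses here until the result is in.
        result = await fetch(1)
        print("await got", result)

    async def main():
        callback_style()
        await await_style()
        await asyncio.sleep(0.2)         # give the callback time to fire

    asyncio.run(main())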
If this claim were true, it would prevent science from making true claims, because no experiment can disprove such claims. Truth is a delicate matter and can't be handled by simple methods. Questions may not be settled, but they can be difficult to challenge.
Yeah. One might, for example, reduce reinforcement of the big-empty-cell misconception by briefly showing more realistically dense packing, eg [1], before fading out most of it to what can be easily rendered and seen. But that would be less "pretty". Prioritizing "pretty" over learning outcomes... is perhaps suboptimal for educational content.
> better
But still painful. Consider those quiet molecules in proteins, compared with surrounding motion. A metal nanoparticle might be that rigid, but not a protein.
One widespread issue with educational graphics is mixing aspects done with great care for correctness with aspects that are artistic license and utter bogosity, where the student or viewer has no idea which aspects are which. "Just take away the learning objectives, and forget the rest" doesn't happen. More like "you are now unsalvageably soaked in a stew of misconceptions, toxic to transferable understanding and intuition - too bad, so sad".
So in what ways can samplings of a protein's configuration space be shown? And how can the surround and dynamics be shown, to avoid misrepresenting that sampling by implication?
It can be fun to picture what better might look like. After an expertise-and-resource intensive iterative process of "ok, what misconceptions will this cause? What can we show to inoculate against them? Repeat...". Perhaps implausibly intensive. I don't know of any group with that focus.
It may happen quite infrequently, and only because it happened in a large, lifeless (but 'nutrient'-rich) bath did it have a chance to amplify. What's interesting to me is that it only has to happen once.
Yes, Earth's specialness is interesting, too, and counts for what I believe are the best reasons to believe in God. Earth has so many amazing qualities: it is a cozy distance from the Sun (temp), tilted quite a bit (seasons), with a molten core (cosmic ray protection) and a huge moon (tides, nocturnal light). All of these may be necessary conditions for life to arise, and they are all, as far as we know, quite rare individually, and astronomically unlikely in combination.
SELLING 100 shares @ $10.02
SELLING 200 shares @ $10.01
SELLING 100 shares @ $10.00 <--
BUYING 100 shares @ $10.00 <--
BUYING 100 shares @ $9.99
BUYING 200 shares @ $9.98
BUYING 100 shares @ $9.97
Wouldn't the SELL and the BUY @ 10.00 get matched immediately in this case? So the aim is to take advantage of this transient fluctuation and the way the ripple propagates.
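For what it's worth, in a toy price-time-priority book those two orders would indeed cross immediately. A sketch:

    asks = [(10.02, 100), (10.01, 200), (10.00, 100)]   # (price, shares)
    bid = (10.00, 100)                                  # incoming buy

    best_ask = min(asks)                 # lowest offered price
    if bid[0] >= best_ask[0]:
        shares = min(bid[1], best_ask[1])
        print(f"trade: {shares} shares @ {best_ask[0]}")   # 100 @ 10.0

(Matching your own sell against your own buy like this is what "wash trading" rules are about.)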
But even if, in a perfect (or future) world, everybody instantly repriced all goods relative to the new amount of available currency, there is still an effect: how the newly "created" currency is distributed. Who gets the new shiny coins? So this is equivalent (if the repricing is done) to a kind of global, instantaneous tax-and-subsidy: everybody is taxed by the percentage of currency created (relative to the total existing amount), and the lucky ones receiving the fresh money are thus getting a subsidy.
I say this as someone who passed 2 semesters of graduate QM.
- "as soon as iron starts to be produced in the core of a star it instantly collapses" - I get that fusing iron costs energy rather than produces it and this causes a collapse.. but can it really be that quick? There are other fusion reactions that are still producing energy, right?
- dark matter / energy - I understand we have observations that indicate there is some type of matter we can't see but it feels a lot like saying "magic" or "the ether".
- how different size stars form - if there is a critical mass where a star "ignites" and after igniting starts pushing away from itself with the energy being produced, how do we get stars of such varying masses? Like, why didn't this 100x solar mass star start fusing and pushing the gases away before they were caught in its gravity? Do the more massive stars ignite on the same schedule but continue to suck in additional matter anyway, gravity overcoming the solar wind?
How does that work practically? A very simple example:
Let's say someone is sued for the tort of battery. It's the first tort you learn in law school. The mnemonic we learn is "IHOC" which stands for intentional harmful or offensive contact. So we now have what we call the "elements" of the tort, which must be proven individually.
1. Intentional
2. Harmful or offensive
3. Contact
This is where the question of law comes in. What does "intentional" mean? Did I intend to take the action, or did I intend to cause harm? That's a question of law that courts have determined to be that you intended harm. Did I actually have to know the harm that would result? Or maybe I just need to know that it's likely to cause some harm. Questions of law. Another potential question of law -- what does "harm" or "offensive" mean? What if the contact is with my backpack and not my body? These are issues that judges decide as questions of law.
Once we decide on the correct legal standards to apply (which is where lawyers do their most arguing and briefing), then the "trier of fact," generally a jury in the US, determines how the facts of the case fit the law. Did the defendant "intend" the harm? Did the defendant engage in "harmful or offensive" contact? Did the defendant "contact" the plaintiff? The jury decides those things based on a preponderance of the evidence in a civil case, meaning more likely than not.
In your case, you have an issue of evidence. When someone testifies, both sides get to question the witness so that the trier of fact can judge the issue and make determinations of credibility. You can testify that you kept a lot of temperatures, and you'll explain how you did it and that you have personal knowledge of what you wrote. Then the other side gets to try and poke holes in your log. Did you actually do it contemporaneous to your observations? Did you write it down the next day? How do you know your thermometer is accurate? Did you open the window just before taking the temperature? Maybe you have a history of lying to people. Then the jury gets to take your testimony in consideration. They can do with it what they like.
I hope that helps!
I think eventually it will have bigger consequences, but it will take some time for these trillions to filter on down.
I remember one outcome of the 2008 crisis was that consumer goods like cereal boxes stayed the same size, but the bag inside the box got smaller.
Isn't that exactly how science works? It does not make true claims. It produces statements with disclaimers: if this and this, then Y is true, as long as we don't observe otherwise.
You cannot use the scientific method to definitely say: "X is true".
The bad news is that by ignoring inequality, they may be just causing it.
(I don't exactly understand if this 'is' a knot, in a sense. I guess it is.)
Oh, by all means, roll your own crypto, break it, and roll it again. Just do not use it.
Also, break other people's crypto and study theory.
By the way, the advice is not "unless you are qualified". Nobody is qualified to just roll their own. Good crypto is a community project and cannot happen without reviewers.
My goal was to explain quantum computing in a way that is mathematically precise but doesn't require one to learn linear algebra first. To do this, I implemented a quantum computer simulator in Javascript that runs in the web browser. Conceptually (in mathematical language), in each simulation I present, I've started by enumerating the computational basis of the Hilbert space (all possible states the qubits could be in) and represented the computational state by putting an arrow beside each of them, which really is a complex number. (This is similar to how Feynman explains things in his book QED.) The magnitude of the complex number is the length of the arrow, and its phase is the direction it points (encoded redundantly by its color). I've filled out each amplitude symbol with a square so that, at any given point, the probability of a measurement resulting in that outcome is proportional to the area of that square. Essentially, in this language, making a measurement makes the experimenter color blind -- only the relative areas of the amplitudes matter, and there is no way to learn phase information directly without doing a different experiment.
I could make a further document explaining along these lines if people are interested. The source is on github too: https://github.com/garrison/jsqis
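If it helps, here's the same arrow/square picture in a few lines of plain Python (not from jsqis itself, just an illustration of the representation described above, on a made-up 2-qubit state):

    import cmath

    amplitudes = {
        "00": 1 / 2,
        "01": 1j / 2,
        "10": -1 / 2,
        "11": -1j / 2,
    }

    for state, amp in amplitudes.items():
        prob = abs(amp) ** 2                        # area of the square
        phase = cmath.phase(amp) * 180 / cmath.pi   # direction of the arrow
        print(f"|{state}>: length {abs(amp):.2f}, "
              f"direction {phase:+.0f} deg, probability {prob:.2f}")

Measurement only ever sees the probability column; the phase column (the "color") is invisible to it.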
The cool thing about this insight is that the converse is true. You can disaggregate any waveform into its additive harmonics. This means you can jam multiple signals into a single channel (eg a fibre optic cable) and then apply a fourier transform at the other end to "untangle" them.
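A quick numpy illustration of that untangling: mix two sine waves into one "channel", then read the components back off the FFT (the frequencies and amplitudes here are arbitrary):

    import numpy as np

    fs = 1000                            # sample rate (Hz)
    t = np.arange(fs) / fs               # one second of samples
    channel = np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*120*t)

    spectrum = np.abs(np.fft.rfft(channel)) / (fs / 2)
    freqs = np.fft.rfftfreq(fs, 1 / fs)

    for f, a in zip(freqs, spectrum):
        if a > 0.1:
            print(f"{f:.0f} Hz, amplitude ~{a:.2f}")
    # -> 50 Hz ~1.00 and 120 Hz ~0.50, recovered from the single mixed signal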
Read "Communicating Sequential Processes" by Tony Hoare https://www.cs.cmu.edu/~crary/819-f09/Hoare78.pdf
There's also a book: http://www.usingcsp.com/
Suppose I buy a painting. I believe it to be an original Van Gogh so I pay $10 million for it. I then find out it is fake, and worthless. Was $10 million (of money) destroyed? Of course not, I just mis-valued an asset. Suppose it then turns out to be real after all. Owing to the fascinating history of this painting it is now valued at $20 million. Was $10 million of money created (relative to the moment when I originally thought it was a Van Gogh)? No. Was $10 million of wealth created? Yes as the world now has one more thing worth $10 million in it.
Money != wealth, even in the materialist sense where wealth consists purely of goods and services. Money is a metric we use to keep track of wealth, and in general it's considered helpful if that relationship holds, so if we're trying to maintain that relation rigorously the central bank should print another $10 million (or create it by making loans) to reflect our knowledge and appreciation of the Van Gogh - if it doesn't then the existing fixed quantity of money in the system will now be representing a greater quantity of wealth, causing deflation.
As I said in my other post I am not an economist by training. If the economists want to call this thing that got created/destroyed here "money" then I guess I should let them, but I would like to hear a good reason why it makes sense to do so, and I haven't heard one. Absent of a good reason I might as well call it haddock. Or, considering the OP was asking which things that could be explained better, we could acknowledge what any good programmer knows: part of a good explanation is choosing the right names for things.
If you live in a big enough city, there should be meetup groups where you could meet some people and have long discussions.
Is your interest only curiosity, or is it medical related? Sorry, on mobile and pressed for time so can't scroll through the thread..
It is not scientific to substitute one bad explanation for another. The scientific approach is to say we don't know, and then look for a good explanation.
That's funny, because my EE math concentration was on advanced calculus. I took two semesters of a-calc and got A's, but I only know how to compute a Jacobian and apply it, not its origin story. It's a very weird feeling to understand the motions but not the ... depth?
This only works if the beat you're hearing is sufficiently stable.
[1] https://cdn.rcsb.org/pdb101/molecular-machinery/ [2] http://pdb101.rcsb.org/sci-art/goodsell-gallery [3] http://pdb101.rcsb.org/motm/motm-by-date
and yes, I absolutely was meaning to ask about options, not futures (I know so little, I used the wrong word!)
How do options get priced?
i.e. if I look in my online brokerage thing I can buy XYZ for $100 on Jan 1 2021, but what if I want to buy it for $125 on Jan 1 2025?
Who decides what dates and prices are set?
Can I just go insane and get options on AAPL for $3000 (or $2) in 2030?
There are so many odd facets of it with interesting implications; like, did you know exposure to sex hormones actually contributes to the atrophy of the thymus over time? This is posited to have some relation to the increased likelihood of developing autoimmunity problems as we get older.
Also, not all T cell lines undergo negative selection in the thymus. There is a smaller population of more autoimmune-sensitive cells that develop and specialize in the extremities. It is theorized this is evolutionarily selected for because there is a tradeoff between being able to respond to a wide variety of pathogens and being free of autoimmunity. So you keep a small group of possibly autoreactive immune cell lines just in case. This is theorized to explain why autoimmunity issues in the extremities are relatively common.
- Assets that they buy? The idea is to keep the banking system solvent and to prevent a domino effect where the liquidation of one big bank results in a run on other banks. The big banks got into trouble because they took depositors' money and invested in junk which went belly-up. The federal government insures everyone's bank deposits; if the banks went belly-up, the FDIC would have to pay out. Better that the banks stay solvent.
- There are cases like Greece defaulting on its international loans. The EU forced Greece to agree to an austerity plan, lowering Greece's payments if Greece changed its national spending, which is deeply unpopular in both the EU and Greece. But there is no other alternative. Well, the alternative is what happened to Weimar Germany after WWI: hyperinflation, economic destruction, and the longing for a savior.
1. There is one substance that rewrites your mind.
2. There is more than one substance that rewrites your mind.
Are very different postulates.
That being said, it doesn't answer the question posed in this forum: "What scientific phenomenon do you wish someone would explain better?"
Instead, it answers the question: "What scientific phenomenon do you wish someone could explain?"
Sure, a bike could work some other way, but my point is, it doesn't need to. Anyone who has ever picked up a hard drive should understand how a bicycle remains upright. What else is there to know? It's not like an airplane wing, where the "obvious" conventional wisdom is inadequate, misleading, or incomplete.
So: how to picture this? Is the signal made of discrete 'photons' overlapping, or combined somehow? Or is it that the 'wave-like' aspect of these photons is so predominant at these frequencies? (I've grappled with this one for a long time.)
More detail:
Say you have an AM station that transmits at frequency f = 650kHz and uses power P = 50kW.
The equation for a photon's energy is E = hf, where E is the energy, h is Planck's constant and f is the frequency. Here h = 6.63×10^(-34) J·s and f = 6.5×10^5 Hz. Thus the photon's energy is E = 4.31×10^(-28) J, a very tiny number.
The number of photons per second is n = P/E = 1.16×10^32.
Let's try to visualize this. Avogadro's number is 6.022×10^23 per mole of something, so if we divide it out from n, we see that almost 200 million moles of photons are being released every second!
Water is 18 g/mol, which takes up about 18 cm^3. 200 million moles of water is about a million gallons. If a photon were like a water molecule, a "water AM station" would be releasing about a million gallons of water per second.
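The whole back-of-envelope calculation, if you want to play with the numbers:

    h = 6.63e-34            # Planck's constant (J*s)
    f = 650e3               # AM carrier frequency (Hz)
    P = 50e3                # transmitter power (W)
    N_A = 6.022e23          # Avogadro's number

    E = h * f               # energy per photon
    n = P / E               # photons per second

    print(f"E = {E:.3g} J, n = {n:.3g} photons/s, {n / N_A:.3g} mol/s")
    # -> E = 4.31e-28 J, n = 1.16e32 photons/s, about 1.9e8 mol/s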
The core of the star is the hottest and most dense part. Greater heat and density make it easier for fusion reactions to run. If suddenly the core is made mostly of iron, then the amount of energy it produces rapidly drops. Even if there are nice, easily fusible hydrogen atoms farther out from the core, they will not be fusing at a very high rate, because the temperature and pressure are lower where they are. Also, the more easily fusible atoms remaining outside the core can't diffuse into the core fast enough to refuel it. The only possible outcome is collapse.
In some sense "dark matter" and "dark energy" are just placeholder words for "whatever thing is causing all this weird stuff to happen". This is actually very analogous to how "the ether" was a placeholder term for "whatever thing that radio waves are waves in". (Now we refer to it as "the electromagnetic field". The "ether" terminology was associated with some incorrect assumptions, such as a privileged reference frame, which is why people sometimes say it was an incorrect hypothesis. But the electromagnetic field is certainly real, it just didn't turn out to work like some people thought it did.) Scientists have observed so far the dark matter seems to behave pretty much like ordinary matter, except that it just happens to ignore the electromagnetic and strong nuclear forces. Not only does it hold galaxies together, but its gravity also bends the paths of light rays, just as we expect of anything massive. So calling it "matter" isn't too much of a stretch. It's still very mysterious, though.
Radiation pressure actually does limit the mass of stars, to something on the order of 100 to 200 solar masses, see this stack exchange question: https://astronomy.stackexchange.com/questions/328/is-there-a... That doesn't stop smaller clouds of gas from collapsing to form smaller stars, though.
Imagine you have two spaceships in London. At the fire of a starting gun, they both take off, and fly around, each one taking a different path. Each ship has a clock that records the time elapsed since it took off. Eventually, they both return to the same landing pad in New York, landing at the same time. Thanks to time dilation, the readings on the clocks of the two ships might be different, even though they started and landed at the same places and times. Imagine we start both ships from a space station in deep space. One ship doesn't leave the station at all, it just stays in its docking bay, with its clock ticking along. The other flies down to the surface of Earth, sits there for a few years, and then flies back up to the space station. Thanks to the gravitational field of Earth, the ship that stayed home has more time elapsed on its clock than the ship that went to Earth.
Now suppose that each ship is carrying one end of a wormhole. The clocks on either end of the wormhole must stay synchronized. Someone sitting in the middle of the wormhole would be able to see the inside of one ship by looking to their left, and the inside of other ship by looking to their right. The clocks start out synchronized. As you point out, no matter how the ships move about, this does not change as the ships fly around. Anyone standing in the middle of the wormhole always sees that the clocks on the wall of each ship are in sync.
So: Entering the wormhole from ship A when ship A's clock reads X means you exit at ship B, at the time when ship B's clock reads X. And vice-versa for going from B to A. Now thanks to time dilation, ship B might arrive back at the station when its clock reads X, while ship A, which stayed behind, has a clock reading of (X + 20 minutes). If you are the station master, you can go into each ship to look at the clocks, and you will find that ship A's clock is always 20 minutes ahead of ship B's. But suppose that instead of walking between the ships through the station, you use the handy wormhole that connects them directly.
Suppose you enter ship B when its clock reads Y, and walk through the wormhole. You exit at ship A when its clock also reads Y. Then you step out of ship A, and walk through the station to ship B. Its clock reads (Y - 20 minutes), since according to people on the station, ship A's clock is still 20 minutes ahead of ship B's. When you originally entered ship B, its clock read Y. It now reads (Y - 20 minutes). Time travel. By retracing your path in the opposite direction, you can also travel 20 minutes into the future.
To clarify: a "point particle" is an object with no internal structure, that is, it can be fully described by its coordinates wrt time (ignoring relativity for now). This is a concept, a model which explains many phenomena, a model on top of which you can build many theories. It does not, however, explain the conjunction of QM with special relativity.
I think my contention with the iron is the tipping point and how quickly it goes. Pop-sci tv makes it seem like you fused a single iron atom and bam. Maybe it is you fused an iron atom and it is like a day, a year or a thousand years and that adds up; still bam in terms of cosmic timelines but it is not what I hear when I listen and hear "instant collapse".
Thank you for the thoughts on dark matter and energy as well, and the link on radiation pressure, I will read it.
I'm in Taiwan where masks are ubiquitous, and have been upset reading about the slow adoption of masks in the West because it was always from a selfish perspective ("do masks protect ME?") whereas here they're worn for a communal purpose ("how do I protect others?"). How effective they are at blocking incoming infection always seemed like a big distraction to me, since it's been clear from the start that it reduces spray from spreaders talking and coughing, which alone is enough of a reason to adopt it widely.
But IIUC, one of the remarkable things about MWI is that it would be a local hidden variable theory!
This is a very important property to have because the principle of locality is deeply ingrained in the way the Universe behaves. Note that (almost?) no other quantum interpretation is both realist and local at the same time.
Maybe you wonder, how is it possible that MWI can be considered a local hidden variable theory if Bell's theorem precisely shows that local hidden variable theories are not possible?
I think that it was Bell himself who said that the theorem is only valid if you assume that there is only one outcome every time you run the experiment, which is not the case in MWI.
This means that MWI is one of the few (the only?) interpretations we have that can explain how we observe the Bell test results while still being a local, deterministic, realist, hidden variable theory.
When that loan asset is written down the bank has to make up the difference from its equity - this ends up reducing the amount of loans it can write, so you get a contraction in the monetary supply.
So in the end, I guess the shorter answer is that a default destroys money in the same way that writing a loan creates it - you might well complain that no actual currency has been created or destroyed, but the argument is that it has a similar overall effect.
I know 4 billion years is a long time and the earth has a lot of matter rattling on it at any given time, but if every atom in the universe was a computer cranking out a trillion characters per second, you'd only have a 1 in a quarter quadrillion chance of making it to 'a new nation' in the first sentence of the Gettysburg address. Seeing the complexity in even the most trivial biological system just makes me scratch my head and wonder how it's possible at all.
I'm not invoking God here. I just see a huge gulf in complexity that is difficult for me to traverse mentally.
It doesn’t fully solve my confusion, but I suspect I need to study more before I can even ask the right next question. :)
Sorry for the questions; we can talk about it somewhere else. Just add an email or protonmail account to your hackernews account and I'll mail you there.
* The reflection angle laws are due to the laws of conservation, see https://en.wikipedia.org/wiki/Snell%27s_law
* For a pure colour, the colour is simply the energy of the photons. Atoms have discrete stable electron orbits, and electrons moving between these levels will absorb or emit discrete levels of energy in the form of photons, which is why we have spectral lines. Reality is more complicated because part of the energy may be converted to vibrations of the atom itself (phonons).
* Another factor is the perception of colour. In physics, to characterize light one measures its spectrum: the intensity of the light versus its wavelength (wavelength = speed of light in vacuum / frequency). The perceived colour of these distributions isn't always what one would expect.
But, I wonder if you can describe H1 as being a stronger hypothesis than H2 by virtue of withstanding more and higher quality attempts to disprove it?
If you take the Bell test experiment where Alice and Bob perform their measurements at approximately the same time but very far apart, I think you and I both agree that when Alice does a measurement and observes an outcome, she will have locally decohered from the world where she observes the other outcome.
But I don't see why the decoherence necessarily has to happen faster than the speed of light.
It makes sense that even if Alice decoheres from the world where she observes the other outcome, the outcomes of Bob's measurement are still in a superposition with respect to each Alice (and vice-versa).
And that only when Alices' and Bobs' light cones intersect each other will the Alices decohere from the Bobs in such a way that the resulting worlds will observe the expected correlations (due to how they were entangled or maybe even due to the worlds interfering with each other when their light cones intersect, like what happens in general with the wave function).
I admit I'm not an expert in this area, but is this not possible?
This project involves a minisatellite (capable of generating entangled photons in space) to establish a space platform with a long-distance satellite-to-ground quantum channel, and to carry out a series of tests of fundamental quantum principles and protocols on a space-based large scale.
I've gone decades without hearing it explained that clearly and simply. Thank you (sincerely).
No idea, sorry.
> favorite books on how things work at that scale
I've found the bionumbers database[1] very helpful. Google scholar and sci-hub for primary and secondary literature. But books... I'd welcome suggestions. I'm afraid I mostly look at related books to be inspired by things taught badly.
The bionumbers folks did a "Cell Biology by the Numbers" book... the draft is online[2].
Ha, they've done a Covid-19 by the numbers flyer[3].
If you ever encounter something nice -- paper, video, text, or whatever, or even discussion of what that might look like -- I'd love to hear of it. Sorry I can't be of more help.
[1] https://bionumbers.hms.harvard.edu/search.aspx [2] http://book.bionumbers.org/ [3] http://book.bionumbers.org/wp-content/uploads/2020/04/SARS-C...
It's not circular, it's a simple flowchart.
Are you writing an app or are you trying to invent more advanced crypto?
"writing an app" -> dont roll your own crypto
"invent more advanced crypto" -> go learn and research crypto history, math, etc..
I think that when people are essentially honest and trying to find out truth, they can agree on reasonable rules. But there is no way to make the rules simultaneously philosophically satisfactory and bulletproof against people who are willing to lie and twist the rules in their favor.
For example, in real life you usually cannot convince crackpots that they are wrong, but that is okay, because at some moment everyone just ignores them. If you try to translate this into a philosophical principle, you end up with something like "argument by majority" or "argument by authority". And then you can have the Soviet Union, where scientific progress was often suppressed using these principles. But what is the alternative? No one can ever be ignored unless you disprove their hypotheses according to some high standard? Then the scientific institutions would run out of money, as they would have to examine, using the high standard, the 1000th hypothesis of the 1000th crackpot.
> - Maybe I don't fully understand why LIGO needs two arms. If you had a clock that could accurately measure light wave crests, could you do it only with one arm?
Yes, that's absolutely right. The two arms cancel out the frequency fluctuations of the laser itself. If we had a perfectly stable laser, we could make do with just one arm.
Regarding the question about what stretches and what doesn't, I think the general rule is that rigidity prevents "stretching". For example, a hydrogen atom in expanding space would lose momentum over time, because it redshifts, but the atom itself wouldn't get any bigger. There's no need to invoke a higher dimension here, just some things are rigid (like laser cavities and the Earth) and some things aren't (like electromagnetic waves). In fact, in general invoking higher dimensions without a strong reason to is discouraged when discussing general relativity, simply because the math is already very complicated in 4D.
That doesn't sound surprising when all that injected money goes directly to banks instead of individuals.
You might find this useful. Along with the author's write-up:
https://medium.com/@stew_rtsmith/quantum-javascript-d1effb84...