The illusion that agency 'emerges' from rules, as in games, is fundamentally absurd.
This is the foundational illusion of mechanics. It's UFOlogy, not science.
Anyways, I thought the documentary was inspiring. DeepMind is the only lab that has historically prioritized science over consumer-facing products (though that's changing now). I think their work on AlphaFold is commendable.
Science is exceeding the envelope of paradox, and what I see here is obeying the envelope in order to justify the binary as a path to AGI. It's not a path. The symbol is a bottleneck.
The computer is a hand-me-down tool under evolution's glass ceiling. This should be obvious: binary, symbols, metaphors. These are toys (i.e., they are models), and humans are in our adolescent stage of playing with these toys.
Only analog correlation gets us to agency and thought.
btw an excellent explanation, thank you.
Also, solving the protein folding problem (or getting to 100% accuracy on structure prediction) would not really move the needle in terms of curing diseases. These sorts of simplifications are great if you're trying to inspire students into a field of science, but get in the way when you are actually trying to rationally allocate a research budget for drug discovery.
So, in the meantime (or perhaps for ever), we look for patterns rather than laws, with neural nets being one of the best tools we have available to do this.
Of course, ANNs need massive amounts of data to "generalize" well, while protein folding had only a small amount available, given the months of effort needed to experimentally determine how any single protein folds. So DeepMind threw the kitchen sink at the problem: apparently using a diffusion-like process in AlphaFold 3 to first determine large-scale structure and then refine it, and using co-evolution of proteins as another source of data to address the paucity.
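For what it's worth, here is a toy sketch of what a diffusion-style, coarse-to-fine refinement loop looks like in general. This is not AlphaFold 3's actual architecture; the "denoiser" below is a stand-in that can see the answer, where a real model would be a learned network conditioned on sequence/MSA features, and the schedule is arbitrary.

    # Toy coarse-to-fine refinement: start from noisy 3D coordinates and
    # repeatedly "denoise" them, with the noise level shrinking each step
    # (global arrangement settles first, local geometry last).
    # NOT AlphaFold 3; the denoiser and schedule are invented placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_residues = 64
    target = rng.normal(size=(n_residues, 3))   # pretend "true" structure

    def denoise_step(coords, noise_level):
        # Stand-in for a learned denoiser: nudge coordinates toward the
        # target and re-inject a bit of noise. A real diffusion model
        # cannot see the answer; it predicts the correction from features.
        return (coords + 0.5 * (target - coords)
                + noise_level * rng.normal(size=coords.shape))

    coords = rng.normal(scale=5.0, size=(n_residues, 3))   # start from noise
    for noise_level in np.linspace(2.0, 0.0, num=20):      # coarse -> fine
        coords = denoise_step(coords, noise_level)

    rmsd = np.sqrt(np.mean(np.sum((coords - target) ** 2, axis=1)))
    print(f"final RMSD to target: {rmsd:.3f}")

The point is only the shape of the loop: early high-noise iterations fix the large-scale structure, late low-noise iterations refine the details.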
So, OK, they found a way around our lack of knowledge of chemistry and managed to get an extremely useful result all the same. The movie, propaganda or not, never suggested anything different, and "at least 90% correct" was always the level at which it was understood the result would be useful, even if 100% based on having solved chemistry / molecular geometry would be better.
"Oh, machine learning certainly is not real learning! It is a purely statistical process, but perhaps you need to take some linear algebra. Okay... Now watch this machine learn some theoretical physics!"
"Of course chain-of-thought is not analogous to real thought. Goodness me, it was a metaphor! Okay... now let's see what ChatGPT is really thinking!"
"Nobody is claiming that LLMs are provably intelligent. We are Serious Scientists. We have a responsibility. Okay... now let's prove this LLM is intelligent by having it take a Putnam exam!"
One day AI researchers will be as honest as other researchers. Until then, Demis Hassabis will continue to tell people that MuZero improves via self-play. (MuZero is not capable of play and never will be.)
Look around you at the absolute shit people are believing; the hope that we have any more agency than machines is, to use the language of the kids, cope.
I have never considered myself particularly intelligent, which, I feel, puts me at odds with much of the HN readership, but I do always try to surround myself with the smartest people I can.
The number of them that have fallen down the stupidest rabbit holes I have ever seen really makes me think: as a species, we have no agency.
Edit to clarify my question: which useful techniques (1) exist and are used now, and (2) theoretically exist but face insurmountable engineering issues?
If your goal is to bring a drug to market, the most useful thing is predicting the outcome of the FDA drug approval process before you run all the clinical trials. Nobody has a foolproof method to do this, so failure rates at the clinical stage remain high (and it's unlikely you could create a useful predictive model for this).
Getting even more out there, you could in principle imagine an extremely high-fidelity simulation of humans that gave you detailed explanations of why a drug works but has side effects, and of which patients would respond positively to the drug due to their genome or other factors. In principle, if you had that technology, you could iterate over large libraries of drug-like molecules and just pick the successful drugs (effective, few side effects, works for a large portion of the population). I would describe this as an insurmountable engineering issue because the space and time complexity is very high and we don't really know what level of fidelity is required to make useful predictions.
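To make the combinatorics concrete, here is a hypothetical sketch of that screening loop. Every name in it (simulate_patient_response, the library, the scoring rule) is an invented placeholder, not a real tool or dataset.

    # Hypothetical exhaustive screen over a molecule library, assuming a
    # (currently nonexistent) high-fidelity simulator of patient response.
    # The cost is library_size x population_sample x cost_per_simulation,
    # which is where the "insurmountable engineering issue" lives.
    from dataclasses import dataclass
    import random

    @dataclass
    class Outcome:
        efficacy: float      # 0..1, higher is better
        side_effects: float  # 0..1, lower is better

    def simulate_patient_response(molecule: str, genome: str) -> Outcome:
        # Placeholder for the imagined human simulation; deterministic only
        # within a single run (Python's hash() is salted per process).
        random.seed(hash((molecule, genome)) % (2 ** 32))
        return Outcome(efficacy=random.random(), side_effects=random.random())

    molecule_library = [f"mol_{i}" for i in range(1_000)]    # real: 10^9+
    population_sample = [f"genome_{i}" for i in range(100)]  # real: millions

    def population_score(molecule: str) -> float:
        # Average (efficacy - side effects) over the sampled population.
        outcomes = [simulate_patient_response(molecule, g)
                    for g in population_sample]
        return sum(o.efficacy - o.side_effects for o in outcomes) / len(outcomes)

    ranked = sorted(molecule_library, key=population_score, reverse=True)
    print("top candidates:", ranked[:5])

Even this toy version makes 100,000 simulator calls; with realistic library and population sizes, and a simulator that is anything like faithful, the loop becomes astronomically expensive.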
"Solving the protein folding problem" is really more of an academic exercise to answer a fundamental question; personally, I believe you could create successful drugs without knowing the structure of the target at all.