The correct level of analysis is not the substrate (silicon vs. wetware) but the computational principles being executed. A modern sparse Transformer, for instance, is not "conscious," but it is an excellent engineering approximation of two core brain functions: the Global Workspace (via self-attention) and Dynamic Sparsity (via MoE).
To dismiss these systems as incomparable to human cognition because their form is different is to miss the point. We should not be comparing a function to a soul, but comparing the functional architectures of two different information processing systems. The debate should move beyond the sterile dichotomy of "human vs. machine" to a more productive discussion of "function over form."
I elaborate on this here: https://dmf-archive.github.io/docs/posts/beyond-snn-plausibl...
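To make the two analogies concrete, here is a minimal NumPy sketch (names and shapes are my own illustration, not taken from the linked post): self-attention as a "global workspace" in which every token reads a weighted broadcast of every other token's state, and top-k MoE routing as "dynamic sparsity" in which only a few experts fire per token.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head attention: each token attends to all tokens (global broadcast)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V  # every token mixes information from all tokens

def moe_layer(X, gate_W, experts, k=2):
    """Top-k mixture-of-experts: per token, only k of the experts run (sparsity)."""
    logits = X @ gate_W                        # (tokens, n_experts) gating scores
    topk = np.argsort(logits, axis=-1)[:, -k:] # indices of the k best experts
    out = np.zeros_like(X)
    for t in range(X.shape[0]):
        w = softmax(logits[t, topk[t]])        # renormalize over selected experts
        for weight, e in zip(w, topk[t]):
            out[t] += weight * experts[e](X[t])
    return out

rng = np.random.default_rng(0)
n_tokens, d, n_experts = 5, 8, 4
X = rng.normal(size=(n_tokens, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
attended = self_attention(X, Wq, Wk, Wv)

expert_mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, M=M: x @ M for M in expert_mats]  # toy linear "experts"
gate_W = rng.normal(size=(d, n_experts))
mixed = moe_layer(attended, gate_W, experts, k=2)
print(attended.shape, mixed.shape)
```

The point of the sketch is just the control flow: attention is dense and global (every token sees every token), while the MoE layer conditionally executes only 2 of 4 experts per token, which is the "dynamic sparsity" part of the analogy.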
This is actually not comparable, because the brain has a much more complex structure that is _not_ learned, even at that level. The proteins and their structures are not a result of training. The fixed part of LLMs is comparatively trivial: it is not much more than MatMul, which is very easy to understand - and we do. The fixed part of the brain, including the structure of all its proteins, is enormously complex and very difficult to understand - and we don't.