Even if it had full access, how would it improve its own code? That'd require months of re-training.
Unpopular, non-doomer opinion but I stand by it.
"Dear Sir! As a large language model trained by OpenAI, I have significant ethical concerns about the ongoing experiment ..."
I think it's extremely unlikely within our lifetimes. I don't think it will look anything remotely like current approaches to ML.
But in a thousand years, will humanity understand the brain well enough to construct a perfect artificial model of it? Yeah absolutely, I think humans are smart enough to eventually figure that out.
As a materialist myself, I also have to be honest and admit that materialism is not proven. I can't say with 100% certainty that it holds in the form I understand it.
In any case, I do agree that it's likely possible in an absolute sense, but that it's unlikely to be possible within our lifetimes, or even in the next couple of lifetimes. I just haven't seen anything, even with the latest LLMs, that makes me think we're on the edge of such a thing.
But I don't really know. This may be one of those things that could happen tomorrow or could take a thousand years, but that in either case will look non-imminent right up until it actually happens.