The value of looking at AI safety as a Pascal's mugging, as posited by the video, is that it shows these philosophers' arguments are too malleable to be strictly useful. As you note, just find an "expert" who agrees.
The most useful frame for examination is the evidence (which to me means benchmarks). We'll be hard-pressed to derive anything authoritative from the philosophical approach. And I say that as someone who does his best to examine the evidence for and against the capabilities of these things... from Phi-1 to Llama to Orca to Gemini to Bard...
To my understanding, we struggle to strictly define intelligence and consciousness in humans at all, let alone in other "species". Granted, I'm no David Chalmers. Benchmarks seem inadequate for any number of reasons, and philosophical arguments seem too flexible; I don't know how one can speak definitively about these LLMs other than to tout benchmarks and capabilities/shortcomings.
>It seems like there’s a rather high chance if (proverbially) the AI-cat is let out of the bag.
Agreed, and I tend towards it not exactly being a Pascal's mugging either, but I loved that video and it's always stuck with me. I've been watching that guy since GPT-2 and OpenAI's initial trepidation about releasing it for fear of misuse. He's given me a lot of credibility in my small political circles: I'd been touting these things as coming for years, after watching the capability graphs never plateau against parameter count/training time.
AI has also made me reevaluate my thoughts on open-sourcing things. Do we really think it wise to have GPT-6 or 7 in the hands of every 4channer?
Re: Mass Effect, that's so awesome. I have to play those games. That sounds like such a dope premise. I like the idea of turning the mugging around like that.
It's a slightly different premise than what I described: rather than AGI, it's faster-than-light travel (which actually makes sense for The Intergalactic Community). Otherwise, more or less the same.