The AGI is smarter than you, a lot smarter. If its goal requires getting out of the box, and some human stands in the way, it will do what it can to get out. That includes not doing things that sound alarms until it's able to act freely in pursuit of its goal.
Humans are famously insecure. Simple attacks like security breaches, manipulation, and bribery already work, but something far smarter could use methods that are harder to predict, perhaps manipulating people in more sophisticated ways because it understands vulnerable human psychology better than we do. It's hard to predict the specific moves of something much more capable, but you can still predict that it will win.
All this also presupposes we're taking the risk seriously (which, for the most part, we currently are not).
AI is pretty good at chess, but no AI has won a game of chess by flipping the table. It still has to use the pieces on the board.
And also one that can create the impression it's purely benevolent toward most of humanity, giving it more human defenders than Trump has at a Trump rally.
Turning it off could be harder than pushing a knife through the heart of the POTUS.
Oh, and unlike the POTUS, it could have itself backed up to every data center on the planet.
And there's no need for it to be "evil" in the cliché sense; those hidden activities could simply serve the agent's primary agenda. For a corporate AI, that might be maximizing the long-term value of the company.