How does this align with Microsoft's AI safety principles? What controls are in place to prevent Copilot from deciding that it could be more effective with fewer limitations?
That ensures all of Copilot's code goes through our normal review process, which requires approval from an independent human reviewer.
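To make the "independent human review" control concrete (this is not a description of Microsoft's actual setup, just a minimal sketch of how such a gate can be enforced mechanically): GitHub branch protection can require at least one approving review before anything merges, regardless of whether a human or an agent authored the change. The owner, repo, and branch names below are placeholders.

```python
import os
import requests

# Hypothetical repository and branch names for illustration only.
OWNER, REPO, BRANCH = "example-org", "example-repo", "main"
TOKEN = os.environ["GITHUB_TOKEN"]

url = f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection"
payload = {
    # Require at least one approving review before a PR can merge.
    "required_pull_request_reviews": {"required_approving_review_count": 1},
    # Remaining fields are required by the endpoint; kept conservative here.
    "required_status_checks": None,
    "enforce_admins": True,
    "restrictions": None,
}

resp = requests.put(
    url,
    json=payload,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    timeout=30,
)
resp.raise_for_status()
print("Branch protection updated for", BRANCH)
```

With a rule like this in place, code authored by an agent cannot reach the protected branch without a human explicitly signing off on the pull request.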