zlacker

[return to "GitHub experience various partial-outages/degradations"]
1. llama0+t8 2026-02-02 22:02:47
>>bhoust+(OP)
Looks like Azure as a platform just killed the ability to do VM scale operations, due to a change to the ACL on a storage account that hosts VM extensions. Wow... We noticed when GitHub Actions went down, then our self-hosted runners, because we can't scale anymore.

Information

Active - Virtual Machines and dependent services - Service management issues in multiple regions

Impact statement: As early as 19:46 UTC on 2 February 2026, we are aware of an ongoing issue causing customers to receive error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for Virtual Machines (VMs) across multiple regions. These issues are also causing impact to services with dependencies on these service management operations - including Azure Arc Enabled Servers, Azure Batch, Azure DevOps, Azure Load Testing, and GitHub. For details on the latter, please see https://www.githubstatus.com.

Current status: We have determined that these issues were caused by a recent configuration change that affected public access to certain Microsoft‑managed storage accounts, used to host extension packages. We are actively working on mitigation, including updating configuration to restore relevant access permissions. We have applied this update in one region so far, and are assessing the extent to which this mitigates customer issues. Our next update will be provided by 22:30 UTC, approximately 60 minutes from now.

https://azure.status.microsoft/en-us/status
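
For anyone poking at this from the SDK side, here's a minimal sketch (hypothetical resource names, azure-mgmt-compute) of the kind of scale-out call that's affected, plus how to read extension provisioning errors out of the instance view:

```python
# Minimal sketch, not an official repro: bump a VMSS's capacity and inspect the
# instance view for extension provisioning errors. All names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "runners-rg"           # hypothetical
VMSS_NAME = "gh-runner-vmss"            # hypothetical

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Request one more instance. Per the status page, service management operations
# like this can fail while extension packages are unreachable.
vmss = client.virtual_machine_scale_sets.get(RESOURCE_GROUP, VMSS_NAME)
client.virtual_machine_scale_sets.begin_update(
    RESOURCE_GROUP, VMSS_NAME, {"sku": {"capacity": vmss.sku.capacity + 1}}
).result()

# Walk each instance's extension provisioning statuses to spot failures.
for vm in client.virtual_machine_scale_set_vms.list(RESOURCE_GROUP, VMSS_NAME):
    view = client.virtual_machine_scale_set_vms.get_instance_view(
        RESOURCE_GROUP, VMSS_NAME, vm.instance_id
    )
    for ext in (view.extensions or []):
        for status in (ext.statuses or []):
            print(vm.instance_id, ext.name, status.code, status.message)
```

In an incident like this the update call itself may error out, or the new instances may come up with failed extension provisioning, which is what takes autoscaled runner pools down.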

2. bob102+vq 2026-02-02 23:07:01
>>llama0+t8
They've always been terrible at VM ops. I never hit weird quota limits and errors anywhere else. It's almost as if Amazon wants me to be a customer and Microsoft does not.
3. llama0+Et 2026-02-02 23:20:57
>>bob102+vq
It's awful. Any other service in Azure that relies on the core systems seems to run into issues depending on them; I feel for those internal teams.

Ran into an issue upgrading an AKS cluster last week. The upgrade completely stalled and broke the entire cluster in a way that left our hands tied, since we can't see the control plane at all...

I submitted a Severity A ticket, and 5 hours later I was told there was a known issue with the latest VM image that caused problems with the control plane, leaving any cluster upgraded in that window to essentially kill itself and require manual intervention. Did they notify anyone? Nope. Did they stop anyone from killing their own clusters? Nope.

It seems like every time I'm forced to touch the Azure environment I'm playing Russian roulette, hoping nothing is broken on the backend.
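
If it helps anyone else stuck in the same spot, here's a minimal sketch (hypothetical names, azure-mgmt-containerservice) of checking cluster and node pool state, including the node image version, from the management plane, which is about all that's visible when the control plane itself is unreachable:

```python
# Minimal sketch: read an AKS cluster's provisioning state and each node pool's
# image/orchestrator versions via the management API. Names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "aks-rg"               # hypothetical
CLUSTER_NAME = "prod-aks"               # hypothetical

client = ContainerServiceClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

cluster = client.managed_clusters.get(RESOURCE_GROUP, CLUSTER_NAME)
print("cluster provisioning state:", cluster.provisioning_state)
print("kubernetes version:", cluster.kubernetes_version)

# Node image version is where a bad VM image rollout would show up.
for pool in client.agent_pools.list(RESOURCE_GROUP, CLUSTER_NAME):
    print(pool.name, pool.provisioning_state,
          pool.orchestrator_version, pool.node_image_version)
```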

4. lillec+Sr1 2026-02-03 06:48:18
>>llama0+Et
It's nice to be able to buy responsibility when it's actually upheld; otherwise you're just trading your money for the inability to fix things yourself.