"Impact statement: As early as 19:46 UTC on 2 February 2026, we are aware of an ongoing issue causing customers to receive error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for Virtual Machines (VMs) across multiple regions. These issues are also causing impact to services with dependencies on these service management operations - including Azure Arc Enabled Servers, Azure Batch, Azure DevOps, Azure Load Testing, and GitHub. For details on the latter, please see https://www.githubstatus.com."
Information
Active - Virtual Machines and dependent services - Service management issues in multiple regions
Impact statement: As early as 19:46 UTC on 2 February 2026, we are aware of an ongoing issue causing customers to receive error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for Virtual Machines (VMs) across multiple regions. These issues are also causing impact to services with dependencies on these service management operations - including Azure Arc Enabled Servers, Azure Batch, Azure DevOps, Azure Load Testing, and GitHub. For details on the latter, please see https://www.githubstatus.com.
Current status: We have determined that these issues were caused by a recent configuration change that affected public access to certain Microsoft‑managed storage accounts, used to host extension packages. We are actively working on mitigation, including updating configuration to restore relevant access permissions. We have applied this update in one region so far, and are assessing the extent to which this mitigates customer issues. Our next update will be provided by 22:30 UTC, approximately 60 minutes from now.
Edit: before anyone points this out, I do understand that the underlying issue is on Azure's side.
Since there is no GitHub CEO (Satya isn't bothered anymore) and no human employees are looking, Tay and Zoe are at the helm, ruining GitHub with their broken AI-generated fixes.
Ran into an issue upgrading an AKS cluster last week. It completely stalled and broke the entire cluster in a way that tied our hands, since we can't see the control plane at all...
I submitted a Severity A ticket, and 5 hours later I was told there was a known issue with the latest VM image that would break the control plane, leaving any cluster updated in that window to essentially kill itself and require manual intervention. Did they notify anyone? Nope. Did they stop anyone from killing their own clusters? Nope.
It seems like every time I'm forced to touch the Azure environment I'm basically playing Russian roulette hoping that something's not broken on the backend.
That was years ago; wild to see they have the same issues.
Also had a few instance types that won't spin up in some regions/AZs recently. I assume this is a capacity issue.
I don't get how Microsoft views this level of service as acceptable.
There’s a bunch of hardware, and they can’t run more servers than they have hardware. I don’t see a way around that.
Must be nice to be a monopoly that has most of the businesses in the world as their hostages.
It means that any service designed to survive a control plane outage must statically allocate its compute resources and have enough slack that it never relies on auto scaling. True for AWS/GCP/Azure.
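To make that concrete, here is a back-of-the-envelope sketch of what static allocation costs you. The function and all numbers are my own hypothetical illustration, not anyone's real sizing model: without autoscaling, you must pre-provision for peak load plus slack, because no new instances can be launched mid-outage.

```python
import math

def static_fleet_size(peak_rps: int, rps_per_instance: int,
                      headroom: float = 0.3) -> int:
    """Instances needed to serve peak load plus a fixed slack margin,
    since nothing new can be launched while the control plane is down.
    (Hypothetical model; numbers below are made up.)"""
    base = math.ceil(peak_rps / rps_per_instance)
    return math.ceil(base * (1 + headroom))

# With autoscaling you might idle at ~10 instances off-peak; to survive
# a control-plane outage you keep 65 running around the clock.
print(static_fleet_size(peak_rps=5000, rps_per_instance=100))  # 65
```

The gap between the off-peak fleet and that static number is the price of not depending on the provider's control plane.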
Being happy means:
- you don't feel the need to automate more manual tasks (you lack laziness)
- you don't feel the need to make your system faster (you lack impatience)
- you don't feel the need to make your system better (you lack hubris)
So basically, happiness is a Sin.
That sounds oddly similar to owning hardware.
As in why don't they mention Azure by name?
Or as in there shouldn't be isolated silos?
AWS has never had this type of outage in 20 years, yet Azure has them constantly.
This is a total failure of engineering and has nothing to do with capacity. Azure is a joke of a cloud.
Which is my point.
The same fault on Azure would be a global (all-regions) fault.
GitLab was generous at first, to rise as a valid alternative to GitHub. They never got the community aspect right, perhaps aiming for profitability with a focus on runner instances, which is how they make money.
With profitability, the IPO made sense.
GitHub probably had a different strategy: keep it generous, get the entire open source community, keep raising money, and one day someone will buy us out for billions. Here we are; Microsoft's goal is to capture the community, and it works. It's sticky.
I wanted to try out the cheapest option out of frugality, and that was actually limited (but kudos to them for mentioning that those servers have limits), so no worries, I went and picked the 5.99 euro option instead of the 3.99 euro one.
They also have a limits page in the settings, iirc, which transparently shows all the limits that are imposed. My account's young, so I can't request limit increases, but after some time one definitely can.
Essentially I love this idea, because the cloud is just someone else's hardware and nothing is infinite. But I feel it can come pretty close with Hetzner (and I have heard some great things about OVH, and have good personal experience with a netcup VPS, though netcup's payments were a real PITA to set up).
One of the reasons I still use GitHub is that I have starred quite a lot of projects, and I initially had to make an account to star a project. (I used to keep bookmarks before that, but I wanted to support the authors in a minor way :] and, GitHub being de facto, I also wanted to create/discuss issues on some projects.)
Another minor point is that GitHub Actions is more generous than Codeberg's equivalent.
I believe hosting your own Codeberg, i.e. Forgejo (which is a Gitea fork) or Gitea, is actually easy. I once hosted them on my Android phone using Termux, and on servers. I really liked the idea of essentially having GitHub in my pocket.
As for Gists (which I personally like using a lot), I found the idea of Opengist really interesting as well. My one minor complaint is the comment feature of Gists, which I love: it's an open issue in Opengist but not implemented yet. I wish it were.
Regarding losing bookmarks, I actually have a custom Tampermonkey script in a private gist that shows a star button, which essentially saves my bookmarks to a gist in JSON format so I never lose them again.
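For anyone curious, the payload a script like that keeps in the gist can be trivially simple. A minimal sketch of the idea in Python; the field names (`bookmarks`, `url`, `note`) and the helper are my own invention, not the actual script:

```python
import json

def add_bookmark(store: dict, repo_url: str, note: str = "") -> dict:
    """Append one bookmark to a hypothetical JSON store, the kind of
    blob a userscript could PATCH into a private gist's file content."""
    store.setdefault("bookmarks", []).append({"url": repo_url, "note": note})
    return store

store = add_bookmark({}, "https://github.com/go-gitea/gitea", "forge")
payload = json.dumps(store)  # string you would write back to the gist
assert json.loads(payload)["bookmarks"][0]["note"] == "forge"
```

Because the store round-trips through plain JSON, the bookmarks survive even if the script or the browser profile is lost, as long as the gist exists.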
Like imagine if AWS were composed of separate companies for different services - say Fargate were a Heroku acquisition - and then they all went down and blamed their "upstream provider" because they can't work without, say, VPC or EC2 availability.
I think that's all GP meant, it just reads a bit funny, not that it's wrong.
It isn't actually all that much, but most devs I've come across who have all of these are happy.
In Azure, for example, it's possible to use Entra as your Active Directory, along with the fine-grained RBAC built into the platform. On a host that just gives you a VPS/DS, you have to run your own AD (and secondary backups). Likewise with things like webservers (IIS) and SQL Server, which both have PaaS offerings with SLAs and all the infra management tasks handled for you in an easily auditable way.
If you just need a few servers at the IaaS level, the big cloud platforms don't look like a great value. But, if you do a SOC2, for example, you're going to have to build all the documentation and observability/controls yourself.
Their status page seems to think everything's A-OK.
Similar to Hetzner? I haven't used OVH, but does it also have limits, or how do they handle that?
Out of pure curiosity, is there anything aside from the hyperscaler trifecta that also doesn't show limits?
There were 25 incidents in January and 15 in December.
There is Forgejo. I find it more stable, so I self-host it. It never suffered an outage in the 2 years I've had it running, and it's faster than GitHub.