Azure North Central Connectivity Issues
Incident Report for Workspot
Postmortem

Summary of impact:

Between 21:39 UTC on 9 Apr 2019 and 06:20 UTC on 10 Apr 2019, you were identified as a customer using Virtual Machines in the North Central US region who may have experienced connection failures when trying to access some Virtual Machines hosted in the region. These Virtual Machines may also have restarted unexpectedly. Residual impact was also detected: a small subset of recovered Virtual Machines continued to experience degraded connectivity to their underlying disk storage.

Preliminary root cause: Engineers determined that a recent Storage configuration change caused a loss of connectivity between Virtual Machines and storage.

Mitigation: The connectivity issue self-healed through the Azure platform's automatic recovery. Engineers also performed manual recovery on a subset of affected Virtual Machines that experienced residual impact to storage connections within the Virtual Machine environment.

· A small subset of Virtual Machines may have experienced an additional impact window between 05:20 UTC and 06:20 UTC on 10 Apr 2019, which was required to fully mitigate the residual underlying storage connectivity issue.
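
If you would like to verify that your own Virtual Machines in North Central US have fully recovered, the sketch below lists each VM in the region and reports its current power state. This is a minimal, illustrative example only, not part of the Azure mitigation described above; it assumes the azure-identity and azure-mgmt-compute Python packages and a placeholder subscription ID.

    # Minimal sketch: list Virtual Machines in North Central US and report power state.
    # Assumes DefaultAzureCredential can authenticate (for example, via `az login`).
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder, not from this report

    client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    for vm in client.virtual_machines.list_all():
        if vm.location.lower() != "northcentralus":
            continue
        # The resource group name is the fifth segment of the resource ID.
        resource_group = vm.id.split("/")[4]
        view = client.virtual_machines.instance_view(resource_group, vm.name)
        power_state = next(
            (s.display_status for s in view.statuses or []
             if s.code and s.code.startswith("PowerState/")),
            "unknown",
        )
        print(f"{vm.name}: {power_state}")

Any VM that does not report a running power state can then be investigated or restarted as needed.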

Next steps: Engineers will continue to investigate to establish the full root cause and prevent future occurrences. To stay informed about any issues, maintenance events, or advisories, create service health alerts (https://www.aka.ms/ash-alerts); you will then be notified via your preferred communication channel(s): email, SMS, webhook, etc.
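
Service health alerts themselves are typically configured through the Azure portal via the link above. As a complementary, hypothetical illustration, the sketch below shows one way to review ServiceHealth entries in the Azure Activity Log for the impact window, assuming the azure-identity and azure-mgmt-monitor Python packages and a placeholder subscription ID.

    # Minimal sketch: pull ServiceHealth entries from the Azure Activity Log for the
    # impact window described above (all times UTC).
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.monitor import MonitorManagementClient

    SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder, not from this report

    client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Activity Log filter covering the incident window.
    odata_filter = (
        "eventTimestamp ge '2019-04-09T21:00:00Z' and "
        "eventTimestamp le '2019-04-10T07:00:00Z'"
    )

    for event in client.activity_logs.list(filter=odata_filter):
        # Keep only Service Health notifications.
        if event.category and event.category.value == "ServiceHealth":
            operation = event.operation_name.value if event.operation_name else ""
            status = event.status.value if event.status else ""
            print(event.event_timestamp, operation, status)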

Posted Apr 10, 2019 - 18:24 UTC

Resolved
Posted Apr 09, 2019 - 22:08 UTC