Resolved
We believe that the mitigation efforts have resolved the issue.
Posted Dec 10, 2025 - 08:39 CST
Update
Cisco has tentatively confirmed our staff's theory from this morning that scanning of a particular protocol was causing the firewall crashes. We took action on the two rules being hit the most at 5:30 AM and have been stable since then. Following Cisco's tentative confirmation, we have taken action on roughly eight additional rules pertaining to the affected protocol. We will continue to monitor for any further issues and will continue to follow up with Cisco.
Posted Dec 09, 2025 - 13:30 CST
Monitoring
A fix has been implemented and we are monitoring the results.
Posted Dec 09, 2025 - 13:27 CST
Update
Staff have worked with Cisco all evening. Cisco's analysis points to a recurrence of a bug we dealt with in August, which was expected to be patched last month, as the leading cause. Further action was taken to reduce some of the memory pressure on the firewall. We will continue to work with Cisco throughout the day.
Posted Dec 09, 2025 - 06:50 CST
Update
At approximately 1 AM, both firewalls crashed, causing roughly 20 minutes of outage. We are escalating with support.
Posted Dec 09, 2025 - 01:35 CST
Investigating
Tonight our campus firewall has crashed multiple times. Each crash triggered a failover with an expected 1-2 minutes of impact. We have escalated our case with Cisco and are working to determine the cause of the crashes and any potential upgrade or workaround to apply.
Posted Dec 09, 2025 - 00:22 CST
This incident affected: TigerWiFi and Wired Network.