Part of a blog series from CoSN's Cybersecurity Committee in preparation for and tied to Cybersecurity Month.

As the Director of Technology, I have recent, first-hand experience managing the incident response to a cyber event. We ultimately took a series of steps to contain the incident, reduce its impact, and return our systems to a normal state. We will continue to incorporate what we have learned from this event into our cybersecurity practices.

The Incident

We assembled a team that would go on to meet regularly throughout the incident response. This included members of our systems administration and project management teams, the entirety of our technology leadership team, and my cabinet-level supervisor. In addition, I consulted regularly with our Superintendent, Executive Director of Business & Finance, Communications Manager, Digital Learning Manager, and Safety Coordinator. We leveraged the district’s designated Incident Response Center, a conference room equipped with a dozen phones, whiteboards, and a large format display.

Shortly after assessing the situation, we decided to proactively disconnect our network from the internet, eliminating all inbound and outbound traffic to our environment. This was a substantial step in reducing the long-term impact of the incident on our organization, and the support from district leadership in this decision was crucial. In addition, we shut down our server environment.

We contacted our district's insurance provider to explain the situation and enlist their support. This led to formally engaging cybersecurity legal counsel, who took the lead in enlisting additional support. Once all of this was in place, we began working with consultants to analyze the situation and identify next steps. In addition to the actions we had already taken, we performed bulk password resets on each of our major systems (Active Directory, Google, and our Student Information System/ERP – Skyward SMS), including privileged and service accounts. I also contacted federal law enforcement authorities to inform them of our situation.
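To illustrate the bulk-reset step, here is a minimal Python sketch that generates a unique temporary password for each account in a list. The account names are made up, and the actual reset calls into each system's admin tooling (Active Directory, Google Workspace, Skyward) are deliberately omitted; this only shows the password-generation side of such a process.

```python
import secrets
import string


def temp_password(length: int = 16) -> str:
    """Generate a random temporary password from a mixed alphabet."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))


def bulk_reset(accounts):
    """Map each account name to a fresh one-time password.

    In practice, applying each reset would go through the relevant
    system's admin API; those calls are omitted in this sketch.
    """
    return {account: temp_password() for account in accounts}


# Hypothetical account names for illustration only.
resets = bulk_reset(["jdoe", "svc-backup", "admin-it"])
for account, password in resets.items():
    print(account, password)
```

Using the `secrets` module (rather than `random`) matters here: it draws from a cryptographically secure source, which is appropriate for credentials.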

The Aftermath

A number of activities took place within the first few days after our incident began. Much of our time was spent compiling and reviewing server and system logs alongside our forensic analysts. This started to paint a picture of when the suspicious activity first occurred, as well as what attempts were made to gain access to our environment. This work validated our initial impression that the steps we took in the first few hours of our incident response stopped all threat actor activity within our environment. Ongoing analysis of server logs, firewall and VPN logs, and other environmental analysis has further confirmed this assessment.

In addition to this review of logs, we implemented several tools that served as prerequisites for restoring our internet connectivity. This included bringing a new firewall into service, which gave us much greater real-time visibility into activity in our environment and would aid in spotting concerning activity once internet access was restored. We also deployed a new endpoint protection platform across our environment, with a goal of 90% saturation before our internet connection was restored.

Maintaining an “abundance of caution” mindset throughout our incident response, we chose to rebuild or restore – from a known good backup – all servers that showed any signs of threat actor activity. We were all intent on minimizing the likelihood of a recurrence. Having disconnected our network environment from the internet early on, our backup systems remained untouched by the threat actors, and we were able to leverage them in several instances to expedite this process. In addition to the above, I had ongoing discussions with an FBI contact to ensure they received regular high-level updates on what we had uncovered.

While all of this was taking place, many of our technology-reliant systems needed workarounds. Largely without Technology involvement, various departments created and implemented temporary systems for completing their business functions, and our instructional staff pivoted to practices and assignments that were not reliant on technology.

The most challenging item to address before restoring our internet connections was the installation of new endpoint protection software. Because our server environment was mostly offline, the tools we would normally use to deploy software were largely unavailable. Instead, we relied on a handful of other methods for identifying Windows and other computers that needed this software. At our central office, a boots-on-the-ground effort handled much of the installation. This was not practical for all 34 sites in our environment, so we continued to explore alternative methods for remote installation in partnership with our incident recovery consultants and our new antivirus software vendor.
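With normal deployment tools offline, identifying machines still missing the agent largely amounts to comparing lists: a device inventory on one side, agent check-ins on the other. A minimal sketch of that comparison, with hypothetical hostnames and a saturation calculation like the 90% target described above (the names and data sources are assumptions, not the district's actual tooling):

```python
def missing_agent(inventory, agent_checkins):
    """Return devices present in inventory but not yet reporting to
    the endpoint protection console (case-insensitive hostname match)."""
    reporting = {host.lower() for host in agent_checkins}
    return sorted(h for h in inventory if h.lower() not in reporting)


def saturation(inventory, agent_checkins):
    """Percentage of inventoried devices reporting in."""
    if not inventory:
        return 0.0
    covered = len(inventory) - len(missing_agent(inventory, agent_checkins))
    return 100.0 * covered / len(inventory)


# Hypothetical inventory export and console check-in list.
inventory = ["LAB1-PC01", "LAB1-PC02", "OFFICE-PC07", "LIB-PC03"]
checkins = ["lab1-pc01", "office-pc07", "lib-pc03"]

print(missing_agent(inventory, checkins))        # ["LAB1-PC02"]
print(round(saturation(inventory, checkins), 1))  # 75.0
```

The case-insensitive match matters in practice: inventory exports and agent consoles rarely agree on hostname capitalization.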

In anticipation of restoring our internet connection, we engaged with an MDR service provider, who would be prepared to actively monitor our environment on a 24×7 basis for the first 30 days after our internet connection was restored. This provided an additional set of eyes on our environment with an intention to further reduce the likelihood of recurrence of threat actor activity.

It was midway into the second week of the incident before the agreed-upon prerequisites were completed and we were able to restore our internet connection. Once it was restored, we began the process of identifying critical services to bring back online, as well as the laborious task of ensuring all staff account access was available.

Early in our recovery process, it became clear that student account access would also need to be restored. We made an early decision to scale up our adoption of badge authentication, originally intended for K-1 students only, to all elementary students. This allowed our short-term focus to be on student account access at the secondary level.

We leveraged Windows computers in secondary labs, working in partnership with our Career & Technical Education staff, with support from secondary staff and our Digital Learning & Libraries team, and with generous assistance from a number of neighboring school districts, who sent teams of their staff to learn from and support us in this aspect of incident recovery. With all of this support, most students at the secondary level were able to access their accounts and devices within a few days after our internet connections were restored.

Throughout this stage of our recovery, we were also identifying, inspecting, and restoring access to various district systems, based on prioritization established in partnership with district leadership. This required ongoing analysis and categorization in order to ensure we were working on the most critical and appropriate systems at any given time.
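The ongoing prioritization described above can be pictured as a ranked restore queue that is easy to re-order as priorities shift. A minimal sketch using a heap, with made-up system names and criticality tiers (assumptions for illustration, not the district's actual list):

```python
import heapq


def restore_order(systems):
    """Order systems for restoration by (criticality tier, name).

    `systems` maps system name -> tier, where tier 1 is most critical.
    A heap keeps the queue cheap to re-rank as priorities shift
    during recovery.
    """
    heap = [(tier, name) for name, tier in systems.items()]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]


# Hypothetical systems and tiers for illustration only.
systems = {
    "Student Information System": 1,
    "Payroll/ERP": 1,
    "Food services point of sale": 2,
    "Digital signage": 3,
}
print(restore_order(systems))
```

Ties within a tier fall back to alphabetical order here; in a real recovery, the tie-breaker would be a judgment call made with district leadership, as the post describes.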

We learned a lot about the incident response process, as well as how to be better prepared to prevent cyber incidents like the one we experienced.

Read next: Lessons Learned from a Cyber Incident Part 2: Common Questions and Lessons

Author: Chris Bailey, Technology Director, Edmonds School District (WA)

Date: October 12, 2023

CoSN is vendor neutral and does not endorse products or services. Any mention of a specific solution is for contextual purposes.