Need to patch your servers for security and compliance while maximizing service uptime? It helps to have a patch management checklist. In my previous two posts, I reviewed how to plan your patch management initially and how to handle patching for workstations. Here, the triad is complete with a list of best practices for patching servers.
Patch Management Strategy—Servers
Patching servers in a data center is a little different from patching your typical workstation or endpoint, so a slightly different approach is needed. Uptime and security are much bigger concerns, and the available tooling and approaches differ as well.
First Step: Identifying Device Connectivity Scenarios and Point of Authority
“Servers” is almost a misleading term, because there are so many different scenarios these days, and many do not look like the classical server that an IT department would keep in a closet somewhere on prem (though that does count, too). Therefore, it helps to think through the different server connectivity scenarios:
In-office/on-prem devices. In other words, hardware that exists on site. There are many ways to update and patch these servers: Group Policy Objects (GPOs) pointing clients at Windows Server Update Services (WSUS), and System Center Configuration Manager (SCCM), to name a few.
To use an internal WSUS server, you will need to configure clients with automated update settings and also configure the server with which they will communicate. Additionally, you can configure the clients to be a member of a specific WSUS computer group, if you’re deploying patches in WSUS based on computer group targets.
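Whether pushed by GPO or set directly, those client settings ultimately land in the Windows Update policy keys in the registry. A sketch of the relevant values, assuming a hypothetical internal WSUS server name and computer group:

```
; HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate
WUServer            = "http://wsus.corp.internal:8530"   ; placeholder server name
WUStatusServer      = "http://wsus.corp.internal:8530"
TargetGroupEnabled  = 1            ; enable client-side targeting
TargetGroup         = "Servers-Pilot"  ; WSUS computer group for this client

; HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU
UseWUServer         = 1            ; point Automatic Updates at the WSUS server
AUOptions           = 3            ; auto-download updates, notify before install
```

The `TargetGroup` values only take effect if the WSUS server itself is set to use client-side targeting; otherwise, group membership is assigned in the WSUS console.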
SCCM handles more than just server updates; it also handles operating system deployment, application infrastructure, software inventory, and more. For a larger organization, it’s handy to have all of this capability from a single dashboard. As far as patching servers goes, it’s helpful because it gives you more control over the schedule for patching and rebooting, which is really important for mission-critical servers.
In the end, SCCM provides greater flexibility and control for patching servers (and more), giving you more choices, more extensive updating, and better reporting. Still, something is better than nothing, and sometimes organizations have to make hard choices about the degree of complexity they want in their patching procedures. (That said, we can handle a lot of that complexity for you—just ask!)
Servers in the DMZ. As a refresher, DMZs are a sort of “buffer zone” between the public internet and your internal network(s). A good example would be web servers that have some public-facing components. Patching servers in a DMZ often requires careful management of firewall rules and/or placing a WSUS server in the DMZ.
Standalone cloud devices. Cloud servers are sometimes standalone, with little to no network connectivity to production management systems. In that case, SaaS tools such as Microsoft’s Intune or OMS are the way to go for updates. Because they are multiplatform and network agnostic, these tools are the preferred points of authority over SCCM or Group Policy, as those on-prem tools simply won’t work for this scenario.
Second Step: Identifying Production Deployment Strategy
Patching servers effectively requires a standard deployment process governed by known business rules. This deployment process follows a series of stages, or rings; see my earlier post, which describes in general terms the test, pilot, and production stages.
Like workstations, servers will sometimes need emergency patching. If you find yourself in this situation, start by prioritizing systems by how critical the services they provide are, how much disruption patching and its downtime will cause, and how severe the consequences of not patching could be. Test on a low-priority system before deploying to high-priority ones. Follow up, and be ready to roll back if there are any issues.
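One way to operationalize that triage is a simple scoring pass over your inventory. This is a minimal sketch, assuming illustrative 1–5 scores for the three factors above (the `Server` fields and ordering rule are my own, not a standard):

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    criticality: int   # 1 (low) .. 5 (mission-critical service)
    disruption: int    # 1 (negligible) .. 5 (patching causes major outage)
    exposure: int      # 1 (minor) .. 5 (severe consequences if left unpatched)

def emergency_order(servers):
    """Order servers for emergency patching: most exposed first, and
    within equal exposure, low-criticality systems before high-priority
    ones so problems surface on less important hardware."""
    return sorted(servers, key=lambda s: (-s.exposure, s.criticality, s.disruption))
```

In practice the scores would come from your CMDB or asset inventory rather than hand-entered numbers.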
Whichever the case, be sure your production deployment strategy builds in the necessary feedback loops. Verifying success early on, or catching problems, depends on it.
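The stage-by-stage rollout with a feedback gate can be sketched as follows. This is an illustrative outline, not a real deployment tool; the ring names, success threshold, and `apply_patch` callback are all assumptions for the example:

```python
RINGS = ["test", "pilot", "production"]

def deploy(ring_members, apply_patch, success_threshold=0.95):
    """Advance a patch through deployment rings, halting (so you can
    roll back) if the success rate in any ring drops below threshold.

    ring_members: dict mapping ring name -> list of hostnames
    apply_patch:  callable(hostname) -> bool (True on success)
    """
    for ring in RINGS:
        results = [apply_patch(host) for host in ring_members[ring]]
        rate = sum(results) / len(results)
        if rate < success_threshold:
            return f"halted in {ring} (success rate {rate:.0%})"
    return "deployed to production"
```

The key point is the gate between rings: the pilot ring exists to generate the feedback that decides whether production gets the patch at all.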
Third Step: Identifying Server Patching Groups
Patching groups can be defined in many ways; some of the more common ones include:
- Server owner
- Service (application hosting, communications, web hosting, file serving, and so on)
- Redundancy with other devices
Servers will need to be added to groups once you have your deployment strategy settled. Be sure that each server is in a group!
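Checking that last rule is easy to automate. A minimal sketch, assuming inventory and group assignments are available as plain Python collections (in practice they would come from your CMDB or patching tool):

```python
def unassigned_servers(inventory, groups):
    """Return servers that are not in any patching group.

    inventory: iterable of server names
    groups:    mapping of group name -> set of server names
    """
    assigned = set().union(*groups.values()) if groups else set()
    return sorted(set(inventory) - assigned)
```

Running this as part of a periodic audit catches servers that were provisioned after the groups were defined.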
Fourth Step: Defining Patching Schedules/Maintenance Windows
Maintenance windows are configured on a per-collection basis and consist of a start time, end time, and recurrence pattern. Servers in larger organizations tend to have their own maintenance windows already defined; if that is not the case, you will have to do so. The frequency of these windows depends on the cadence that the organization needs to set, which in turn depends on the SLAs for the business.
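To make the start-time/duration/recurrence idea concrete, here is a sketch of a weekly recurring window check. The shape of the window (one fixed weekday, whole-hour start) is a simplifying assumption for the example; real tooling supports richer recurrence patterns:

```python
from datetime import datetime, timedelta

def in_window(now, start_hour, duration_hours, weekday):
    """True if `now` falls inside a weekly maintenance window that opens
    at start_hour on the given weekday (0=Monday) for duration_hours."""
    # Find the most recent window opening at or before `now`.
    days_back = (now.weekday() - weekday) % 7
    opening = (now - timedelta(days=days_back)).replace(
        hour=start_hour, minute=0, second=0, microsecond=0)
    if opening > now:              # window opens later today
        opening -= timedelta(days=7)
    return opening <= now < opening + timedelta(hours=duration_hours)
```

A patch job would check this before rebooting: inside the window, proceed; outside it, defer to the next opening.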
Why SLAs? These often have parameters for an agreed amount of uptime. Critical functions in some industries, for example, need to be up as much as possible. Credit card processing is a good example: The servers processing transactions need to be up almost all of the time, and they cannot afford frequent outages for patching. Therefore, they have a different cadence than most. The same would go for critical facilities, like a hospital or a power plant.
So check your organization’s SLAs, note any language that promises a certain amount of server uptime, and set your patching cadences accordingly.
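An uptime SLA translates directly into a downtime budget, which is what actually constrains your patching windows. The arithmetic is simple enough to keep as a one-liner:

```python
def downtime_budget_hours(sla_percent, days=365):
    """Hours of allowed downtime per period for a given uptime SLA.
    E.g., 99.9% uptime over a year leaves roughly 8.76 hours."""
    return (1 - sla_percent / 100) * days * 24
```

A "three nines" SLA leaves under nine hours a year for all outages combined, planned or not, which is why high-SLA systems lean on redundancy and rolling patches rather than long maintenance windows.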
If you’re unsure how to do any of the above, reach out to us. We can help you get on an automated schedule to do most of this without the need for constant intervention, all while keeping your servers up as much as possible.