In my last post, I discussed something every large organization’s IT department needs: a patch management checklist. Much of that checklist involves planning for thorough and efficient patch management. (For example, you need to know your targets, patch sources, and success metrics ahead of time.)
At a certain point, you will need to drill down into more detail, depending on whether you are patching workstations or servers (or both). Patching workstations and similar endpoints differs enough from patching servers that different questions need to be asked, so I’ll break out the strategy for each in separate posts. In this article, I review some of the key decisions that have to be made when formulating a patch management strategy for workstations.
Patch Management Strategy—Workstations
If you followed my previous post, you’ve already identified your goals and metrics, organized your list of patch sources, and thought through deployment rules. Now you are at the step where you need to identify and list the different device connectivity scenarios, as well as the patching point of authority for distribution and reporting.
Example: Patching for In-Office vs. Roaming Devices
For example, some devices in your organization will be in-office, while others are roaming. How will you go about patching devices in each of these groups?
For in-office devices, there are several options. Microsoft’s Intune, for example, gives you a lot of flexibility for managing both in-office and out-of-office devices, but offers less granular control. WSUS is simpler but, again, gives you little control or ability to automate. SCCM, by contrast, gives you far more control…but it also requires much more management, and thus may eat away at IT time and resources (unless you hire an outside firm, like us).
For roaming devices, you will need to think not only about services, but also connectivity. For example, you can use SCCM for managing these devices, but you will need to decide whether this will be done via some sort of internet-based client management app or using Microsoft Azure’s cloud proxy. Other examples are Windows Update for Business and, again, Intune.
(Connectivity, patching, and up-time can be a major issue depending on your particular environment. For example, see our use case with Air Methods for an example where the organization critically relied on a large fleet of mobile devices that had to be updated at specific times.)
Developing the End-User Experience
Unlike patching servers or routers, patching workstations must take into account the end-user experience. There are two general ways of creating patch management policies, depending on the involvement of the end-user. These are the high-visibility and low-visibility approaches.
High-visibility. This is a more modern approach to patching where the user is made aware of the need for a patch and is given more control over when the patch is deployed. This is a better approach in environments where a critical device needs to function at certain times (such as in a hospital), or when devices frequently travel in and out of network.
If you choose a high-visibility approach, you will need to determine:
- The cadence and length of the patch release cycle
- What end-user messaging will be used to inform and remind the user of the patch
- How frequently reminder messaging will be sent
- How reminders and messaging will be tested
- What messaging will indicate success (or problems)
A great tool for a high-visibility approach is our own Update Notification User Interface (UNUI). UNUI is an application that detects available updates or pending reboots stemming from updates or applications deployed via SCCM, and provides the end-user with the ability to install or reboot at their leisure—all while enforcing a maximum time limit to ensure security requirements are met. It’s a great way to put patching in the users’ hands while keeping up with security and compliance requirements. (You can read more about this and our other products here.)
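To make the cadence decisions above concrete, here is a minimal sketch of how a reminder schedule with a hard deadline might be computed. The release date, reminder interval, and maximum deferral are all invented example values, not recommendations.

```python
from datetime import datetime, timedelta

# Hypothetical parameters -- every value here is an assumption for
# illustration, not a recommended policy.
RELEASE_DATE = datetime(2024, 6, 1)    # day the patch is offered
REMINDER_INTERVAL = timedelta(days=2)  # how often to remind the user
MAX_DEFERRAL = timedelta(days=14)      # hard security deadline

def reminder_schedule(release, interval, max_deferral):
    """Return the dates on which the user should be reminded,
    ending with the forced-install deadline."""
    deadline = release + max_deferral
    reminders = []
    current = release + interval
    while current < deadline:
        reminders.append(current)
        current += interval
    reminders.append(deadline)  # final, non-deferrable install date
    return reminders

for when in reminder_schedule(RELEASE_DATE, REMINDER_INTERVAL, MAX_DEFERRAL):
    print(when.date())
```

The key design point is the last appended entry: however generous the reminder cadence, the schedule always terminates in a non-deferrable deadline, which is the same idea as UNUI’s maximum time limit.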
Low-visibility. The low-visibility approach leaves patching to run quietly in the background. The advantage is that it does not disrupt workflow (assuming everything goes according to plan) and requires less work developing messaging. The disadvantage is that it removes the feeling of control from the user, which can make users especially upset when something goes wrong.
Thus, a big aim of the low-visibility approach is to ensure that, if something does go wrong, the disruption is minimal. This means determining the proper maintenance window and creating a deployment strategy that starts with the most adaptable users and continues to the least adaptable.
Standard Deployment Process for Workstations
Once you have your scenarios and rules set out, it’s time to do the patching!
Of course, you don’t want to patch all machines at once. That could invite problems. Best practice is to follow a standard 22-day procedure for piloting and testing.
- Test stage (“smoke test”). A patch is acquired and installed on a low-priority workstation. This verifies that the patch works and does not break key applications, and also gives you a sense of what problems to anticipate.
- Pilot stage. The patch is then deployed to a pilot group of actual users. These users should be from different departments and locations; that not only gives you the most information but also ensures that, should there be an issue, you don’t shut down an entire department or location! Users are encouraged to work normally with the new patch for one to three weeks, to ensure that everything runs smoothly.
- Deployment stage. The patch is then rolled out to all users in waves. You will need to keep an ear out for potential problems, and be able to roll back the patch if a particular group is affected negatively.
- Analysis and reporting. Really, you should be collecting feedback and mitigating issues after each stage is done, particularly the test and pilot stages. Return to this feedback, and the process overall, after the patch is deployed. You may well learn things that end up causing you to revise your patching strategy.
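The wave-based deployment stage above can be sketched as follows. The group names, the health-check rule, and the halt-on-failure behavior are all illustrative assumptions, not any specific product’s logic.

```python
# Waves ordered from most adaptable users to least adaptable,
# per the low-visibility strategy. Group names are invented examples.
WAVES = [
    ["it-staff"],                  # most adaptable users first
    ["engineering", "marketing"],
    ["finance", "executive"],      # least adaptable last
]

def deploy_in_waves(waves, deploy, health_check):
    """Deploy to each wave in order; halt and report the failing
    groups if any group fails its post-deployment health check."""
    for wave in waves:
        for group in wave:
            deploy(group)
        failed = [g for g in wave if not health_check(g)]
        if failed:
            # Stop the rollout; these groups need the patch rolled back.
            return {"status": "halted", "rollback": failed}
    return {"status": "complete", "rollback": []}

# Usage sketch with stub callbacks standing in for real tooling.
result = deploy_in_waves(WAVES, deploy=lambda g: None,
                         health_check=lambda g: True)
print(result["status"])
```

The design choice worth noting is that each wave is verified before the next one starts, so a bad patch is contained to the smallest, most tolerant group it reached.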
Emergency Patch Deployment
All that said, there are also cases where you might need emergency patch deployment. In these cases, you might not have the luxury of a full 22-day test-and-pilot cycle. And you might have to manually package and deploy the patch, too.
If you find yourself in that situation, I suggest the following:
- Prioritize systems that need patching by how critical they are, and the potential severity of the consequences of not patching. Workstations with mission-critical apps that present a clear vulnerability would be high priority, obviously, whereas a workstation at lower risk can wait—even if a known risk has been plastered all over the news.
- Run your test on systems most like the high-priority ones you identified. Testing on a clean workstation that looks nothing like your high-priority endpoints doesn’t tell you much.
- Test systems post-reboot. Don’t just reboot a patched system and pat yourself on the back when the user logs in. Test out applications and make sure everything works.
- Have a plan in place for emergency rollback in case something does go wrong.
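The triage step above can be sketched as a simple scoring pass over the inventory. The weighting scheme and the example hosts are invented for illustration; a real assessment would fold in more factors (exposure windows, compensating controls, and so on).

```python
# Illustrative-only emergency triage: rank workstations so the most
# critical, most exposed machines are patched first.

def priority(host):
    """Higher score = patch sooner. Combines how critical the
    workstation's role is with how exposed it is to the vulnerability
    (both on an assumed 1-3 scale)."""
    return host["criticality"] * host["exposure"]

# A made-up inventory for demonstration purposes.
inventory = [
    {"name": "kiosk-07",   "criticality": 1, "exposure": 3},
    {"name": "finance-02", "criticality": 3, "exposure": 3},
    {"name": "dev-11",     "criticality": 2, "exposure": 1},
]

patch_order = sorted(inventory, key=priority, reverse=True)
for host in patch_order:
    print(host["name"], priority(host))
```

Even this crude scoring makes the article’s point explicit: a low-criticality, low-exposure machine stays at the back of the queue, no matter how loudly the vulnerability is being covered in the news.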
With that, patching should be well under way for your workstations. In the next part of my discussion of the patch management checklist, we’ll explore how these steps vary when we look at servers specifically.