Vendors who agree to transparency requirements described through a standard ‘software bill of materials’ will be the ones allowed to work with the government. That’s a reasonable expectation, one way or the other, according to Steven Hernandez, who forecasted changes coming to a cloud authorization program run out of the General Services Administration.
Hernandez, chief information security officer for the Department of Education, has a unique perch as co-chair of the federal Chief Information Security Officer Council, which is tasked with implementing key parts of Executive Order 14028.
President Joe Biden issued that order in May, following a string of massive supply-chain hacks in the winter of 2020. As part of an approach summarized as “zero trust,” the directive instructed agency leaders to consider changing Federal Acquisition Regulations to more carefully screen software used in critical government systems for security.
As agency officials work to implement the executive order, Hernandez weighed in on what it will take to realize a zero-trust operation across the federal enterprise. He expanded on the need for SBOMs when using both open-source and commercial off-the-shelf software; necessary next steps for GSA's Federal Risk and Authorization Management Program, or FedRAMP; the development of "trust algorithms"; and workforce training and management, among other things.
The following interview was adapted from a virtual event Nextgov hosted with Hernandez in January and has been edited for length and clarity.
NG: What do you say when asked to describe “zero trust?”
Hernandez: If I had to sum it up in a sentence, it's how we are engaging in the interaction between some type of subject—which is a person, maybe a device, maybe even a bot these days, right—and some type of object—which is almost always data, but not always; it could also be a device, another person, etc. And then how are we bringing in the element of trust and assurance, that 1) that connection should be there in the first place, and that 2) we're evaluating it on a near real-time basis consistently, for changes or for indicators … and [deciding] maybe we ought to adjust it.
Where that distills down to is a few different planes or dimensions. And the simplest way we can kind of group them is around four different areas: We have the data, which we talked about—both what are we protecting, but also what do we have access to in terms of our sensors, our telemetry, what's happening in the world around us? We then have the control plane, which is how well [we can] effectuate change in our security environment or systems as things change between that interaction. We have identity, and that's typically identity, credential and access management. If we can't establish an identity for a subject, frankly, they have no business accessing anything, and that's one of the foundational cornerstones of trust. And then the last piece is what we call the trust engine or the policy engine. And that's where we're bringing in concepts like machine learning, AI, robotic process automation, to move at the speed of the machine and start to really leverage a lot of the SOAR capabilities—security orchestration, automation and response. We start with 16 million possible indicators that we need to evaluate, and we distill them down to 80. And those are real-world numbers that the technology these days can provide.
Some folks are saying, well, this is a completely new way of looking at security. And I challenge folks to think of this more like an evolution. And frankly, I do that for two reasons … because we are evolving from the boundary-based security models that we've relied on and that have served us so well in the past. But we also need to adjust to the thinking that this may never actually have an end, as technology continues to evolve. As we get better capabilities across these dimensions, we're going to constantly be moving and adapting and growing, to keep up with the world around us.
NG: How should FedRAMP evolve to meet the moment, especially given vulnerabilities in open-source software?
Hernandez: I always like to remind folks that before FedRAMP was in place, it would take me between one and two years to fully authorize a cloud environment, to get it authorized under FISMA. At its best now, with FedRAMP, I can do that same authorization process in a little under two weeks. So I always like to make sure folks recognize the incredible gains that we've already achieved through the FedRAMP program, and then talk a little bit about where we have those future opportunities. And I think that with FedRAMP, especially as we look at revision five of the [Special Publication] 800-53 controls with [the National Institute of Standards and Technology], we're going to see a lot of things like supply chain become front-and-center topics for the FedRAMP program, because they necessarily will need to bring them in and make sure that our cloud service providers are addressing them.
And when we think about that, especially in terms of things like open source software, we really view that as a tradespace. You know, open source software, I say, it’s the free puppy, right? It’s awesome, it’s fun, the price is right most of the time. But you take that puppy home, and there’s care and feeding that has to happen every day, there’s the vet bills, there’s the training, right? There’s other ways that we have to invest in it.
[With] open source, well, there's a community, right, and you're now a participant in that community. And if you want something done really fast, well, you may end up having to do it yourself. And the question there is: do you have the right resources on hand to be able to do that?
Overall, I think as FedRAMP tackles supply chain writ large, and the controls that are around it, necessarily we're gonna see open source be part of that. But it won't be exclusive to open source, because if we just take a look back at the last few years, we've had commercial off-the-shelf software that had the exact same problems. And so I think, writ large on the software supply chain, we're going to see some movement there.
NG: How can we move to a point where agencies are doing real-time authorization and monitoring in line with zero trust?
Hernandez: One of the things we can all participate in and push for is greater adoption of things like OSCAL, the Open Security Controls Assessment Language, where if we can get our [governance, risk and compliance] tool providers to fully implement and integrate it, if we can get our [cloud service providers] to do it, wow, we can really start moving at machine speed, if we have the machines talking to each other. So just a point I want to throw out there. I never miss a plug for OSCAL, I want to make sure we got it in.
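Part of the promise here is that OSCAL expresses controls and assessments as structured data instead of documents. As a rough illustration—the fragment below is a heavily simplified, hypothetical slice of an OSCAL-style catalog, not the full schema—a GRC tool can walk the structure and pull out control IDs programmatically rather than reading a spreadsheet:

```python
# Simplified, hypothetical OSCAL-style catalog fragment (real OSCAL
# catalogs carry many more fields and nesting levels).
catalog = {
    "catalog": {
        "groups": [
            {"id": "ac", "title": "Access Control", "controls": [
                {"id": "ac-1", "title": "Policy and Procedures"},
                {"id": "ac-2", "title": "Account Management"},
            ]},
        ]
    }
}

def list_controls(catalog: dict) -> list:
    """Collect every control ID in the catalog, group by group."""
    ids = []
    for group in catalog["catalog"].get("groups", []):
        for control in group.get("controls", []):
            ids.append(control["id"])
    return ids

print(list_controls(catalog))  # ['ac-1', 'ac-2']
```

Once both the tool provider and the cloud service provider speak the same structured format, steps like this replace manual document review, which is the "machines talking to each other" point above.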
NG: What has the emergence of the Log4Shell vulnerability done for understanding the importance of getting a software bill of materials, or SBOM, from suppliers?
Hernandez: In most cases, when we've had the vulnerabilities of the past, it was, well, hit the environment with a scanner, and you'll know whether you're affected or not. This is much more complicated. I had someone share a great analogy with me yesterday. It's like being told, "hey, there's a recall on salt, and it could be poisonous, so make sure you deal with it." Well, you go, you grab the salt shakers, you dump them out, and you're like, "great, I'm good." And then you start rummaging through your fridge, and it's like, wait a minute, the pickles have salt, wait, the salami has salt, wait, the salad dressing has salt. And all of a sudden, things get much more complicated. And that's a great analogy for what we're working through right now with Log4j. Now the good news is the reason you know there's salt—to keep the analogy going—is you look at the label, right? The software bill of materials, if we can get there, gives us that same type of capability. So we can go through and say, "Software X, what are all the different pieces, libraries, modules that come together to build this particular thing?" And it greatly increases our ability to track things down and understand what our exposure is.
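That "read the label" idea can be made concrete. Below is a minimal sketch—the SBOM fragment, component names and versions are hypothetical, and real formats such as CycloneDX or SPDX carry far more fields—of querying a bill of materials for an affected library instead of scanning every host:

```python
# Hypothetical, pared-down CycloneDX-style SBOM for one application.
sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "commons-text", "version": "1.9"},
    ],
}

def find_components(sbom: dict, keyword: str) -> list:
    """Return components whose name contains the keyword (case-insensitive)."""
    return [
        c for c in sbom.get("components", [])
        if keyword.lower() in c.get("name", "").lower()
    ]

# "Does anything in this product contain salt?"
for c in find_components(sbom, "log4j"):
    print(c["name"], c["version"])  # log4j-core 2.14.1
```

Run across every SBOM an agency holds, a lookup like this answers "where is Log4j in my environment?" in seconds, which is exactly what was so painful to answer without the label.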
One of the biggest challenges is, of course, just the amount of software that's out there and the level of effort to do this. And when we look at the Office of Management and Budget and their memorandum that came out last year, we're starting to focus on the critical software: what are these pieces of software that have access to the whole environment, that can create administrative access, etc.? But then as we look further, it's how do we start getting things like SBOMs from the vendors? And that'll be largely contractual, maybe even regulatory, depending on how things sort out. And it's going to be those vendors that are willing to play with us in that space.
NG: At the center of a mature zero trust program is something referred to as a trust algorithm. What kind of data can agencies feed into that?
Hernandez: One, we’re looking at the user, the identity. Who do you claim to be, and what can you assert to prove that you are who you claim to be? But then we’re looking at other elements. For example, I want to interrogate the device you’re coming in from, and I’m going to figure out really quick if that’s a government issued device. If it’s a government issued device—the ID checks out—and you’re strongly authenticating that you are that identity, that’s a pretty high trust score in just very simple terms. [If] I see it’s a device that jeez, I’ve never seen this device in my life. And it’s vulnerable, I see it’s missing some critical patches. And it’s coming from a country that I would not expect this person to come from. All of a sudden, that trust score just starts going down and down and down, even to the point of where it’s like, “shut down the connection, notify the SOC, and let’s get in contact with this user; enough things are not adding up that this is problematic.” And I think as we go forward in the trust algorithm, and we get more data, we’re gonna bring in things like continuous diagnostics and mitigation data—the CDM program—because that’s going to tell us a lot about the devices.
We know the software, the hardware, in many cases, how [Identity, Credential and Access Management] is integrated. But then there can be other elements as well: geolocation information for mobile devices. You want to connect into my system, you're going to give me your location as part of that transaction. And if that location doesn't check out or seem right, well, that's gonna get factored in as well. So it's crazy exciting, because the sky is really the limit in terms of data that we can start to factor in as part of the trust decisions.
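The scoring Hernandez describes can be sketched in a few lines. Everything below—the signal names, the weights, and the thresholds—is a hypothetical illustration of how a trust algorithm weighs identity and device signals, not any agency's actual policy engine:

```python
# Hypothetical signal weights: strong authentication and a known
# government device raise trust; missing patches or an unexpected
# location actively lower it.
def trust_score(signals: dict) -> int:
    score = 0
    score += 40 if signals.get("strong_auth") else 0
    score += 30 if signals.get("government_device") else 0
    score += 15 if signals.get("patched") else -15
    score += 15 if signals.get("expected_location") else -15
    return score

def decide(score: int) -> str:
    """Map a score to an action; thresholds are illustrative."""
    if score >= 70:
        return "allow"
    if score >= 30:
        return "step-up-auth"
    return "deny-and-notify-soc"   # cut the connection, alert the SOC

# Government device, strong auth, patched, expected geolocation.
good = {"strong_auth": True, "government_device": True,
        "patched": True, "expected_location": True}
# Unknown, unpatched device coming from an unexpected country.
bad = {"strong_auth": True, "government_device": False,
       "patched": False, "expected_location": False}

print(decide(trust_score(good)), decide(trust_score(bad)))
```

The key property is the one described above: each failing signal drags the score "down and down and down" until the decision flips from allow to deny-and-notify, and new data sources (CDM device data, geolocation) simply become additional terms in the sum.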
NG: And who, specifically, has control over what goes into these algorithms?
Hernandez: I would say that the CISOs are probably the primary stakeholders in most situations. But I think as this grows, we're going to see other stakeholders. We're going to see the counterintelligence folks that will be interested. We're going to see, potentially, even our risk management organizations at the higher level—not just cyber, but risk management writ large at the organization—interested in what's happening here. So I think that, much like Field of Dreams, if you build it, they will come. And in this situation, it's absolutely true. Once folks understand that there's this rich data source available about near real-time behavior and risk to the enterprise, folks are gonna want to tap into that.
NG: How do you approach training for agency staff given all the complicated considerations at play in today’s landscape?
Hernandez: In a perfect world, the training would be very minimal. I always liken it to driving an automobile or getting your license. The focus is on “do you know how to operate the technology—in this case the automobile—correctly, and adhere to the laws and the rules of the road?” In a perfect world, that’s where cybersecurity training should be at. “Do you understand the rules of the road? And can you operate the equipment in a safe and secure manner?” Where things start to get really challenging is when the simple choice, the easy choice, the default choice isn’t the secure choice, because then we have to rely on the user to make a security decision.
And I think the point that you bring up—just so apt right now—is that the human mind, in my opinion, has only so many choices it can make in a day. We have so many choices we can make before we get choice apathy, and we just stop caring. And it's like, "you know what, I just need to get things done, so I'm going to get them done the best way I can." And so part of the training equation I always explain is, it's also a security engineering and architecture piece. Because we can make it so that if you're sending email externally, [Data Loss Prevention] will check for sensitive or encrypted things and then say, "hey, wait a minute, before the email even sends—do you recognize you've got a social security number in here?" And if you want to bypass this warning, here are your options: "I'm legally allowed to do it," or "it's a false positive." And then we have a human check those responses, right? But when we do that, all of a sudden 1) it trains the users in real time—the user did something and then immediately got feedback that, hey, maybe this is a decision you want to reconsider—and 2) it gives us feedback as to how well the systems are working. And when we focus on our training, we try to focus on that delta: where is the technology not automatically helping train the user and help the user make good decisions? And what are the rules of the road for how to use the equipment that we need to focus on? And then we also try to make it engaging. Like, our last training was an escape room with a Dr. Mysterio who's attacking the agency. And that kind of had a comic book feel to it, an action hero type of thing. And we got tons of feedback that folks loved it, because it wasn't the same dry, click-through-the-PowerPoint, answer-the-questions type of thing.
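The DLP feedback loop described above can be sketched roughly as follows. The SSN pattern, the bypass reasons, and the return values are all illustrative assumptions, not the department's actual rules:

```python
import re

# Illustrative pattern for a formatted Social Security number; real DLP
# rules are broader (unformatted digits, context keywords, etc.).
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

# The two bypass justifications a sender can assert; a human reviews
# every bypass later, closing the feedback loop.
BYPASS_REASONS = {"legally-allowed", "false-positive"}

def check_outbound(body, bypass_reason=None):
    """Decide what happens to an outbound email before it sends."""
    if not SSN_RE.search(body):
        return "send"
    if bypass_reason in BYPASS_REASONS:
        return "send-and-log:" + bypass_reason   # queued for human review
    return "block-and-warn"   # real-time feedback to the sender

print(check_outbound("Quarterly numbers attached."))          # send
print(check_outbound("SSN is 123-45-6789"))                   # block-and-warn
print(check_outbound("SSN is 123-45-6789", "false-positive"))
```

The design choice mirrors the point in the interview: the secure path is the default, the user gets the warning at the moment of the action, and the logged bypass reasons tell the security team how well the rule itself is working.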
And then the last thing I'll end on is: reward the folks who actually keep up on this and know what they're doing. One of the first things in the training is you can test out—a really hard question set that's really going to determine whether or not you know all the rules of the road and how to operate the car. But if you can do it, you're good. We will accept it and allow you to move on.
NG: How has the workforce been responding to increased cloud use and device policies being implemented now?
Hernandez: I think the one area for us that has been interesting is on the programmatic side. And making sure that we have a forensic-ready type of operation: if we do have a breach, and if that breach goes beyond the hypervisor or the virtual container—which is highly unlikely—but if it does happen, folks are aware we can ask for your personal equipment. And that's been a big area for us to make sure folks understand, because we're commingling your personal and your professional life. That necessarily means things like, if we need to remote wipe your entire device, we are authorized to do that. And so we're pretty clear in the rules of behavior for mobile—[Bring Your Own Device] and [Bring Your Own Approved Devices]—that, while it's rare and very, very limited in terms of the expectation of this happening, if we need to take control of that device and wholesale wipe it, or do something [like that], we will. And for some people that's been a deal breaker. They've said, "you know what, give me the two devices. I'm cool with it." Other folks are like, "you know what, I back up all my stuff to iCloud, I could care less, go ahead." So it's been fun to see how it unfolds.