Two decades of explosive IT innovation have forever altered the way government works. Citizens are benefiting from digital transformation, cloud-native technology, mobile IT, apps, A.I. and deep data analytics. Yet, just below the ever-shifting IT landscape—the foundation upon which government and the citizen experience are evolving—lurks a persistent threat.
I’m referring to software that isn’t secure and is therefore vulnerable to attack. Ubiquitous programmable code—the concrete of digital infrastructure—shapes and gives form to our digitally evolving world. Software is everywhere—and it is a persistent target of cyberattacks. I can’t think of another pillar of critical infrastructure—an asset that is essential to the functioning of our society and economy—that simultaneously encompasses such mass use and, when improperly secured, such massive risk. Imagine if, every time you tapped into the grid to use an electrical appliance, you had to worry about electrocution.
Despite more than 20 years of warnings from cybersecurity experts, some aspects of application development culture haven’t changed much, and software continues to be a target of cyberattacks. The problem isn’t developers; it’s the inevitable result of a competitive culture that values getting software into production quickly. At times, deadline-driven development organizations skimp on the security best practices that prevent exploitable software flaws.
The data backs up the view that widely observed development practices can undermine software security. Veracode’s annual State of Software Security reports have found that some factors of software security are fluid. Open-source libraries, for example, are constantly evolving, and a library that appears secure today could very well be vulnerable tomorrow. In the realm of cybersecurity, there is no cruise control, no such thing as “set it and forget it.”
Despite the vulnerability of open-source libraries, developers fail to update third-party components most of the time. That’s unfortunate, because developers are adept at responding to known threats: when alerted to vulnerable libraries, they remediate 17% of flaws within an hour and 25% within a week. When developers prioritize security, fixing most flaws is relatively easy. An update or patch will remediate 92% of library flaws, and most of those—69%—require nothing more than a minor version change.
Awareness is key.
“Lack of contextual information, for instance about how a vulnerable library relates to their application, can be a roadblock for developers,” according to the Open Source Edition of Veracode’s State of Software Security report, version 11. “Developers who lack the information they need, take 7+ months to fix 50% of flaws. Developers who have the information they need, take 3 weeks to fix 50% of flaws.” For too long, too many of us have condoned kicking the cybersecurity can down the road—and fixing vulnerabilities as they arise. Often, those fixes involve bolting on post-production solutions euphemistically known as patches.
Here I have laid out a roundup of the major software security concerns for 2023 and actions the government can take to mitigate them. In many respects, they look a lot like the concerns I conveyed in testimony to Congress in 1998 and 2003.
Culture and security
The number one risk factor compromising cybersecurity is people, including developers who are undertrained, overworked and often working on understaffed teams. Many developers also lack access to robust security tools. In the area of application development, long-standing cultural norms—which emerged in a less risky era—tend to value speed and innovation over security.
Development culture must pivot toward a bias for security. For that to happen, public sector organizations will need robust workforces of well-trained developers equipped with the security tools needed to “bake” security into applications at the earliest stage of development. Training is key.
I’m starting to see some progress on this issue when it comes to building the next-generation workforce. There are efforts underway from the Cybersecurity and Infrastructure Security Agency and the National Security Agency to get cybersecurity education and tools into the technology and computer science curriculums at community colleges and universities.
Seeing is security
Contemporary application development is akin to building a home out of prefabricated, modular components of unknown origin. In both cases, snapping together prefabricated pieces makes the process faster and more efficient. For software developers, it’s often impossible to know what’s inside the open-source code they rely on. The risk is analogous to living in a modular home and not knowing if the wiring is faulty or the insulation contains asbestos.
Public agencies must get better at knowing what’s in the software they buy and develop—it’s the law. The government’s requirement that agencies use software bills of materials is a step in the right direction. Beginning in 2023, an Office of Management and Budget memorandum will require government software vendors to provide SBOMs and other proof of secure development practices. This will help.
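To make the idea concrete, an SBOM is essentially a machine-readable inventory of every component inside a piece of software. The sketch below is a minimal, illustrative fragment in the CycloneDX format (one of the common SBOM standards); the component shown is the Log4j library at a version affected by the 2021 Log4Shell vulnerability, which is exactly the kind of hidden dependency an SBOM lets an agency find quickly:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
```

With an inventory like this on file for each procured product, an agency can search its portfolio for a newly disclosed vulnerable component in minutes instead of auditing every vendor by hand.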
In 2023, the federal government will seek to advance a reimagining of network security. Agencies will begin abandoning the once-pervasive perimeter security model of network protection and adopt what’s popularly known as zero trust architecture. The new model isn’t hardware or software. Rather, it’s a philosophy of IT security whose implementation will affect just about every aspect of the government’s networks and data. According to CISA, applications and workloads constitute one of the five pillars of its Zero Trust Maturity Model. Securing the application layer—along with containers and secure application delivery—will be critical to the success of zero trust.
We have made progress in the area of software security, but not enough. Government must stop gambling with the health of its networks, its data and the public trust. We must move with conviction to shore up software security. If the government fails to act, it’s a matter of time until a vulnerability that could have been prevented opens the door to a catastrophic cyberattack.
Chris Wysopal is the founder and CTO of Veracode.