Will Government Secure Open Source or Muck It Up?
Can open source software be regulated? Should it be regulated? And if so, will it lead to enhanced security? In mid-September, two governments' approaches to securing open source software were on display, but questions remain over whether either will lead to improvements in the open source ecosystem.
On Sept. 12, the US Cybersecurity and Infrastructure Security Agency (CISA) released its “Open Source Software Security Roadmap,” in which the government agency pledged to work with the open source software community to promote a supply of secure software. In contrast, at the Open Source Summit Europe a week later, open source advocates voiced concerns that the European Cyber Resilience Act (CRA) effectively placed liability for vulnerabilities in open source software on the developers and nonprofit foundations that manage open source software projects.
The two approaches demonstrate how government agencies and regulation can help foster a secure ecosystem of open source software — or undermine development, says Omkhar Arasaratnam, general manager at the Open Source Security Foundation (OpenSSF).
“The open source community likes engagement, and it likes to see that their participation is respected as a partner in the open source community,” he says. “Conversely, just as any other community does not like when things are done to them, I think what caused a reaction from the open source community in Europe was the fact that the government enacted this thing, the CRA, that affects them without consultation.”
Open source software has spurred technical innovation worldwide, leaving governments searching for the best approach to benefit from the ecosystem while improving the security of that software. In 2022, downloads of open source components exceeded 2 billion across the four major ecosystems: JavaScript, Java, Python, and .NET, according to data from software supply-chain management firm Sonatype.
At the same time, critical vulnerabilities in widespread open source components — such as the exploitation of issues in the Log4j logging library — have given momentum to efforts to secure open source software. The Census II initiative, for example, identified the top 500 projects across two different ecosystems that are critical to security and could lead to Log4j-like incidents if compromised.
Depending on how governments approach regulating liability and open source software, however, software developers could see dramatically different outcomes: more security and resilience for the ecosystem, or a backfire that hobbles innovation, says Dan Lorenc, CEO of Chainguard, which aims to secure the software supply chain.
“Open source isn’t something you can really just directly regulate. It’s not something where the government can just show up and tell people what they have to do,” he says. “It’s a massive, fragmented group of individuals that just kind of happened to use the same licenses and mechanisms to publish their code.”
Pledging to be a Good Partner
CISA aims to be a partner to those fragmented groups, urging them to adopt secure design practices and advising other branches of the US government on requirements for software vendors to build secure products that incorporate open source software and are sold to the federal government.
With the release of its Open Source Software Security Roadmap, the agency aims to support software security in general by working to understand the most critical open source dependencies and hardening the broader open source software ecosystem, with an initial goal of securing the software the government uses.
The Log4Shell attacks showed that the government needs to take more action to improve the security of a supply chain that underpins much of its own technology and ecosystem, says Jack Cable, a senior technical adviser at CISA.
“If we want to have a future that is much more resilient, much more secure, we have to start thinking about these foundations of the Internet,” he says. “Very much top of mind is how can we make sure that those building the software that’s used across critical infrastructure across the federal government is secure — and chief among that is open source software.”
The Biden administration and its various technical agencies — from the National Institute of Standards and Technology (NIST), to the Department of Defense, to CISA — have met repeatedly with industry to create the National Cybersecurity Strategy, which calls for securing the open source ecosystem, among other initiatives. Not all efforts have gained approval: The Securing Open Source Software Act (SOSSA) has faced criticism from companies, especially as cybersecurity-skilled workers are in short supply.
European Solution Causing Problems
The European Union’s CRA, proposed a year ago and passed in July, puts the responsibility for open source security on the makers of software, including many open source projects and maintainers. While the European Union also consulted technology companies in drafting the legislation, the open source community was not consulted enough in the creation of the CRA, says the OpenSSF’s Arasaratnam, who took the temperature of attendees at the Open Source Summit Europe last week.
“We’ve heard a lot about the CRA in Europe, and the decisions that were made by the government over here, and the potential negative impacts that it may have on individual contributors and on foundations as well, especially in terms of liability,” he says. “And the fear is that while the CRA was well intended, because of a lack of consultation, it’s resulted in a bit of legislation that just isn’t tenable.”
The problem is that the atomic unit of the open source ecosystem is a single-developer project published on the Internet with no warranty or maintenance contract. The European CRA complicates the world of open source software maintainers in a way that could hold these projects liable, making it harder to improve the security of software while also potentially disincentivizing innovation, says Andrew Brinker, group lead and lead cybersecurity engineer at MITRE.
“If you consider open source ‘the goose that laid the golden egg,’ you can risk killing the goose by assigning liability to the goose for the egg that it’s creating,” he says. “So it does make more sense to apply liability to groups that are integrating that open source into products and services that they are then commercializing and selling.”
No Obvious Answer
The approaches are neither black and white nor a lesson in a light touch versus a heavy hand. For example, CISA’s approach does not address a major problem in open source communities: funding projects. Companies need to invest in the open source projects whose code they use, and the government needs to spur that investment, says Brian Fox, chief technology officer at Sonatype.
“There’s a couple of things that both sides of the ocean have in common, which is we desire to improve the cybersecurity of the software that we all use and … a focus on the quality of the products being brought to market and defining minimum standards and expectations,” he says.
The focus on liability could end up forcing software companies to fund the projects they rely on to make sure that security is done right, he says. And while Fox is “chomping at the bit” to move on to the implementation aspects of the coming requirements, he has resigned himself to the fact that the industry moves slowly.
Case in point: Nearly two years after vulnerabilities in Log4j caused companies to scramble to find potential points of compromise in their applications, nearly a quarter (23%) of the versions downloaded from the Maven repository remain vulnerable. No other industry would be allowed to ship known vulnerable products, and the software industry will eventually get there, Fox says.
“Moving the industry toward a place where software vendors have liability is a big, big shift,” he says. “It’s overdue, I think, and it’s also inevitable.”