Every day, billions of people rely on digital systems for everything from communication to commerce to critical infrastructure. But the global early warning system that alerts security teams to dangerous software flaws is showing critical gaps in its coverage, and most users have no idea that their digital lives are likely becoming more vulnerable.
Over the last 18 months, two pillars of global cybersecurity have flirted with apparent collapse. In February 2024, the US-backed National Vulnerability Database (NVD), widely used for its free analysis of security threats, abruptly stopped publishing new entries, citing a cryptic “change in interagency support.” Then, in April of this year, the Common Vulnerabilities and Exposures (CVE) program, the fundamental numbering system for tracking software flaws, seemed to face a similar fate: a leaked letter warned that its funding contract was about to expire.
Cybersecurity professionals flooded Discord channels and LinkedIn feeds with emergency posts and memes of tombstones engraved with “NVD” and “CVE.” Unpatched vulnerabilities are the second most common way cybercriminals break in, and they have already caused deadly hospital outages and critical infrastructure failures. In a social media post, Jen Easterly, a US cybersecurity leader, said: “Losing [CVE] would be like tearing the card catalog out of every library at once, leaving defenders to sort through chaos while attackers take full advantage.” If each CVE identifies a vulnerability like a book in a card catalog, the corresponding NVD entry offers the detailed review, with context on severity, scope, and exploitability.
In the end, the Cybersecurity and Infrastructure Security Agency (CISA) extended CVE funding for another year, attributing the incident to a “contract administration issue.” But the NVD’s story proved more complicated. Its parent organization, the National Institute of Standards and Technology (NIST), saw its budget cut by roughly 12% in 2024, just as CISA withdrew its $3.7 million in annual NVD funding. Soon after, as the backlog of unanalyzed vulnerabilities grew, CISA launched its own “Vulnrichment” program to help close the analysis gap, promoting a more distributed approach in which multiple authorized partners publish enriched data.
“CISA continuously evaluates how to more effectively allocate limited resources to help organizations reduce the risk of newly disclosed vulnerabilities,” says Sandy Radesky, the agency’s associate director for vulnerability management. Instead of simply filling the gap, she emphasizes that Vulnrichment was established to provide unique additional information, such as recommended actions for specific stakeholders, and to “reduce dependence on the federal government as the only vulnerability enrichment provider.”
Meanwhile, NIST scrambled to bring in help to clear the backlog. Despite a return to pre-crisis processing levels, a surge of newly disclosed vulnerabilities outpaced those efforts. More than 25,000 vulnerabilities now await processing, nearly ten times the previous record, set in 2017, according to data from the software company Anchore. Before the crisis, the NVD generally kept pace with CVE publications and maintained a minimal backlog.
“Things have been disruptive, and we are going through changes in every respect,” said Matthew Scholl, then chief of the computer security division at NIST’s Information Technology Laboratory, during an industry event in April. “Leadership has assured me, and everyone, that the NVD is and will remain a mission priority for NIST, in both resources and capabilities.” Scholl left NIST in May, after 20 years at the agency, and NIST declined to comment on the backlog.
The situation has since prompted several government actions: the Department of Commerce opened an audit of the NVD in May, and House Democrats called for a broader investigation into both programs in June. But the damage to trust is already reshaping geopolitics and supply chains as security teams brace for a new era of cyber risk. “This left a bitter taste, and people are realizing they can’t rely on it,” says Rose Gupta, who builds and runs enterprise vulnerability management programs. “Even if they fixed everything tomorrow with a bigger budget, I don’t know that this won’t happen again. So I have to make sure I have other controls in place.”
As these public resources falter, organizations and governments are confronting a critical weakness in our digital infrastructure: essential global cybersecurity services depend on a tangled web of US agency interests and government funding that can be cut or redirected at any moment.
Security haves and have-nots
What began as a trickle of software vulnerabilities in the early internet era has become an unstoppable avalanche, and the free databases that have tracked them for decades are struggling to keep up. In early July, the CVE database passed 300,000 cataloged vulnerabilities. The numbers grow unpredictably each year, sometimes by 10% or much more. Even before its latest crisis, the NVD was known for publishing new vulnerability analyses late, often trailing private security vendors’ advisories by weeks or months.
Gupta has watched organizations increasingly adopt commercial vulnerability management (VM) software that bundles its own threat intelligence. “We have definitely become overly dependent on our VM tools,” she says, describing a growing reliance on vendor security teams, such as those at Qualys, Rapid7, and Tenable, to supplement or replace the public databases that are no longer reliable. These platforms combine in-house research with multiple data feeds to produce proprietary risk scores that help teams prioritize fixes. But not every organization can afford to fill the NVD gap with premium security tools. “Smaller companies and startups, already at a disadvantage, will be more at risk,” she explains.
Komal Rawat, a security engineer at a mid-stage cloud startup in New Delhi with a limited budget, puts the stakes bluntly: “If the NVD goes down, there will be a crisis in the market. Other databases are not as popular, and where they are adopted, they are not free. If you don’t have up-to-date data, you are exposed to attackers who do.”
The growing backlog means new devices, whether a doorbell or an office building’s “smart” access-control system, are more likely to ship with vulnerability blind spots. The biggest risk may be niche security flaws that go unnoticed. “There are thousands of vulnerabilities that won’t affect most companies,” says Gupta. “Those are the ones we’re not getting analysis on, and that leaves us at risk.”
NIST acknowledges that it has limited visibility into which organizations are most affected by the backlog. “We don’t track which industries use which products, so we can’t measure the impact on specific industries,” says a spokesperson. Instead, the team prioritizes vulnerabilities that appear in CISA’s catalog of known exploited vulnerabilities and those flagged in vendor advisories, such as Microsoft’s Patch Tuesday bulletins.
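That triage logic can be sketched as a simple filter. This is an illustrative reconstruction, not NIST’s actual tooling: the priority labels and catalog contents below are assumptions chosen for the example.

```python
# Illustrative backlog triage rule (not NIST's real code): a newly
# published CVE jumps the queue if it is known to be exploited in the
# wild or appears in a major vendor advisory; otherwise it waits.

def triage(cve_id, kev_catalog, vendor_advisories):
    """Return a processing priority for a newly published CVE."""
    if cve_id in kev_catalog:
        return "urgent"   # known exploited in the wild
    if cve_id in vendor_advisories:
        return "high"     # flagged in a vendor bulletin (e.g., Patch Tuesday)
    return "backlog"      # joins the queue with 25,000+ other entries

# Hypothetical catalog contents, for illustration only
kev = {"CVE-2024-3400"}
advisories = {"CVE-2024-26234"}

print(triage("CVE-2024-3400", kev, advisories))   # urgent
print(triage("CVE-2000-0001", kev, advisories))   # backlog
```

Everything that falls through to the last branch is exactly the long tail of “isolated” flaws Gupta worries about: correct in theory, unexamined in practice.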
The greatest vulnerability
Brian Martin has watched this system evolve, and deteriorate, from the inside. A former CVE board member and the original leader of the Open Source Vulnerability Database project, he has built a combative reputation over the decades as one of the field’s foremost historians and practitioners. Martin says his current project, VulnDB (part of Flashpoint Security), outperforms the official databases he once helped oversee. “Our team processes more vulnerabilities, with a much faster turnaround time, and we do it for a fraction of the cost,” he says, referring to the tens of millions of dollars in government contracts that prop up the current system.
When we spoke in May, Martin said his database contained more than 112,000 vulnerabilities without CVE identifiers: security flaws that exist in the real world but remain invisible to organizations that rely solely on public channels. “If you gave me the money to triple my team, that non-CVE number would be in the range of 500,000,” he said.
In the US, official vulnerability management responsibilities are split across a web of contractors, agencies, and nonprofits such as the Mitre Corporation. Critics like Martin argue that this breeds redundancy, confusion, and inefficiency, with layers of middle management and relatively few genuine vulnerability experts. Others defend the value of the fragmentation. “These programs build on or complement one another to create a more comprehensive, supportive, and diverse community,” CISA said in a statement. “This increases the resilience and usefulness of the whole ecosystem.”
While American leadership falters, other countries are stepping up. China now operates multiple vulnerability databases, some surprisingly robust but clouded by the prospect of state control. In May, the European Union accelerated the launch of its own database, along with a decentralized “Global CVE” architecture. After social networks and cloud services, vulnerability intelligence has become another front in the contest for technological sovereignty.
That leaves security professionals navigating multiple, potentially conflicting data sources. “It’s going to be a mess, but I’d rather have too much information than none,” says Gupta, describing how her team monitors several databases despite the added complexity.
Redefining software responsibility
As defenders adapt to this fragmented landscape, the technology industry faces another reckoning: why don’t software vendors take more responsibility for protecting their customers from security problems? Large vendors routinely disclose, but don’t necessarily fix, thousands of new vulnerabilities every year. A single exposure can take down critical systems or raise the risk of fraud and data misuse.
For decades, the industry hid behind legal shields. “Shrink-wrap” licenses once forced consumers to largely waive the right to hold software vendors liable for defects. Today’s end-user license agreements (EULAs), often delivered in browser pop-up windows, have evolved into incomprehensibly long documents. Last November, a lab project called “EULAs of Despair” used the length of War and Peace (587,287 words) as a yardstick for these sprawling contracts. The worst offender? Twitter, at 15.83 novels’ worth of fine print.
“It’s a convenient fiction we’ve built around this entire ecosystem, and it’s simply not sustainable,” says Andrea Matwyshyn, a US special advisor and professor of technology law at Penn State University, where she runs the Policy Innovation Lab of Tomorrow. “Some people point out that software can blend products and services, creating more complex fact patterns. But as in engineering or financial litigation, even the messiest scenarios can be resolved with the help of experts.”
That liability shield around software failures, whether security vulnerabilities or other defects that endanger public safety, is finally starting to crack. Matwyshyn points to the July 2024 CrowdStrike incident, when a faulty update to the company’s popular endpoint security software crashed millions of Windows computers worldwide, disrupting everything from airlines to hospitals to 911 emergency systems. The incident caused billions in estimated damages, and the city of Portland, Oregon, even declared a “state of emergency.” Now affected companies such as Delta Air Lines have hired high-priced lawyers to pursue major compensation, a clear sign that the litigation floodgates are opening.
Despite the growing number of vulnerabilities, many fall into well-established categories, such as SQL injections that tamper with database queries and memory buffer overflows that enable remote code execution. Matwyshyn advocates a mandatory “software bill of materials” (SBOM), an ingredients list that would let organizations understand which components, and which potential vulnerabilities, sit in their software supply chains. A recent report found that 30% of data breaches stemmed from vulnerabilities in third-party software or cloud service providers.
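The SBOM idea is easy to sketch: once you have an ingredients list, you can match it against a vulnerability feed. The structure below loosely mirrors formats like CycloneDX but is heavily simplified; the application name, component versions, and advisory feed are stand-ins for the example (CVE-2021-44228 is the real Log4Shell identifier, which did affect log4j-core 2.14.1).

```python
# A minimal, illustrative "software bill of materials" (field names
# simplified; a real SBOM follows a full spec such as CycloneDX or SPDX).
sbom = {
    "application": "example-web-app",   # hypothetical product name
    "components": [
        {"name": "openssl", "version": "3.0.7"},
        {"name": "log4j-core", "version": "2.14.1"},
    ],
}

# Hypothetical advisory feed mapping (name, version) -> CVE identifiers.
advisories = {
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],   # Log4Shell
}

def affected_components(sbom, advisories):
    """List (component, CVE) pairs whose exact version is in the feed."""
    hits = []
    for c in sbom["components"]:
        for cve in advisories.get((c["name"], c["version"]), []):
            hits.append((c["name"], cve))
    return hits

print(affected_components(sbom, advisories))
```

The hard part in practice is not this lookup but producing accurate SBOMs in the first place, which is precisely why Matwyshyn argues they should be mandatory.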
She adds: “When you can’t distinguish between a company that’s cutting corners and one that has actually invested in doing right by its customers, you get a market where everyone loses.”
CISA’s leadership shares this sentiment, with a spokesperson emphasizing its “secure by design” principles, such as “making essential security features available at no additional cost, eliminating classes of vulnerabilities, and building products in a way that reduces the cybersecurity burden on customers.”
Avoiding a digital “dark age”
It should surprise no one that professionals are turning to AI to help fill the gap, even as they brace for a coming flood of AI-driven cyberattacks. Security researchers have used an OpenAI model to discover new “zero-day” vulnerabilities (flaws not yet publicly known). And both the NVD and CVE teams are developing “AI-powered tools” to speed up data collection, identification, and processing. NIST says that “up to 65% of our analysis time was spent generating CPEs,” the product identifier codes that specify which software is affected. If AI can take over even part of that tedious process, it could dramatically accelerate the flow of analysis.
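For context, a CPE (Common Platform Enumeration) is a structured, colon-separated product identifier that analysts attach to each CVE. The naive parser below splits a CPE 2.3 “formatted string” into its 13 named fields; it ignores the spec’s escaping rules, so treat it as a sketch rather than a conformant implementation.

```python
# The 13 fields of a CPE 2.3 formatted string, in order.
CPE_FIELDS = [
    "cpe", "cpe_version", "part", "vendor", "product", "version",
    "update", "edition", "language", "sw_edition", "target_sw",
    "target_hw", "other",
]

def parse_cpe(cpe_string):
    """Map the colon-separated CPE 2.3 fields to their names.

    Naive: splits on ":" and does not handle the spec's escaped
    characters, so this is illustrative only.
    """
    parts = cpe_string.split(":")
    if len(parts) != len(CPE_FIELDS):
        raise ValueError("not a CPE 2.3 formatted string")
    return dict(zip(CPE_FIELDS, parts))

# "a" means application; "*" is the spec's ANY wildcard.
cpe = parse_cpe("cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*")
print(cpe["vendor"], cpe["product"], cpe["version"])
```

Generating these strings is tedious precisely because vendor and product names must be matched against an official dictionary, which is why NIST sees it as a promising target for automation.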
But Martin warns against AI optimism, noting that the technology remains unproven and often riddled with inaccuracies, which in security can be fatal. “Instead of AI or ML [machine learning], there are ways to strategically automate parts of this vulnerability data processing while guaranteeing 99.5% accuracy,” he says.
Nor does AI solve the deeper governance challenges. The CVE Foundation, launched in April 2025 by dissident board members, proposes a globally funded nonprofit model, similar to the internet’s addressing system, which moved from US government control to international governance. Other security leaders are pushing to revitalize open-source alternatives, such as Google’s OSV project or NVD++ (maintained by VulnCheck), which are publicly accessible but currently limited in capability.
As these reform efforts gather momentum, the world is waking up to the fact that vulnerability intelligence, like disease surveillance or aviation safety, requires continuous cooperation and public investment. Without it, a patchwork of paid databases is all that will remain, threatening to leave all but the wealthiest organizations and countries permanently exposed.
(Source: MIT Technology Review)



