CVE Program Gets a Lifeline—But the Real Story Is Just Starting

Last month, the cybersecurity world got a wake-up call: the backbone of global vulnerability tracking—the CVE program—almost collapsed.

On April 15, MITRE revealed that its contract with CISA to run the program hadn’t been renewed, and it had roughly 36 hours before pulling the plug. Cue widespread panic. Then, with just hours to spare, CISA came through with an 11-month extension. Crisis averted—for now.

But the chaos lit a fire. Within days, a group of CVE insiders announced something big: they’re launching the CVE Foundation, a new, independent nonprofit aimed at fixing what they see as a fragile, outdated setup. Their goal? A more resilient, globally supported system—one not tied to a single government’s checkbook.

Not surprisingly, this ruffled feathers. Former CISA Director Jen Easterly slammed the move, calling it a conflict of interest. In her words, board members shouldn’t be building a rival organization while still governing the current one.

Meanwhile, Europe isn’t waiting around. ENISA dropped its EUVD (European Union Vulnerability Database) earlier this month, and Luxembourg’s CIRCL launched the decentralized GCVE project—both offering new ways to handle vulnerability tracking, minus the U.S. drama.

So here we are. The CVE program lives on—for now—but its near-death experience exposed the cracks. The question isn’t just about who runs it. It’s about whether the whole system needs to evolve. And depending on who you ask, that change is either long overdue—or a risky gamble.

The post CVE Program Gets a Lifeline—But the Real Story Is Just Starting appeared first on Centraleyes.

Securing AI Agents: A New Frontier in Cybersecurity

With RSA Conference 2025 just wrapped up, one thing’s clear: AI agents are everywhere—and apparently, they need security guards too.

These digital overachievers are working 24/7, managing networks, analyzing data, and getting things done while we’re all just trying to find a charger. But without proper security, these agents could accidentally leak sensitive information, misuse credentials, or even open the floodgates for hackers to exploit vulnerabilities.

While AI agents are revolutionizing industries, the cybersecurity world is scrambling to figure out how to protect these new digital workers, especially given their ability to operate autonomously. At the RSA Conference 2025, David Bradbury, Chief Security Officer at Okta, summed it up perfectly: “You can’t treat them like a human identity and think that multifactor authentication applies in the same way.”

As AI agents become a larger part of the workforce, the need for robust security measures has never been more pressing. According to Deloitte, 25% of companies using generative AI are expected to launch agentic AI pilots this year, with that figure projected to rise to 50% by 2027. These statistics underscore the rapid expansion of AI’s role and the growing cybersecurity risks that come with it.

The Security Implications of Autonomous AI Agents

The rise of AI agents has already raised significant security concerns. Without proper guardrails, these agents could inadvertently cause data breaches, misuse login credentials, or leak sensitive information, especially given their ability to act independently and at speed. Most organizations’ security infrastructure simply wasn’t built with AI agents in mind. The problem becomes even more complicated as machine identities continue to proliferate across enterprise environments.

CyberArk’s 2025 Identity Security Landscape report reveals that machine identities now outnumber human identities by more than 80 to 1, a stark reminder of just how quickly this shift is happening. As these agents take on more critical tasks, they require as much—if not more—security as human employees.

In fact, experts argue that AI agents need “elevated trust” to ensure they don’t pose a risk. While securing traditional machine-based identities like VPN gateways and file servers is already part of the cybersecurity landscape, AI agents are far more complex. As Jeff Shiner, CEO of 1Password, explains: “An agent acts and reasons, and as a result of that, you need to understand what it’s doing.”

A Call for Immediate Action: Securing AI Agents

As companies rapidly deploy AI agents, security vendors are scrambling to develop solutions that can help manage these new digital employees. At the RSA Conference, security providers such as 1Password, Okta, and OwnID introduced products designed to secure AI identities. These tools aim to provide the necessary protection for AI agents, ensuring that they can carry out their work without compromising an organization’s security.

Proactive security measures will be vital as AI agents take on more responsibility.
