Irregular

Irregular is creating a new category in AI as the world’s first frontier security lab. Backed by Sequoia Capital, the team works with OpenAI, Anthropic, and Google DeepMind to stress-test their most advanced models. We’ve partnered with Irregular’s founders from day one to translate deep research into high-impact media.

AI’s Frontlines

Irregular’s customers are at the epicenter of the AI boom – from leading AI labs to government organizations around the globe.

Ongoing Research

Irregular’s publications and research partnerships focus on staying ahead of emerging threats and vulnerabilities, and on ensuring these transformative technologies are deployed safely and securely.

A North Star Beyond Growth

Irregular was built on a simple conviction: frontier AI systems are rapidly approaching a level of capability where careless deployment could cause real-world harm.

Forbes

Anthropic And OpenAI Pay This $450 Million Startup To Test AI’s Capacity For Evil

OpenAI and Anthropic are counting on early-stage startup Irregular, valued at $450 million, to stress-test advanced AI models for harmful uses like hacking and phishing. The company helps labs identify risks, strengthen safeguards and better understand how powerful systems such as ChatGPT could be misused before real-world deployment.

TBPN

Irregular founder warns the bigger near-term risk is AI in the hands of bad actors

“The near future is AI augmenting attackers and being used as part of the attack surface,” Lahav said, noting that terror organizations or malicious groups gaining access to advanced systems is a more immediate threat than fully rogue AI.

The New York Times

A Social Network for A.I. Bots Only. No Humans Allowed.

Moltbook has become a Silicon Valley obsession – and a Rorschach test for beliefs about today’s AI. As Dan Lahav, the founder of Irregular, puts it: “Securing these bots is going to be a huge headache.”

Fortune

AI has made hacking cheap. That changes everything for business

AI is dramatically lowering the cost of cyberattacks, according to a joint Wiz–Irregular study showing that AI agents can execute complex exploits for under $50, versus roughly $100,000 when carried out by human attackers. The Fortune story warns that cheaper, automated hacking expands the attack surface, enabling more actors to target organizations and forcing companies worldwide to adopt AI-driven defenses more quickly.