About Us

Building safe AI for humanity's multi-planetary future.

Blankline was founded on a simple premise: the most powerful AI systems should also be the safest. We're building the foundation for AI that will help humanity thrive across worlds.

Founded on the belief that safety and capability are not at odds.

In 2024, a group of researchers from leading AI labs came together with a shared conviction: the most important AI systems would also need to be the most trustworthy. We saw an industry racing toward capability while treating safety as an afterthought.

Blankline was founded to prove a different path was possible. We believe that truly capable AI systems must be interpretable, verifiable, and aligned with human values—not in spite of their capabilities, but because of them.

Our vision is to secure the digital foundation of the modern world. As AI becomes integrated into critical infrastructure, we are building the verifiable logic layers that ensure these systems remain reliable, robust, and aligned with human intent.


What We Stand For

These values guide every decision we make, from research direction to hiring to partnerships. They are the non-negotiable constraints of our mission.

Safety is the Constraint

We do not trade safety for capability. When the two conflict, we pause the model. We prioritize preventing catastrophic risk over deployment speed.

Radical Transparency

We publish the failures. We open-source the methods. In an industry of black boxes, Blankline is the glass box that allows humanity to audit its future.

Intergenerational View

We do not optimize for quarterly cycles. We optimize for the light cone of humanity's future. Decisions that harm the long term are rejected.

Scientific Integrity

Every claim must be verifiable. We embrace uncertainty and reject hype. If we cannot prove it mathematically or empirically, we do not ship it.

Our Teams

We're organized around four core functions, each critical to our mission of building safe AI for humanity's future.

Safety Research

Developing alignment techniques, interpretability tools, and safety evaluation frameworks to ensure AI systems remain beneficial.

Alignment · Interpretability · Red-teaming

Engineering

Building scalable infrastructure and products that bring safe AI capabilities to developers worldwide.

Infrastructure · Products · DevTools

Applied Science

Applying our reasoning models to solve complex physical challenges in materials science, energy, and aerospace.

Material Discovery · Autonomous Control · Aerospace Simulation

Operations

Supporting organizational growth, strategic partnerships, and operational effectiveness.

Partnerships · People · Finance

Join us in building humanity's future.

We're looking for exceptional people who share our commitment to safe, beneficial AI development.