We are hiring

Solve the alignment problem.
Safeguard the future.

We are assembling a team of world-class researchers, engineers, and policy experts to ensure artificial general intelligence benefits all of humanity.

Operating Principles

Our culture is defined by the problems we choose to solve and the constraints we accept to solve them safely.

Safety is non-negotiable.

We do not trade safety for capabilities. If a model is not safe, it does not ship. This is the fundamental constraint of our work.

Rigorous empiricism.

We believe in evidence over intuition. Our research is grounded in falsifiable hypotheses and reproducible results.

Long-term orientation.

We are building for a multi-generational horizon. We optimize for the long-term future of humanity, not short-term cycles.

Collaborative truth-seeking.

We value being right over being consistent. We foster a culture where the best argument wins, regardless of hierarchy.

Intake Status: Selective

Contribute to solving the alignment problem.

We do not hire for specific roles. We hire exceptional generalists who can reason from first principles.

Our team is lean by design. We are currently looking for researchers with deep intuition in formal verification, entropy coding, and mechanistic interpretability.

Protocol: SMTP
Latency: <12ms

// Submit your work samples and a brief on safety constraints.

research@blankline.org
Rust · CUDA · Python · Formal Methods

Encrypted channel. PGP key available on request.