We are assembling a team of world-class researchers, engineers, and policy experts to ensure artificial general intelligence benefits all of humanity.
Our culture is defined by the problems we choose to solve and the constraints we accept to solve them safely.
We do not trade safety for capabilities. If a model is not safe, it does not ship. This is the fundamental constraint of our work.
We believe in evidence over intuition. Our research is grounded in falsifiable hypotheses and reproducible results.
We are building for a multi-generational horizon. We optimize for the long-term future of humanity, not short-term cycles.
We value being right over being consistent. We foster a culture where the best argument wins, regardless of hierarchy.
We do not hire for specific roles. We hire exceptional generalists who can reason from first principles.
Our team is lean by design. We are currently looking for researchers with deep intuition for formal verification, entropy coding, and mechanistic interpretability.
// Submit your work samples and a brief on safety constraints.
Encrypted channel. PGP key available on request.