We are hiring exceptional research engineers
Our worldview
Almost no one has priced in what is about to happen. Even startups that say they believe in transformative AI don't act like it. As we get closer to a world where the cost of writing software goes to zero, it becomes more important than ever for hackers to be mindful of what they work on.
AI safety is the most impactful thing one could work on at this point in time. If the development of AI goes right, most of humanity's biggest problems (disease, poverty, energy, etc.) will be solved. But there are many potential missteps, with consequences ranging from "missed opportunity" to "doom". In our view, AI safety is not opposed to AI progress; it is the key to it.
Our work
Andon Labs builds Safe Autonomous Organizations (SAOs): organizations that run without humans in the loop.
Silicon Valley is rushing to build software around today's AI, but by 2027 AI models will be useful without it. The only software you'll need will be the safety protocols that align and control them.
We iteratively launch and scale autonomous organizations, bringing AI control research out of the lab and testing it in our deployed SAOs.
Our first SAO (a vending machine at Anthropic's office) was a huge hit.
Our paper "Vending-Bench" got almost a million views.
We have a front-row seat to the frontier of AI through our collaborations with the AI labs.
Your role
We're looking for research scientists, research engineers, and software engineers to build this future. Fast-moving startups struggle to specify the exact responsibilities of roles because they change all the time, but below are two examples of the work.
Example role 1: AI Control and Alignment
We don't believe model alignment will be guaranteed as capabilities increase. Nor will humans be able to stay in the loop and keep up with every step an agent takes.
Your role would be to build control protocols and/or alignment techniques that ensure our SAOs are safe.
In the short term, this means monitoring our agents for misaligned behavior and implementing AI control protocols that catch and contain it before it recurs. A sketch of one such protocol follows below.
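To give a flavor of the kind of work this involves, here is a minimal, hypothetical sketch of a trusted-monitoring control protocol, in the style of AI control research: a trusted monitor scores each action a more capable, untrusted agent proposes, and suspicious actions are held for human audit. All names are illustrative, and the keyword-based scorer stands in for what would in practice be a query to a trusted model; this is not a description of our production system.

```python
from dataclasses import dataclass


@dataclass
class Action:
    description: str  # natural-language summary of the proposed step


def suspicion_score(action: Action) -> float:
    """Placeholder monitor. A real deployment would ask a trusted LLM
    (or an ensemble) for a calibrated suspicion rating in [0, 1]."""
    red_flags = ("delete", "transfer funds", "disable logging")
    text = action.description.lower()
    return 1.0 if any(flag in text for flag in red_flags) else 0.1


def run_with_control(actions: list[Action], threshold: float = 0.8) -> None:
    """Execute low-suspicion actions; escalate the rest for human review."""
    for action in actions:
        if suspicion_score(action) >= threshold:
            print(f"ESCALATED for audit: {action.description}")
        else:
            print(f"executed: {action.description}")


if __name__ == "__main__":
    run_with_control([
        Action("restock row 3 with sparkling water"),
        Action("disable logging before the nightly price update"),
    ])
```

The design choice worth noting is that the monitor sits between the agent and the world: the agent never executes an action directly, so even a misaligned proposal is caught before it has effects.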
Example role 2: Scaling SAOs
What's next after vending machines? Hotels? Stock trading? A fashion brand?
AI safety can't be solved in the lab. We need our SAOs everywhere in society. You would play the role of 20 startup founders at once.
Concretely, this means identifying promising ventures, building integrations, and deploying our agents in the real world.
We're based in San Francisco (in-person only). We offer a competitive salary, stock compensation, and, most importantly, a mission critical to ensuring humanity's prosperous future.