Context
I recently gave a talk about doing AI safety as a YC startup, and I thought it would be interesting to share my thoughts with both the YC and AI safety communities. Please share any feedback; I would love to hear it!
I am not an AGI timelines researcher, so take this with a grain of salt. But to set the scene, I am one of those AGI-pilled people. I believe that society will change so drastically that it will be unrecognizable in a couple of years. Specifically, I think the probability of AGI within 5 years is high enough that it’s worth acting as if it will happen. I acknowledge that I might hold this belief because I want to be someone who has this belief, but that’s a story for another day.
AI Safety is a problem and people pay to solve problems
Intelligence is dangerous, and I think there’s a significant chance that the default scenario of AI progress poses an existential risk to humanity. While it’s far from certain, even small probabilities are significant when the stakes are this high. This is an enormous abstract problem, but there are thousands of sub-problems waiting to be solved.
Some of these sub-problems already exist today (like AI misinformation), but most are in the future (GPT-4 is not capable of killing us yet). When people start feeling these pains, they will pay to have them solved. I don’t think the solution is to slow down (although I’m not certain), because slowing down also comes with a cost. Therefore, we have to solve these problems; otherwise, we won’t get to reap the rewards of an AI utopia. I think it’s one of the most interesting challenges of our time.
More startups should solve these problems
In my experience, builders are heavily underrepresented in the AI safety community. There are far more researchers and philosophers than entrepreneurs thinking about these problems (source: personal experience from trying to hire such people). I don’t think this should be the case. The AI safety market is currently very small, but I expect it to grow dramatically, and VCs exist precisely to enable startups to make such long-term bets.
If you’re a technical person with a passion for AI safety, it’s very tempting to join a big AI lab. However, I think you should start a startup instead. Startups are more fun, and you will have much more counterfactual impact. A friend once told me: “The most impactful people will build something great that wouldn’t have happened without them”. I think it’s generally harder to do this in the hierarchical structure of AI labs (but not impossible).
My path: Y Combinator
I’m a bit biased in this matter. I’ve been fascinated by startups for many years, and getting into YC straight from college was a dream come true. Like many other YC companies, our first idea didn’t work out and we had to pivot. The pivot was somewhat successful: within three weeks, we were working on something new that had revenue, which ultimately made fundraising easy. We didn’t find a concrete idea though, and to some extent we still haven’t. Instead, we found a really cool customer we could build stuff for. This was largely thanks to intros from the AI safety community. AI safety has its roots in an altruistic movement (Effective Altruism), and you can see that in how helpful people are. This is a real advantage for AI safety startups. Whenever I speak to what would be called a “competitor” in other industries, we share things much more freely, because we want the same thing for the world.
Communities are such an incredible thing. I have been lucky to also be part of the YC community, which brought us our second big customer. YC is great for all the obvious reasons, but in my experience, the community is its strongest asset. The advice is also great, but most of it is publicly available. This advice has become famous over the years; phrases like “Make something people want”, “Love your customer”, and “Do things that don’t scale” are echoed everywhere you go in San Francisco. They are not, however, echoed in AI safety circles.
YC advice in the context of AI safety
Not all AI safety startup ideas are the same, but there are some characteristics that apply to many of them. Here are some thoughts on how YC advice applies to these characteristics.
The problems they are solving are far in the future
As discussed above, current AI systems do not pose an existential threat to humanity. It’s therefore very hard to know if you have “made something people want” while trying to solve this problem. This means you have to be creative when trying to follow the advice of launching early and iterating. Additionally, the market is very uncertain, and you have to be flexible to change your ideas and processes. “Do things that don’t scale” is therefore extra important in this setting.
Customers are often researchers or from the government
This is obviously not true for all AI safety startups, but it is for many and it is for me. We primarily sell to researchers, and this makes my day-to-day very enjoyable. Every customer meeting I have is with someone I would probably like to go out for a beer with. It’s also much easier for me to put myself in their shoes and understand what they want. Regardless of who your customer is, ask yourself if you like them.
The pool of potential customers is often small
As a result of customers being researchers and government employees, the pool of potential customers is not huge (yes, there are a lot of academic researchers, but they don’t have a lot of money). YC’s advice is often that having 100 passionate users is better than 1,000,000 average users. It’s tempting, then, to assume that the small customer pool is not a problem, but this advice assumes you can scale up from those 100 users after you have learned from their feedback. However, another piece of YC advice comes in handy here: “charge more”. Most early-stage startups are scared of scaring away customers, but if you’ve made something people want, they won’t walk away without first attempting to push the price down. They know that you don’t know what you’re doing when you set the price, and they therefore expect it to be flexible. This is especially true for AI safety ideas where there isn’t much competition; if they walk away, they have no alternative.
Doing good
If you have a passion for AI safety, I think ideas in this space could lead to great startup success. But if you don’t, there are probably better ideas for maximizing your probability of success. Founders with this passion often also want their startup to have a positive impact on the world. You do this by building something that is net-positive and making it influential. Basically, Impact = Magnitude * Direction. I think most people in the world have a bias toward maximizing Magnitude. This is not to say that people are immoral. However, I think most people just don’t recognize (or heavily underestimate) the potential one’s career has to make the world a better place. They recycle and donate to charity, but their career is their biggest opportunity to make a difference in the world.
However, I think there is a group of people who over-optimize for Direction and neglect Magnitude. Increasing Magnitude often comes with the risk of corrupting the Direction. For example, scaling fast often makes it difficult to hire only mission-aligned people, and it can require giving voting power to investors who prioritize profit. Increasing Magnitude can therefore feel risky: what if I end up working on something that is net-negative for the world? It might thus be easier for one’s personal sanity to optimize for Direction, to do something that is unquestionably net-positive. But this is the easy way out, and if you want the highest expected value of your Impact, you cannot disregard Magnitude. I am not an expert, but my uninformed intuition is that the people with the biggest positive impact on the world have prioritized Magnitude (I would love to hear other opinions on this; what are some examples of people from either side?). Don’t forget that you can always use “earning to give” as a fallback.
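To make the expected-value framing concrete, here is a minimal sketch in Python. The magnitudes and probabilities are toy numbers I made up purely for illustration, not claims about any real career path:

```python
# Toy expected-value comparison of two hypothetical career paths.
# Impact = Magnitude * Direction; all numbers below are illustrative assumptions.

def expected_impact(magnitude, direction_outcomes):
    """Expected impact given a magnitude and (probability, direction) pairs."""
    return magnitude * sum(p * d for p, d in direction_outcomes)

# Path A: high Magnitude, uncertain Direction (e.g., a fast-scaling startup):
# 90% chance it ends up net-positive (+1), 10% chance net-negative (-1).
path_a = expected_impact(100, [(0.9, 1.0), (0.1, -1.0)])  # 100 * 0.8 = 80.0

# Path B: low Magnitude, certain Direction (unquestionably net-positive work).
path_b = expected_impact(10, [(1.0, 1.0)])                # 10 * 1.0 = 10.0

print(path_a, path_b)  # 80.0 10.0
```

Under these made-up numbers, the high-Magnitude path wins in expectation despite a real chance of pointing in the wrong Direction; of course, the conclusion flips if the probability of a negative Direction is large enough.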