AI safety career guide
Disclaimer: The content of this guide is largely based on information gathered in expert interviews and does not represent everyone's opinions. It is also not complete, as we have optimised for providing a concise overview of the most important points.
Who is this guide for?
This career guide is meant for anyone interested in contributing to the field of AI safety. We don't assume readers have any particular background or qualifications. While we expect mostly students and young professionals to benefit from this guide, we also encourage mid- to late-career professionals to have a look and see how they can contribute to AI safety with their skills.
What are the risks from AI?
Advancements in Artificial Intelligence (AI) are progressing rapidly, surpassing human capabilities in various areas such as writing code, predicting protein folding, and passing the bar exam. The potential of AI to significantly enhance societal conditions is immense. However, with great power come considerable risks, some of which could be catastrophic for civilisation, extending well beyond concerns like algorithmic bias or job displacement.
As AI technologies grow more powerful and integrate further into our lives, it becomes increasingly important to acknowledge and mitigate these risks. The rapid progress of AI capabilities makes powerful technology relatively accessible for misuse. This could lead to large-scale harm through cyberattacks and biosecurity risks, erode democratic institutions, and exacerbate other issues like social inequality and job displacement. Beyond misuse scenarios, the autonomy of AI systems may outgrow the ability of humans to effectively oversee and control them. Concerns about the most severe risks, including the possibility of extinction, are shared by a wide range of professionals, including AI experts, academics, and policymakers. Emphasising the development of safe and responsible AI practices is crucial to unlocking the full potential of this technology for the benefit of humanity.
What is AI safety?
AI safety (AIS) concerns itself with reducing risks from AI, with a particular focus on catastrophic risks. This can be achieved through engaging in technical AIS research, contributing to AI governance, or participating in advocacy and efforts to build a broader understanding of these issues. To learn more about what AI safety is or why it is important to work on it, check out this summary by the Center for AI Safety.
What is the purpose of this career guide?
This guide serves to give an overview of upskilling and employment options specifically for the Netherlands, inform decisions about career moves, and provide further readings.
Building on the outstanding career profiles from 80,000 Hours on AI governance and technical research, we focus explicitly on considerations and recommendations for the Netherlands (NL). For general overviews of the career paths, see here.
In this version, our emphasis is directed towards roles involving object-level tasks related to technical alignment and governance. Broadly speaking, safety organisations and companies increasingly need individuals with expertise in operations, management, finance, and related domains. Although these roles are not directly involved in research, policy formulation, or engineering, they significantly affect the functioning and productivity of safety organisations. If you identify your comparative advantage as lying in these roles, we encourage you to pursue them.
Our findings and recommendations are intentionally quite general. When pursuing and committing to a career path, it is extremely important to consider your personal fit: whether you feel you are well suited for that path, can likely excel at it, find it sufficiently fulfilling, and can sustainably put in your best efforts. People have different individual ideas about fulfilling careers, so we do not go into detail about personal fit here. However, we advocate factoring this important aspect into your decision-making. Perhaps you find this article useful.
AI governance
AI governance is about shaping the local and global governance systems that affect the development and use of existing and future AI systems, with the aim of producing positive societal outcomes. This can happen on many levels, including norms, policies, laws, processes, politics, and institutions. In this career guide we mainly focus on government policy, but readers should keep in mind that AI governance involves a large range of actors.
- Job opportunities with the greatest impact are likely found in the US or Brussels.
- The Netherlands offers many opportunities to upskill in policy.
- The importance of acquiring career capital is often underestimated.
- For work in AI governance, reputation and credibility are very important.
- Joining a political party and shaping its agenda presents an accessible opportunity to contribute substantially and to test personal fit for policy work.
- Often it’s easier to pick up technical knowledge first and later acquire governance-relevant skills.
The current state of AI Governance
Unfortunately, the understanding of AI within the Dutch public sector, and likely in Europe overall, appears to be somewhat lacking. This situation does, however, create opportunities for those without extensive AI knowledge to contribute meaningfully. Numerous chances exist to engage in AI governance at a national level. However, because there are no major AI companies in NL, the most impactful opportunities involve working at a multilateral level. We expect the most important governance work to happen in the US, China, or in multilateral institutions. The US is especially important because of its global influence and its jurisdiction over much of the AI industry, including the top AI labs training state-of-the-art, general-purpose models (Anthropic, OpenAI, and Google DeepMind) and key parts of the chip supply chain. We therefore strongly recommend considering a move into AI governance in the US.
Work on the European level can also be influential, as seen in the potential ‘Brussels effect’. This phenomenon arises because EU regulations and standards are mandatory for all companies, including American and Chinese firms, seeking to market their products within the EU. Therefore, aiming to go to Brussels for governance work is also a good option.
For AI governance, many university degrees can be useful. For some roles, a technical background in machine learning, AI hardware, and related fields can be extremely valuable. Many politicians and policymakers are generalists, as their roles require them to work across many subject areas and types of problems. They therefore often rely on expert knowledge when crafting and implementing policy on AI technology they don't fully understand. If you can provide them with this information, especially if you're skilled at communicating it clearly, you can potentially fill influential roles.
Although there is unfortunately a lack of individuals with technical backgrounds entering government roles, being an expert in AI is not necessary. Relevant educational and professional backgrounds include politics, public administration, law, international relations, and economics. A great option that provides candidates with a comprehensive background would be to complete a technical bachelor’s degree followed by a master’s in policy. The general consensus is that it’s more feasible to acquire governance skills later in one’s career than technical skills. Additionally, knowledge about AI governance can be self-taught outside of formal education through independent study.
Choice of University
For policy-related jobs in the Netherlands, the choice of university is not very significant as Dutch universities are broadly comparable in this field. However, for those aiming to go to Brussels, graduates from prestigious institutions like Sciences Po, College of Europe, and possibly the London School of Economics are often preferred.
Here is a non-exhaustive(!) list of university degrees in NL related to technology governance and public policy.
Master's degrees
(unranked, alphabetical order)
- Amsterdam, University of Amsterdam: Advanced LLM in Technology Governance
- Amsterdam, University of Amsterdam: MSc Political Science, Public Policy and Governance
- Amsterdam, University of Amsterdam: MA European Policy (European Studies)
- Amsterdam, Vrije Universiteit: Master’s in Public Administration, Artificial Intelligence and Governance specialisation
- Leiden: MSc Public Administration
- Rotterdam: Mundus Master Public Policy
- Tilburg: MSc Public Governance: Public Administration, Economics and Law
- TU Delft: MSc Engineering and Policy Analysis
Bachelor's degrees
(unranked, alphabetical order)
- Leiden: BSc Liberal Arts and Sciences, Major in Governance, Economics and Development
- Leiden: BSc Public Administration Science
- Tilburg: BSc Bestuurskunde (Dutch)
- University of Twente: BSc Management, Society & Technology
Importance of Career Capital
Gaining career capital is crucial for a successful career in policy making. The most important aspects are building a network and establishing credibility. While the most impactful jobs might be found outside the Netherlands, there are still opportunities within the country that can serve as valuable stepping stones for future career advancement. The trajectory of AI governance regulation in the Netherlands and the EU is uncertain for the coming years, but a lot of career capital in policy seems to be transferable between different roles. This makes any policy role potentially beneficial for career development. Starting in a sub-optimal role to build career capital, with the intention of transitioning to a more impactful role later, is a practical strategy.
Technology Policy Roles and Traineeships in NL and Brussels
Here is a list of places that offer policy roles or traineeships related to AI or technology.
- Ministry of the Interior and Kingdom Relations
- Ministry of Economic Affairs
- Autoriteit Persoonsgegevens (Dutch Data Protection Authority)
- Rijksinspectie Digitale Infrastructuur (RDI)
- Nederlands Normalisatie Instituut (NEN)
- Rijks I-traineeship (or other Rijkstraineeship)
Also check the 80,000 Hours job board for remote positions outside of the Netherlands.
Working in Government Policy Making vs Other Places
Besides governmental policy making, there are various other places to work, including think tanks, government research organisations, and government agencies. Examples include the NL AI Coalition, TNO, or the EU AI Board. Below we list some arguments to consider when choosing where to work.
Pros of Government Policy Making
- Working in government might be better for building a broad network and establishing credibility.
- In government, having one or two people in key positions could matter a lot.
- Entering the Dutch government might be more accessible due to the generally low level of AI knowledge there. With even a modest understanding of AI, you could likely secure a position and quickly gain trust.
- Only a small fraction of think tank reports are actually read by government officials, which underscores the importance of having more AI safety experts directly within government roles for greater influence and effectiveness.
Cons of Government Policy Making
- Governments often act slowly, which can be frustrating. Additionally, it can be challenging to convince government officials to act proactively instead of just reactively, as there tends to be a preference for a ‘wait and see’ attitude. There is also a general reluctance to consider large-scale risks within short timeframes.
- The path to impact in government jobs can be very convoluted and involves a high degree of uncertainty about whether an action has actually had a positive impact. Engaging in policy work carries the risk of inadvertently causing harm or contributing to policies with negative outcomes. Additionally, the long feedback loops typical of government roles can make it challenging to learn from experience and adapt strategies effectively.
- Personal fit is very important in government work. This environment is not ideal for everyone, as it often involves a lot of office politics and operates within a highly hierarchical structure.
- Individuals are usually limited to working for one government, whereas research conducted by think tanks can benefit multiple governments at once.
Compute governance is the subfield of AI governance concerned with controlling and governing access to computational resources. Today's most capable AI systems are trained on large amounts of expensive hardware. Since this hardware is detectable and relies on a concentrated supply chain, it offers an opportunity to govern who has access to the capabilities needed to train advanced AI systems. The Netherlands is the base of ASML, a leading global supplier in the semiconductor industry and the sole producer of the EUV machines crucial for manufacturing advanced AI chips. This makes the Netherlands a promising location for engaging in compute governance. For inquiries or discussions related to ASML, please reach out to firstname.lastname@example.org.
Compute governance, being a very new and small field, is expected to grow in significance. Currently, there are probably very few people within the Dutch and European civil service who have a comprehensive understanding of computer chips and compute supply chains. Possessing expertise in this area and entering the field could therefore be very impactful. A technical degree, such as computer engineering, is very beneficial for such roles, but a general understanding of how chips function, their manufacturing processes, and the semiconductor supply chain is also valuable.
A great opportunity to have impact now and gain some career capital is to become a member of a political party. Compared to other paths in AI governance, it can be relatively low effort to influence your political party’s program. This is an especially good option for young people, who tend to have a bit more time than mid- to late-career professionals. It also offers an excellent platform for learning valuable skills, such as communicating effectively about AI safety with those who have limited knowledge on the topic and engaging constructively with individuals holding opposing views. Additionally, this experience can enhance your overall understanding of the political landscape and test personal fit for a career in policy.
Based on expert interviews, we have identified the following pitfalls that people tend to encounter:
- Not considering all options. Many people restrict themselves to jobs in their home country, very specific types of roles, or a limited number of organisations. It’s a good idea to consider many options and to apply broadly.
- Putting too much focus on writing independent policy opinions on forums or blogs. This provides little benefit to your CV or reputation, both of which are very important in the policy space. Instead, do things that build better career capital, such as volunteering for an advocacy organisation.
- Optimising for early career impact. The first few years of most careers will have quite low direct impact compared to the rest of the career. Don't put too much focus on having a high impact early on; optimise for the long run. The first few roles should be heavily focused on gaining career capital.
- Optimising for status. Many people are concerned with how others perceive their job, often seeking exciting or prestigious positions. However, this shouldn’t be the main focus when making career decisions. It’s important to consider roles that may seem less glamorous or interesting but offer greater long-term impact or career capital.
Technical AI safety
Technical AI safety concerns itself with research about technical solutions that prevent unwanted behaviour from AI systems and aligning AI systems with human values. Research on this is done mainly in for-profit companies, non-profit organisations, academia, and by independent researchers. Typical outputs are research papers, product features, software tools, and technical demos. An important distinction within this field is between research scientist and engineering roles. Generally, research scientists concentrate on formulating research ideas, designing experiments, and analysis, while engineers focus more on implementation, system design, and scaling. Although the lines between these roles can be blurry, understanding this distinction is useful for skill development and career planning.
- Choosing a bachelor’s degree in a technical discipline like mathematics, physics, or computer science may be more beneficial than a bachelor’s in AI for those aiming to pursue a master’s degree in AI at the University of Amsterdam (UvA). These subjects tend to provide a more robust foundational knowledge that can be advantageous for advanced AI studies.
- The Netherlands is not ideally positioned for leading technical AIS work due to its smaller scale and lack of major AI companies, but it does have opportunities for other AI research and offers great educational opportunities.
- A crucial career decision in the field of AI involves choosing between focusing on a path as a Research Scientist or an Engineer.
- The competition in AI technical alignment is intense due to highly talented individuals competing for very few full-time roles. Evaluate your strengths and consider if you can compete at the top level. Focusing on a niche area where you can excel, like excelling in theoretical physics rather than being average in machine learning, might be a more strategic approach.
- Legible outputs are extremely valuable and can showcase your interests and skills to potential employers.
Technical AI studies
Possible Backgrounds for Technical Research
To pursue technical AIS research, developing a solid technical foundation and gaining research skills is essential. University degrees, particularly those specialising in Machine Learning (ML) and AI, are effective paths for acquiring these skills. The Netherlands hosts several universities and higher education institutions offering programs in ML/AI. We have listed here the ones we are aware of, although some might be missing. Additionally, related fields such as data science or applied mathematics, not explicitly listed here, are also worth considering. Beyond these, diverse disciplines like logic, philosophy, ecology, evolutionary biology, cognitive science, and neuroscience can also produce promising contributors to AI technical alignment.
Choice of Bachelor’s Degree
Contrary to intuition, a bachelor's degree in AI might not be the best choice. Bachelor's programmes may not prepare students for the technical depth required by some master's programmes, such as at the University of Amsterdam (UvA), or by technical research later on. Also, concepts from the bachelor's are often repeated in the master's, making the bachelor's partly redundant in terms of knowledge accumulated. Bachelor's degrees that provide a more in-depth technical education are maths, physics, or sometimes computer science. Hence, there is no strict best option for a bachelor's degree, and various backgrounds can lead to successful graduate studies. Nevertheless, master's programmes may require certain modules or prior knowledge, so ensure your bachelor's covers them.
Top University Choice
While many universities in NL offer AI study programmes, the clear leader in the field of deep learning is the UvA. UvA’s master’s programme equips students with a robust technical understanding of ML and modern deep learning, alongside the possibility to specialise in areas like natural language processing (NLP), reinforcement learning (RL), and computer vision (CV). Although other universities may not match UvA in terms of deep learning expertise and technical depth, they often provide more flexibility in course selection.
Our top recommendation
- University of Amsterdam: MSc Artificial Intelligence
Here is a comprehensive review (Lang, 2020); some details may be outdated, as the programme keeps evolving.
Other master’s degrees
(unranked, alphabetical order)
- Amsterdam, Vrije Universiteit: MSc Artificial Intelligence
- Eindhoven: MSc Artificial Intelligence and Engineering Systems
- Groningen: MSc Artificial Intelligence
- Leiden: MSc Artificial Intelligence
- Maastricht: MSc Artificial Intelligence
- Radboud: MSc Artificial Intelligence
- Tilburg: MSc Cognitive Science and Artificial Intelligence
- TU Delft: MSc Computer Science, Artificial Intelligence Technology Track
- Utrecht: MSc Artificial Intelligence
Bachelor’s degrees to consider
Note: We only list AI degrees here, but would again encourage you to consider other technical degrees such as maths, physics, or computer science (see discussion above).
(unranked, alphabetical order)
- Amsterdam, UvA: BSc Kunstmatige Intelligentie (Dutch)
- Amsterdam, Vrije Universiteit: BSc Artificial Intelligence
- Groningen: BSc Artificial Intelligence
- Leiden: BSc Data Science and Artificial Intelligence
- Maastricht: BSc Data Science and Artificial Intelligence
- Radboud: BSc Artificial Intelligence
- Tilburg: BSc Cognitive Science and Artificial Intelligence
- Utrecht: BSc Kunstmatige Intelligentie (Dutch)
PhD, yes or no?
Many people wanting to go into technical AIS research wonder whether to do a PhD, which is a difficult decision. Here are some factors to consider:
- Topic: When aiming for a PhD, prioritise fields closely related to AI alignment. If direct alignment research is not feasible, consider focusing on topics relevant to alignment, such as Large Language Models (LLMs) or Reinforcement Learning (RL). Computer Vision (CV) might also be relevant, though perhaps to a lesser extent. Additionally, similar to undergraduate degree choices, there are also non-ML subjects that can make valuable contributions to alignment, such as developmental biology or neuroscience.
- Compatibility: Research skills might transfer well between different areas, making personal motivation and fit with the supervisor also important factors to take into account.
- Need for Credentials: A PhD is sometimes needed for research roles at large tech companies, although this requirement varies across different firms. While research positions might require a PhD, engineering roles generally do not.
- Skill Acquisition: A PhD is a great opportunity to develop research skills, enhancing your capabilities in research and increasing your appeal to potential employers. It can be seen as an entry-level research job.
- Stability: A PhD program offers about four years of stable engagement as a research scientist, in contrast to research scientist jobs in industry, which can sometimes offer less job security.
The field of technical AI safety is currently very opportunity-constrained, meaning that even highly skilled people struggle to find jobs where they can have direct impact. The Netherlands is also not home to any big AI company (such as OpenAI, Google DeepMind, or Anthropic), limiting job opportunities even further. However, some companies do offer remote jobs, making work from the Netherlands possible. Additionally, the 80,000 Hours job board is a useful resource for finding remote positions. Entering academia is another option for doing AI safety work, and the Netherlands offers more opportunities in this domain. Independent research outside established organisations or academia is also a viable path.
When navigating a technical career path with the goal of transitioning into AI safety work, consider the advantages of gaining engineering skills in mainstream industry roles. Industry engineering jobs provide a platform to acquire essential software engineering and machine learning practices, emphasising speed, quality, and robustness. If you’re considering this path, also check out this guide on how to transition from software engineering into AI safety.
- Technical Benefits:
- Access mentoring and feedback from experienced developers.
- Exposure to good software engineering and machine learning practices in fast-paced environments.
- Develop mental models and problem-solving skills.
- Non-Technical Benefits:
- Accumulate financial capital for future endeavours and risk-taking.
- Learn to take ownership and deliver value for customers and the company.
- Exposure to agile methodologies and collaboration within teams.
- Option to reduce hours, enabling part-time engagement in safety work.
- Cultivate pragmatism in settings with deadlines and constrained resources.
- Drawbacks:
- Relevance of technical skills to safety work may be limited.
- Commercial incentives in mainstream industry may not align with personal learning priorities.
- Risk of value drift due to high tech salaries and personal security.
- Difficulty staying informed about safety-related developments alongside a full-time job.
- Delays in time to impact, which may be problematic for those who think we will soon have Artificial General Intelligence (AGI).
More generally, having a full-time job has advantages such as (1) honing the ability to learn quickly and excel at the job, (2) building opinions and preferences, (3) dealing with adversity and people, and (4) getting to know different/commercial environments.
In the realm of AI safety, the demand extends beyond ML and research engineers to encompass software and infrastructure engineers essential for large-scale R&D. Professional experience is highly valued by many safety organisations, serving as tangible proof of technical achievements and a readiness to contribute effectively, potentially reducing internal upskilling costs.
Below are considerations for choosing between big tech, medium-sized companies, and startups. Ultimately, this decision depends on personal preferences, career goals, and risk tolerance.
Big Tech
Pros:
- Abundant resources and infrastructure.
- Structured training programs and mentorship.
- Extensive networking opportunities.
- Prestige and brand recognition.
Cons:
- Bureaucracy and potential lack of autonomy.
- Specialised roles with potentially limited exposure/diversity.
- Competitive and high-pressure environment.
Medium-Sized Companies
Pros:
- Flexibility and diverse responsibilities.
- Visibility and direct impact on company success.
- Dynamic and agile working environment.
- Opportunities for career advancement.
Cons:
- Limited resources compared to larger companies.
- Potential market instability affecting job security.
- Networking opportunities may be limited.
Startups
Pros:
- Diverse responsibilities and skill exposure.
- Ownership, autonomy, and potential for equity.
- Innovative and agile work culture.
Cons:
- Higher risk and uncertainty.
- Limited resources and potentially lower salaries.
- Limited mentorship.
Industry & Academia
For the following pointers we selected companies and organisations with a good reputation that likely provide a conducive environment for skill-building and even safety-related contributions. Small to medium-size organisations are excluded due to relatively high variance in their quality of work. Naturally, there are many great AI companies and organisations we haven’t mentioned in this short list.
Tech & Consulting Companies:
- Booking.com (Amsterdam)
- Uber (Amsterdam)
- TomTom (Amsterdam)
- Qualcomm Research (Amsterdam)
- Xebia Data (various locations)
- Hugging Face (remote)
Applied & Industry Research:
- Microsoft Research (Amsterdam)
- Google Research (Amsterdam)
- Data Science & Responsible AI unit at TNO (Netherlands Organisation for Applied Scientific Research; Nederlandse Organisatie voor toegepast-natuurwetenschappelijk onderzoek)
- Civic AI Lab (Amsterdam)
In the long run, you might want to work directly at established safety organisations, which are often located in the US and UK but offer remote positions. The job boards by 80,000 Hours and Probably Good are frequently updated.
A short list of organisations that offer remote options:
- METR (formerly ARC evals) (Berkeley, remote)
- Elicit (Oakland, remote)
- FAR (Berkeley, remote)
- Research Collaborator at CHAI (Berkeley, remote)
- Future of Life Institute (remote)
Independent Research and Starting Your Own Organisation
If you’re interested in doing research outside academia, it is also possible to establish your career through self-study and independent work — during your free time or on scholarships designed for this (such as EA Long-Term Future Fund grants and Open Philanthropy support for individuals working on relevant topics). Independent research can make sense when no established organisation is working on the topic you find most promising, as a way to upskill for a job, to gain new research skills or to transition between jobs. However, it’s important to note that independent research can be difficult, as you have no default structure and you carry a lot of responsibility yourself. Personality traits such as self-discipline are very important in this career path.
Another option is founding a new organisation to tackle a pressing problem. Doing so involves identifying a gap in AIS, formulating a solution, investigating it, and then helping to build an organisation by investing in strategy, hiring, management, culture, and so on. Examples of AIS organisations founded in the Netherlands are:
- Catalyze: A non-profit providing support to individuals in setting up AI safety research organisations.
- Existential Risk Observatory: A non-profit aiming to reduce human existential risk by informing the public debate.
- Timaeus: An AI safety research organisation working on developmental interpretability.
If you want to start your own organisation, you can apply for funding, e.g. at the Long-Term Future Fund. To be clear, these options are typically much riskier and more difficult to pursue than joining an established organisation.
Based on expert interviews, we have identified the following pitfalls that people tend to encounter:
- Lack of Online Visibility and Clear Projects. Develop GitHub projects or technical reports to demonstrate skills and interests. Build a basic personal website for employers to view your achievements and interests. Summarise your expertise through tangible outputs.
- Optimising for early career impact. While a few people have outstanding achievements and contributions very early, the median person follows a less exceptional trajectory. Excellence requires years of skill and knowledge development. Career impact typically follows a heavy-tailed distribution, peaking in later stages, although some valuable contributions can already be made by PhD students.
- Overemphasis on Passive Learning. There’s a common tendency to passively consume content from sources like LessWrong and academic papers, without engaging in critical thinking. While reading is crucial for foundational understanding and staying informed, active processing is key. This can be achieved by writing about the subject matter or participating in meaningful discussions with peers. Don’t hesitate to reach out to researchers for paper clarifications or potential collaborations. Sharing even unrefined ideas and comments is beneficial. Stepping out of your comfort zone to interact with others can also help in forging valuable personal connections.
- Chasing Online Status. A tendency exists to prioritise gaining recognition on online forums, which is simpler than developing skills in mathematics and machine learning. This ‘status game’ often diverts focus from acquiring more substantial skills.
- Not diversifying interests and skills. Mainstream AIS research is very competitive and, hence, only a small fraction of applicants get hired. It can be valuable to have additional interests and skills (e.g., project or people management experience) to stand out.
- Neglecting mental health. Take care of your mental health: even if AGI timelines seem short, find a way of working and living that is sustainable.
- Financial insecurity. Building savings is beneficial. In competitive fields, hiring processes can take a long time, and financial stability lets you pursue high-impact opportunities without pressure. A lack of financial security can cause stress and lead to suboptimal career decisions made out of necessity.
Field building
Field building is a career path that supports AI safety on the meta-level by raising awareness and motivating talented individuals. The field of AI safety is still very young, and communities and organisations around the world are only now taking notice of the large risks posed by advanced AI systems. Very few people are working on mitigating AI risks, and even fewer are intentionally thinking about how to grow the field and ensure that stakeholders collaborate effectively to achieve their goals. Field building can be done for both AI governance and technical AIS.
There’s no specific route into field building. The key is to develop a deep understanding of the field, identify its needs, and take the initiative to address those gaps. To learn more about field building in AI governance, check out this course by BlueDot Impact.
In the Netherlands, various student groups are involved in AI safety field building.
Additionally, the Existential Risk Observatory is a Dutch organisation dedicated to increasing public awareness about the risks of AI.
Next Steps and Further Resources
Our career guide is a small contribution to the existing resources on AI safety, and we encourage you to check out other, more in-depth material. Below, we have listed a few resources to get you started, but don’t limit yourself to only those.
Also, consider going to an EAG(x) conference to learn more about AI safety and to build your network.
Introductions to AI safety
For the general public
- The case for taking AI seriously as a threat to humanity (column by Kelsey Piper)
- Intro to AI Safety (18-minute video by Robert Miles)
- Overview of the AI Alignment Landscape (Forum post by Neel Nanda)
- Compilation of podcast episodes (80,000 hours podcast)
For an academic audience
- Unsolved problems in ML Safety (Dan Hendrycks, Nicholas Carlini, John Schulman, Jacob Steinhardt)
- Recommended Readings (Centre for Human Compatible AI, UC Berkeley)
Newsletters
- AI Safety Newsletter (for a general audience, Center for AI Safety)
- ML Safety Newsletter (for an ML research audience, Center for AI Safety)
- EU AI Act Newsletter (Future of Life Institute)
Funding for technical AI safety and AI governance research
- Long-Term Future Fund – EA Funds: LTFF funds various activities, such as up-skilling in a field to prepare for future work; movement-building programs; scholarships, academic teaching buy-outs, and additional funding to free up academics’ time; funding to make existing researchers more effective; direct work in AI; and seed money for new organisations.
- Career development and transition funding – Open Philanthropy: This program aims to provide support for early-career individuals who want to pursue careers that help improve the long-term future.
- Grants – Future of Life Institute: Many different grant opportunities: project proposals; PhD fellowships; post-doctoral fellowships; and for professors to join their AI Existential Safety community.
- Effective Thesis – Want to use your thesis to work on solving the most important problems our world faces? ET provides free coaching and guidance to students, from undergraduate to PhD level, who want to begin research careers that significantly improve the world.
Jobs and internships
We want to thank (in no particular order) Jan-Willem van Putten, Tim Bakker, Leon Lang, Teun van der Weij, Sam Bogerd, Daan Juijn, Stan van Wingerden, Jesse Hoogland, Alexandra Bos, Jelle Donders, Marieke de Visscher, and James Herbert for their input and feedback.