Pioneering AI risk research institute closes after nearly 20 years
The Future of Humanity Institute (FHI), a leading center for research on existential risks, including those posed by artificial intelligence (AI), has announced its closure. Founded by Swedish philosopher Nick Bostrom at Oxford University in 2005, the institute spent nearly two decades investigating potential threats from AI. According to a statement on its official website, the closure stems from mounting administrative challenges within Oxford University's Faculty of Philosophy, FHI's institutional home.
Administrative hurdles led to institute's demise
According to the organization's announcement, the Faculty of Philosophy imposed a freeze on fundraising and hiring in 2020 and, by late 2023, decided not to renew the contracts of the remaining FHI staff. The institute formally ceased operations on April 16, 2024.
Contributions and funding sources
FHI's funding came from various sources, including Elon Musk, the European Research Council, and the Future of Life Institute. Over its 19-year existence, FHI produced research that shaped discussions about humanity's long-term future. The institute helped formulate concepts such as existential risk, effective altruism, AI alignment, longtermism, and AI governance, and it also contributed to work on global catastrophic risk, grand futures, information hazards, the unilateralist's curse, and moral uncertainty.
AI concerns and impact of FHI's closure
Concerns about AI peaked in 2023, when the Future of Life Institute released an open letter urging a six-month pause in AI development, citing the technology's rapid and potentially hazardous advancement. Although the practical impact of FHI's closure remains uncertain, it will likely be celebrated by techno-optimists, effective accelerationists, and other AI enthusiasts eager for rapid development and deployment, a group that views "AI safety" and effective altruism as obstacles to technological progress.