AI policy must be based on 'science, not science fiction'
What's the story
Fei-Fei Li, a prominent Stanford computer scientist and a start-up founder, has outlined three key principles for future artificial intelligence (AI) policymaking.
The guidelines were shared ahead of the upcoming AI Action Summit in Paris.
Li stressed that AI policies should be based on "science, not science fiction," calling on policymakers to focus on the present state of AI rather than speculative future scenarios.
AI comprehension
Li calls for a realistic understanding of AI
Li emphasized that policymakers need to understand that chatbots and co-pilot programs lack intentions, free will, or consciousness.
She believes this clarification is important to prevent "the distraction of far-fetched scenarios" and to focus on "vital challenges."
It reinforces her call for grounding AI policy in a realistic view of the technology as it exists today.
Policy pragmatism
Pragmatism over ideology in AI policy
Li's second principle for AI policy is that it should be "pragmatic, rather than ideological."
She advocates for policies designed to "minimize unintended consequences while incentivizing innovation."
This approach emphasizes practicality and foresight in regulating AI, so the technology can keep evolving without creating unforeseen problems.
Inclusive policies
Li advocates for inclusive AI policies
In her final principle, Li argues that AI policy should empower "the entire AI ecosystem — including open-source communities and academia."
She cautions against limiting access to AI models and computational tools, saying that such restrictions could create barriers and slow innovation.
This concern is especially relevant for academic institutions and researchers, who typically have fewer resources than their private-sector counterparts.