After President Biden's initial AI Executive Order, which aimed to keep the United States at the forefront of artificial intelligence innovation while addressing the technology's potential risks, a new announcement has further stirred the AI community. The recent executive order has been described as broad in scope but lacking depth without the necessary legislative support behind it.
Recap of the First Executive Order
As previously reported, President Biden's first AI Executive Order emphasized:
- Safety and Security: Requiring AI developers to share safety test results with the U.S. government and setting rigorous standards for AI system testing.
- Protecting Privacy: Urging Congress to pass data privacy legislation and promoting privacy-preserving techniques in AI.
- Championing Equity and Civil Rights: Providing guidance to prevent AI algorithms from perpetuating discrimination and bias, especially in the criminal justice system.
- Empowering Consumers and Workers: Harnessing AI's benefits in healthcare and education while addressing job displacement and labor standards.
- Boosting Innovation and Global Leadership: Catalyzing AI research nationwide and collaborating with other nations on AI governance frameworks.
- Government's Responsible Use of AI: Setting clear standards for government agencies using AI and modernizing federal AI infrastructure.
The Latest Executive Order: Broad but Not Deep
The recent executive order has been seen as a stopgap measure that endorses the "voluntary" practices many companies are already implementing. While it encourages sharing results, developing best practices, and providing clear guidance, it lacks legislative remedies for potential AI risks and abuses. The industry's rapid evolution makes it difficult for any rule to remain relevant by the time it is passed.
Senator Mark Warner of Virginia expressed his views on the order, stating that while he was impressed by its breadth, many sections merely "scratch the surface." He emphasized the need for additional legislative measures to address areas like healthcare and competition policy.
Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, acknowledged the importance of the message President Biden is sending about the immediate risks posed by certain AI systems. However, he also stressed the need for a regulatory process that requires companies to demonstrate the safety and effectiveness of their AI products.
Sheila Gulati, co-founder of Tola Capital, appreciated the executive order's intention to balance innovation promotion with citizen protection. She emphasized the importance of AI explainability, risk-based approaches, and security and privacy.
While the recent executive order is a step forward in addressing the challenges and opportunities AI presents, it underscores the need for comprehensive legislation that can keep pace with the field's rapid advancements. As the AI landscape continues to evolve, it remains to be seen how the U.S. government will further shape its AI policies and strategies.