Artificial Intelligence is rapidly gaining popularity and reshaping industries, societies, and economies worldwide. As AI continues to evolve, countries are racing to develop policies and regulations that foster innovation while ensuring ethical use. The European Union has recently drawn widespread media attention for its stringent AI regulations. Comparing the European Union’s AI policies with those of China and the United States reveals strikingly different philosophies and strategies.
European Union – The Regulatory Approach
The European Union AI Act, adopted in May 2024, emphasizes trustworthiness, transparency, and human-centric values. The Act was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024 as the first comprehensive European AI law. Most of its obligations apply within 24 months of entry into force, with some requirements phased in over 36 months. The Act establishes a legal framework to ensure that AI systems used in Europe are safe, respect fundamental rights, and comply with existing laws on data protection, privacy, and non-discrimination.
Key Components of the EU AI Act:
Risk-Based Classification
- AI systems are classified into four risk levels (unacceptable, high, limited, and minimal). A system’s level determines the obligations that apply to it, the severity of any non-compliance, and the corresponding penalties.
Human Oversight
- High-risk AI systems must include mechanisms for human oversight, ensuring that people can intervene in decision-making and that automated systems cannot make critical decisions independently.
Data Governance
- AI systems must be trained on high-quality, unbiased data sets, minimizing the risk of discrimination and helping to ensure fairness.
Transparency and Accountability
- Providers must supply clear information about the functioning and limitations of their AI systems, including logging and documentation that maintain accountability and traceability of AI operations (a minimal illustrative sketch of these ideas follows this list).
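To make the risk-based classification and logging requirements more concrete, here is a minimal, illustrative sketch of how an organization might tag its AI systems by risk tier and keep an audit trail of automated decisions. This is not an official implementation of the Act; the `RiskTier`, `AISystemRecord`, `compliance_gaps`, and `log_decision` names are hypothetical and chosen only for this example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
import json
import logging

# Hypothetical risk tiers mirroring the EU AI Act's four levels.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited uses
    HIGH = "high"                   # strict obligations, human oversight required
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no additional obligations

@dataclass
class AISystemRecord:
    """Hypothetical internal record describing a deployed AI system."""
    name: str
    purpose: str
    tier: RiskTier
    human_oversight: bool  # can a human intervene in the system's decisions?

def compliance_gaps(system: AISystemRecord) -> list[str]:
    """Return obvious gaps; a real assessment would be far more detailed."""
    gaps = []
    if system.tier is RiskTier.UNACCEPTABLE:
        gaps.append("use case falls in the prohibited (unacceptable-risk) category")
    if system.tier is RiskTier.HIGH and not system.human_oversight:
        gaps.append("high-risk system lacks a human-oversight mechanism")
    return gaps

# Simple structured log to support accountability and traceability.
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_audit")

def log_decision(system: AISystemRecord, decision: str, rationale: str) -> None:
    """Record each automated decision with a timestamp for later audit."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system.name,
        "tier": system.tier.value,
        "decision": decision,
        "rationale": rationale,
    }))

if __name__ == "__main__":
    screening = AISystemRecord(
        name="cv-screening-demo",
        purpose="rank job applications",
        tier=RiskTier.HIGH,
        human_oversight=True,
    )
    print("compliance gaps:", compliance_gaps(screening) or "none found")
    log_decision(screening, decision="shortlisted", rationale="matched required skills")
```

In practice the classification would follow the Act’s annexes and official guidance, and logs would be retained in line with its documentation requirements; the point of the sketch is simply that the risk tier drives the obligations and that decisions remain traceable.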
United States – The Innovative Approach
The U.S. lacks a centralized AI policy; instead, various initiatives and frameworks help guide AI development. Voluntary standards and guidelines, most notably the NIST AI Risk Management Framework, emphasize risk management and trustworthy AI, and they encourage industries to self-regulate and adopt best practices at their own discretion. The U.S. also relies heavily on the private sector to drive AI innovation: companies like Google, Microsoft, and OpenAI play pivotal roles in advancing AI technologies, and their research and development efforts often set de facto industry standards. Ethical guidelines are typically developed by individual companies or industry groups rather than mandated by federal regulation.
China – The State-Driven Approach
China pursues AI development through strong state involvement and ambitious national strategies, with the ultimate goal of global leadership in AI. The government invests heavily in AI research and development, supporting both state-owned enterprises and private companies, and has established “AI development zones” in 11 major cities to accelerate innovation in these technologies. AI is being integrated into smart cities, healthcare, and military applications to align the technology with China’s economic and strategic objectives. The government’s use of facial recognition and social credit systems for social governance has raised significant privacy and human rights concerns, and the widespread data sharing that underpins China’s efforts to strengthen its AI systems has raised further concerns about data privacy and security.
Concluding Points
These three entities focus on different priorities in AI development because each has its own values, political system, and strategic goals. The contrasts are clear, especially the EU’s emphasis on ethical governance compared with the market-driven US approach and the state-driven Chinese approach. As AI continues to transform societies, these differing approaches will shape not only the development of AI technologies but also the global discourse on ethical standards, privacy, and human rights. International dialogue on this topic is essential for balancing innovation and ethical considerations in future artificial intelligence systems.