A history of AI regulations
AI regulation has been a source of contention for some time. It entails establishing policies and legislation to both promote and govern artificial intelligence. Below is a timeline of notable efforts toward those goals.
The GDPR
The European Union's General Data Protection Regulation (GDPR) came into effect in 2018 and includes provisions affecting AI, most notably wording widely interpreted as a "right to explanation" — essentially a right to be given an explanation for an algorithm's output.
Canada’s Bill C-27
In June 2022, Canada presented Bill C-27, also known as the Digital Charter Implementation Act, 2022, which implements the Artificial Intelligence and Data Act (AIDA), Canada’s first artificial intelligence legislation.
The act is intended to encourage the responsible adoption of AI technologies by individuals and businesses.
The AI Act
Proposed by the European Commission in 2021, the AI Act is a European Union law on artificial intelligence and would be the first comprehensive AI law passed by a major regulator.
It divides AI applications into three risk categories: applications that pose an unacceptable risk are prohibited, high-risk applications are subject to rigorous requirements, and low-risk applications are subject to transparency obligations.
Other efforts to promote ethical development and use of AI
In addition to these regulations, numerous efforts have been made to support the ethical development and use of AI. In 2016, several tech companies, including Google, Amazon, Meta (then Facebook), IBM, and Microsoft, founded the Partnership on AI to promote public awareness of AI and define best practices for its use.
Another example is the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, launched in 2016 to develop standards for the ethical design and development of autonomous and intelligent systems.