As technology advances, artificial intelligence is making its way into more and more areas of society every year, from healthcare to transportation. In Russia, the use and implementation of artificial intelligence are not yet well regulated by law, so there are barriers to introducing the technology into everyday processes. The authorities are currently finalizing regulations so that over time these barriers can be removed. In an interview with TASS, Deputy Minister of Economic Development Oksana Tarasenko spoke about the legal conflicts that will have to be overcome on the way to introducing artificial intelligence into the lives of Russians, what is hindering the development of AI in Russia, and which countries’ experience has been the most successful.
— Oksana, artificial intelligence already exists, but there are no laws regulating it. Is that correct?
— So far, there is only the National Strategy for the Development of AI until 2030, approved by the Russian President, and the recently adopted Concept for the Regulation of Artificial Intelligence and Robotics. We are now developing a schedule of regulatory legal acts for the development of AI for the period until 2024. It contains about 80 normative legal acts, all divided into groups. The first includes general bills on personal data, regulatory sandboxes, civil liability, and so on. The second covers the effective implementation of AI in industries where regulatory barriers currently exist. In the near future we plan to finalize the plan and approve it at the end of 2020 or the beginning of 2021.
— What issues do you think are important to regulate in the first place? How does the lack of regulation hinder the process? It provides more freedom, doesn’t it?
— No, it does not. Take, for example, the regulation of legal relations in the field of personal data. Today’s norms are somewhat outdated and often do not correspond to best international practice. This delicate area needs to be adjusted so that a balance is struck between protecting the interests of individuals and advancing technology. In particular, we want to create a mechanism for anonymizing data so that it is impossible to identify the person the data belongs to, even by cross-referencing datasets. We are currently working on that.
Another area is the law on so-called regulatory sandboxes, adopted in July. We are now actively working on the regulatory legal acts needed to flesh it out, which will detail the regulation of specific areas, in particular industry legislation affected by the introduction of AI.
— What else hinders AI development in Russia?
— There are difficulties with the low efficiency of integrating AI systems with existing information systems, problems in assessing the economic effect of introducing AI systems, and risks of artificial monopolization of certain smart technologies. We are currently developing standards that will support AI by removing such regulatory and technical barriers.
— In this regard, is Russia following its own path or is it borrowing the experience of other countries?
— We do, of course, draw on the experience of our foreign colleagues, among other things, analyzing and working through both their positive and negative results.
In China, for example, experiments with algorithms and business models using AI are already underway. Experience shows, however, that a national strategy for the development of artificial intelligence acts only as a catalyst for social relations; the development of AI itself, its widespread introduction and use, still lie ahead. The Chinese plan for the development of AI technologies only notes the need to develop laws and ethical standards for the technology, and provides for the creation of specific AI laws by 2024.
India’s 2018 national strategy emphasizes that artificial intelligence is a “black box”, that AI must be controlled and developed responsibly, and considers regulation in the context of ethics and safety. Only in 2019 did another Indian strategic planning document devote a separate section to the need to adjust data legislation, antimonopoly and consumer-protection legislation, and certain industry-specific legal acts.
— Which regulatory model is closest to us?
— The closest to us is, of course, the experience of our European colleagues. The White Paper on Artificial Intelligence, adopted by the European Commission in 2020, can rightly be called the only analogue of the Russian concept. It sets out rules for using artificial intelligence.
In particular, it envisages that when AI is deployed in high-risk areas such as healthcare and transportation, artificial intelligence systems must be transparent so that they can be controlled by humans.
In parallel with the European Commission, CAHAI (the Ad hoc Committee on Artificial Intelligence) is carrying out very important work. Its area of responsibility is the development and implementation of a legal framework for AI technologies, taking into account EU standards. This work envisages a generally accepted definition of AI, a mapping of the risks and opportunities associated with AI, especially its impact on human rights, the rule of law and democracy, and the possibility of moving toward a legally binding framework.