
EU strategy in the field of artificial intelligence

In February 2020, the European Commission presented a document outlining Europe's policy on artificial intelligence (AI), which aims to ensure the trustworthy and safe development of these technologies while respecting citizens' rights. It complements previously issued AI directives and is part of the EU's digital transformation initiatives.

The document presented by the European Commission in February 2020 is not the first to outline a strategy and vision for AI development in Europe. In 2018, the EU presented an AI development strategy and later a coordinated action plan based on it, covering the period until 2027.

The policies presented in February 2020 complement these earlier initiatives. Their main emphasis is on developing the AI ecosystem in the EU, combining the efforts of member states and preparing the legislative changes needed for this.

The European Commission plans to collect feedback and proposals and to hold public consultations on the new document, after which it invites EU member states to revise the previously developed coordinated plan and adopt a new version by the end of 2020. The ultimate goal, taking these proposals into account, is a common European approach to AI.

The document focuses on two areas:

  • Developing a set of policies and regulatory documents that would align AI development efforts at the European, national and local levels. These should allow businesses, together with the state, to mobilize their resources to create an "ecosystem of excellence" covering the complete production chain, from research and development to the deployment of AI-based solutions, including in the small and medium-sized business (SMB) segment.
  • Developing the key elements of a regulatory framework that creates a unique "ecosystem of trust." This framework will be aimed at ensuring respect for fundamental rights and consumer rights when AI systems are used, above all systems that carry high risks. The document emphasizes that the European Commission supports a human-centric approach.

TAdviser examined what steps the EU expects to take on both fronts and outlined them briefly below.

EU AI policies complement previously issued directives in this area and are part of digital transformation initiatives (photo: Venturecafecambridge.org)

Ecosystem of Excellence

Developing this direction involves focusing efforts on the research and development community. The existing landscape of competence centers in Europe is rather fragmented and, the authors of the document believe, lacks the scale needed to compete globally. It is therefore important to ensure greater synergy between the various European AI research centers. The European Commission will contribute to the creation of centers of excellence and testing centers, which could be funded by attracting national and private investment.

Special attention is paid to professional training. A number of measures are planned to attract leading scientists and professors and to offer world-class AI training programs.

The ecosystem-of-excellence direction also aims to ensure that every EU member state has at least one digital innovation center with a high degree of specialization in AI. Such centers can be supported, among other things, through the Digital Europe program.

A pilot scheme is also planned under which funding for AI innovations will be provided in exchange for shares in the developing companies. Total funding for the pilot is 100 million euros.

Another initiative is the creation of a public-private partnership in AI and robotics, intended to pool efforts, coordinate research and development, and interact with the digital innovation and testing centers discussed above.

Another initiative is the promotion and adoption of AI solutions in the public sector. The European Commission plans to launch a series of open and transparent dialogues, prioritizing health care, the administrations of small settlements and public service operators, in order to develop a plan to accelerate the development, testing and adoption of these technologies.

Ecosystem of trust

Building such an ecosystem relies primarily on creating a regulatory framework for AI, proceeding from the fact that AI, like other technologies, brings both new opportunities and new risks. Citizens fear being left powerless to protect their rights and safety when decisions are made with the involvement of algorithms, while businesses are concerned about legal uncertainty in this area.

A number of previously adopted documents touch on these problems and on the need to improve AI legislation, and the policies presented in February 2020 define their role in creating an ecosystem of trust, taking existing laws and legal acts into account. The European Commission notes that any legislative changes should be driven by clearly identified problems for which feasible solutions exist.

According to the European Commission, existing legislation could be amended to address the following risks and situations:

  • Effective application and enforcement of existing national and EU-level laws: this may require regulatory changes in some areas to ensure greater transparency of legislation;
  • The limited scope of existing EU legislation: it focuses on products rather than services, leaving a gap with respect to services that use AI;
  • The changing functionality of AI-based systems: existing legislation focuses on the risks present when a system enters the market and does not account for safety risks introduced by later changes in functionality;
  • Uncertainty in the distribution of responsibility between suppliers: the bulk of the responsibility lies with the maker of a solution, but it is legally unclear who is responsible when AI functions are added to the solution later by a third party;
  • Changes to the concept of safety: safety legislation must take into account not only the risks that products carry at the time of launch, but also the risks they may acquire later as a result of updates or self-learning.

Regulation

2024: EU passes AI law banning facial recognition systems from public spaces

On March 13, 2024, the European Parliament approved the Artificial Intelligence Act. The document aims to ensure safety and respect for fundamental rights in the use of AI while stimulating innovation. Among other things, the law prohibits the operation of AI-based facial recognition systems in public places.

The new rules prohibit certain AI use cases that threaten citizens' rights. These include, in particular, biometric categorization systems based on sensitive characteristics, as well as untargeted scraping of facial images from the Internet or CCTV footage to create databases. Bans are also imposed on emotion recognition in the workplace and in schools, social scoring, and predictive policing when it is based solely on profiling a person.

EU passes "Artificial Intelligence Act"

The use of real-time AI biometric identification systems is allowed only under strict safeguards and is subject to prior judicial or administrative authorization, for example in a targeted search for a missing person or to prevent a terrorist attack.

"High-risk" AI use cases include critical infrastructures, education and training, employment, essential private and public services (e.g., health care, banking), some law enforcement systems, migration, and democratic processes (including those affecting elections). Citizens will be able to file complaints about AI systems and receive clarification about decisions based on high-risk AI tools that affect their rights. In addition, transparency requirements regarding AI are introduced.[1]

2023: EU passes law regulating artificial intelligence

On May 11, 2023, the relevant committees of the European Parliament approved the draft EU Artificial Intelligence Act. The document, known as the AI Act, defines rules for the use and ethical development of services, systems and applications based on neural networks and AI algorithms.

The draft was approved by two committees of the European Parliament, the Committee on the Internal Market and the Committee on Civil Liberties. In the vote, 84 MEPs supported the rules, 7 voted against and 12 abstained. The document is set to become the first law in the world aimed at comprehensive regulation of AI. According to the authors of the initiative, such systems should be under human control and be safe, transparent, traceable and non-discriminatory.

The AI Act received approval from two committees of the European Parliament

The law sets out five key provisions. In particular, it prohibits the use of AI systems for real-time remote biometric identification of citizens in public places. In addition, neural networks and AI algorithms must not be used to recognize people's emotions; this ban applies to employers, educational institutions, law enforcement agencies and others.

The document prohibits the use of AI in predictive policing systems based on profiling, the location of certain persons or past criminal behavior. Another category is high-risk areas, in particular AI-based medical systems and security tools. The law also applies to generative AI platforms such as ChatGPT: operators of such services will have to comply with transparency requirements, disclosing information about copyrighted material used for training and indicating when specific content was generated by a neural network.[2]

Notes