Published: 2024/07/18 18:45:57

Chat Bots in Healthcare: How to Protect Users' Interests

Thanks to the rapid development of technologies in the field of artificial intelligence (AI), the question of ethical principles for AI, especially for chatbots in the healthcare sector, has become urgent.



At the legislative level, relations connected with the ethical aspects of AI technologies are currently only loosely regulated. On the one hand, there is a large body of disparate, unsystematized legal norms whose provisions affect this area to one degree or another and must therefore be strictly observed. On the other hand, a Code of Ethics of Artificial Intelligence has been developed and endorsed by the largest players in the Russian AI market, but it is, unfortunately, only advisory in nature.

At the same time, healthcare is special "territory": the risk of AI having a negative impact on human life and health is higher here. In this article we explain how to identify the boundaries of what is permissible and stay within them.

What the Code of Ethics of Artificial Intelligence (the Code) recommends

1. Comply with the law

Despite the obviousness of this recommendation, it can safely be called fundamental. However flawed the legislation may be, that is no grounds for violating it.

AI market participants should know and comply with the provisions of Russian legislation in all areas of their activity and at all stages of the creation, implementation and use of AI technologies, including in matters of their own legal liability.

2. Conduct risk and humanitarian impact assessments

AI market participants are advised to conduct a thorough assessment of the potential risks associated with the use of AI systems, including:

  • analysis of possible social consequences for the person, society and the state;
  • analysis of the humanitarian impact of such systems on human rights and freedoms at various stages of their life cycle, starting with the formation and use of data sets.

To improve the effectiveness of these procedures, it is recommended to monitor emerging risks on an ongoing, long-term basis. In some cases, especially when AI is used in critical applications (healthcare applications can safely be placed in this category), it is proposed to involve independent third parties or officially authorized bodies to assess risks and prevent potential threats.

3. Implement a risk-based approach

The authors of the Code are convinced that the level of attention to ethical issues in the field of AI, and the actions of the people and companies involved in AI, should correspond to the level of risk that these technologies can pose to society.

In this regard, it is recommended to develop and use methods for assessing AI-related risks that cover both already known and potential threats, and to analyze not only the likelihood of their occurrence but also their possible consequences, in both the near term and the longer term.

4. Implement voluntary certification

Developers of AI-based products can implement voluntary certification of these technologies' compliance with current legislation and the Code. For this purpose, it is proposed to create systems for the voluntary certification and labeling of AI systems that indicate completion of the voluntary certification procedure and confirm quality standards.

Legislative restrictions on "living intelligence" that apply to AI

True, there are no direct legislative norms governing the limits permitted for AI technologies in healthcare. But we can use as reference points the maximum permissible restrictions established by law for remote consultations carried out by so-called "living intelligence": the restrictions imposed on doctors providing care using telemedicine technologies. After all, what is not allowed to a human, the legislator will certainly not allow artificial intelligence either.

Thus, when providing care in the format of a telemedicine consultation, a doctor:

  • may change treatment previously prescribed to the patient only if the diagnosis was established and the treatment prescribed at an in-person appointment;
  • may recommend no more than preliminary examinations if the patient has not yet been diagnosed or prescribed treatment;
  • may advise only patients over the age of 18;
  • may not advise patients with infectious diseases;
  • may not advise on diseases for which medical care is provided in emergency or urgent form;
  • if the patient is assigned health monitoring via a special mobile application, must first explain to the patient the rules for using the application, the actions the patient must take independently if health indicators deviate from threshold values, and the need for strict compliance with these rules.
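As a purely illustrative sketch, the restrictions above can be encoded as a simple rule check that an AI consultation service might run before responding. The class, field names and wording below are our own assumptions for illustration, not drawn from any statute:

```python
from dataclasses import dataclass

@dataclass
class TelemedicineRequest:
    """Hypothetical description of a requested remote consultation."""
    patient_age: int
    diagnosed_in_person: bool   # diagnosis/treatment established at an in-person visit
    infectious_disease: bool    # consultation concerns an infectious disease
    emergency_condition: bool   # care must be provided in emergency or urgent form
    action: str                 # e.g. "change_treatment" or "recommend_preliminary_exams"

def check_restrictions(req: TelemedicineRequest) -> list[str]:
    """Return the restrictions the requested consultation would violate (empty if none)."""
    violations = []
    if req.patient_age < 18:
        violations.append("patient must be over the age of 18")
    if req.infectious_disease:
        violations.append("remote advice on infectious diseases is not permitted")
    if req.emergency_condition:
        violations.append("emergency/urgent conditions require in-person care")
    if req.action == "change_treatment" and not req.diagnosed_in_person:
        violations.append("treatment may be changed only after an in-person diagnosis")
    return violations
```

A service could refuse, or escalate to a human specialist, whenever the returned list is non-empty; the point is that such legal restrictions are mechanically checkable and should be enforced before the AI produces advice.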

This list of restrictions does not exist in any single consolidated form; it has been compiled from various laws and by-laws.

Of course, the above list of restrictions is not exhaustive, but it can be relied upon when building a development strategy, determining possible functionality and launching AI-based products in the field of medical consulting. And it is safe to say that AI technologies that would allow users and specialists to circumvent such restrictions will certainly be both outside ethical boundaries (see paragraph 1 above) and outside the law.

Forecasts and Recommendations

Having examined the existing guidelines and potential restrictions, one can try to predict the further development of industry legislation and the legal prospects of projects creating AI-based technologies in the healthcare sector.

1. Introduction of the experimental regime

Thanks to the adoption in 2020 of Federal Law No. 258-FZ "On Experimental Legal Regimes in the Field of Digital Innovation in the Russian Federation," the most effective way to adapt the existing legislative system to the rapid development of technologies has been introduced into law enforcement practice.

That is why the next step AI market participants should expect from the regulator is the introduction of an appropriate experimental regime, within which a precise list of AI technologies permitted for advising users on health issues and receiving medical care, and the conditions for their use, will be determined.

At the same time, one should not expect sensational changes. Although the boundaries of the permissible are indeed being expanded within the experimental regimes already in force, this does not mean that new technologies will be developed and applied under a complete waiver of legislative requirements. In any case, deviations from general regulation under such a regime will be kept to a minimum.

2. Creation of a compensation fund for participants in the experimental regime

Both AI market participants and the regulator regard personal data leaks as the key risk of introducing AI-based technologies into the medical sphere. According to industry representatives, if the experimental regime mentioned above is introduced, its participants should be required to create a compensation fund. Users would then be able to obtain information about the use of their data, file complaints and receive compensation if they suffer harm.

3. Introduction of liability insurance

Another problem discussed by industry participants is how to apportion responsibility between developers and service providers for harm caused to users by AI-based technologies in the field of health advice and medical care. The proposed solution is liability insurance for each participant in this chain. Although there is no mandatory insurance requirement, AI technology developers and market participants can already take out liability insurance on a voluntary basis, and at this stage this tool is the most effective way to meet ethical standards.

4. Careful drafting of terms of use and correct disclaimers

The introduction of new products and technologies is not a one-way process. Users should not simply be tempted by the opportunity to simplify and improve their lives; they should also clearly understand the risks that this opportunity may carry. That is why the key recommendation today for developers and providers of AI products is the careful drafting of terms of use and of correct disclaimers that protect the interests of users as well as of the product's creators and owners. The competencies of technical and commercial specialists alone are not enough to achieve this goal, so involving professional lawyers is strategically justified.

To conclude, I would like to quote Paracelsus, the Swiss physician, alchemist and natural scientist of the Renaissance: "Everything is poison, and everything is medicine; only the dose determines which." The introduction of AI technologies into healthcare and health counseling is inevitable. The task of everyone involved in this process is to strive for a balance in which these technologies bring maximum benefit without harming users.

Author: Elena Shershneva