Medical artificial intelligence. Why can’t it be easy?

The scope of possibilities for AI in medicine and pharmaceuticals is remarkable. The industry is booming, and hundreds of millions of dollars are being invested. Medical artificial intelligence is one of the most promising tools we have for addressing healthcare's challenges, and it promises to be fast, efficient and economical. What are the risks of collecting and analyzing large medical databases? Why are not all projects equally useful? This article looks for answers to these questions.

Among the areas of development, the following showed satisfactory progress as of 2019:

  1. Diagnosis of diseases that require processing large volumes of images and standardized data, as in oncology or ECG interpretation (see the sketch after this list);
  2. Preliminary testing and screening of new drugs to analyze their effectiveness;
  3. Improving treatment strategies for existing drugs based on analysis of patient histories and laboratory data;
  4. Managing hospital patients and processing their data;
  5. Automatic report generation, including work with patient histories.
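
As a rough illustration of item 1, the sketch below trains a classifier on synthetic data standing in for pre-extracted ECG features. The features, labels and numbers are invented for the example and are not taken from any real diagnostic system:

```python
# Minimal sketch: a binary classifier standing in for the kind of model
# used for diagnosis from standardized data such as ECG features.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Pretend each row is a set of pre-extracted ECG features (heart rate,
# QRS duration, QT interval, ...) and the label marks an abnormal recording.
n_samples, n_features = 1000, 8
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# In a real diagnostic setting a single score like this would never be enough:
# calibration, subgroup performance and clinical validation all matter.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out ROC AUC: {auc:.3f}")
```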

According to analysts’ forecasts, these areas underpin the formation of a new market that will reach 34 billion US dollars by 2025. This is not surprising: complete, or even partial, automation of these areas could cut the cost of diagnosis and treatment by tens of percent.

However, the problems of developing AI and machine learning for medicine are not limited to engineering. They also include the interaction between academia and industry, questions of morality, and personal and collective responsibility.

Industry and science have different natures. Science is the place for research, fundamentals and analysis, while industry is traditionally where ideas are put into practice. But that is an idealized picture. In real life, scientific research does not always have practical potential, and industry sometimes produces products nobody needs.

Creating shared platforms that bridge these cultures and technologies would allow ideas to be assessed and products to be tested efficiently, and thus accelerate the development of AI in healthcare.

Machine learning is nothing like conventional software development. It requires a large number of experiments, which is typical of academic science. Bringing that experimental culture into industry will help deploy machine learning in production.
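
A toy example of that experiment-driven workflow is a hyperparameter search, where the same pipeline is retrained and measured many times before any configuration can be trusted. The dataset and parameter grid below are invented for illustration:

```python
# Minimal sketch of why ML work is experiment-driven: the same pipeline is
# rerun over many candidate configurations, and the best one is only known
# after measuring each of them. Dataset and grid are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [3, 5, None],
}

# Each grid point is a small experiment; conventional software rarely
# needs this kind of empirical search to know whether it "works".
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="roc_auc")
search.fit(X, y)

print("Best configuration:", search.best_params_)
print(f"Cross-validated ROC AUC: {search.best_score_:.3f}")
```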

Attracting open-minded, high-class specialists to develop new machine-learning-based solutions is extremely important for medical start-ups. It is not only about working with a team of medical practitioners, but also about drawing on the knowledge and experience of individual scientists.

Questions of morality run through the whole history of humankind, and ethical law has traditionally been considered an exclusively human prerogative. The existence of self-taught systems raises the issue of machine ethics: without it, our accustomed traditions and rules may conflict with the machine's conclusions. This problem is especially acute in medicine, where human life and health depend on third-party decisions. There are no working prototypes of such ethical systems yet.

Team ethics is very important when developing new solutions in medicine. The new era demands new responsibility from designers. A malfunction in a mobile app that counts pulse and calories results merely in inaccurate basic health data. A failure or error in an ECG diagnostic program can lead to a patient's death.

AI developers also bear moral obligations. Cases of deliberate harm by doctors or, for example, pilots are well known. A desire to cause harm and to affect the fates and lives of other people could lead a developer to write hidden code that fakes or modifies incoming or outgoing data at a certain moment.

Let's hope that the next technological revolution turns out to be the safest, most environmentally friendly and most beneficial one for us and for humankind as a whole. It is no accident that the word "responsibility" was heard more often than any other at the MWC 2018 sessions devoted to the development of AI.
