
AI in Healthcare

The world we live in is changing. Humanity is still facing the COVID-19 pandemic, and with it the way we communicate is changing as well. More and more, we hear about people using phones, tablets, or computers to stay in touch and share their lives at a distance or in isolation; these devices have also become a means of protection. We are online and looking for information. We probably don't even realize it, but it is often artificial intelligence that makes searching or writing easier for us. Whatever we do in the virtual world can be treated as data, whether in the form of an image, a video, a conversation, or the like. Individual data points have added value, and that value is all the greater when our health and lives are at stake. Today, it is possible to use various technologies in healthcare that accelerate development and innovation and ultimately help to save or prolong life, or to shorten the course of disease, all with the aim of improving quality of life. That is why it is important to pay attention to healthcare built on the values and technologies that are available today and already help in other sectors. Artificial intelligence represents such a change. In the following lines, I will share with you selected parts of the theory from my MBA thesis on the application of artificial intelligence in a hospital setting.

Artificial intelligence (hereinafter AI) can be considered one of the most interesting advances of our time. A few quick examples: AI can select the movies we like based on our profile, predict our taste in music with remarkable accuracy, let cars drive themselves to some extent, or power mobile applications that help us avoid undesirable life situations or even reverse diseases considered chronic and progressive. AI is multidisciplinary: it is a subfield of computer science with roots in, among other things, mathematics, logic, philosophy, psychology, cognitive science, and biology. The earliest AI research was inspired by a series of ideas dating back to the late 1930s, culminating in 1950 when the British pioneer Alan Turing published the paper Computing Machinery and Intelligence, which openly asks whether a machine can think. The term AI was first coined in 1956 by professor John McCarthy of Dartmouth College. In his research proposal, McCarthy suggested that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." The truth remains that AI is, at its core, about programming, and it can therefore be understood as an abstraction of computer science. The increase in its popularity, and thus in its capabilities, has much to do with the explosion of data from mobile devices, smartwatches, laptops, and more.

The use and application of AI in organizations are still relatively new, and this is all the more true in healthcare. AI provides an opportunity to optimize healthcare: costs are rising worldwide, and government and private payers are putting increasing pressure on services to be more cost-effective. At the same time, costs must be managed so that they do not adversely affect patient access, patient care, or health outcomes. We want to improve decision-making, avoid mistakes (such as misdiagnosis or unnecessary procedures), assist in ordering and interpreting the appropriate tests, and recommend treatment. All of this is based on data. We are in an era of big data: the world produces zettabytes of data every year. A combination of human expertise, data, and machine capabilities will move the quality of healthcare to unprecedented levels.


Data Science


Data science is a growing discipline that covers everything from data collection, processing, cleaning, and extraction, through analysis, up to modeling and building data products. It is a general term for the many techniques used to obtain knowledge and information from data for use in decision-making.

The term data science was formulated by William Cleveland in 2001 to describe an academic discipline that brings statistics and computer science closer together and includes AI and machine learning.

Data science is closely linked to statistics, since individual data are analyzed and different statistical methods are used in the analyses as needed.

Machine Learning


The pioneer of machine learning, Arthur Samuel, demonstrated AI in 1956 with a checkers-playing program running on an IBM 701. At the time, data was scarce. Almost 70 years later, data is ubiquitous and computers are far more powerful, which has helped AI and machine learning grow and progress. In our daily lives, we already see and work with systems built through machine learning. Because of the large amounts of data available, AI today relies heavily on machine learning, where machines can learn on their own. In 1959, Samuel defined machine learning as a field of study that allows computers to learn without being explicitly programmed. Machine learning was thus born out of pattern recognition and the idea that computers can learn without being programmed to perform specific tasks. Learning is driven by data: the system improves its decisions based on the nature of the learning signal or feedback it receives. Machine learning focuses on developing algorithms that adapt when presented with new data. It builds on the principles of data mining but can also derive correlations and learn new patterns from them. The aim is to mimic the human ability to learn from experience and to accomplish an assigned task with little or no external assistance.


Types of machine learning


Machine learning is categorized into four types:

  • Supervised learning

  • Unsupervised learning

  • Semi-supervised learning

  • Reinforcement learning


Supervised learning


In supervised learning, algorithms are presented with example inputs and their required outputs as training data, in order to learn the general rules that map inputs to outputs. The input data are called training data and are associated with known outputs (results); they drive the development of the algorithm. The model is created through a training process in which it makes predictions and corrects itself when those predictions are wrong. Training continues until the model reaches the required level of accuracy on the training data. Supervised machine learning algorithms can then apply what they have learned in the past to new data and so predict future events. The learning algorithm can also compare its output with the correct output to detect errors and adjust the model accordingly.

Supervised machine learning can be implemented using techniques such as classification, regression, decision trees, forecasting, support vector machines, and others. A small sketch of the idea follows.
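
To make the training process above concrete, here is a minimal sketch in Python using scikit-learn. The library choice, the synthetic data, and the "heart disease: yes/no" framing are purely illustrative assumptions of mine, not methods from the thesis; in a real hospital setting the features would come from medical records and the labels from known outcomes.

# A minimal sketch of supervised learning (classification) with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic labeled training data: each row stands in for a "patient",
# the label (0 or 1) for a known outcome such as "heart disease: yes/no".
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Hold out part of the data to check how well the learned rule generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Training: the model adjusts itself until its predictions fit the labeled examples.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Prediction on new, unseen data, compared with the correct outputs.
predictions = model.predict(X_test)
print("Accuracy on unseen data:", accuracy_score(y_test, predictions))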


Unsupervised learning


In unsupervised learning, the input data are not labeled, so the algorithm must determine their structure on its own. Because the data are unlabeled, accuracy cannot be evaluated against known answers. The model is instead developed by interpretation: finding hidden structures and drawing conclusions from the input data. This can be achieved by extracting rules, reducing redundancy in the data, or organizing the data into groups.

There are generally three unsupervised learning techniques: association, clustering, and dimensionality reduction. A small clustering sketch follows.
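
As an illustration of clustering, here is a minimal sketch in Python using scikit-learn. The two-dimensional synthetic data and the "blood pressure and BMI" reading of it are my own illustrative assumptions; the point is only that the algorithm groups similar records without ever seeing a label.

# A minimal sketch of unsupervised learning (clustering) with scikit-learn.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Synthetic, unlabeled data: imagine each point as a patient described
# by two measurements (e.g. blood pressure and BMI) -- illustrative only.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# KMeans groups similar points into clusters without using any labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(X)

print("Cluster assigned to the first five patients:", cluster_ids[:5])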


Semi-supervised learning


Semi-supervised learning is a hybrid in which the input contains a mixture of labeled and unlabeled data. The model therefore learns both to organize the data and to make predictions. A small sketch follows.
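
A minimal sketch of this idea in Python, using scikit-learn's LabelSpreading as one possible (illustrative) technique: most labels are hidden to simulate the common situation where expert labeling, such as a confirmed diagnosis, is expensive, and the model propagates the few known labels to the unlabeled records.

# A minimal sketch of semi-supervised learning with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading

X, y = make_classification(n_samples=300, n_features=6, random_state=0)

# Hide 90% of the labels; -1 marks a record whose label is unknown.
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.9] = -1

# The model spreads the few known labels across the unlabeled records.
model = LabelSpreading()
model.fit(X, y_partial)

# Compare the inferred labels with the ones we hid.
print("Labels recovered correctly:", np.mean(model.transduction_ == y))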


Reinforcement learning


Reinforcement learning lets an agent decide on the best course of action based on its current state, learning behavior that maximizes a reward. Optimal actions are usually learned through trial, error, and feedback, which allows the algorithm to determine the ideal behavior in a given context. Reinforcement learning is commonly used in robotics; for example, a robotic vacuum cleaner learns to avoid collisions by receiving negative feedback when it hits tables and chairs. Reinforcement learning differs from supervised learning in that correct input/output examples are never presented; the emphasis is on performance in the real world. A toy sketch follows.
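
To show trial, error, and feedback in the smallest possible setting, here is a tabular Q-learning sketch in Python. The one-dimensional "corridor" world, the reward scheme, and all parameter values are my own illustrative assumptions; real robotics problems are far more complex.

# A toy reinforcement learning example: the agent starts at position 0
# and is rewarded only when it reaches the goal at the right end.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = step left, 1 = step right
goal = n_states - 1
q = np.zeros((n_states, n_actions)) # Q-table: value estimate per state/action
alpha, gamma, epsilon = 0.5, 0.9, 0.2

rng = np.random.default_rng(0)
for episode in range(200):
    state = 0
    while state != goal:
        # Trial and error: explore sometimes, otherwise follow the best known action.
        if rng.random() < epsilon or q[state, 0] == q[state, 1]:
            action = rng.integers(n_actions)
        else:
            action = int(np.argmax(q[state]))
        next_state = max(0, state - 1) if action == 0 else min(goal, state + 1)
        reward = 1.0 if next_state == goal else 0.0
        # Feedback: nudge the estimate toward reward + discounted future value.
        q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
        state = next_state

# After learning, the best action in every non-goal state is "step right".
print("Learned action per state (1 = step right):", q.argmax(axis=1)[:goal])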


AI Applications (some publicly available examples)


AI is unlikely to replace doctors and nurses, but AI and machine learning are transforming healthcare and having a positive impact on patients' health. Here are some examples.


Example 1: Imagine that you need to see a doctor because of chest pain. After hearing your symptoms, your doctor enters the information into a computer. That information becomes part of an electronic medical record, which is made up of many individual records. Across these records, it is possible to look for associated factors that help in efficiently diagnosing the pain and determining its treatment.


Example 2: Another example comes from magnetic resonance imaging, which is used in differential diagnostics. An intelligent computer system helps the radiologist identify findings of concern that are too small for the human eye to see.


Example 3: Digital therapy helps people with type 2 diabetes and prediabetes reverse their condition. The application provides customized education and integrated health monitoring, so the progress of an individual or a group can also be tracked.


Example 4: Patients repeatedly returning to hospital is a major challenge. Individual physicians, as well as public and private organizations, strive to keep patients healthy, especially after they return home from the hospital. Today, there are digital health coaches, similar to virtual assistants. The assistant asks questions about the patient's medication, reminds them to take their drugs, asks about signs of their condition, and provides the doctor with the relevant information.

 

In closing, although AI offers a different perspective and is both exciting and challenging at the same time, it needs to be subject to regulation to ensure safety and fairness in its use. As an example, consider the regulation of new drugs: new medicines must first undergo rigorous scientific experimentation and testing, and their medium- and long-term effects must be monitored. Care and caution are paramount in this area, because if something goes wrong, many people may suffer significant harm. The same goes for AI.


Marek
