The healthcare system is like an old mechanical clock: seemingly outdated yet incredibly intricate, with each component interlocking with the next. It is easy to take apart but hard to put back together.
Indeed, the healthcare system is widely seen as lagging behind other sectors of society, which makes building healthcare products with artificial intelligence (AI) seem like an “obvious” idea. Yet precisely because the idea is “obvious,” it may conceal significant challenges.
With good intentions, we aspire to use AI to assist physicians in decision-making and to ease the burden on patients. These aims are commendable. But good intentions alone do not guarantee good outcomes. The healthcare system is a complex system, and applying AI within it is not merely a technical problem but a systemic one; it demands an analytical, problem-solving approach rooted in systems thinking. In developing AI products for healthcare, the tangled relationships between causes and effects become especially apparent.
People in high-tech fields such as computer science often adopt the mindset that “new is always better.” The structure of the healthcare system has remained largely unchanged for hundreds of years; one could even argue it has seen no fundamental change for thousands, remaining essentially a labor-intensive, craft-like industry. To professionals in the high-tech industry, this looks like a hopelessly archaic way of operating. Surely, then, it should be “revolutionized”!
However, in my opinion, we should respect this “ancient” healthcare system. The logic that “new is always better” itself deserves questioning. Viruses are simple and ancient in structure, yet intelligent, complex human beings still struggle to defeat them. The healthcare system is a fundamental component of human society, touching the life of every individual. That such an ancient system has survived to this day suggests there is a reason behind it.
That reason lies in our cognitive processes. Humans are inherently reluctant to make decisions about the unknown. When we encounter a new service or product, even one that promises great benefits, we usually do not believe it right away. We do our own research and demand convincing evidence before opening our wallets. But with new products, and especially with specialized services like healthcare, we often lack the professional knowledge needed to make an informed decision. In such situations the mind falls into what psychologists call “decision paralysis”: people say, “Wait, let me figure things out first,” or “Let me see how things develop.”
However, the treatment of disease cannot wait. For a specialized service like healthcare, if we genuinely want to help patients, we cannot leave decision-making power solely in their hands; we need professionals, namely doctors, to make the decisions. The key is building trust between patients and doctors, so that patients listen to and trust their doctors’ advice and, in effect, transfer decision-making power to them. From a business-model perspective, the greatest challenge for specialized, service-oriented products like healthcare is establishing that trust. We already know that, when people face the unknown, and especially an innovative product, rational explanation and analysis alone are ineffective at guiding their decisions. To help patients make sound decisions, we must guide their minds along a different path: one that works through intuition and the senses.
This is precisely how the traditional healthcare system operates. Patients cannot judge a hospital’s expertise, but when they see a large hospital with a long history, an imposing outpatient building, and advanced medical equipment, they “SEE” a professional institution. Likewise, patients cannot verify a doctor’s competence, but when they see the high admission scores required by medical schools and the eight or more years of study needed to earn a doctoral degree, they “SEE” the doctor as an expert. Disease is often hidden inside the body, and patients usually cannot identify its cause themselves; but when they undergo CT scans, MRIs, and blood tests, and see the readings deviate from normal values, they “SEE” that they are truly ill. These steps, which can seem complicated, bureaucratic, and unrelated to the care itself, are what give patients trust and make them receptive to their doctors’ advice.
Therefore, when we apply AI technology to the healthcare system, our intentions are usually altruistic: we believe we are helping both doctors and patients. But if we consider only the technological or general business aspects of a specific task and overlook the systemic perspective, we may inadvertently disrupt the structures that hold the system up, namely its underlying business models. Without a systemic view of our work, we may ignore the complexity of the whole machine and the intricate interconnections between its gears. Especially when we do not even know how many gears there are, we might casually remove one, assuming we are simply swapping it for a new and “better” one, and in doing so bring the entire system down. Caution is therefore necessary.
Clearly, when developing AI products for healthcare, understanding users’ cognitive processes is crucial, and we must design our products around that understanding. AI product managers in healthcare should therefore have theoretical and practical grounding in user experience design, behavioral design, and business model design, and we need systems thinking to evaluate our products and services. On the one hand, traditional healthcare systems have many problems, and change is necessary. On the other hand, we should stay cautious, aware of our potential to make mistakes, and so plan not only for success but also for how to respond to failure. By understanding users’ cognitive processes and maintaining humility and prudence, we can genuinely help doctors and patients and contribute to a healthier future for humanity.