OpenAlex · Updated hourly · Last updated: 19.04.2026, 17:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

On artificial intelligence in healthcare

2025 · 0 citations · Indian Journal of Ophthalmology · Open Access

Citations: 0 · Authors: 1 · Year: 2025

Abstract

From its earlier limited form, to general, and now the evolving super-intelligent forms, AI seems omnipresent, omnipotent, and omniscient…traits that we earlier ascribed only to the natural, mysterious, creative force of which we are all a part. But before we give AI this exalted description, and before any further comments on AI, there is a need to dwell a little on intelligence itself. Intelligence, akin to life, is all-pervading and relative, yet defies a concise and clear definition. The closest one could come to defining intelligence would be the ability to choose and act or react, in a measured manner, using innate, intuitive, imitational, or learned skills or information [or knowledge], with the purpose of self-preservation [physical, mental, social, and spiritual], self-learning, self-growth [obtaining values, skills, facilitative and manipulative, and accomplishments, understanding right from wrong], and problem-solving. That intelligence is not unique to humans and higher primates, but can be demonstrated and observed in beings down the taxonomical order, is common knowledge. Also, I would like to believe that human intelligence is intricately linked to and subconsciously influenced by the heart [emotional intelligence]. We now speak more about “artificial” intelligence [AI] than about intelligence itself, but if you concur with the above definition, the only thing AI has in common with natural intelligence is that it can “assimilate” skills [algorithmic, language, image recognition] and, having done so, solve, assist with, or create problems. With its limited capabilities, having no “heart”, and given the manner in which it is materialized by humans, AI would be a misnomer, diminishing the very essence of the indefinable and all-pervading universal intelligence. Hence, AI would be better termed coded intelligence [CI], or simulated intelligence through coding.
Since the first mention of AI by John McCarthy [1956],[1] the field has been constantly explored by a small group of researchers. However, over the past decade or so, and since the explosive demonstration of the capabilities of complex neural networks by a team led by Geoffrey Hinton [AlexNet, 2012],[2] AI has become an all-pervading influence on how we do things, from the very simple to the most complex. But when should AI be encouraged, applied, and exalted? Certainly not when it takes away simple joys like beautiful handwriting, genuine expression of heartfelt gratitude, creating an innately inspired musical note, bringing forth sketches and paintings of intricate beauty, or writing stories with a heart and a spirit; and neither when it allows and tempts you to order the most delicious but also the most unhealthy and addictive food, “tricks” you into spending on an ocean of things you will not use, or “invites” you to the most alluring but also the most time-draining stream of entertainment options, day in and day out. Acceptance of this form of AI could lead to self-harm [mental and physical] and the loss of simple joys, self-belief, focus, and purpose. Whether we should accept, encourage, and even pay for this form of harmful AI is a decision that is ours to make. In addition, we need to be mindful of the huge resources being consumed [data centers are expected to consume about 96 gigawatts of power by 2030] in keeping these “gigantic” machines, processes, and systems running, 24 × 7. Eric Schmidt, the former chief executive of Google and current chair of a pro-AI think tank, recently warned that AI’s soaring energy needs may outrun the country’s power grid. So, collectively, there is an overwhelming, hidden, and sometimes intangible cost to AI utilization that cannot be ignored.
AI should, on the other hand, be wholeheartedly accepted and encouraged in domains where arduous labor is involved [e.g., mining and construction], where complexity is involved [e.g., predicting earthquakes], where you want to pique your curiosity [e.g., solving advanced and unsolved mathematical problems], where you want to understand the depths [e.g., oceanography] and the limits [e.g., heliophysics] of our planet and solar system, and where there is a need to protect your people and land from incursions and invasions. But how about AI in our own field of medicine? There are mixed views on AI applications here, because, on one hand, it certainly has the potential to reduce hospital-acquired morbidity [e.g., from errors in prescription], alert us to potential allergies and drug cross-reactions, provide clues to consider other useful alternatives, lead us to consider the possibility of a rare disease, rapidly summarize published study reports, and allow detection of major flaws and manipulations in publications [e.g., those highlighted by Retraction Watch]. On the other hand, it reduces our chances of acquiring practical skills, exposes us to deskilling, encourages over-reliance on technology without assessing the long-term drain on resources [risks of hardware breakdown and maintenance, software upgrades, expenditure on human resources with limited skill sets], and perpetuates inequality in access to and affordability of healthcare services [which, ironically, it overpromises to reduce]. Added to these concerns are issues of lack of data standardization,[3] privacy, ethics, inadequate regulatory control, and AI hallucinations. Improvement in administrative and managerial processes [e.g., inventory maintenance] using AI is often conflated with its utility in the clinical workflow and management of patients, and one must be wary of this.
A careful evaluation would reveal that AI in healthcare [as in most other fields] is actually still in the hype phase of Gartner’s cycle of emerging technologies. This is evident from the fact that a vast majority of hospital stakeholders believe in the potential of AI but have themselves been very slow to adopt the technology. Early adoption may have its own perils, a recent example being Sweden’s switch back to traditional means of education after nearly a decade of hyper-digitized schooling efforts. The nascent stage of AI in healthcare is also evident from the fact that, while there have been thousands of publications on the topic, only a few AI models have been approved by regulatory bodies for use in the population. This alludes to the possibility, however harsh it may seem, that most researchers enter the field with a fear of missing out [FOMO] and are simply content with a few publications. Another question that needs to be critically addressed is: when human health is itself universally constant and similar, why are research and regulatory committees the world over encouraging work on similar topics, in silos? Agencies also need to address the issue of how quality and timely care would be provided once a diagnosis is made using AI. These comments hold for the current status and use of AI in managing retinal disorders as well.[4] The resources being spent on AI in healthcare using a narrow-band, unidirectional approach [AI may provide a diagnosis and advise referral but is incapable of addressing anything thereafter], if spent instead on training, reskilling, upskilling, reinforcing, regularizing, implementing, and incentivizing human intelligence [and monitoring outcome indicators], are likely to catalyze the creation of a workforce that would clone itself and be able to address the several steps and processes involved in managing a condition, rather than just one or two.
The result would be not only humane and holistic care throughout the journey of a patient but also, in the long term, manifold savings on healthcare expenditure. The hype surrounding AI, and the corresponding risk of wasteful AI research in healthcare, has also been highlighted by Wilkinson et al.[5] It is important to understand that, with all the constant talk and claims [e.g., a recent one saying that AI would replace most surgeons within the next five years], AI in healthcare may seem like a boundless elixir, while in reality it is only a limited remedy. The most exciting application of AI in healthcare seems to be its immense potential for predictive analysis of big data, which would in turn enable a better understanding of disease pathogenesis and its risk factors, translating into evidence-based implementation of preventive and promotive care. And yes, there is also the possibility of potent AI-led discoveries. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · COVID-19 diagnosis using AI · Artificial Intelligence in Healthcare