AI: Use in the Biomedical Field



Understanding the relationship between symptoms and diagnosis is central to the diagnostic process. A health knowledge graph represents the connections between symptoms and diseases and can support diagnosis prediction tasks. Chen and Agrawal et al. present results evaluating health knowledge graphs and examining which features matter for accurate diagnosis prediction. They identify major sources of error, including unmeasured confounders and small sample sizes, and discuss new methods for robust medical knowledge extraction from electronic health records. Diagnosing a disease is not always straightforward; some disease states require multiple tests and longitudinal evaluation before a diagnosis is established. Artificial intelligence may help diagnose a disease state sooner by learning predictive patterns that humans may miss. For example, Alzheimer's disease is a neurodegenerative disorder marked by progressive loss of cognitive ability, and researchers hope to better predict an early diagnosis based on patient data.5 Lodewijk Brand and colleagues used a tensor-based joint classification and regression model to make Alzheimer's disease diagnostic predictions from longitudinal multi-modal data, including imaging, cognitive testing, and genetic data generated by the Alzheimer's Disease Neuroimaging Initiative.5 The use of longitudinal, multi-modal data for model creation is an important requirement for applying artificial intelligence to medicine, as such data is representative of real-world use cases.
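As a toy illustration of how a health knowledge graph can support diagnosis prediction, the sketch below ranks candidate diseases by the fraction of their known symptoms that a patient reports. The diseases, symptoms and edges are invented for illustration; real graphs, like those evaluated by Chen and Agrawal et al., are learned from electronic health records.

```python
# Toy symptom-disease knowledge graph: each disease maps to the set of
# symptoms it is known to cause (edges invented for illustration only).
GRAPH = {
    "influenza": {"fever", "cough", "myalgia", "fatigue"},
    "migraine": {"headache", "nausea", "photophobia"},
    "pneumonia": {"fever", "cough", "dyspnea", "chest pain"},
}

def rank_diagnoses(symptoms):
    """Rank diseases by the fraction of their known symptoms reported."""
    reported = set(symptoms)
    scores = {
        disease: len(reported & known) / len(known)
        for disease, known in GRAPH.items()
    }
    # Highest-scoring candidate diagnoses first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_diagnoses(["fever", "cough", "fatigue"]))
# → [('influenza', 0.75), ('pneumonia', 0.5), ('migraine', 0.0)]
```

Real systems replace this overlap score with learned edge weights and must handle the confounders and sample-size issues the authors describe.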
In the acute care setting, such as the intensive care unit, predicting which patients are improving and which require increased vigilance could help inform clinical care. Yu et al. explore techniques from natural language processing to analyze electronic health records (EHR) for predicting mortality risk in ICU patients, outperforming existing severity scoring systems such as SAPS-II. They use a bag-of-words representation of the EHR to deal with missing data, Latent Semantic Analysis to reduce dimensionality, and bidirectional Long Short-Term Memory (LSTM) networks as regression models. They conjecture that the superior performance is due to the bidirectional nature of their model.6 Predicting future disease and healthcare costs could allow targeted preventative care for at-risk patients. To that end, Xianlong Zeng et al. developed a multilevel self-attention model to account for the dependencies and temporal relationships between medical diagnoses as represented by medical codes. They applied their model to two large real-world datasets to predict disease and healthcare costs with improved performance compared to previous models. Along the same lines, Haoran Zhang and colleagues also sought to predict which patients were at risk of becoming high-cost users; however, their model used free-form text data from the electronic medical records of family care practices in Ontario. As much medical data is free-form, in the shape of unstructured medical notes, building models that can handle this type of data has the potential for many applications. Predicting treatment outcomes is important for patient care, since these predictions can inform clinicians deciding whether a treatment course should continue or be changed.
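The bag-of-words step in a pipeline like Yu et al.'s is simple to sketch with the standard library (the vocabulary and note text below are invented; their full pipeline additionally applies Latent Semantic Analysis and a bidirectional LSTM, which are omitted here):

```python
from collections import Counter

def bag_of_words(note, vocabulary):
    """Map a free-text clinical note to a fixed-length count vector.

    Tokenization here is naive whitespace splitting. Unlike a sequence
    model, this representation has no notion of a 'missing' position,
    which is one way it sidesteps missing-data issues.
    """
    counts = Counter(note.lower().split())
    return [counts[term] for term in vocabulary]

# Invented vocabulary and note for illustration.
vocab = ["fever", "sepsis", "stable", "intubated"]
note = "Patient febrile overnight, fever persists, now intubated"
print(bag_of_words(note, vocab))  # → [1, 0, 0, 1]
```

In the actual pipeline, vectors like this would be reduced with Latent Semantic Analysis before being fed to the bidirectional LSTM regressor.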
In treatment-resistant major depressive disorder treated with deep brain stimulation (DBS), there are no objective measures of treatment outcome; rather, a clinician-administered measure based on the patient's self-reported symptoms is used. Prediction using this clinician-administered measure is further complicated because patients often follow a non-linear course on their way to improvement.9 However, recent research has suggested that facial expressions and other psychomotor attributes may help predict DBS response in patients with major depressive disorder. In their paper, Harati and colleagues leveraged a joint state-estimation and temporal-difference learning approach to model patients' treatment response from audio-visual features extracted from weekly video recordings taken during DBS treatment. Their results hold promise for the development of an objective measure for predicting DBS efficacy in patients with major depressive disorder.
Artificial intelligence for improved insight into disease pathogenesis and features
Artificial intelligence can help make sense of large multifactorial disease datasets and discover patterns that would not have been easily discerned otherwise.

In cancer, somatic genomic alterations (SGAs) disrupt normal cellular pathways, which can lead to oncogenesis. Thus, elucidating the functional impact of SGAs can inform our understanding of cancer development. Tao et al. propose a genomic impact transformer (GIT) model, an encoder-decoder deep neural network architecture that achieves state-of-the-art dimensionality reduction of SGA patterns.10 They apply the model to 4,468 tumors profiled by the Cancer Genome Atlas (TCGA) project and find it outperforms existing models in capturing the relationships between SGAs and differentially expressed genes.10 Moreover, they find that the latent representation given by GIT is predictive of tumor status, survival time and drug response.
In medicine, a "disease" entity may actually have many subtypes, with different features and outcomes. Identifying these subtypes and outcomes depends on having a large enough dataset and an ability to find patterns. Vandromme et al. present an automatic method for phenotyping patients with non-alcoholic fatty liver disease (NAFLD).11 In a cohort of 13,290 patients labeled as having NAFLD in the electronic health record, they aggregated four types of features:
demographic, disease-specific based on ICD/CPT codes, lab tests, and vital signs.
Clustering techniques revealed five clinically distinct clusters associated with different rates of death and disease occurrence.
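A clustering step of the kind Vandromme et al. apply can be sketched with a minimal k-means in pure Python. The two-feature patient vectors below are invented toy data; the actual study clusters thousands of patients across demographic, diagnostic, lab-test and vital-sign features.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then recompute centroids as cluster means, repeated for `iters`."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep old centroid if a cluster empties out
                centroids[i] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return centroids, clusters

# Invented toy features (e.g. a normalized lab value and a vital sign)
# for two well-separated patient groups.
patients = [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18),
            (0.90, 0.85), (0.95, 0.90), (0.88, 0.92)]
centroids, clusters = kmeans(patients, k=2)
print(centroids)
```

A real phenotyping pipeline would also standardize heterogeneous features and choose k by clinical interpretability or internal validity measures.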
Understanding the microbiome signatures of disease is an area of ongoing research. To that end, Khan and Kelly constructed predictive models for identifying 19 diseases in 5,643 samples of whole-community metagenomes.12 They compare random forests, convolutional neural networks, and a graph convolution architecture that they developed for this study. Their new technique outperforms the other models, and they find interesting disease-specific signatures that could form the basis for further study.
  1. Artificial intelligence for advancing medical workflows
    An exciting opportunity for artificial intelligence in medicine is the automation of previously manual workflows.
    For example, clinical trial recruitment often requires humans to identify patients eligible for the trial. This can be a tedious process, and its manual nature can cause eligible patients to be missed. Chen and Kunder et al. address the need to efficiently screen patients and their tumor profiles for appropriate clinical trials. Their algorithm screens and matches genetic tumor biomarkers against NCI-MATCH and internal precision medicine clinical trials, providing an automated pipeline for feasible, accurate and effective trial accrual.
    Another example is the clinical interpretation of genetic data. As genome sequencing costs plummet, genetic testing has become more common in clinical cases suspected to have a genetic component. However, clinically interpreting a list of genetic mutations requires manual validation by trained biocurators. These curators search the literature for evidence of a mutation's pathogenicity, looking at prior clinical studies and functional experiments. Nie et al. created LitGen, which uses semi-supervised deep learning to retrieve papers for the genetic variant being studied and to predict the relevant evidence provided by each paper. This tool, in the hands of curators, could reduce the time spent identifying evidence for clinically relevant genetic variants.
  2. Artificial intelligence for improving imaging
    Imaging is an important diagnostic tool in medicine, and improving data processing can save time and money. Zhao et al. proposed a convolutional neural network architecture to predict the difference between the two channels of a Dual-Energy Computed Tomography (CT) scan from the low-energy channel alone.15 This allows the high-energy signal to be approximated using only Single-Energy CT systems, which are more prevalent in clinical imaging than Dual-Energy CT systems. The proposed technique can simplify data acquisition, optimize scanning doses, and reduce noise levels in CT imaging.
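A single convolutional layer of the kind such a network stacks can be sketched in pure Python. The 4×4 "low-energy" slice and the hand-picked averaging kernel below are invented stand-ins; a real model learns many filters from paired dual-energy scans.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation) in pure Python."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Dot product of the kernel with the image patch at (i, j).
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Toy 'low-energy' slice with a bright 2x2 region, and a 3x3 averaging
# kernel standing in for a learned filter.
low_energy = [[0, 0, 0, 0],
              [0, 9, 9, 0],
              [0, 9, 9, 0],
              [0, 0, 0, 0]]
kernel = [[1 / 9] * 3 for _ in range(3)]
predicted_difference = conv2d(low_energy, kernel)
print(predicted_difference)
```

The actual architecture stacks many such layers with learned kernels and nonlinearities, trained so the output approximates the high-minus-low channel difference.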
AI is growing into the public health sector and will have a major impact on every aspect of primary care. AI-enabled computer applications will help primary care physicians better identify patients who require extra attention and provide personalized protocols for each individual. Primary care physicians can use AI to take notes, analyze discussions with patients, and enter required information directly into EHR systems. These applications will collect and analyze patient data and present it to primary care physicians alongside insights into the patient's medical needs.
Clinical data mining (CDM): Undirected or unsupervised queries may result in false assumptions about relations between variables and a consequent combinatorial explosion. CDM is also prone to high-dimensionality issues: data for complex relationships are usually sparse and thinly spread across many dimensions, and extensive data are required to alleviate this problem [30]. Such robust clinical records are usually not freely available. Traditional biomedical researchers have been reluctant to be driven by the highly structured analyses typical of data mining approaches. More recently, however, the trend is changing, with a greater degree of acceptance and motivation.
Augmented reality (AR) requires tremendous computational power for understanding user speech and creating intelligent, meaningful dialogue, which can pose organizational barriers. Other hurdles include application barriers, such as accurate placement of AR tools, applying depth perception to 3D models, and multiple-user handling, as well as technological problems like system latency, field of view, viewpoint matching, and occlusion produced by real-time shadows from the instrument itself overlapping the imagery and distorting it. Misalignment of overlaid or misplaced virtual objects greatly compromises manipulative fidelity and the sense of presence, thereby reducing the overall training effect.
AR technology, despite its many merits, also harbors unanswered ethical dilemmas. It holds the potential to be one of the darkest, most powerful systems for mass control of the general population ever invented. An overflow of experimental broadcasts could result in total mind control, brainwashing, user abuse and disengagement from reality.
Brain-computer interface (BCI): Apart from the high costs of BCI tools and the intensive training sessions required of users, there are other issues like social stratification, informed consent in disabled persons, shared responsibility within BCI teams, personality and personhood alterations in users, therapeutic exceedance, privacy issues such as mind reading and mind control, and interpretative errors.
However, it may be expected that in the near future, BCI technology will improve considerably and provide valuable options for communication and motor control that will be much easier to handle.
Minds, machines and morals
Man is termed 'Homo sapiens' because of 'sapience', his inherent wisdom. Modern science, through artificial intelligence, is trying to reinvent the human being, with speech synthesizers, bionic eyes, robotic arms and even humanoid robots. Scientists claim to be recreating the human brain, right down to its cognitive physiology and molecular level. This prospect inspires hopes as well as fears. Today, there are AI machines like chatbots, tutors and artificial therapists that have passed the 'Turing Test', i.e. they can engage in 'spontaneous conversation' and be mistaken for 'real' persons. We require genuine feelings of empathy, care and respect from people in these positions, and substituting them with machines remains an issue of fervent moral debate.
In terms of specific intelligence, the average computer is already more intelligent than the human brain. According to the scholar Vernor Vinge, a time will come when the capability and intent of these smart machines will become impossible for average humans to decipher. He referred to this period as the 'Singularity'. To quote his own words: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
Likewise, there are some haunting questions that we cannot choose to ignore.

Robotic surgery
Critical disadvantages include technologically complicated, high-cost setups, prolonged training requirements, user acceptance and motivation problems, and expectation-induced dissatisfaction and regret. Minor issues include prolonged anesthesia and bleeding times on the operating table due to the inexperience of novice trainees. Feasibility studies question the cost-effectiveness and practicability of such systems.
Virtual reality
This is the latest trend in medicine, where computer-simulated environments are used to teach cultural competency skills to medical learners and to treat medical conditions such as phobias, stress disorders, addictions, binge eating, and postoperative or burn pain, and to retrain the brain in stroke victims. Virtual reality can bring a version of the real world into the clinic.

Augmented reality:
Augmented reality (AR) has been used in the medical field for nearly ten years. It works by overlaying seemingly real experiences on top of a person's local environment. AR tools employ advanced computing technology to produce live interactive imaging that assists physicians and medical students in implant positioning, neurosurgical procedures and surgical education. Open-source image-guided procedures provide a framework for real-time visualization of surgical instruments relative to the patient's anatomy by fusing live endoscope or microscope video with stereotactic medical imaging data, real-time broadcast of intraoperative data via telecommunication networks, and real-time intraoperative access to a second opinion from a remote expert via interactive telecommunication.
CDSS (computerized decision support systems)
These are computerized programs based on complex clinical algorithms, designed to assist doctors with diagnostic tasks and to make therapy recommendations according to patients' clinical symptoms and investigation results. CDSS not only guide medical beginners but also supplement experienced professionals and optimize overall health care in an institution. They can even raise an alarm if something is incidentally overlooked or the patient's medical condition changes drastically. They often save valuable hours of clinical diagnostics.
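At its simplest, the rule-based core of a CDSS can be sketched as a list of alert rules evaluated against a patient's current vitals. The thresholds below are illustrative only; real CDSS rules are clinically validated and far more nuanced.

```python
# Hypothetical alert rules for illustration; names and thresholds are
# invented, not clinical guidance.
RULES = [
    ("tachycardia alert", lambda v: v["heart_rate"] > 100),
    ("fever alert", lambda v: v["temp_c"] >= 38.0),
    ("hypoxia alert", lambda v: v["spo2"] < 92),
]

def evaluate(vitals):
    """Return the names of all alerts triggered by the given vitals."""
    return [name for name, rule in RULES if rule(vitals)]

print(evaluate({"heart_rate": 112, "temp_c": 38.4, "spo2": 97}))
# → ['tachycardia alert', 'fever alert']
```

Production systems layer such rules with patient history, drug-interaction checks and probabilistic models, and route alerts into the clinical workflow rather than printing them.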
Intelligent tutoring systems
These are computerized, self-adaptive teaching programs that use 'virtual teachers' to impart medical education. These pedagogical tools are learner-friendly: the student can direct his own tutoring time, place and frequency. This flexibility is particularly attractive for students with out-of-university commitments. AI tutors are effective in resolving logistical issues like student overload, time constraints and scarcity of teaching staff.

Virtual patients
Clinical learning systems are employing simulated patients in surgery, dentistry and psychiatry settings to allow medical trainees to practice their clinical skills in a risk-free environment. Patient simulations are effective in dealing with issues like student overload, time constraints and noncompliant patients.
Robotic surgery
It is computer-assisted surgery by highly sophisticated robots, in which the surgeon uses a computer interface console to manipulate instruments attached to robotic arms. The surgeon need not be at the same site as the patient but can operate from a different site (remote surgery). Robots can even perform independent surgery (unmanned surgery) without the aid of a human surgeon; in 2006, robots successfully completed unassisted heart surgery [7]. This type of advanced robotic surgery has numerous advantages: tremor-free precision, miniaturization, smaller incisions, perfect articulation, improved magnification, decreased blood loss, less pain and quicker healing time, leading to an overall decrease in the duration of hospital stay [8]. Recent advances in the field include miniature robotics. Scientists dream of a time when nanorobots will travel through the bloodstream to operate on, evaluate and assess the patient's medical condition.

Expert lab information systems (LIS)
LIS are expert software systems integrated with point-of-care diagnostics for information management in clinical or research laboratories and hospital settings. LIS can support the various sub-specialties of a pathology laboratory, including hematology, chemistry, immunology, blood bank, surgical pathology, anatomical pathology, flow cytometry and microbiology. LIS can also be used for workflow management, accounting and instrument control when integrated with those systems. LIS are interfaced with the Electronic Medical Record (EMR) or Hospital Information System (HIS) for faster data exchange and availability of results at the patient's bedside, along with easier and quicker billing through integration with accounting systems. LIS can also generate unique identifier bar-codes for biological samples to enable quick and error-free tracking. They harmonize the collection, encoding, storage and sharing of clinical data, including patient demographics, disease nomenclature, procedures, pathogens and pharmaceuticals, in a global perspective; this standardization facilitates seamless exchange of information across countries and health care systems. Unlike traditional paper-based systems, LIS offer faster turnaround of results in emergency settings, which increases the quality of care, decreases the cost of care and reduces medical errors, often saving valuable patient lives. Increasingly, general practitioners feel that clinical information is expanding too fast for them to keep track of; in such a scenario, an LIS with decision support can enable physicians to order fewer tests. LIS can be connected with central monitoring systems to detect subtle changes in trends, supporting disease surveillance and earlier detection of epidemics. This can dynamically influence treatment plans and facilitate evidence-based medicine.
Laboratory Information Systems have come a long way, but are poised for even greater growth as complex medical investigations become more pervasive. Much work remains on foolproof algorithm development and decision analysis to fully support physicians in the selection and interpretation of tests, but ongoing developments in the evidence-based practice of medicine are encouraging.
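The unique sample identifiers an LIS prints as bar-codes typically embed a check digit so that transcription errors are caught before a sample is mis-tracked. A minimal sketch, assuming an invented 'LIS-' identifier format and the standard Luhn check-digit scheme:

```python
def check_digit(digits):
    """Luhn check digit, a common scheme for catching single-digit
    entry errors and most adjacent transpositions."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def make_sample_id(seq):
    """Build an identifier like 'LIS-0000042-2' (format invented here)."""
    body = f"{seq:07d}"
    return f"LIS-{body}-{check_digit(body)}"

print(make_sample_id(42))  # → LIS-0000042-2
```

A scanner-side validator simply recomputes the check digit and rejects any bar-code where it does not match.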

Clinical data mining:
It is a framework for the integrated management of clinical data on computer networks, consisting of a database, a knowledge base, and an inference and learning component, interconnected in a meshwork. Depending on the type of data mined, CDM can be qualitative or quantitative. It involves the extraction, analysis and interpretation of available clinical data for practice knowledge-building, detecting biomedical associations and disease patterns, enhancing clinical and administrative functions, clinical decision-making and practitioner reflection. Clinical data mining offers long-term benefits such as creating new evidence-based medical knowledge, designing earlier and more effective interventions, and improving patient care and satisfaction. Data warehousing and mining techniques can also solve business-related problems such as designing investment strategies or developing health marketing campaigns.

Visualization:
This technique uses real, virtual three-dimensional or animated biomedical images to communicate abstract and concrete ideas in medicine, aiding understanding of the human body, clinical procedures and disease pathogenesis. Visualization has ever-expanding applications across medical fields. AR, in particular, is a highly effective experiential learning tool, allowing medical learners to peer 'under the skin' and reveal the inner workings of the human body by literally 'walking through' it and directly 'seeing' the functioning of biological systems. Other possibilities for medical AR applications lie in leveraging and managing the massive mine of patient data.
Some predict that a tremendous AR industry is waiting to emerge, one that will dwarf today's software and computing industries and become one of the most influential technological paradigm shifts yet experienced by our civilization.
Simulated reality / Brain-computer interface (BCI)
Brain-computer interfaces are characterized by direct interaction between the brain and a technical system. There are two forms of brain-computer interfaces (BCIs), grouped according to the direction in which the interaction between brain and device works: BCIs that 'extract' neural signals from the brain and BCIs that 'insert' signals into the brain. BCI work is primarily focused on speech implants, auditory implants and 'thought-translation' neuro-prosthetic devices that aim at restoring damaged hearing, sight and movement. BCI prosthetics are of special assistance in clinical rehabilitation and in the motor or cognitive recovery of patients with impaired cortical function and locked-in syndrome. More recent applications target non-disabled individuals, exploring the potential of BCI as toys for entertainment, gaming and virtual reality (VR) control.

Artificial telepathy: Research is ongoing into synthetic or computer-mediated telepathy, which would allow user-to-user 'wordless' communication through analysis of neural signals. This holds potential for the management of behavior- and speech-related medical conditions and for medico-legal forensics.

Cell-culture BCI: These involve interfacing neural cell cultures with computer or robotic devices to produce entire neural networks with 'problem-solving' capacity; the products are sometimes referred to as 'neuro-electronic chips'. Scientists have even managed to create an artificial prosthetic hippocampus; the hippocampus is considered the most ordered and structured part of the human brain and encodes experiences into long-term memories.
We have built AI systems that seem to simulate our intelligence almost better than we do ourselves. We are entrusting our health, well-being and even our lives to artificial systems without imagining the consequences of this over-dependence. Today, AI has become indispensable enough that losing it would leave us handicapped. There is no denying that the judicious use of artificial intelligence can be extremely productive and holds the potential to translate into a better quality of life for the common man. But the history of humankind is witness to the fact that extremism may not be good. We need to reflect more seriously on this 'machine trend' that is overpowering today's medicine.
