A Review of Biosensors and Artificial Intelligence in Healthcare and Their Clinical Significance


International Research Journal of Economics and Management Studies
© 2024 by IRJEMS
Volume 3, Issue 1
Year of Publication : 2024
Authors : Yawar Hayat, Mehtab Tariq, Adil Hussain, Aftab Tariq, Saad Rasool
IRJEMS DOI : 10.56472/25835238/IRJEMS-V3I1P126

Citation:

Yawar Hayat, Mehtab Tariq, Adil Hussain, Aftab Tariq, Saad Rasool. "A Review of Biosensors and Artificial Intelligence in Healthcare and Their Clinical Significance," International Research Journal of Economics and Management Studies, Vol. 3, No. 1, pp. 230–247, 2024.

Abstract:

In the past decade, a substantial increase in medical data from various sources, including wearable sensors, medical imaging, personal health records, and public health organizations, has propelled advancements in the medical sciences. The evolution of computational hardware, such as cloud computing, GPUs, FPGAs, and TPUs, has enabled the effective utilization of this vast amount of data. Consequently, sophisticated AI techniques have been developed to extract valuable insights from healthcare datasets. This article provides a comprehensive overview of recent developments in AI and biosensors within the medical and life sciences. The review highlights the role of machine learning in key areas such as medical imaging, precision medicine, and biosensors designed for the Internet of Things (IoT). Emphasis is placed on the latest progress in wearable biosensing technologies, where AI plays a pivotal role in monitoring electrophysiological and electrochemical signals and aiding in disease diagnosis. These advancements underscore the growing trend towards personalized medicine, offering precise and cost-efficient point-of-care treatment.
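
To give a concrete flavour of the AI-assisted biosignal analysis surveyed in this review, the following is a minimal, hypothetical sketch in Python (PyTorch) of a small 1D convolutional network that classifies fixed-length, single-lead ECG windows into rhythm classes. The architecture, window length, sampling rate, and class count are illustrative assumptions, not the configuration of any specific study discussed here.

    # Minimal sketch: a small 1D CNN for fixed-length, single-lead ECG windows.
    # All sizes (360 Hz sampling, 10 s windows, 5 rhythm classes) are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ECGNet(nn.Module):
        def __init__(self, n_classes: int = 5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),       # collapse the time axis into one summary vector
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):                  # x: (batch, 1, samples)
            return self.classifier(self.features(x).squeeze(-1))

    model = ECGNet()
    windows = torch.randn(8, 1, 3600)          # a batch of 8 ten-second windows at 360 Hz
    logits = model(windows)                    # shape: (8, 5); argmax gives the predicted rhythm class

In practice, such a model would be trained on annotated recordings (for example, from public archives such as PhysioNet) and validated against clinician labels before any diagnostic use.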

Additionally, the article delves into the advancements in computing technologies, including accelerated AI, edge computing, and federated learning specifically tailored for medical data. The challenges associated with data-driven AI approaches, potential issues arising from biosensors and IoT-based healthcare, and distribution shifts among different data modalities are thoroughly explored. The discussion concludes with insights into future prospects in the field.
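
To make the federated-learning idea concrete, the following is a minimal sketch in Python (NumPy) of one round of federated averaging: each simulated "hospital" updates a shared logistic-regression model on its own data, and only the resulting weights (never the raw patient records) are returned to the server and averaged. The client count, toy model, and hyperparameters are illustrative assumptions.

    # Minimal sketch of federated averaging (FedAvg) with a toy logistic-regression model.
    # Client data, model choice, and hyperparameters are illustrative assumptions.
    import numpy as np

    def local_update(global_w, X, y, lr=0.1, epochs=5):
        # One client's local training: a few epochs of full-batch gradient descent.
        w = global_w.copy()
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
            w -= lr * X.T @ (p - y) / len(y)   # cross-entropy gradient step
        return w

    def fedavg_round(global_w, clients):
        # Server step: average client weights, weighted by local sample counts.
        updates = [local_update(global_w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        return np.average(np.stack(updates), axis=0, weights=sizes)

    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50).astype(float))
               for _ in range(3)]              # three sites; raw data never leaves each site
    w = np.zeros(4)
    for _ in range(10):                        # ten communication rounds
        w = fedavg_round(w, clients)

Real clinical deployments, such as the cross-hospital COVID-19 study in [123], build on this same weight-sharing pattern, typically adding secure aggregation and other privacy safeguards.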

References:

[1] G. Hariharan, ‘‘Global perspectives on economics and healthcare finance,’’ in Global Healthcare: Issues and Policies. Boston, MA, USA: Jones & Bartlett, 2020, p. 95.
[2] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, ‘‘Edge computing: Vision and challenges,’’ IEEE Internet Things J., vol. 3, no. 5, pp. 637–646, Oct. 2016.
[3] J. M. Dennis, B. M. Shields, W. E. Henley, A. G. Jones, and A. T. Hattersley, ‘‘Disease progression and treatment response in data-driven subgroups of type 2 diabetes compared with models based on simple clinical features: An analysis using clinical trial data,’’ Lancet Diabetes Endocrinology, vol. 7, no. 6, pp. 442–451, Jun. 2019.
[4] H. Fröhlich, R. Balling, N. Beerenwinkel, O. Kohlbacher, S. Kumar, T. Lengauer, M. H. Maathuis, Y. Moreau, S. A. Murphy, T. M. Przytycka, M. Rebhan, H. Röst, A. Schuppert, M. Schwab, R. Spang, D. Stekhoven, J. Sun, A. Weber, D. Ziemek, and B. Zupan, ‘‘From hype to reality: Data science enabling personalized medicine,’’ BMC Med., vol. 16, no. 1, pp. 1–15, Dec. 2018.
[5] A. Adadi and M. Berrada, ‘‘Explainable AI for healthcare: From black box to interpretable models,’’ in Embedded Systems and Artificial Intelligence. Singapore: Springer, 2020, pp. 327–337.
[6] J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Žídek, and A. Potapenko, ‘‘Highly accurate protein structure prediction with AlphaFold,’’ Nature, vol. 596, pp. 583–589, Aug. 2021.
[7] J. Moult, ‘‘A decade of CASP: Progress, bottlenecks, and prognosis in protein structure prediction,’’ Current Opinion Structural Biol., vol. 15, no. 3, pp. 285–289, Jun. 2005.
[8] E. Callaway, ‘‘AlphaFold’s new rival? Meta AI predicts shape of 600 million proteins,’’ Nature, vol. 611, no. 7935, pp. 211–212, Nov. 2022.
[9] B. Ristevski and M. Chen, ‘‘Big data analytics in medicine and healthcare,’’ J. Integrative Bioinf., vol. 15, no. 3, 2018, Art. no. 20170030.
[10] F. Cui, Y. Yue, Y. Zhang, Z. Zhang, and H. S. Zhou, ‘‘Advancing biosensors with machine learning,’’ ACS Sensors, vol. 5, no. 11, pp. 3346–3364, Nov. 2020.
[11] H. Haick and N. Tang, ‘‘Artificial intelligence in medical sensors for clinical decisions,’’ ACS Nano, vol. 15, no. 3, pp. 3557–3567, Mar. 2021.
[12] S. B. Junaid, A. A. Imam, M. Abdulkarim, Y. A. Surakat, A. O. Balogun, G. Kumar, A. N. Shuaibu, A. Garba, Y. Sahalu, A. Mohammed, T. Y. Mohammed, B. A. Abdulkadir, A. A. Abba, N. A. I. Kakumi, and A. S. Hashim, ‘‘Recent advances in artificial intelligence and wearable sensors in healthcare delivery,’’ Appl. Sci., vol. 12, no. 20, p. 10271, Oct. 2022.
[13] P. Manickam, S. A. Mariappan, S. M. Murugesan, S. Hansda, A. Kaushik, R. Shinde, and S. P. Thipperudraswamy, ‘‘Artificial intelligence (AI) and Internet of Medical Things (IoMT) assisted biomedical systems for intelligent healthcare,’’ Biosensors, vol. 12, no. 8, p. 562, Jul. 2022.
[14] S. K. Karmaker, M. M. Hassan, M. J. Smith, L. Xu, C. Zhai, and K. Veeramachaneni, ‘‘AutoML to date and beyond: Challenges and opportunities,’’ ACM Comput. Surveys, vol. 54, no. 8, pp. 1–36, Nov. 2022.
[15] X. He, K. Zhao, and X. Chu, ‘‘AutoML: A survey of the state-of-the-art,’’ Knowl.-Based Syst., vol. 212, Jan. 2021, Art. no. 106622.
[16] R. R. Hoffman, S. T. Mueller, G. Klein, and J. Litman, ‘‘Metrics for explainable AI: Challenges and prospects,’’ 2018, arXiv:1812.04608.
[17] J. Xu, B. S. Glicksberg, C. Su, P. Walker, J. Bian, and F. Wang, ‘‘Federated learning for healthcare informatics,’’ J. Healthcare Informat. Res., vol. 5, no. 1, pp. 1–19, Mar. 2021.
[18] O. Sadak, F. Sadak, O. Yildirim, N. M. Iverson, R. Qureshi, M. Talo, C. P. Ooi, U. R. Acharya, S. Gunasekaran, and T. Alam, ‘‘Electrochemical biosensing and deep learning-based approaches in the diagnosis of COVID-19: A review,’’ IEEE Access, vol. 10, pp. 98633–98648, 2022.
[19] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, ‘‘Grad-CAM: Visual explanations from deep networks via gradient-based localization,’’ in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Oct. 2017, pp. 618–626.
[20] D. Doran, S. Schulz, and T. R. Besold, ‘‘What does explainable AI really mean? A new conceptualization of perspectives,’’ 2017, arXiv:1710.00794.
[21] B. Kailkhura, B. Gallagher, S. Kim, A. Hiszpanski, and T. Y.-J. Han, ‘‘Reliable and explainable machine-learning methods for accelerated material discovery,’’ NPJ Comput. Mater., vol. 5, no. 1, pp. 1–9, Nov. 2019.
[22] S. Lyskov, F.-C. Chou, S. Ó. Conchúir, B. S. Der, K. Drew, D. Kuroda, J. Xu, B. D. Weitzner, P. D. Renfrew, P. Sripakdeevong, B. Borgo, J. J. Havranek, B. Kuhlman, T. Kortemme, R. Bonneau, J. J. Gray, and R. Das, ‘‘Serverification of molecular modeling applications: The Rosetta online server that includes everyone (ROSIE),’’ PLoS ONE, vol. 8, no. 5, May 2013, Art. no. e63906.
[23] M. Sundararajan and A. Najmi, ‘‘The many Shapley values for the model explanation,’’ in Proc. Int. Conf. Mach. Learn., 2020, pp. 9269–9278.
[24] Y. Chen, J. Zhang, and X. Qin, ‘‘Interpretable instance disease prediction based on causal feature selection and effect analysis,’’ BMC Med. Inform. Decis. Making, vol. 22, no. 1, pp. 1–14, Dec. 2022.
[25] H. Panwar, P. K. Gupta, M. K. Siddiqui, R. Morales-Menendez, P. Bhardwaj, and V. Singh, ‘‘A deep learning and grad-CAM based color visualization approach for fast detection of COVID-19 cases using chest X-ray and CT-scan images,’’ Chaos, Solitons Fractals, vol. 140, Nov. 2020, Art. no. 110190.
[26] T. Kyono, Y. Zhang, and M. van der Schaar, ‘‘Castle: Regularization via auxiliary causal graph discovery,’’ in Proc. Adv. Neural Inf. Process. Syst., vol. 33, 2020, pp. 1501–1512.
[27] T. Kyono, Y. Zhang, A. Bellot, and M. van der Schaar, ‘‘MIRACLE: Causally-aware imputation via learning missing data mechanisms,’’ in Proc. Adv. Neural Inf. Process. Syst., vol. 34, 2021, pp. 23806–23817.
[28] S. Magliacane, T. Van Ommen, T. Claassen, S. Bongers, P. Versteeg, and J. M. Mooij, ‘‘Domain adaptation by using causal inference to predict invariant conditional distributions,’’ in Proc. Adv. Neural Inf. Process. Syst., vol. 31, 2018, pp. 1–11.
[29] A. Boggust, B. Hoover, A. Satyanarayan, and H. Strobelt, ‘‘Shared interest: Measuring human-AI alignment to identify recurring patterns in model behavior,’’ in Proc. CHI Conf. Human Factors Comput. Syst., Apr. 2022, pp. 1–17.
[30] I. Bica, D. Jarrett, and M. van der Schaar, ‘‘Invariant causal imitation learning for generalizable policies,’’ in Proc. Adv. Neural Inf. Process. Syst., vol. 34, 2021, pp. 3952–3964.
[31] S. Zhang, S. M. H. Bamakan, Q. Qu, and S. Li, ‘‘Learning for personalized medicine: A comprehensive review from a deep learning perspective,’’ IEEE Rev. Biomed. Eng., vol. 12, pp. 194–208, 2019.
[32] D. D. Wang, W. Zhou, H. Yan, M. Wong, and V. Lee, ‘‘Personalized prediction of EGFR mutation-induced drug resistance in lung cancer,’’ Sci. Rep., vol. 3, no. 1, pp. 1–8, Oct. 2013.
[33] L. Chin, J. N. Andersen, and P. A. Futreal, ‘‘Cancer genomics: From discovery science to personalized medicine,’’ Nature Med., vol. 17, no. 3, pp. 297–303, Mar. 2011.
[34] M. K. Hassan, A. I. El Desouky, S. M. Elghamrawy, and A. M. Sarhan, ‘‘Intelligent hybrid remote patient-monitoring model with cloud-based framework for knowledge discovery,’’ Comput. Electr. Eng., vol. 70, pp. 1034–1048, Aug. 2018.
[35] E. Vayena, A. Blasimme, and I. G. Cohen, ‘‘Machine learning in medicine: Addressing ethical challenges,’’ PLOS Med., vol. 15, no. 11, Nov. 2018, Art. no. e1002689.
[36] J. Ramesh, R. Aburukba, and A. Sagahyroon, ‘‘A remote healthcare monitoring framework for diabetes prediction using machine learning,’’ Healthcare Technol. Lett., vol. 8, no. 3, pp. 45–57, Jun. 2021.
[37] M. Gaudillère, C. Pollin-Javon, S. Brunot, S. Villar Fimbel, and C. Thivolet, ‘‘Effects of remote care of patients with poorly controlled type 1 diabetes included in an experimental telemonitoring programme,’’ Diabetes Metabolism, vol. 47, no. 6, Nov. 2021, Art. no. 101251.
[38] I. Villanueva-Miranda, H. Nazeran, and R. Martinek, ‘‘CardiaQloud: A remote ECG monitoring system using cloud services for eHealth and mHealth applications,’’ in Proc. IEEE 20th Int. Conf. e-Health Netw., Appl. Services (Healthcom), Sep. 2018, pp. 1–6.
[39] A. R. Dhruba, K. N. Alam, M. S. Khan, S. Bourouis, and M. M. Khan, ‘‘Development of an IoT-based sleep apnea monitoring system for healthcare applications,’’ Comput. Math. Methods Med., vol. 2021, pp. 1–16, Nov. 2021.
[40] P. Rajan Jeyaraj and E. R. S. Nadar, ‘‘Smart-monitor: Patient monitoring system for IoT-based healthcare system using deep learning,’’ IETE J. Res., vol. 68, no. 2, pp. 1435–1442, Mar. 2022.
[41] A. M. Froomkin, I. Kerr, and J. Pineau, ‘‘When AIs outperform doctors: Confronting the challenges of a tort-induced over-reliance on machine learning,’’ Arizona Law Review, vol. 61, p. 33, Feb. 2019.
[42] X. He, K. Zhao, and X. Chu, ‘‘AutoML: A survey of the state-of-the-art,’’ 2019, arXiv:1908.00709.
[43] J. Waring, C. Lindvall, and R. Umeton, ‘‘Automated machine learning: Review of the state-of-the-art and opportunities for healthcare,’’ Artif. Intell. Med., vol. 104, Apr. 2020, Art. no. 101822.
[44] U. Khurana, H. Samulowitz, and D. Turaga, ‘‘Feature engineering for predictive modeling using reinforcement learning,’’ in Proc. AAAI Conf. Artif. Intell., vol. 32, 2018, pp. 1–8.
[45] C. Thornton, F. Hutter, H. H. Hoos, and K. Leyton-Brown, ‘‘Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms,’’ in Proc. 19th ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, Aug. 2013, pp. 847–855.
[46] G. M. Morris and M. Lim-Wilby, ‘‘Molecular docking,’’ in Molecular Modeling of Proteins. Cham, Switzerland: Springer, 2008, pp. 365–382.
[47] N. Nasir, A. Kansal, F. Barneih, O. Al-Shaltone, T. Bonny, M. Al-Shabi, and A. Al Shammaa, ‘‘Multi-modal image classification of COVID-19 cases using computed tomography and X-rays scans,’’ Intell. Syst. with Appl., vol. 17, Feb. 2023, Art. no. 200160.
[48] H. Shimizu and K. I. Nakayama, ‘‘Artificial intelligence in oncology,’’ Cancer Sci., vol. 111, no. 5, pp. 1452–1460, 2020.
[49] A. Chatterjee, N. R. Somayaji, and I. M. Kabakis, ‘‘Abstract WMP16: Artificial intelligence detection of cerebrovascular large vessel occlusion—Nine months, 650 patient evaluation of the diagnostic accuracy and performance of the Viz.AI LVO algorithm,’’ Stroke, vol. 50, no. 1, 2019, Art. no. AWMP16.
[50] A. Sharma, P. Singh, and G. Dar, ‘‘Artificial intelligence and machine learning for healthcare solutions,’’ in Data Analytics in Bioinformatics: A Machine Learning Perspective. Hoboken, NJ, USA: Wiley, 2021, pp. 281–291.
[51] Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SAMD), Food and Drug Admin., Silver Spring, MD, USA, 2019.
[52] S. Zhu, M. Gilbert, I. Chetty, and F. Siddiqui, ‘‘The 2021 landscape of FDA-approved artificial intelligence/machine learning-enabled medical devices: An analysis of the characteristics and intended use,’’ Int. J. Med. Informat., vol. 165, Sep. 2022, Art. no. 104828.
[53] B. Sahiner, A. Pezeshk, L. M. Hadjiiski, X. Wang, K. Drukker, K. H. Cha, R. M. Summers, and M. L. Giger, ‘‘Deep learning in medical imaging and radiation therapy,’’ Med. Phys., vol. 46, no. 1, pp. e1–e36, Jan. 2019.
[54] A. Ćirković, ‘‘Evaluation of four artificial intelligence–assisted self-diagnosis apps on three diagnoses: Two-year follow-up study,’’ J. Med. Internet Res., vol. 22, no. 12, Dec. 2020, Art. no. e18097.
[55] M. B. Massat, ‘‘Artificial intelligence in radiology: Hype or hope?’’ Appl. Radiol., vol. 47, no. 3, pp. 22–26, Mar. 2018.
[56] L. A. Celi, L. Hinske Christian, G. Alterovitz, and P. Szolovits, ‘‘An artificial intelligence tool to predict fluid requirement in the intensive care unit: A proof-of-concept study,’’ Crit. Care, vol. 12, no. 6, p. R151, 2008.
[57] A. Shaukat, D. Colucci, L. Erisson, S. Phillips, J. Ng, J. E. Iglesias, J. R. Saltzman, S. Somers, and W. Brugge, ‘‘Improvement in adenoma detection using a novel artificial intelligence-aided polyp detection device,’’ Endoscopy Int. Open, vol. 9, no. 2, pp. E263–E270, Feb. 2021.
[58] J. Malwitz, ‘‘Fall risk screen development for episcopal homes,’’ Doctor Occupational Therapy, Dept. Occupational Sci./Occupational Therapy, St. Catherine University, Saint Paul, MN, USA, 2022.
[59] B. Meskó and M. Görög, ‘‘A short guide for medical professionals in the era of artificial intelligence,’’ NPJ Digit. Med., vol. 3, no. 1, pp. 1–8, Sep. 2020.
[60] S. P. Rajan and M. Paranthaman, ‘‘Artificial intelligence in healthcare: Algorithms and decision support systems,’’ in Smart Systems for Industrial Applications. Hoboken, NJ, USA: Wiley, 2022, pp. 173–197.
[61] (Jan. 2023). Berg, A Biotechnology Company to Combat Oncology, Neurology, and Rare Disease. [Online]. Available: https://www. berghealth.com/
[62] (2023). Atomwise, an AI Company for Drug Discovery, Artificial Intelligence for Drug Discovery. [Online]. Available: https://www.atomwise.com/
[63] D. Bairagya, H. K. Tripathy, A. K. Bhoi, and P. Barsocchi, ‘‘Impact of artificial intelligence in health care: A study,’’ in Hybrid Artificial Intelligence and IoT in Healthcare. Singapore: Springer, 2021, pp. 311–328.
[64] A. Philippidis, ‘‘Deep genomics identifies AI-discovered candidate for Wilson disease,’’ GEN Edge, vol. 1, no. 1, pp. 113–116, Jan. 2019.
[65] W. Raghupathi and V. Raghupathi, ‘‘Big data analytics in healthcare: Promise and potential,’’ Health Inf. Sci. Syst., vol. 2, no. 1, pp. 1–10, Dec. 2014.
[66] D. V. Dimitrov, ‘‘Medical Internet of Things and big data in healthcare,’’ Healthcare Inform. Res., vol. 22, no. 3, pp. 156–163, 2016.
[67] G. Papadatos, A. Gaulton, A. Hersey, and J. P. Overington, ‘‘Activity, assay and target data curation and quality in the ChEMBL database,’’ J. Comput.-Aided Mol. Design, vol. 29, no. 9, pp. 885–896, Sep. 2015.
[68] S. S. Lobodzinski, ‘‘ECG patch monitors for assessment of cardiac rhythm abnormalities,’’ Prog. Cardiovascular Diseases, vol. 56, no. 2, pp. 224–229, Sep. 2013.
[69] G. Manogaran and D. Lopez, ‘‘A survey of big data architectures and machine learning algorithms in healthcare,’’ Int. J. Biomed. Eng. Technol., vol. 25, nos. 2–4, pp. 182–211, 2017.
[70] D. Lahat, T. Adali, and C. Jutten, ‘‘Multimodal data fusion: An overview of methods, challenges, and prospects,’’ Proc. IEEE, vol. 103, no. 9, pp. 1449–1477, Sep. 2015.
[71] K. M. Boehm, E. A. Aherne, L. Ellenson, I. Nikolovski, M. Alghamdi, I. Vázquez-García, D. Zamarin, K. L. Roche, Y. Liu, and D. Patel, ‘‘Multimodal data integration using machine learning improves risk stratification of high-grade serous ovarian cancer,’’ Nature Cancer, vol. 3, no. 6, pp. 723–733, Jun. 2022.
[72] D. Zhou, Z. Gan, X. Shi, A. Patwari, E. Rush, C.-L. Bonzel, V. A. Panickan, C. Hong, Y.-L. Ho, and T. Cai, ‘‘Multiview incomplete knowledge graph integration with application to cross-institutional EHR data harmonization,’’ J. Biomed. Informat., vol. 133, Sep. 2022, Art. no. 104147.
[73] S. Amal, L. Safarnejad, J. A. Omiye, I. Ghanzouri, J. H. Cabot, and E. G. Ross, ‘‘Use of multi-modal data and machine learning to improve cardiovascular disease care,’’ Frontiers Cardiovascular Med., vol. 9, Apr. 2022, Art. no. 840262.
[74] Q. Cai, H. Wang, Z. Li, and X. Liu, ‘‘A survey on multimodal data-driven smart healthcare systems: Approaches and applications,’’ IEEE Access, vol. 7, pp. 133583–133599, 2019.
[75] S. C. Lee, B. Fuerst, K. Tateno, A. Johnson, J. Fotouhi, G. Osgood, F. Tombari, and N. Navab, ‘‘Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery,’’ Healthcare Technol. Lett., vol. 4, no. 5, pp. 168–173, Oct. 2017.
[76] J. Gao, P. Li, Z. Chen, and J. Zhang, ‘‘A survey on deep learning for multimodal data fusion,’’ Neural Comput., vol. 32, no. 5, pp. 829–864, May 2020.
[77] I. Van Mechelen and A. K. Smilde, ‘‘A generic linked-mode decomposition model for data fusion,’’ Chemometric Intell. Lab. Syst., vol. 104, no. 1, pp. 83–94, Nov. 2010.
[78] M. Turk, ‘‘Multimodal interaction: A review,’’ Pattern Recognit. Lett., vol. 36, pp. 189–195, Jan. 2014.
[79] G. Yang, Q. Ye, and J. Xia, ‘‘Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond,’’ Inf. Fusion, vol. 77, pp. 29–52, Jan. 2022.
[80] F. C. P. Navarro, H. Mohsen, C. Yan, S. Li, M. Gu, W. Meyerson, and M. Gerstein, ‘‘Genomics and data science: An application within an umbrella,’’ Genome Biol., vol. 20, no. 1, p. 109, Dec. 2019.
[81] E. A. Feingold and L. Pachter, ‘‘The ENCODE (encyclopedia of DNA elements) project,’’ Science, vol. 306, no. 5696, pp. 636–640, 2004.
[82] M. Hafner, M. Niepel, and P. K. Sorger, ‘‘Alternative drug sensitivity metrics improve preclinical cancer pharmacogenomics,’’ Nature Biotechnol., vol. 35, no. 6, pp. 500–502, Jun. 2017.
[83] M. Bouhaddou, M. S. DiStefano, E. A. Riesel, E. Carrasco, H. Y. Holzapfel, D. C. Jones, G. R. Smith, A. D. Stern, S. S. Somani, T. V. Thompson, and M. R. Birtwistle, ‘‘Drug response consistency in CCLE and CGP,’’ Nature, vol. 540, no. 7631, pp. E9–E10, Dec. 2016.
[84] W. Yang, J. Soares, P. Greninger, E. J. Edelman, H. Lightfoot, S. Forbes, N. Bindal, D. Beare, J. A. Smith, I. R. Thompson, S. Ramaswamy, P. A. Futreal, D. A. Haber, M. R. Stratton, C. Benes, U. McDermott, and M. J. Garnett, ‘‘Genomics of drug sensitivity in cancer (GDSC): A resource for therapeutic biomarker discovery in cancer cells,’’ Nucleic Acids Res., vol. 41, no. D1, pp. D955–D961, Nov. 2012.
[85] R. Qureshi, B. Zou, T. Alam, J. Wu, V. H. F. Lee, and H. Yan, ‘‘Computational methods for the analysis and prediction of EGFR-mutated lung cancer drug resistance: Recent advances in drug design, challenges, and future prospects,’’ IEEE/ACM Trans. Comput. Biol. Bioinf., vol. 20, no. 1, pp. 238–255, Jan. 2023.
[86] H. Zou, T. Hastie, and R. Tibshirani, ‘‘Sparse principal component analysis,’’ J. Comput. Graph. Statist., vol. 15, no. 2, pp. 265–286, 2004.
[87] W. Ahmad, H. Ali, Z. Shah, and S. Azmat, ‘‘A new generative adversarial network for medical images super-resolution,’’ Sci. Rep., vol. 12, no. 1, p. 9533, Jun. 2022.
[88] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, ‘‘Generative adversarial nets,’’ in Proc. Adv. Neural Inf. Process. Syst., vol. 27, 2014, pp. 2672–2680.
[89] X. Yi, E. Walia, and P. Babyn, ‘‘Generative adversarial network in medical imaging: A review,’’ Med. Image Anal., vol. 58, Dec. 2019, Art. no. 101552.
[90] H. Ali, M. R. Biswas, F. Mohsen, U. Shah, A. Alamgir, O. Mousa, and Z. Shah, ‘‘The role of generative adversarial networks in brain MRI: A scoping review,’’ Insights Into Imag., vol. 13, no. 1, pp. 1–15, Jun. 2022.
[91] G. Haskins, U. Kruger, and P. Yan, ‘‘Deep learning in medical image registration: A survey,’’ Mach. Vis. Appl., vol. 31, nos. 1–2, Feb. 2020.
[92] O. Yim and K. T. Ramdeen, ‘‘Hierarchical cluster analysis: Comparison of three linkage measures and application to psychological data,’’ Quant. Methods Psychol., vol. 11, no. 1, pp. 8–21, Feb. 2015.
[93] Ö. Yildirim, ‘‘A novel wavelet sequence based on deep bidirectional LSTM network model for ECG signal classification,’’ Comput. Biol. Med., vol. 96, pp. 189–202, May 2018.
[94] A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley, ‘‘PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals,’’ Circulation, vol. 101, no. 23, pp. e215–e220, Jun. 2000.
[95] G. Fiscon, E. Weitschek, A. Cialini, G. Felici, P. Bertolazzi, S. De Salvo, A. Bramanti, P. Bramanti, and M. C. De Cola, ‘‘Combining EEG signal processing with supervised methods for Alzheimer’s patients classification,’’ BMC Med. Inform. Decis. Making, vol. 18, no. 1, pp. 1–10, Dec. 2018.
[96] A. Khan, J. S. Roo, T. Kraus, and J. Steimle, ‘‘Soft inkjet circuits: Rapid multi-material fabrication of soft circuits using a commodity inkjet printer,’’ in Proc. 32nd Annu. ACM Symp. User Interface Softw. Technol. New York, NY, USA: Association for Computing Machinery, Oct. 2019, pp. 341–354.
[97] A. S. Nittala, A. Khan, K. Kruttwig, T. Kraus, and J. Steimle, ‘‘PhysioSkin: Rapid fabrication of skin-conformal physiological interfaces,’’ in Proc. CHI Conf. Human Factors Comput. Syst., Apr. 2020, pp. 1–10.
[98] A. S. Nittala, A. Karrenbauer, A. Khan, T. Kraus, and J. Steimle, ‘‘Computational design and optimization of electrophysiological sensors,’’ Nature Commun., vol. 12, no. 1, pp. 1–14, Nov. 2021.
[99] A. Khan, S. Ali, S. Khan, and A. Bermak, ‘‘Ultra-thin and skin-conformable strain sensors fabricated by inkjet printing for soft wearable electronics,’’ in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), May 2022, pp. 1759–1762.
[100] A. Bender and I. Cortés-Ciriano, ‘‘Artificial intelligence in drug discovery: What is realistic, what are illusions? Part 1: Ways to make an impact, and why we are not there yet,’’ Drug Discovery Today, vol. 26, no. 2, pp. 511–524, Feb. 2021.
[101] A. Vourvopoulos, E. Niforatos, and M. Giannakos, ‘‘EEGlass: An EEG-eyeware prototype for ubiquitous brain-computer interaction,’’ in Proc. Adjunct ACM Int. Joint Conf. Pervas. Ubiquitous Comput. Proc. ACM Int. Symp. Wearable Comput. New York, NY, USA: Association for Computing Machinery, Sep. 2019, pp. 647–652.
[102] G. Bernal, T. Yang, A. Jain, and P. Maes, ‘‘PhysioHMD: A conformable, modular toolkit for collecting physiological data from head-mounted displays,’’ in Proc. ACM Int. Symp. Wearable Comput. New York, NY, USA: Association for Computing Machinery, Oct. 2018, pp. 160–167.
[103] A. S. Nittala and J. Steimle, ‘‘Next steps in epidermal computing: Opportunities and challenges for soft on-skin devices,’’ in Proc. CHI Conf. Human Factors Comput. Syst., Apr. 2022, pp. 1–22.
[104] Y. Wang, L. Yin, Y. Bai, S. Liu, L. Wang, Y. Zhou, C. Hou, Z. Yang, H. Wu, J. Ma, Y. Shen, P. Deng, S. Zhang, T. Duan, Z. Li, J. Ren, L. Xiao, Z. Yin, N. Lu, and Y. Huang, ‘‘Electrically compensated, tattoo-like electrodes for epidermal electrophysiology at scale,’’ Sci. Adv., vol. 6, no. 43, Oct. 2020, Art. no. eabd0996.
[105] A. J. Bandodkar, P. Gutruf, J. Choi, K. Lee, Y. Sekine, J. T. Reeder, W. J. Jeang, A. J. Aranyosi, S. P. Lee, and J. B. Model, ‘‘Battery-free, skin-interfaced microfluidic/electronic systems for simultaneous electrochemical, colorimetric, and volumetric analysis of sweat,’’ Sci. Adv., vol. 5, no. 1, Jan. 2019, Art. no. eaav3294.
[106] J. Karolus, F. Kiss, C. Eckerth, N. Viot, F. Bachmann, A. Schmidt, and P. W. Wozniak, ‘‘Embody: A data-centric toolkit for EMG-based interface prototyping and experimentation,’’ Proc. ACM Human-Computer Interact., vol. 5, 2021, pp. 1–29.
[107] T. S. Saponas, D. S. Tan, D. Morris, and R. Balakrishnan, ‘‘Demonstrating the feasibility of using forearm electromyography for muscle-computer interfaces,’’ in Proc. SIGCHI Conf. Human Factors Comput. Syst., Apr. 2008, pp. 515–524.
[108] C. T. C. Arsene, R. Hankins, and H. Yin, ‘‘Deep learning models for denoising ECG signals,’’ in Proc. 27th Eur. Signal Process. Conf. (EUSIPCO), Sep. 2019, pp. 1–5.
[109] Ö. Yıldırım, P. Plawiak, R.-S. Tan, and U. R. Acharya, ‘‘Arrhythmia detection using a deep convolutional neural network with long duration ECG signals,’’ Comput. Biol. Med., vol. 102, pp. 411–420, Nov. 2018.
[110] U. R. Acharya, H. Fujita, S. L. Oh, Y. Hagiwara, J. H. Tan, and M. Adam, ‘‘Application of deep convolutional neural network for automated detection of myocardial infarction using ECG signals,’’ Inf. Sci., vols. 415–416, pp. 190–198, Nov. 2017.
[111] P. Kumari, L. Mathew, and P. Syal, ‘‘Increasing trend of wearables and multimodal interface for human activity monitoring: A review,’’ Biosensors Bioelectron., vol. 90, pp. 298–307, Apr. 2017.
[112] S. M. Park, B. Jeong, D. Y. Oh, C.-H. Choi, H. Y. Jung, J.-Y. Lee, D. Lee, and J.-S. Choi, ‘‘Identification of major psychiatric disorders from resting-state electroencephalography using a machine learning approach,’’ Frontiers Psychiatry, vol. 12, p. 1398, Aug. 2021.
[113] J. Claassen, K. Doyle, A. Matory, C. Couch, K. M. Burger, A. Velazquez, J. U. Okonkwo, J.-R. King, S. Park, S. Agarwal, D. Roh, M. Megjhani, A. Eliseyev, E. S. Connolly, and B. Rohaut, ‘‘Detection of brain activation in unresponsive patients with acute brain injury,’’ New England J. Med., vol. 380, no. 26, pp. 2497–2505, Jun. 2019.
[114] A. Fawzi, M. Balog, A. Huang, T. Hubert, B. Romera-Paredes, M. Barekatain, A. Novikov, F. J. R. Ruiz, J. Schrittwieser, G. Swirszcz, D. Silver, D. Hassabis, and P. Kohli, ‘‘Discovering faster matrix multiplication algorithms with reinforcement learning,’’ Nature, vol. 610, no. 7930, pp. 47–53, Oct. 2022.
[115] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, and G. Irving, ‘‘TensorFlow: A system for large-scale machine learning,’’ in Proc. 12th USENIX Symp. Operating Syst. Design Implement. (OSDI), 2016, pp. 265–283.
[116] D. Choi, A. Passos, C. J. Shallue, and G. E. Dahl, ‘‘Faster neural network training with data echoing,’’ 2019, arXiv:1907.05550.
[117] L. Floridi and M. Chiriatti, ‘‘GPT-3: Its nature, scope, limits, and consequences,’’ Minds Mach., vol. 30, no. 4, pp. 681–694, Dec. 2020.
[118] G. Hinton, O. Vinyals, and J. Dean, ‘‘Distilling the knowledge in a neural network,’’ 2015, arXiv:1503.02531.
[119] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li, ‘‘Learning structured sparsity in deep neural networks,’’ in Proc. Adv. Neural Inf. Process. Syst., vol. 29, 2016, pp. 1–9.
[120] M. Capra, B. Bussolino, A. Marchisio, M. Shafique, G. Masera, and M. Martina, ‘‘An updated survey of efficient hardware architectures for accelerating deep convolutional neural networks,’’ Future Internet, vol. 12, no. 7, p. 113, Jul. 2020.
[121] O. Ali, H. Ali, S. A. A. Shah, and A. Shahzad, ‘‘Implementation of a modified U-Net for medical image segmentation on edge devices,’’ IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 69, no. 11, pp. 4593–4597, Nov. 2022.
[122] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konečný, S. Mazzocchi, H. B. McMahan, T. Van Overveldt, D. Petrou, D. Ramage, and J. Roselander, ‘‘Towards federated learning at scale: System design,’’ 2019, arXiv:1902.01046.
[123] I. Dayan, H. R. Roth, A. Zhong, A. Harouni, A. Gentili, A. Z. Abidin, A. Liu, A. B. Costa, B. J. Wood, and C.-S. Tsai, ‘‘Federated learning for predicting clinical outcomes in patients with COVID-19,’’ Nature Med., vol. 27, no. 10, pp. 1735–1743, 2021.
[124] Z. Li, V. Sharma, and S. P. Mohanty, ‘‘Preserving data privacy via federated learning: Challenges and solutions,’’ IEEE Consum. Electron. Mag., vol. 9, no. 3, pp. 8–16, May 2020.
[125] H. Ali, T. Alam, M. Househ, and Z. Shah, ‘‘Federated learning and Internet of Medical things—Opportunities and challenges,’’ in Advances in Informatics, Management and Technology in Healthcare. Amsterdam, The Netherlands: IOS Press, 2022, pp. 201–204.
[126] A. K. Pandey, A. I. Khan, Y. B. Abushark, Md. M. Alam, A. Agrawal, R. Kumar, and R. A. Khan, ‘‘Key issues in healthcare data integrity: Analysis and recommendations,’’ IEEE Access, vol. 8, pp. 40612–40628, 2020.
[127] T. Pereira, J. Morgado, F. Silva, M. M. Pelter, V. R. Dias, R. Barros, C. Freitas, E. Negrão, B. Flor de Lima, and M. Correia da Silva, ‘‘Sharing biomedical data: Strengthening AI development in healthcare,’’ Healthcare, vol. 9, no. 7, p. 827, 2021.
[128] A. Callahan and N. H. Shah, ‘‘Machine learning in healthcare,’’ in Key Advances in Clinical Informatics. Amsterdam, The Netherlands: Elsevier, 2017, pp. 279–291.
[129] R. Li, B. Hu, F. Liu, W. Liu, F. Cunningham, D. D. Mcmanus, and H. Yu, ‘‘Detection of bleeding events in electronic health record notes using convolutional neural network models enhanced with recurrent neural network autoencoders: Deep learning approach,’’ JMIR Med. Informat., vol. 7, no. 1, Feb. 2019, Art. no. e10788.
[130] Y. Ma, J. Liu, Y. Liu, H. Fu, Y. Hu, J. Cheng, H. Qi, Y. Wu, J. Zhang, and Y. Zhao, ‘‘Structure and illumination constrained GAN for medical image enhancement,’’ IEEE Trans. Med. Imag., vol. 40, no. 12, pp. 3955–3967, Dec. 2021.
[131] K. Wang, Y. Zhao, Q. Xiong, M. Fan, G. Sun, L. Ma, and T. Liu, ‘‘Research on healthy anomaly detection model based on deep learning from multiple time-series physiological signals,’’ Sci. Program., vol. 2016, pp. 1–9, Sep. 2016.
[132] H. Kupwade Patil and R. Seshadri, ‘‘Big data security and privacy issues in healthcare,’’ in Proc. IEEE Int. Congr. Big Data, Jun. 2014, pp. 762–765.
[133] B. M. Marlin, D. C. Kale, R. G. Khemani, and R. C. Wetzel, ‘‘Unsupervised pattern discovery in electronic health care data using probabilistic clustering models,’’ in Proc. 2nd ACM SIGHIT Int. Health Information. Symp., Jan. 2012, pp. 389–398.
[134] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, ‘‘How transferable are features in deep neural networks?’’ in Proc. Adv. Neural Inf. Process. Syst., vol. 27, 2014, pp. 1–9.
[135] B. Chu, V. Madhavan, O. Beijbom, J. Hoffman, and T. Darrell, ‘‘Best practices for fine-tuning visual classifiers to new domains,’’ in Proc. Eur. Conf. Comput. Vis. Cham, Switzerland: Springer, 2016, pp. 435–442.
[136] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P.-A. Manzagol, and L. Bottou, ‘‘Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,’’ J. Mach. Learn. Res., vol. 11, no. 12, pp. 1–38, 2010.
[137] M. Chen, Z. Xu, K. Weinberger, and F. Sha, ‘‘Marginalized denoising autoencoders for domain adaptation,’’ 2012, arXiv:1206.4683.
[138] F. Zhuang, X. Cheng, P. Luo, S. J. Pan, and Q. He, ‘‘Supervised representation learning: Transfer learning with deep autoencoders,’’ in Proc. 24th Int. Joint Conf. Artif. Intell., 2015, pp. 1–7.
[139] Y. Sun, G. Yang, D. Ding, G. Cheng, J. Xu, and X. Li, ‘‘A GAN-based domain adaptation method for glaucoma diagnosis,’’ in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jul. 2020, pp. 1–8.
[140] M.-Y. Liu and O. Tuzel, ‘‘Coupled generative adversarial networks,’’ in Proc. Adv. Neural Inf. Process. Syst., vol. 29, 2016, pp. 1–9.
[141] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, ‘‘Learning from simulated and unsupervised images through adversarial training,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 2242–2251.
[142] S. G. Langer, ‘‘Challenges for data storage in medical imaging research,’’ J. Digit. Imag., vol. 24, no. 2, pp. 203–207, Apr. 2011.
[143] J. C. Mazura, K. Juluru, J. J. Chen, T. A. Morgan, M. John, and E. L. Siegel, ‘‘Facial recognition software success rates for the identification of 3D surface reconstructed facial images: Implications for patient privacy and security,’’ J. Digit. Imag., vol. 25, no. 3, pp. 347–351, Jun. 2012.
[144] V. I. Iglovikov, A. Rakhlin, A. A. Kalinin, and A. A. Shvets, ‘‘Paediatric bone age assessment using deep convolutional neural networks,’’ in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Cham, Switzerland: Springer, 2018, pp. 300–308.
[145] S. J. Pan and Q. Yang, ‘‘A survey on transfer learning,’’ IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, pp. 1345–1359, Oct. 2009.
[146] B. Rawat, A. S. Bist, D. Supriyanti, V. Elmanda, and S. N. Sari, ‘‘AI and nanotechnology for healthcare: A survey,’’ APTISI Trans. Manage., vol. 7, no. 1, pp. 86–91, Jan. 2022.
[147] R. Shwartz-Ziv and N. Tishby, ‘‘Opening the black box of deep neural networks via information,’’ 2017, arXiv:1703.00810.
[148] N. Tishby and N. Zaslavsky, ‘‘Deep learning and the information bottleneck principle,’’ in Proc. IEEE Inf. Theory Workshop (ITW), Apr. 2015, pp. 1–5.
[149] M. A. Ricci Lara, R. Echeveste, and E. Ferrante, ‘‘Addressing fairness in artificial intelligence for medical imaging,’’ Nature Commun., vol. 13, no. 1, pp. 1–6, Aug. 2022.
[150] I. Y. Chen, E. Pierson, S. Rose, S. Joshi, K. Ferryman, and M. Ghassemi, ‘‘Ethical machine learning in healthcare,’’ Annu. Rev. Biomed. Data Sci., vol. 4, pp. 123–144, Jul. 2020.
[151] R. Dale, ‘‘GPT-3: What’s it good for?’’ Natural Lang. Eng., vol. 27, no. 1, pp. 113–118, 2021.
[152] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, and S. Gehrmann, ‘‘PaLM: Scaling language modeling with pathways,’’ 2022, arXiv:2204.02311.
[153] R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H.-T. Cheng, A. Jin, T. Bos, L. Baker, and Y. Du, ‘‘LaMDA: Language models for dialog applications,’’ 2022, arXiv:2201.08239.
[154] E. A. M. van Dis, J. Bollen, W. Zuidema, R. van Rooij, and C. L. Bockting, ‘‘ChatGPT: Five priorities for research,’’ Nature, vol. 614, no. 7947, pp. 224–226, Feb. 2023.
[155] S. Wang, Z. Zhao, X. Ouyang, Q. Wang, and D. Shen, ‘‘ChatCAD: Interactive computer-aided diagnosis on medical image using large language models,’’ 2023, arXiv:2302.07257.
[156] S. Biswas, ‘‘ChatGPT and the future of medical writing,’’ Radiology, vol. 307, no. 2, Apr. 2023.

Keywords:

Artificial Intelligence, Explainable AI, Medical Imaging, Biosensors, Federated Learning, Domain Adaptation, Big Data Analytics, Large Language Models.