
Artificial Intelligence Poses Cybersecurity Risks in Healthcare

Written by Morgan Hague

The healthcare industry is undergoing a profound transformation driven by the integration of artificial intelligence (AI) into various facets of healthcare delivery, diagnosis, and treatment. AI technology has the potential to revolutionize healthcare, improve patient outcomes, reduce costs, and enhance overall efficiency. However, with technological advancements come increased cybersecurity risks. In this blog, we will explore the cybersecurity challenges that accompany AI in healthcare and discuss strategies to mitigate these risks.

What exactly is AI?

According to NIST, “An AI system is an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”[1]

According to Wharton, AI is “a branch of computer science dealing with the simulation of intelligent behavior in computers, or the capability of a machine to imitate intelligent human behavior.”[2]

Use Cases for AI in Healthcare

According to the 2023 AI Index Report from Stanford University[3], the industry with the largest AI investments at present is healthcare ($6.1 billion); followed by data management, processing, and cloud ($5.9 billion); and FinTech ($5.5 billion).

  • Disease Diagnosis and Early Detection: AI-powered diagnostic tools, using image recognition algorithms, analyze medical images like X-rays, MRIs, and CT scans with remarkable accuracy. AI diagnostic tools enable early detection of diseases, improving treatment outcomes.
  • Personalized Treatment Plans: AI can analyze a patient's medical history, genetics, and real-time data to create a personalized treatment plan leading to more effective and efficient healthcare interventions.
  • Personalized Medicine: AI can analyze a patient's genetic makeup, lifestyle, and health history to recommend tailored treatment options and medications that are more likely to be effective with fewer side effects.
  • Predictive Analytics: AI can analyze patient data, including electronic health records (EHRs), genetic data, and lifestyle information, to predict disease risk and identify potential health issues before they become critical. AI predictive analytics enable proactive patient care and personalized treatment plans.
  • Virtual Health Assistants: AI-powered chatbots and virtual health assistants provide basic medical advice, answer patient queries, schedule appointments, and remotely monitor chronic conditions. Virtual AI assistants improve patient engagement and adherence to treatment plans and, unlike their human counterparts, work 24/7/365 without breaks.
  • Drug Discovery and Development: AI accelerates drug discovery by analyzing vast amounts of molecular data and simulating drug interactions. AI significantly reduces the time and cost required to develop new medications and improves the success rate of drug development.
  • Administrative Efficiency: AI can streamline administrative tasks, such as billing and coding, appointment scheduling, and resource allocation, reducing administrative burdens on healthcare professionals and improving overall workflow efficiency.
  • Medical Imaging Analysis: AI can be used to analyze medical images such as X-rays, CT scans, and MRIs, assisting in the detection of diseases like cancer, tumors, and other abnormalities. AI algorithms can enhance the accuracy and speed of diagnosis, helping clinicians make better-informed decisions.
  • Remote Patient Monitoring: AI-enabled wearable devices and sensors monitor patients remotely, tracking vital signs, medication adherence, and disease progression. Continuous monitoring allows for early detection of changes in health status and timely interventions.
  • Natural Language Processing in Healthcare: NLP technologies enable AI systems to interpret and analyze human language. In healthcare, NLP is utilized for tasks like medical transcription, extracting information from clinical notes, and converting unstructured data into structured information for analysis.
  • Robot-Assisted Surgery: AI-powered robots assist surgeons in performing complex procedures with precision, minimizing the risk of human error, and improving patient outcomes.
  • Clinical Decision Support: AI systems can analyze patient data and medical literature to provide clinicians with evidence-based recommendations for diagnosis and treatment plans, leading to more accurate and timely decisions.

AI Poses Cybersecurity Risks in Healthcare

While AI technology improves healthcare outcomes, it also brings increased cybersecurity risks that must be addressed, including:

  • Data breaches and patient privacy concerns
  • Malicious attacks on AI models
  • Ransomware attacks
  • Supply chain vulnerabilities
  • Lack of AI cybersecurity expertise

The Future of Privacy Forum groups AI cybersecurity risks into two broad categories: behavioral harms and informational harms.

The Wharton School[4] breaks these two categories down even further.

Let’s take a deeper dive into AI cybersecurity risks.

Data Breaches and Patient Privacy Concerns

Protecting patient data from breaches and ensuring compliance with privacy regulations is a significant concern when implementing AI in healthcare.

Healthcare organizations store vast amounts of sensitive patient data, and AI systems require access to this data for analysis, making AI systems prime targets for cyberattacks.

The International Association of Privacy Professionals (IAPP) estimates that more than half of AI governance approaches are being built on top of existing privacy programs. Additionally, IAPP estimates that only 20% of self-identified ‘mature’ organizations have begun rolling out formalized AI practices and guidelines.

From a compliance perspective, use cases for AI are like any ‘processing’ use case of personal information (PHI, PII) and are subject to the same guidelines as any other system within the context of HIPAA and OCR enforcement, including:

  • Use limitation and purpose specification
  • Fairness (e.g., handling data in a way consistent with what users expect)
  • Data minimization and storage limitations
  • Transparency
  • Privacy rights
  • Accuracy
  • Consent

Malicious Attacks on AI Models

AI models themselves can be targeted. Adversarial attacks can manipulate AI algorithms to provide incorrect diagnoses or treatment recommendations, potentially endangering patients' lives.

Malicious AI attacks include:

  • Data Poisoning Attacks
  • Input Manipulation Attacks
  • Membership Inference Attacks
  • Model Inversion Attacks

Data Poisoning Attacks

Data poisoning attacks can change training data (or labels of the data) and manipulate the behavior of the AI model. This can either sabotage the model or cause the model to make decisions in favor of the attacker. This attack can work like a Trojan horse so that the model appears to work in a normal way, but for specific manipulated inputs an incorrect decision is forced. (OWASP)
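To make this concrete, here is a minimal, hedged sketch of a label-flipping poisoning attack against a scikit-learn toy pipeline; the dataset, model choice, and 20% flip rate are illustrative assumptions, not a real attack scenario.

```python
# A minimal sketch of label-flipping data poisoning, assuming a scikit-learn
# toy setup; real attacks tamper with production training pipelines.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model for comparison.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of 20% of the training records.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip = rng.choice(len(poisoned_y), size=int(0.2 * len(poisoned_y)), replace=False)
poisoned_y[flip] = 1 - poisoned_y[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Random flips typically cause only a modest accuracy drop; targeted flips concentrated on specific inputs, as in the Trojan-horse scenario above, are far harder to detect and more damaging.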

Input Manipulation Attacks

Input manipulation attacks fool AI models with deceptive input data. The attack can be carried out by experimenting with the model input (black box), by introducing maliciously designed input based on analysis of the model parameters (white box), or by basing the input on data poisoning that has taken place. (OWASP)
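As a hedged illustration of the white-box case, the sketch below computes a fast-gradient-style perturbation against a linear model whose weights the attacker knows; the toy model and perturbation size are illustrative assumptions, and the prediction flip is not guaranteed for every sample.

```python
# A minimal white-box input manipulation sketch: a fast-gradient-style
# perturbation against a linear model with known weights.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

x, label = X[0], y[0]
# For a logistic model, the gradient of the loss w.r.t. the input is (p - y) * w.
p = model.predict_proba(x.reshape(1, -1))[0, 1]
grad = (p - label) * model.coef_[0]

# Nudge the input in the direction that increases the loss.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad)

print("original prediction:   ", model.predict(x.reshape(1, -1))[0])
print("adversarial prediction:", model.predict(x_adv.reshape(1, -1))[0])
```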

Membership Inference Attacks

Given a data record (e.g., a person) and black-box access to an AI model, a membership inference attack determines whether the record was in the model’s training dataset. This is in essence a non-repudiation problem: the individual cannot deny being a member of a sensitive group (e.g., cancer patients or members of an organization tied to a specific sexual orientation). (OWASP)
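A hedged sketch of the simplest variant, confidence-based membership inference: overfit models are systematically more confident on records they were trained on, so an attacker with only black-box probability outputs can threshold on confidence. The dataset, model, and threshold here are illustrative assumptions.

```python
# A minimal confidence-thresholding membership inference sketch: overfit
# models are more confident on training records than on unseen ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=2)

# An intentionally overfit model makes the leakage easy to see.
model = RandomForestClassifier(n_estimators=50, random_state=2).fit(X_in, y_in)

def top_confidence(clf, data):
    """Highest predicted class probability per record (black-box output)."""
    return clf.predict_proba(data).max(axis=1)

threshold = 0.9  # attacker-chosen cutoff
in_rate = (top_confidence(model, X_in) > threshold).mean()
out_rate = (top_confidence(model, X_out) > threshold).mean()
print(f"flagged as members: training={in_rate:.2f}, non-training={out_rate:.2f}")
```

The gap between the two rates is the privacy leak: if a model behaves measurably differently on its training records, an attacker can infer who was in a sensitive dataset.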

Model Inversion Attacks

By interacting with or by analyzing an AI model, a model inversion attack can estimate the training data with varying degrees of accuracy. This is especially a problem if the training data contains sensitive or copyrighted information.
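As a hedged sketch of the idea, the code below performs gradient ascent on a random input until a linear model assigns it high probability for the target class, approximating a “typical” record of that class; the toy model, step size, and iteration count are illustrative assumptions, and recovery quality varies, as noted above.

```python
# A minimal model inversion sketch: gradient ascent on the input to find a
# point the model considers a highly typical member of the target class.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=3)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

# Start from noise and ascend log P(class=1 | x); the gradient is (1 - p) * w.
rng = np.random.default_rng(3)
x = rng.normal(size=20)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    x += 0.1 * (1.0 - p) * w

# Compare the inverted input with the true class-1 average (unknown to the attacker).
class_mean = X[y == 1].mean(axis=0)
print("correlation with class average:", np.corrcoef(x, class_mean)[0, 1])
```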

Ransomware and Malware Attacks

Ransomware attacks on healthcare institutions have become increasingly common. Attackers encrypt patient data, demanding a ransom for its release. The integration of AI systems makes healthcare organizations even more appealing targets.

In a recent survey conducted by CyberArk, security professionals listed AI-supported malware as a top concern, because malicious software augmented with machine learning (ML) provides a more capable means of traversing domains than legacy malware.

HYAS, a research firm, developed and tested an AI-generated malware called ‘BlackMamba’ which successfully bypassed industry-leading endpoint detection and response tools in test environments. While BlackMamba was only a proof of concept and is not in the wild, its existence signals that AI will change the threat landscape.

Related AI-driven malware and misuse risks include:

  • Generative AI Augmentation Risks
  • Data Leakage Risks

Generative AI Augmentation Risks

‘Counterfeit reality’ describes inauthentic materials, recordings, or even virtual interactions that heavily mimic what a user would expect to see. It can be heavily supported using existing or in-development generative AI capabilities, and it is an emerging threat that fundamentally changes the enterprise attack surface: highly sophisticated AI enables convincing spoofing of a company brand, personal image, video, or voice at scale. (Gartner)

Data Leakage Risks

Free generative AI tools (e.g., ChatGPT) collect the information users submit (prompts, personnel contacts, and personal information), and that data could assist attackers with reconnaissance if those tools are breached, so only the minimum necessary information should ever be shared with them. Failure to define acceptable use around information sharing and personnel involvement with generative AI tools represents a significant leakage risk, particularly for intellectual property and other secrets.
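One hedged, partial safeguard is to scrub obvious identifiers from prompts before they leave the organization. The sketch below uses simple regular expressions, which are illustrative and far from exhaustive, so this supplements rather than replaces an acceptable-use policy.

```python
# A minimal sketch of scrubbing obvious identifiers from text before it is
# sent to an external generative AI tool; the patterns are illustrative.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # US SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email address
    (re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\bMRN[ :#]*\d+\b", re.IGNORECASE), "[MRN]"),    # record number
]

def scrub(prompt):
    """Replace obvious identifiers before the prompt leaves the organization."""
    for pattern, token in REDACTIONS:
        prompt = pattern.sub(token, prompt)
    return prompt

print(scrub("Patient jane.doe@example.com, MRN 44821, called 404-555-0123."))
```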

Supply Chain Vulnerabilities

Many healthcare organizations rely on third-party vendors for AI solutions. These vendors may introduce vulnerabilities into the healthcare system, and any compromise in the supply chain can have far-reaching consequences.

Lack of AI Cybersecurity Expertise

Healthcare professionals may lack the expertise to effectively secure AI systems. This knowledge gap can result in misconfigured systems and inadequate protection against AI cyber threats.

Mitigating AI Cybersecurity Risks in Healthcare

AI has the potential to revolutionize healthcare, but adoption comes with significant cybersecurity risks. Healthcare organizations must prioritize cybersecurity as an integral part of their AI integration strategy. By implementing robust security measures, raising staff awareness, and collaborating with trustworthy vendors, the healthcare industry can harness the benefits of AI while safeguarding patient data and privacy.

In the age of AI, healthcare cybersecurity is not an option; it's a necessity. Key mitigation strategies include:

  • Data Encryption and Access Control: Robust encryption protocols must protect sensitive data and restrict access to authorized personnel only. Implement strong role-based access control mechanisms to prevent unauthorized access to AI systems and patient records (a minimal sketch appears after this list).
  • Regular Security Audits and Updates: Conduct regular security audits of AI systems and healthcare infrastructure. Employ patch management to ensure AI software and hardware components are up to date with the latest security patches and updates.
  • Employee Training and Awareness: Invest in AI cybersecurity training for healthcare staff, ensuring they are aware of AI risks and best AI cybersecurity practices. Establish a culture of AI cybersecurity vigilance within the organization.
  • Multi-Layered Defense: Employ a multi-layered AI cybersecurity strategy that includes firewalls, intrusion detection systems, and advanced threat detection. Use an approach that can detect and mitigate AI threats at various levels.
  • Third-Party Vendor Assessment: Thoroughly assess the cybersecurity practices of third-party vendors. Include vendors that provide AI systems as well as vendors that use AI in their business to ensure that vendors adhere to strict security standards and protocols.
  • Disaster Recovery and Incident Response Plans: Include AI in comprehensive disaster recovery and incident response plans to minimize downtime and data loss in the event of an AI cyberattack. Regularly test disaster recovery and incident response plans to ensure their effectiveness in responding to AI cyberattacks.
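To make the first item concrete, here is a minimal sketch combining encryption at rest with a role-based access check, using the open-source `cryptography` package. The role names and record format are illustrative assumptions; a real deployment would use a key management service or HSM rather than an in-process key, and roles would come from the identity provider.

```python
# A minimal sketch of encryption at rest plus role-based access control for
# patient records; roles and record shape are illustrative assumptions.
from cryptography.fernet import Fernet

AUTHORIZED_ROLES = {"clinician", "ai_pipeline"}  # hypothetical role names

key = Fernet.generate_key()   # in practice, managed by a KMS or HSM
cipher = Fernet(key)

def store_record(record):
    """Encrypt a patient record before it touches disk or an AI pipeline."""
    return cipher.encrypt(record.encode())

def read_record(blob, role):
    """Decrypt only for authorized roles; deny everyone else."""
    if role not in AUTHORIZED_ROLES:
        raise PermissionError(f"role {role!r} may not access patient data")
    return cipher.decrypt(blob).decode()

blob = store_record("MRN 12345: metformin 500 mg BID")
print(read_record(blob, role="clinician"))   # permitted
# read_record(blob, role="billing")          # raises PermissionError
```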

Maturing Controls and AI Sophistication

Digital risk protection services (DRPS) are emerging in response to increasingly sophisticated AI attacks. Due to multiple channels for exploitation and an expanding attack surface, organizations may not have the staff, skills, or support to constantly monitor the Internet and protect external assets from potential threats. DRPS utilizes deep machine learning, computer vision, and continuous reputation monitoring to track and remediate false information. (Gartner)

For organizations utilizing AI models, ensure access rights and permissions are strictly monitored, in a manner at least as stringent as database administration or production deployments.

Leverage threat-hunting services and threat-intelligence knowledge bases that utilize dark web and social media scanning for reputation monitoring (e.g., DRPS and external attack surface management [EASM]), as well as marketplace scanning for rogue applications and other emerging AI/ML risks. (Gartner)

Improve Training and Expand User Awareness Campaigns

Organizations must expand end-user awareness of deepfake technology by augmenting training and security awareness programs on exploits that leverage this technology:

  • Educate employees on the common patterns and techniques that adversaries leverage while exploiting this vector.
  • Communicate to users that highly sensitive messages are potential red flags of deepfake campaigns.

Credentialing and Personnel Controls

As with any high-impact field or critical assets, it’s important to ensure that the personnel working with AI systems are qualified and properly vetted.

Develop or Enhance Data Integrity Controls and Processes

An organization needs to leverage available technology (such as cryptographically generated digital signatures) to validate and ensure the authenticity and legitimacy of enterprise-created content (e.g., videos, photos, and audio for communications or marketing).

Implement controls (such as digital signatures) at a corporate level to validate content legitimacy. Comprehensive signature validation or similar can go a long way in ensuring the validity of any communications no matter the format. (Gartner)
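As a hedged illustration of such a control, the sketch below signs and verifies content with Ed25519 via the open-source `cryptography` package. Key handling is deliberately simplified; in practice the private key would live in an HSM or managed signing service.

```python
# A minimal sketch of signing enterprise content with Ed25519 so recipients
# can verify a video, image, or statement really came from the organization.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept in an HSM in practice
public_key = private_key.public_key()        # published for verification

content = b"Official statement from Example Health, 2024-01-15"
signature = private_key.sign(content)

# A recipient (or the brand-protection team) validates legitimacy.
try:
    public_key.verify(signature, content)
    print("content is authentic")
except InvalidSignature:
    print("content failed validation; treat as counterfeit")
```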

For AI-driven or Developing Firms

For those organizations currently on the ‘cutting edge’ of AI adoption, the control standard is quite a bit higher than for organizations with ad hoc or informal use cases (e.g., ChatGPT use by individual employees). Beyond foundational controls, ensure you have a dedicated control program around a few key areas (OWASP):

  • Application security for the AI application and infrastructure, including hiding model parameters to protect against model attacks.
  • Protections for new data engineering and model engineering development pipelines, applying standard security controls.
  • Data quality assurance and integrity validations.
  • The biggest concern here, and a novel control: data science model attack prevention, the specific realm of data science used to prevent adversarial ML attacks.

Beyond security for the model itself, there are also key controls that govern the behavior of the AI model (a sketch of the last item follows this list):

  • Minimizing privileges of AI models.
  • Oversight of AI model behavior (e.g., guardrails, human oversight).
  • Monitoring and incident detection to detect and respond to abuse.
  • Limiting bulk access to the model.
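As a hedged example of the last control, the sketch below rate-limits per-caller queries to a model endpoint, raising the cost of bulk extraction or inversion attempts; the window size, budget, and function names are illustrative assumptions.

```python
# A minimal sketch of limiting bulk model access with a sliding-window,
# per-caller rate limiter; thresholds are illustrative and tuned per deployment.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 100

_history = defaultdict(deque)  # caller_id -> timestamps of recent queries

def allow_query(caller_id, now=None):
    """Return True if the caller is still under its per-window query budget."""
    now = time.monotonic() if now is None else now
    window = _history[caller_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()       # drop queries that fell out of the window
    if len(window) >= MAX_QUERIES_PER_WINDOW:
        return False           # deny and flag for monitoring/incident response
    window.append(now)
    return True

# Example: rapid-fire queries from one caller are cut off at the budget.
results = [allow_query("caller-1", now=i * 0.01) for i in range(101)]
print("allowed:", sum(results), "denied:", results.count(False))
```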

Regulatory Landscape

While AI offers numerous benefits to healthcare, establishing standardized regulations and certifications for AI-powered healthcare tools is essential to maintain quality and safety.

It is important to note that certain regulations mandate specific limitations or explicit requirements regarding AI, and regulations around the use of AI and its associated data are developing rapidly, with similar rules and guidelines in development globally. Key frameworks and resources include:

  • ENISA's Multilayer Framework: Consists of three layers (cybersecurity foundations, AI-specific cybersecurity, and sector-specific cybersecurity for AI) and aims to provide a step-by-step approach to following good cybersecurity practices so organizations can build trustworthiness into their AI activities.
  • Google's Secure AI Framework: A conceptual framework to help collaboratively secure AI technology.
  • MITRE ATLAS Framework for AI: A knowledge base of adversary tactics, techniques, and case studies for machine learning (ML) systems, modeled after the MITRE ATT&CK framework and based on real-world observations, demonstrations from ML red teams and security groups, and the state of the possible from academic research.
  • NIST AI Risk Management Framework 1.0: Spurred by the National Artificial Intelligence Initiative Act of 2020, this framework is meant to act as a resource for organizations designing, developing, or using AI systems to help them manage risks and promote trustworthy development.

Conclusion

AI is rapidly transforming the healthcare industry, offering unprecedented opportunities to improve patient care, increase efficiency, and reduce costs. As AI technologies continue to evolve, healthcare professionals, policymakers, and industry stakeholders must work together to address challenges and ensure that AI is deployed responsibly, ethically, and for the benefit of all patients. With robust security measures, well-trained staff, and trustworthy vendor partnerships, the healthcare industry can capture the benefits of AI while safeguarding patient data and privacy.

Ready to discover how Meditology Services can transform your cybersecurity approach?

Speak to an expert to learn more.  

 

[1] https://www.nist.gov/itl/ai-risk-management-framework

[2] https://aiab.wharton.upenn.edu/research/artificial-intelligence-risk-governance/

[3] https://aiindex.stanford.edu/report/

[4] https://aiab.wharton.upenn.edu/research/artificial-intelligence-risk-governance/
