
Rise of Responsible AI

by Morgan Hague 

At the advent of what may be the most significant technical leap since the introduction of the internet, organizations and individuals alike are struggling to reconcile the remarkable benefits of Artificial Intelligence (AI) with its capacity to cause equally significant harm. To guide adoption and proper use, healthcare organizations and the industry at large have begun to leverage a prevailing discipline: Responsible AI.

What is Responsible AI?  

With a name true to its cause, Responsible AI is a combination of practice, ideology, and implementation geared toward preventing or limiting both malicious use and unintended negative impacts of AI. In effect, Responsible AI represents the effort to design, develop, and deploy AI with positive intentions that support employees and organizations alike, ensuring fairness for customers and society at large (Responsible Artificial Intelligence Institute).

While the core tenets of Responsible AI differ slightly across organizations and verticals, a number of key principles have taken a preeminent role: Fairness, Safety, Privacy, Interpretability, Accountability, and Transparency.

Fairness 

Fairness is perhaps one of the more difficult attributes to validate; reaching consensus on what constitutes ‘fairness’ can itself be challenging. In spirit, fairness within AI systems entails controls that identify and remove or correct problematic biases in models. If an AI toolkit recommends medical treatments, are those recommendations promoting appropriate treatment across demographic groups, or are biases interfering with outcomes?

Because bias often originates in historical and legacy data, solving for fair intelligence will be a perpetual challenge for data scientists and technologists alike.
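As a concrete illustration, one common starting point is a demographic parity check, which compares how often a model recommends an intervention across groups. The sketch below is minimal and uses hypothetical data and column names; a real fairness audit would weigh multiple metrics and clinical context.

```python
# A minimal sketch of a demographic parity check, assuming a pandas
# DataFrame of hypothetical model outputs. Column names are illustrative.
import pandas as pd

# Hypothetical predictions: 1 = treatment recommended, 0 = not recommended
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "recommended": [1, 0, 1, 0, 0, 1, 0, 1],
})

# Rate at which each demographic group receives the recommendation
rates = df.groupby("group")["recommended"].mean()

# Demographic parity gap: difference between the most- and least-favored groups
parity_gap = rates.max() - rates.min()
print(f"Recommendation rates by group:\n{rates}")
print(f"Demographic parity gap: {parity_gap:.2f}")

# A gap near 0 suggests similar recommendation rates across groups; a large
# gap warrants investigation before the model informs care decisions.
```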

Safety 

Safety is the principle most often attributed to security teams and the secure design of model pipelines. In this context, it means ensuring that models (and their supporting systems) are built so that their operations remain consistent with the original intent. In effect, it is up to development teams to ensure that AI systems behave as designed, preventing attacks such as data poisoning or model inversion.

Challenges with safety in AI often stem from the dynamic nature of the field. AI and Machine Learning are extremely flexible in design and output, so anticipating proper safeguards (especially against adversarial activity) can be difficult.
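One illustrative safeguard against training-data poisoning is screening candidate records for statistical anomalies before they enter the training pipeline. The sketch below uses scikit-learn’s IsolationForest on synthetic data purely as an assumption-laden example; production defenses would layer provenance checks, access controls, and ongoing monitoring on top of any statistical filter.

```python
# A minimal sketch of one anti-poisoning safeguard: flagging statistical
# outliers in an incoming training batch before it reaches the pipeline.
# The data and thresholds here are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
clean = rng.normal(loc=0.0, scale=1.0, size=(500, 4))    # expected records
poisoned = rng.normal(loc=8.0, scale=0.5, size=(10, 4))  # injected anomalies
candidate_batch = np.vstack([clean, poisoned])

# Fit an anomaly detector; fit_predict returns -1 for suspected outliers
detector = IsolationForest(contamination=0.05, random_state=0)
labels = detector.fit_predict(candidate_batch)

accepted = candidate_batch[labels == 1]
quarantined = candidate_batch[labels == -1]
print(f"Accepted {len(accepted)} records; quarantined {len(quarantined)} for review")
```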

Privacy 

Privacy is an ever-present concern wherever data is in use, and AI has rapidly jumped to the forefront of privacy conversations, so much so that the International Association of Privacy Professionals (IAPP) has formalized its first AI-centric governance certification.

In the context of AI, healthcare organizations’ use of patient data or adjacent insights will be subject to the same considerations of notice, use, and limitation, and the other core principles that apply to all personal data usage. With the rapid adoption of AI and new service offerings announced daily by new and existing organizations, properly sourced sensitive data will be in increasingly high demand, and privacy offices will face increasing scrutiny to ensure controls are maintained and evaluated accordingly.

Interpretability 

Interpretability is less concerned with a model’s underpinnings and underlying data and more with its processing and product: it speaks to the need for AI systems’ outputs and protocols to be maintained in a way that is human-readable and accessible. The problems addressed with AI can be exceedingly complex, but simply enabling a ‘black box’ of processing power leads to a slippery slope for individuals and organizations alike. That is where the ability to interpret and gain visibility into a model’s decision-making, training methods, sources, and other key components becomes critical.

Failure to establish interpretability leaves system owners with limited visibility, unsure whether they can trust their model, let alone meet regulatory requirements relevant to any associated data. For a model to produce information with integrity, in a manner consistent with its original purpose, it is imperative that developers establish assurance over model inputs and outputs, including training data and training processes, to maintain confidence in their models’ viability.
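As one example of interpretability tooling, permutation importance measures how much a model’s held-out performance degrades when each feature is shuffled, offering a first look inside an otherwise opaque model. The sketch below uses synthetic data and illustrative feature names, not a real clinical model, and is only one of many explanation techniques.

```python
# A minimal sketch of permutation importance on a hypothetical model.
# Feature names are illustrative; real clinical models demand far
# richer explanation and validation tooling.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "bmi", "lab_value", "visit_count"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature degrade held-out accuracy?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```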

Accountability 

Much like industrial operations that may impact surrounding communities, organizations must establish structures and protocols that enable accountability for their AI systems, their data, and any associated third parties. As a specific example, if a healthcare provider leverages a model to generate a patient recovery plan, that organization should ensure that internal oversight structures are in place before delivering any guidance, and that relevant limitations of liability are established and properly communicated to patients.

An organization that maintains and distributes outcomes from an AI model is ultimately responsible for those outputs, and it is imperative that due care is exercised in both design and delivery stages via governance and oversight.  

Transparency 

As with privacy, the use of transparency as a principle is generally consistent with analogous use cases in other data-driven functions. Organizations must be transparent with both users and data subjects about where, how, and to what degree their information will be used in AI modeling. This concern is becoming a topic of note for organizations whose third parties leverage AI capabilities and is likely to spur a broader conversation across the third-party risk management space as the rise of ‘AI-enabled’ solutions and service offerings continues.

How Do We Get Started? 

Whether your organization is in the beginning stages of AI development or already maturing relevant capabilities, it is critical to make governance around Responsible AI a priority. At a minimum, consider drafting formal policies and procedures that outline the mandates, roles, and responsibilities needed to ensure the above tenets are considered and ultimately guide AI development.

To go a step further, and following a trend we expect to see across the industry, establishing a formal Responsible AI committee (or a similar body) can raise the visibility of the effort and go a long way toward prioritizing the safe and secure development of AI models and downstream systems.

Interested in learning more about what your organization can do to establish the foundations of an AI governance program? Check out our webinar: “AI + Healthcare: The Evolving Cybersecurity Equation”. 

Meditology Services is a leading provider of risk management, cybersecurity, and regulatory compliance consulting services that is exclusively focused on serving the healthcare community. More than a provider of services, Meditology is a strategic partner committed to providing our clients actionable solutions to achieve their most pressing objectives. With experience serving healthcare organizations ranging in size, structure, and operational complexity, we uniquely understand the challenges our clients face every day and dedicate ourselves to helping solve them. 

Our service lines span cybersecurity certifications, security risk assessments, penetration testing, medical device security, incident response, staff augmentation, and more. Our team is run by former CISOs and privacy officers who have walked in our clients’ shoes, and our experienced consultants hold certifications spanning CISSP, CEH, CISA, HCISPP, CIPP, OSCP, HITRUST, and more. In addition, we maintain strong relationships with healthcare regulatory and standards bodies, including serving as HIPAA expert advisors to the Office for Civil Rights, providing us a uniquely thorough perspective on the healthcare cybersecurity landscape. 


Author 
MORGAN HAGUE | MANAGER, IT RISK MANAGEMENT 

Morgan is an experienced security and emerging technologies consultant, with varied expertise across information security, organizational governance, and IT audit practices. As the leader of the Privacy, Cloud Advisory, and Strategic Risk Transformation service lines at Meditology, he has led and contributed to hundreds of consulting engagements across public and private entities. Since 2019, he has served as lead architect and product owner of an innovative risk quantification, analysis, and reporting solution that utilizes MITRE ATT&CK and similar authoritative sources to establish a data-driven, dynamic mechanism to assess, report on, and manage organizational risk, supporting a variety of premier healthcare organizations, including the nation’s largest hospital system. Morgan is currently an executive board member with InfraGard Atlanta and a contributor to OWASP’s AI Security Guide.
