Responsible AI: Ensuring safety, compliance, and trust in 2025

  •  
Updated:
March 18, 2025
  •  
0
 min read
Headshot of Derek Stephenson with text that reads " AI safety with Derek Stephenson"
Responsible AI: Ensuring safety, compliance, and trust in 2025
Written by 
Derek Stephenson
 and 
  —  
March 18, 2025

There’s no stopping the advancement of technology. And as technology continues to evolve, so do the challenges associated with using it. For instance, Generative AI (Gen AI) can uncover insights, drive efficiency, and transform the way we collaborate. However, it also introduces unique security and compliance challenges, particularly when deploying third-party AI applications.  Organizations should always ask, “What data is being used? How is it being used? Who has access? Where is it going? How is it secured?” 

CISOs are on the front lines ensuring their organizations effectively evaluate, adopt, implement, and monitor trusted and responsible AI. By aligning your Information Security and Legal teams on processes that assess and mitigate the risks of Gen AI models and data sets, CISOs can more confidently enable their organizations with new AI capabilities for faster, secure business growth.

As businesses race to integrate AI at scale, CISOs and their security teams have to balance the benefits of innovation with the necessity of safeguarding their organizations. They face an array of multi-layered risks they must address:

  • Data Security and Privacy: AI systems often require vast datasets for training and decision-making, which can include sensitive personal information like financial transactions and proprietary or confidential business data. One reason AI arguably poses a greater data privacy risk than earlier technological advancements is the sheer volume of information at play. Beyond the collection of sensitive data, privacy risks include the collection of data without consent, the use of data without permission, and data leakage. For example, in 2023, ChatGPT disclosed a bug that allowed some users to see the titles of other users’ conversation history, underscoring how improper security can lead to accidental disclosure.
  • Cybersecurity: AI models contain a trove of sensitive data that can be irresistible to threat actors. They essentially have big bullseyes on their backs. Threat actors may manipulate AI applications through carefully crafted inputs designed to deceive or corrupt the model’s behavior, causing unintended data exposure or faulty decision-making. Attackers can also deliberately introduce manipulated data into the training set, compromising model integrity. 
  • Transparency and Trust: One of the biggest concerns for executives is the lack of transparency and explainability of how an AI model makes decisions. “Black box” AI systems that obscure how data is processed increase the difficulty in detecting misuse and errors and identifying when and why it may produce biased results.
  • Regulatory Compliance: The rapid growth in commercialized data collection and the deployment of AI has created a new urgency to enact data privacy laws. The EU AI Pact considered the world’s first comprehensive regulatory framework for AI, prohibits some AI uses outright and implements strict governance, risk management, and transparency requirements for others. Similarly, in the U.S., NIST-AI-600-1 has been developed and publicized as a framework to aid businesses in AI risk management.

  • Supply Chain: Threat actors may also study systems and services that an organization uses and launch an operation against them to circumvent the controls within the business. This could result in the ability to leverage AI and gain access to your data if a supplier has a weaker security model. 

The role of a CISO is continuously evolving, the evolution and adoption of AI bring new risks, a new threat landscape, and new security tools and frameworks to an organization. It involves leading the charge in securely harnessing AI and setting standards that build trust, resilience, and strategic advantage for our organizations. To build responsibly, start with an ethical foundation for using AI, based on the principles of fairness, privacy, security, and transparency. These principles must be embraced at every stage of the AI lifecycle, from strategy and design to data collection and model training to deployment and optimization. 

Safe AI deployment starts with thoroughly evaluating vendors to see if they meet stringent security, privacy, and compliance standards. Contracts should detail how they handle sensitive data, with rigorous obligations for encryption, data storage, and access controls. Vendors should also be able to implement strong data classification strategies and anonymization techniques and clearly outline data retention policies and procedures. 

Realizing the true potential of AI models requires careful governance and strict adherence to privacy and data protection laws, as well as continuous monitoring to promptly detect anomalies, threats, or changes in performance and compliance. Regular internal audits should be scheduled to ensure adherence to security and compliance requirements. Independent 3rd party audits should also be leveraged to show unbiased and transparent adherence to the program established, an invaluable tool to the continuous improvement cycle.

A key part of evaluating AI models is prioritizing systems that are “explainable” so the way a model makes decisions is understood. Clear insights into algorithm methodologies and mechanisms to address potential biases are part of this transparency, which enables audits of decision-making processes and compliance validation.

Remember, this is a collaborative effort. It is key to work with stakeholders across departments, including legal, IT, and operations teams to ensure comprehensive consideration of risks and alignment of AI initiatives with organizational goals and compliance frameworks. 

As CISOs, we have a unique opportunity and responsibility to shape the adoption of AI within our organizations. AI can be safe and secure when approached with the right safeguards and oversight. At Mural, we’re committed to and have built our AI to meet the highest standards of regulatory compliance and give companies the power to adapt the technology to best fit their specific needs, enabling us to support our customers and push AI to unlock new levels of creativity, accelerate innovation, and drive meaningful business impact.

To learn more about Mural’s ongoing commitment to AI security and compliance, visit: https://mural.co/ai/safety-and-regulations.

Published on 
March 18, 2025