AI Governance Illuminated by Microsoft, L’Oréal and Other Leading Voices

Artificial intelligence (AI) is reshaping industries and transforming the way businesses operate. While its potential is immense, it’s not without challenges. In TechFuture’s latest webinar, Charting the Path of AI Governance, leading voices from Microsoft, L’Oréal, TR Labs and other experts shared their insights on navigating the complex landscape of AI governance. Here are five key takeaways from the discussion.

Cultural shifts are necessary for AI governance

Generative AI is revolutionising the business landscape. Organisations are at a turning point: the challenge is not just developing AI policies, but reimagining organisational culture.

AI governance, as highlighted by Legal Innovation Consultant Anna Lozynski, is not just for lawyers to put into practice. There are regulatory and ethical considerations to address, too. “AI governance is no different to usual governance… (It’s about) considering the impact of AI at an enterprise level,” said Anna.

Within such an evaluation, the crucial questions to answer are: Does your organisation have an AI policy? What risks does it cover? Does it align with ethical and regulatory frameworks? Anna also emphasised the importance of considering the opportunities AI offers, not just the risks to safeguard against.

Katherine Jones, Partner at Colin Biggers & Paisley and digital governance expert, underscored the need for a cultural shift within organisations. “We need to embrace AI. It’s not going away and it has more benefits than detriments, but it needs to have a proper framework,” she said.

Katherine noted that such a shift requires businesses to develop a culture where AI is seen not as a standalone tool but as an integral part of their processes, supported by policies and guidelines. She highlighted the importance of understanding clients’ needs and defining the right parameters when developing a framework for AI use.

“AI…is not new. What is new is ChatGPT and the subset of AI called generative AI…which has been the fastest adopted technology in history.”

Anna Lozynski, Incite Legal Tech

Principles of responsible AI

Many organisations are diligently shaping their AI governance, and Microsoft has been committed to establishing enduring standards for AI use for years. Clayton Noble, Microsoft’s Head of ANZ Legal, highlighted that “what underpins so much of governance is…to ensure the development, deployment, and use of AI systems is safe and responsible.” Microsoft’s standards are grounded in the following six fundamental principles:

  1. Privacy
  2. Security
  3. Safety
  4. Lack of bias (fairness)
  5. Accountability
  6. Transparency

Professor Amandeep Sidhu, Academic Dean, Institute of Health Management, added to Microsoft’s principles, stating that “…in the domains of education and healthcare particularly, every decision that generative AI makes these days needs to be inspectable and overridable… (as an) extension of explainable AI”.

“At the heart of it is a reinforcement learning model built on human feedback,” he said. The data and AI governance expert believes the technology being built will evolve on an almost weekly basis; it is therefore critical that development teams:

  • are mindful that bias and inaccuracies exist,
  • accept this as part of the developmental process, and
  • learn from the mistakes.

“Whatever form of AI you use, whatever new form of AI that comes in the future, it will only find efficiencies to improve your productivity and business process.”

Professor Amandeep Sidhu, Academic Dean, Data and AI Governance expert

Transformational leadership in the age of AI

Alongside the rapid adoption of generative AI into widespread corporate use and evolving global regulations, it is crucial for organisations to build guardrails to ensure AI use is safe, compliant and well managed. While legal teams are strategic enablers in navigating AI governance, success depends on collaboration between legal, operations and technology teams.

Data Privacy Officer and Legal Counsel at L’Oréal ANZ, Jessica Amos, said, “…it’s about inspiring a shared vision, and for us, that’s building trust in AI…(with) our customers, employees and other stakeholders.”

L’Oréal takes a ‘privacy by design’ approach to building this trust by assessing and addressing compliance issues throughout the AI lifecycle and, when acquiring an AI system, requesting documentation from vendors for risk assessment.

“We have a very strong internal voice (on the responsible use of AI) that permeates from the top of our organisation.”

Jessica Amos, L’Oréal

Augmenting (not replacing) human expertise with AI

AI displacing lawyers is a growing concern among legal professionals, but Katherine Jones is confident that lawyers’ jobs are not at risk of disappearing. “I can only see the good,” she said. “AI will remove some of the admin tasks and stop the months of standing by the photocopier for juniors, but it won’t replace the forensic task of being a lawyer.”

Professor Sidhu stressed the concept of AI as augmentation rather than replacement. “Irrespective of the sector, it [AI] is augmenting your job, your task; it is not ever replacing that,” he said. Such a perspective highlights AI’s role in enhancing human capabilities, boosting productivity, and improving business processes across various industries.

The Tech & the Law 2023 Report found that 40% of the legal professionals surveyed are already cautiously experimenting with generative AI. The panel encouraged this experimentation as a way to learn the technology’s capabilities and discover how it can boost productivity and efficiency. “It will ultimately make doing your job as a lawyer infinitely better as we head into this era of AI,” said Anna.

Clayton agreed. “One of the things we think is going to happen for lawyers, and every information worker with these tools, is we’re going to eradicate the drudge work and it’s just going to make it so much faster and easier for us.”

“We call (these AI tools) Copilot for a reason; they’re not autopilot…and (humans) have to remain in the loop and in control of the work that’s being provided.”

Clayton Noble, Microsoft 

Data privacy and security remain paramount for AI governance

Safeguarding data privacy and security must be at the forefront of AI governance strategies. AI’s ability to process and analyse vast amounts of data comes with significant responsibilities, especially in sensitive sectors like healthcare. Professor Sidhu pointed out that in healthcare, data privacy remains a top concern.

“…It becomes more tricky, because even if you anonymise data…there is still sensitivity around the dataset and the questions you’re going to ask,” he said.

It is important to understand that AI is not a ‘plug and play’ solution. “You can’t just bolt something onto your system and expect it to work properly,” said Katherine. “It needs to have a proper framework.”

AI products are proliferating rapidly, and new enterprise solutions are entering the marketplace. For Microsoft, the common thread across its offerings is a commitment to enterprise-grade security and privacy, safeguarding sensitive data within defined boundaries. However, the same does not always apply to third-party AI tools, where regulatory standards on privacy and security are limited. This regulatory gap raises cybersecurity concerns that require thorough investigation to ensure tools align with an organisation’s privacy and security needs.

“We have guidelines around how you can use AI and how to be very careful with AI, making sure you don’t breach any privacy or client confidentiality, which is very crucial.”

Katherine Jones, Colin Biggers & Paisley

What is the future of AI governance?

The future of AI governance will continue to be guided by core principles to ensure safe and responsible use. However, the way AI is implemented is expected to radically change with ongoing improvements in areas like ‘explainability’. 

Clayton said: “We’re going to see vast improvements in that field. Explainability ensures that AI decisions are transparent and understandable, building trust with users. And people only want to use technology they can trust. Additionally, ‘controllability’, including the ability to make AI models forget specific data, is crucial for responsible AI usage.”

Jessica flagged that the next frontier for AI will be advancing personalisation and creating customer experiences in ways never seen before. Katherine believes AI will transform discovery processes, making them more efficient and precise. Technology has evolved over the last five years: from a room full of boxes, to a hard drive full of emails, to platforms where 10,000 datasets can be filtered down to just 3,000 in an instant.

Katherine agreed: “That has been the biggest change, but I’m sure if you ask me in 12 months, I’ll have a different answer because there are a lot of changes coming our way.”

Watch the full TechFuture webinar on demand

Related reading:

Understanding data governance and cyber security

How firms can prepare for generative AI

Tech & the Law 2023 Report
