Understanding Generative AI Risks: A Learning Leader's Guide to Ethical, Data, and Legal Concerns
Learn how to evaluate and safeguard your organization from risks, including ethical, data, and legal concerns, as you navigate generative AI transformation.

By Trena Minudri, VP & Chief Learning Officer
Key takeaways:
Despite GenAI being a top priority for leaders, two-thirds are ambivalent or dissatisfied with their progress with the technology.
Considering legal, data, and ethical concerns up front will help leaders create a GenAI policy to prepare for the GenAI transformation.
Ethical considerations include hallucinations, intellectual property, training models, and bias amplification.
With rapidly shifting regulations, business leaders need to reassess their compliance and data risk.
Providing both leaders and employees with training from trusted institutions is key to navigating GenAI risks and ethics.
The business upside of generative AI (GenAI) has become abundantly clear, and not just to me.
Between increased productivity and new levels of innovation, McKinsey estimates GenAI could generate up to $4.4 trillion in value across industries. It's no surprise that over half of top executives ranked GenAI as a "top priority" in 2024.
But incorporating GenAI into your organization doesn't come without risks, and plenty of business leaders are frustrated with roadblocks: two-thirds of leaders are ambivalent or dissatisfied with their progress on GenAI.
One of the top three underlying reasons executives are dissatisfied with their progress?
The need for responsible AI.
Getting business value out of GenAI is a big undertaking, made more complicated by how quickly the space is evolving and the legal, data, and ethical concerns involved. "I think the big change that came with generative AI was simply the pace at which change was happening and perhaps even the scale of the impact of these changes," shares Dr. Robert Brunner, Associate Dean of Innovation at the University of Illinois, in the course Setting a Generative AI Strategy.
At Coursera, we're confident GenAI's future will be determined by how we as business leaders use it, so long as we do so fairly, ethically, and responsibly.
To realize the business impact of GenAI, you need to define a strategy around the ethics, data privacy, and legality of large language models (LLMs) first. When you do, you'll be able to cut through the chatter around GenAI concerns and confidently assess threats as you act on the opportunity GenAI presents.
We recently explored how business leaders are navigating the change in our playbook How to Lead through Generative AI Transformation: Insights from Industry Experts.
In this follow-up, I further break down GenAI's risks, and how they can be mitigated, with insights from expert practitioners at Microsoft, Dow, Vanderbilt University, and more.
You鈥檒l become aware of the most pressing AI issues, so you can bring GenAI into your organization with minimized risk and an informed perspective.
[Disclaimer: These are suggestions from my experience and are intended for informational purposes. Please consult with your internal legal counsel and technical teams to determine the best course of action for you and your organization.]
Consider these ethical factors before you roll out GenAI
Hallucinations
Like humans, the internet, and most knowledge repositories, GenAI provides inaccurate information at times, otherwise known as hallucinations. This often occurs when AI models are not used correctly or are asked to complete tasks outside of their functional limitations.
In his course Navigating Generative AI Risks for Leaders, Coursera CEO Jeff Maggioncalda frames this well: "If you're going to start using these models and expect that almost all of your employees will, they need to understand the limitations and the role that they play as individuals in making sure they validate and reflect on what comes out of these models."
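To make that validation habit concrete, here is a minimal Python sketch of treating a model's answer as a draft until it is checked. Everything here is illustrative: `TRUSTED_FACTS` stands in for any vetted internal source, and the status values are assumptions, not part of any real product.

```python
# Illustrative sketch: an LLM answer is only "verified" once it matches a
# vetted record; anything the trusted source can't confirm goes to a human.

TRUSTED_FACTS = {"capital_of_france": "Paris"}  # stand-in for a vetted source


def validated_answer(topic: str, llm_answer: str) -> dict:
    """Compare an LLM answer against a trusted record and flag mismatches."""
    expected = TRUSTED_FACTS.get(topic)
    if expected is None:
        # Nothing to check against, so a person must review it.
        return {"answer": llm_answer, "status": "needs_human_review"}
    if llm_answer.strip().lower() == expected.lower():
        return {"answer": llm_answer, "status": "verified"}
    # The model disagreed with the trusted source: prefer the source.
    return {"answer": expected, "status": "corrected"}
```

The design point is the default: an answer that cannot be checked is routed to a person rather than passed through.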
Intellectual property
When LLMs are trained on internet-wide source material without the express permission of users or owners, it calls into question the intellectual property rights of trademarked, copyrighted, and sensitive material.
Who does data actually belong to, and do we have the right to use it freely?
"When machines can manipulate language created by other people, those machines can get a lot more value out of that language," says Jeff. "LLMs can be trained with knowledge and intellectual property can be infused into derivative pieces of work" without the end user being aware.
Responsible GenAI use starts at the very beginning: how do we ensure we're evaluating data outputs even as we use and train models? This brings us to our next point.
Training models
GenAI comes with an inherent set of risks, but it's up to leaders to determine how much risk they'd like the business to take on.
If your org trains models for internal use only, you'll lead with minimal risk. That risk grows substantially when external partners like stakeholders, vendors, or even customers use the LLMs you train.
So where to start?
"I think you start internally," shares Dr. Jules White, a Coursera instructor and expert in GenAI at Vanderbilt University. "First, you build up the expertise, the responsibility within your own workforce, and then you start figuring out the safe ways to take it outside." Dr. White points to an example: adopting Microsoft Copilot as a start to train employees on the capabilities and limitations of GenAI within your domain.
[Side note: for a fantastic introduction to trustworthy GenAI, head over to Dr. White's course.]
When training models, business leaders need to consider not only what data they input into LLMs, but also how they integrate company ethos and values to minimize digital harm and pave the way for quality outputs.
Bias amplification
When used properly, GenAI enables smarter decision-making by becoming a second brain of sorts: LLMs can evaluate the elements of a scenario and share alternate viewpoints.
Yet there's an unfortunate consequence if the source data fed into LLMs is biased: the outputs become biased and amplified. This is called machine bias, which can lead to increased stereotyping and inaccurate information.
GenAI models are susceptible to several kinds of bias, including availability bias, the favoring of more widespread data, and confirmation bias, in which prompts steer the model toward one desired or stereotypical output.
This is just one of the reasons an established AI governance policy that proactively addresses this risk is so important for companies. IBM outlines a few initial best practices for avoiding bias amplification, including a "human-in-the-loop" system and paying close attention to compliance, trust, and fairness standards in your GenAI governance.
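As an illustration of the "human-in-the-loop" idea, the toy Python sketch below routes outputs that trip a simple over-generalization heuristic to a reviewer queue instead of publishing them automatically. The keyword list is a deliberately crude placeholder for whatever real bias classifier or policy check your governance process adopts.

```python
# Toy "human-in-the-loop" gate: flagged outputs go to a reviewer, the rest
# publish. The FLAG_TERMS set is a placeholder for a real bias classifier.

FLAG_TERMS = {"always", "never", "all of them"}  # toy over-generalization markers


def route_output(text: str) -> str:
    """Return 'publish' for clean text, 'human_review' for flagged text."""
    lowered = text.lower()
    if any(term in lowered for term in FLAG_TERMS):
        return "human_review"
    return "publish"
```

The useful property of this pattern is that the model never has the last word on sensitive outputs; a person does.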
Get familiar with data security and regulation concerns
Rapidly shifting regulations
As GenAI tools continue to improve and evolve, regulations and laws governing their actions will follow suit and shift often in the coming years.
Laws vary based on the country where your company is based, and sometimes even by state or region. Keep an eye on credible sources, like emerging public policy shifts or legal cases, to stay agile as regulations change.
"Part of the ethical responsibility of CEOs is to not only understand how GenAI works today but to have some anticipation for how it's changing," Jeff shares. "Because we don't know what capabilities will exist in a year or two or three from now. We need to anticipate how they might impact employees and customers and society."
Compliance risk
As your organization integrates GenAI, you need to consider liability and compliance risks to avoid hefty fines and data infractions that could harm your customers or company.
Communicate closely with your Legal team and AI leaders to understand how your organization can stay compliant as new GenAI capabilities continue to unfold.
"Engage the appropriate groups within your company, early and often," says Alison Klein, Information Systems Talent Manager at Dow. "This looks different for each organization, but we're working closely with our legal team to understand the protocols we need and what training employees need to complete."
AI leaders within your organization should also implement ongoing risk assessments to stay ahead of any emerging threats.
Data risk
Your employees will likely be inputting potentially sensitive or proprietary company data into LLMs, so data risk via cyberattack is a major concern. Executive teams will need to provide critical oversight for data protection measures and risk management strategies. As Graeme Malcolm, Principal Content Development Manager, Data and AI at Microsoft, shares, this problem is twofold: "There's 'how do I, as an organization, ensure that we use this technology responsibly,' and then there's 'how do we guard against those who might not?'"
From an internal standpoint, business leaders should start by defining how LLMs are used across the organization.
"Where will we be using these tools? Who will be using them? How do we want to think about the way that tools will be used over time in different kinds of contexts?" asks Dr. Alondra Nelson, a leading author of the White House-sanctioned AI Bill of Rights. Establishing which tools your teams are using, and where, gives you a starting point for controlling data risk as teams adopt GenAI.
Key principles for responsibly adopting GenAI
A lack of knowledge about GenAI and fear-mongering discourse can keep business leaders stuck. But inaction isn鈥檛 the solution; it will only lead to missed growth opportunities and productivity losses.
1. Tighten up data practices
Business leaders should start by creating data practice guidelines and segmenting them by function across the organization to reduce the risk of inappropriate use, cyber threats, and privacy breaches. While teams can adopt data practices for their respective job duties, the CEO and other business leaders are instrumental in driving home the importance of data safety more broadly through effective communication and frequent follow-up.
Start by making data privacy a priority. You'll want to oversee how different teams interact with data and develop use cases around what data can and cannot be shared with LLMs. For instance, at Coursera, we use a safe and secure Playground environment for working with LLMs.
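One concrete data practice is scrubbing obvious personal data from prompts before they leave your systems. The Python sketch below is a minimal, illustrative filter, assuming a policy that email addresses and US-style Social Security numbers must never reach an external LLM; two regex patterns are nowhere near a complete PII detector, but they show where such a guardrail sits in the flow.

```python
import re

# Illustrative guardrail: mask obvious PII before a prompt is shared with
# an external LLM. The patterns and policy here are assumptions, not a
# production-grade detector.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def scrub(prompt: str) -> str:
    """Replace emails and US-style SSNs with placeholder tokens."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = SSN.sub("[SSN]", prompt)
    return prompt
```

In practice this kind of check would run server-side, in front of whichever LLM endpoint your teams are approved to use, so individual employees cannot bypass it.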
2. Create an AI ethics policy
"Data privacy and security for AI starts by having a really good understanding of the new risks posed by LLMs in particular because GenAI is so new," notes Clara Shih, CEO of Salesforce AI, in the course Empowering and Transforming Your Organization with GenAI. "Organizations need to have safeguards, both through systems and technology, but also policies and procedures."
Since GenAI will be used in myriad ways across your company, it's key to create standards and policies regarding when it's appropriate to use the tool in the first place. Enter a GenAI ethics policy framework.
"Putting those boundaries and frameworks in place sends a signal to your company that this really matters," emphasizes Jeff. "It gives the guardrails for what people can and cannot do."
3. Monitor the landscape, and keep learning
Keep up with emerging trends, the discourse surrounding GenAI, and policy updates. By staying up to speed on different angles and opportunities with GenAI, you鈥檒l make better-informed decisions that will positively impact your organization, your employees, and your stakeholders.
Course recommendations:
Google AI Essentials: Google AI Essentials is a self-paced course designed to help people across roles and industries get essential AI skills to boost their productivity, zero experience required. The course is taught by AI experts at Google who are working to make the technology helpful for everyone.
Generative AI: Impact, Considerations, and Ethical Issues: Led by IBM's Rav Ahuja, this course will help you identify the ethical issues, concerns, and misuses associated with generative AI.
Responsible AI in the Generative AI Era: In this course from Fractal, you will explore the fundamental principles of responsible AI and understand the need for developing generative AI tools responsibly.
4. Train your team
At a base level, you need to understand three things:
How GenAI tools work
Who they impact
How to train your team on using GenAI responsibly and ethically
Cross-functional teams and executives alike must do the work to learn how LLM outputs impact employees and customers, and that work should start today. But it doesn't end there. Business leaders should also prioritize training their employees on GenAI, including legal, data, and ethical concerns, early on. Alison Klein agrees: "Offering appropriate training in conjunction with the GenAI rollout will be the key to successful adoption."
Lead with confidence in the age of GenAI
Even the most respected thought leaders can't predict with certainty where GenAI is going next. That's why business leaders need to move forward with an informed perspective, so they can make the most of benefits data scientists are hopeful about, like increased organizational productivity thanks to automation and better strategic thinking.
Discover more in-depth tips and case studies in How to Lead through Generative AI Transformation: Insights from Industry Experts.
- Expert content from leading GenAI innovators such as Microsoft, Google, Stanford Online, and IBM.
- Curated programs for different teams and career levels, including executives.
- Hands-on practice in secure GenAI Playgrounds, allowing employees to apply their learning in real-world scenarios.
- Interactive guidance from Coursera Coach for effective learning.
- Learning in preferred languages, with 20+ translations.
- Accelerated course customization with AI-assisted Course Builder.
This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.