The rise of Artificial Intelligence (AI) has reshaped our professional and personal lives to an extent we may not fully realize. When we use spell check or redlining for documents, receive movie recommendations from streaming services, or answer prompts from customer service chatbots, we are using AI.

This summer, the American Bar Association issued formal guidance on generative AI in law practice, addressing areas such as competence, confidentiality, informed consent, and fees. As lawyers, we ignore AI, or use it carelessly, at our peril. In the GC Advantage℠ webinar, “Artificial Intelligence: Practical Tips for Balancing Risk with Reward,” Tim Fraser, VP, Chief Legal Officer, and Corporate Secretary of Toshiba America, and Brian Conner, SVP, Chief Compliance and Risk Officer of Xeris Pharmaceuticals, joined me to discuss AI’s rapidly broadening role in business and in legal and compliance departments. They shared how they navigate the challenges and opportunities of AI by applying the lens of risk management and using human judgment to prevent misuse and ensure ethical practices. Brian and Tim provided real-world examples and practical advice on implementing AI solutions responsibly and effectively in your organization.

This article is a summary of our conversation.

From the Terminator to Business Tool

To the uninitiated, AI may evoke images of the Terminator: a ruthless cyborg defying its creators with dystopian menace. In the four decades since that movie, breakthroughs in data analysis, accelerated by the internet, have shaped AI into multiple forms. AI is growing increasingly sophisticated, from reactive machines like Google searches and auto-correct functions, to self-driving cars making independent decisions, to the “social robot” companions now deployed in healthcare settings.

The role of legal counsel and compliance officers is to protect their company while supporting its growth. As with any new technology, navigating AI means weighing benefits against risks. Tim noted that AI offers untapped potential for GCs and compliance executives to work faster, more efficiently, and perhaps less expensively. On the other hand, this tool requires vigilance and agility to avoid sudden minefields. While integrating AI requires careful governance, clear guidelines, and human judgment, this technology presents opportunities for businesses to emerge as leaders and transform their industries.

Sampling the Alphabet Soup of AI

With its advanced capabilities and increasingly niche applications, AI is no longer a blanket term or monolith.

Machine learning, a subset of AI, is the ability of a computer to improve its predictions with experience, using information that was not present in its original code. Through a complex algorithm trained on ever-expanding data sets, the software can integrate user feedback to refine its capabilities. Netflix recommendations are an example of machine learning that improves over time by learning from subscribers. Machine learning can also improve medical diagnoses: after an AI tool offers a provisional diagnosis based on its review of an X-ray, the radiologist shares their own assessment with the tool. The model then incorporates that feedback to identify anomalies in an X-ray or other diagnostic image more quickly and accurately.

Natural language processing is the interface between the software and human language, specifically the ability to analyze words and syntax to respond to prompts and interpret commands. Virtual assistants, chatbots, and spam email filters are common examples.

Finally, large language models (LLMs) power generative AI tools such as OpenAI’s ChatGPT and DALL-E and Google’s Gemini. LLMs rely on a technique called deep learning, which processes data through layered “neural networks” loosely modeled on the human brain.

Generative AI can now draw on existing information to create things that have not previously existed. Beyond the arts and entertainment implications (AI-written movie scripts, AI-executed paintings), generative AI can write a code of conduct, answer an email from a client, summarize the thickest of case files, draft a contract, or even craft company policy. While AI holds the promise of creative stamina and firmwide productivity, GCs and compliance officers must bear in mind that LLM-based business applications do not act out of a true understanding of company culture or business dynamics; they are simply predictive engines that weigh probabilities across vast swaths of data to yield the content that, by their calculation, is accurate and appropriate.

A BarkerGilmore survey indicates that 78 percent of companies currently use AI as a business tool, 46 percent of legal and compliance departments do so, and 79 percent of respondents knowingly use AI in their personal lives. Of the 165 legal professionals who responded to a survey by the firm Lowenstein Sandler and the ACC of New Jersey, 43 percent expressed little to no confidence in their understanding of AI, and only a quarter had received training. Those who had integrated generative AI into their legal department workflows primarily employed it for general research, document generation, and summarization tasks.

AI Solutions for Legal Teams

Legal counsel evaluating AI must assess the risk that privileged or confidential information will be disclosed to third parties or absorbed into the data used to train algorithms.

Tim’s risk assessment around using AI at Toshiba has informed the company’s AI governance. Because Toshiba relies on third-party tools and resources such as Microsoft Copilot, the company’s primary risk involves inadvertent disclosure of confidential information. As the company leverages more AI technology, two key questions have emerged: which company data will AI be permitted to access, and by whom? Toshiba has recruited executive leaders, including those in compliance, to design and implement an AI program supporting a culture of experimental innovation and responsible governance.

Toshiba is testing the waters of AI through embedded solutions that support efficiency and create value. Besides Copilot for email summaries and meeting takeaways, the company has integrated Ironclad, a contract-management solution used for redlining contracts; Salesforce for CRM; GitHub for software coding; and a generative AI tool that creates customer quote drafts for employees to review. Toshiba has also launched an AI-enabled legal research tool that in-house staff use to sift through past legal opinions for the best answer without the cost of outside counsel research. However, employees may not share company information with AI tools the company has not contracted. Moreover, human judgment takes precedence: Copilot will not be deployed to analyze a transcript of senior leadership discussing sensitive information.

AI Solutions for Compliance Teams

Brian pointed out that compliance roles have always faced resource constraints. Given generative AI’s power to help departments accomplish more with fewer resources, Brian calls it the right tool at the right time—but adds that understanding the tool is critical.

At Xeris, a biopharma company with approximately 500 employees, AI-powered compliance software can automate tasks such as monitoring data, performing audits, and tracking trends so employees can work more efficiently. AI-assisted email searches can flag keywords and the tone or emotion conveyed, helping teams tailor responses and strategies to client relationships. AI is also used for contract management (including redlining) and for contracts with healthcare professionals.

However, Brian notes that his company’s sensitive content (including IP, personal data, and patient health information) has prompted Xeris to implement strict guardrails, including policies that forbid entering confidential information into AI solutions. For example, a Human Resources executive may not include Xeris’ name or information in a job description that they ask AI to generate. Likewise, employees using AI to generate a purchase order or create an e-mail marketing template must not include company or supplier details.

Bringing All Employees on Board

Brian reflects that when e-mail technology first arose, many firms dismissed or downplayed it. When it became clear that e-mail was essential, policies and protocols needed to evolve. He believes AI will follow a similar trajectory, with the need to remain open to change, compromise, and collaboration among IT, compliance, and legal departments.

Because AI technology is changing monthly, weekly, and even daily, policies must shift with equal agility, and companies must be all the more proactive in educating employees on AI’s rewards and risks.

Tim stressed that for employees to explore AI willingly, they must understand how it can augment their work while resting assured that it does not threaten their jobs. Toshiba’s AI innovation program begins by introducing employees to the technology and providing chances to experiment, reflect, and share feedback with the company: What worked? What failed? Companies must frame AI as a resource for employees to streamline their tasks, enhance their performance, or pursue more rewarding projects.

At Xeris, training includes an overview of AI terminology and the current landscape and a focus on risk assessment, i.e., weighing potential advantages against dangers. The company asked individual departments, from Research and Development to Human Resources, how they anticipated using AI and then wrote the policy to align with those needs, with room for refinement.

Issues to Raise with Outside Counsel

Pointing out that outside counsel fees keep rising, Tim expects law firms to leverage AI technology to reduce their billing rates and increase their efficiency. On the other hand, the obligation to ensure confidentiality is equally important. When law firms develop their AI suite, especially within enterprise solutions, GCs and compliance officers should insist on privacy walls and other safeguards to protect trade secrets, confidential information, and firm IP. Both inside and outside counsel should disclose the use of AI tools upfront, just as they would ask permission to record a conversation.

Cross-Reference AI with Human Intellect

Citing notorious cases of AI generating false data and fabricated citations, Tim likens AI to a first-year associate whose work must always be checked. He admits that the questionable veracity of AI’s output is one of its most insidious risks, as humans are wired to believe what we read, especially ostensible facts generated by sophisticated algorithms. For Brian, who likes to cook, vetting AI’s output is as critical as tasting a dish before serving it to guests.

BarkerGilmore’s seasoned professionals are ready to demystify, recommend, and advise on AI tools and initiatives that enhance your legal and compliance strategy while aligning with risk management policy. Please reach out if you or your organization may benefit from our recruiting, leadership development and coaching, or legal and compliance department consulting services.


Marla Persky and our team of professionals are happy to help accelerate the initiatives that you’re already pursuing or to supplement your current strategic thinking to help you realize your vision. Let BarkerGilmore help you build and optimize your legal and compliance departments.
