3 Key Takeaways:
1. Users who treat AI like Google (simple queries, simple answers) miss the real value. The power comes from building complex, multi-step workflows that handle entire processes end-to-end.
2. Make complex processes repeatable. The most valuable AI use cases aren't one-off queries: they're frameworks you can run every time a similar task comes up, such as trademark knockout searches, defamation risk assessments, ToS change rollout plans, and delegation emails from contract comments.
3. Delegate the admin work, keep the legal judgment. AI excels at the non-legal administrative tasks that consume attorney time: drafting follow-up emails, organizing action items, and summarizing competitor filings. That frees you to focus on actual legal analysis.
Over half of professionals now report using AI daily in some capacity. The question is no longer whether AI belongs in legal workflows. The real question is whether you are using it like Google, or whether you are building systems that change how your team operates.
At the 2026 GC AI Summit, Solutions Attorney Lindsay Smith shared six practical examples of how the best in-house lawyers are working today.
1. Trademark Knockout Searches in Minutes
One GC AI user shared a prompt she runs every time her product team proposes a new name.
Instead of manually searching databases, reviewing marks jurisdiction by jurisdiction, and drafting a summary email, she runs a structured “comprehensive trademark knockout search” prompt that:
Searches multiple relevant systems
Identifies similar marks
Assesses strength and descriptiveness
Flags jurisdictional conflicts
Produces a risk table
Recommends next steps
In one test run, the AI immediately flagged that a proposed product name already existed in the software space in Germany. That single flag can prevent months of branding work from going down the wrong path.
The power here is not speed alone. It is repeatability.
Each time a new product name appears, she runs the same framework. The output is structured, the risks are categorized, and the next steps are clear.
2. A First-Pass Defamation Risk Framework
Publication clients face constant defamation exposure. Reviewing every draft article from scratch is time-consuming and cognitively heavy.
One GC AI prompt, built in collaboration with a customer, creates a structured defamation risk assessment framework that:
Researches jurisdiction-specific plaintiff friendliness
Builds a scoring system tied to the elements of defamation
Evaluates the target of the statement
Produces a risk classification
Generates action items
In a test scenario using a recent article, the AI assessed the piece as low to moderate risk, identified specific factual support, and highlighted where risk would increase if certain statements lacked documentation.
It even flagged that the jurisdiction itself increased exposure.
The lawyer still makes the final call. Legal judgment remains human. But the framework ensures that every article gets reviewed against the same structured criteria.
As Lindsay put it:
“Complex things, let’s make them repeatable every single time we do it.”
3. End-to-End Terms of Service Change Management
Updating Terms of Service is not only about redlines. It involves:
Internal change summaries
FAQ creation
Customer-facing notices
Website banner copy
Support team talking points
Internal action item tracking
One customer built a single prompt to handle the entire rollout process. The AI:
Generated a detailed table summarizing material changes
Translated legal edits into customer-friendly explanations
Drafted FAQ responses
Anticipated likely user pushback
Created communication plans
Instead of starting from scratch each time, the legal team now runs the same structured workflow whenever terms change.
The lawyer focuses on the substantive edits. The administrative and coordination work is systematized.
4. Delegation Emails from Contract Comments
This example may resonate most strongly with anyone who has practiced in Big Law or in-house.
Reviewing agreements is legal work. Drafting 12 follow-up emails asking different teams to confirm schedules, pricing, or product names is administrative coordination.
As Lindsay said:
“Far too much of your time is spent doing things that are like—this is not legal work. It has to be done, right? But this is not legal work.”
In her own practice, she built a prompt that:
Reads contract comments
Identifies which notes are directed to which teams
Translates legal shorthand into plain English
References the correct section numbers
Drafts tailored emails for each stakeholder
The lawyer already did the hard part: reviewing and marking up the agreement. The AI handles the orchestration.
This is where leverage becomes tangible. You preserve your legal analysis time and delegate the coordination layer.
5. Mock Jury Role-Playing
One user asked an unconventional question: Can GC AI act as a mock jury?
The experiment began by researching common jury personality archetypes. The AI summarized publicly available research on personality trends and jury dynamics.
Using the McDonald’s hot coffee case as a hypothetical, the prompt then:
Modeled likely jury reactions
Assessed plaintiff sympathy
Identified reputational risk themes
Suggested voir dire questions
The AI predicted that even a generally defense-leaning jury could react negatively to the fact pattern. It recommended identifying jurors who prioritize personal responsibility and have concerns about frivolous litigation.
This is not a substitute for trial counsel. It is a structured brainstorming partner that draws from broad public data.
For litigation strategy sessions, that kind of structured modeling can sharpen preparation.
6. Competitor SEC Filing Summaries
In-house lawyers increasingly straddle legal and business roles. Understanding competitor disclosures is part of that responsibility.
One GC AI user built a prompt that:
Pulls recent SEC filings from competitors
Summarizes financial pressures
Identifies disclosed risks
Highlights industry-wide trends
In a test example involving defense contractors, the AI flagged inflationary cost pressures, government budget uncertainty, supply chain constraints, and guidance adjustments.
Without AI, a lawyer would read multiple filings and manually synthesize themes. With a structured prompt, you can walk into a business meeting with a concise industry snapshot.
You can iterate further:
“Summarize in five bullet points for a board update.”
“Highlight risks that overlap with our own disclosures.”
Administrative synthesis becomes instant.
The common thread across all six examples is that the highest-value legal AI work is not a better one-off question. It is a repeatable framework. When you treat AI like Google, you get a quick answer and move on. When you treat AI like infrastructure, you build something you can run every time the same category of work shows up, with the same structure, the same outputs, and the same quality bar.
The practical next step is to pick one complex, recurring process in your practice and turn it into a framework. Map the steps the way you already do them in your head.
Decide what inputs you need, what outputs your stakeholders expect, and what “good” looks like. Then build a prompt you can run every time. That is how you move from experimenting with AI to operationalizing it, and it is how legal teams start getting the real leverage Lindsay was pointing to.