Decision&Law · AI Legal Intelligence
For: Litigation support, e-discovery specialists, trial attorneys
22 min read · Updated March 2026

E-Discovery in the Generative AI Era

Introduction

E-discovery has always been an early adopter of technology. From early keyword searches to Technology-Assisted Review (TAR), the litigation support field has continuously integrated new tools to manage exploding data volumes. Generative AI represents the most significant shift since TAR—and it is arriving faster than most organizations are prepared for.

This guide addresses how generative AI is transforming e-discovery workflows, the challenges of managing AI-generated data, and the practical steps litigation teams should take to leverage these tools responsibly.

How Generative AI Is Changing E-Discovery

Three phases of AI adoption have shaped e-discovery:

  • TAR 1.0 (2010s): Predictive coding using supervised learning. Effective for document categorization but limited to trained categories.
  • TAR 2.0 (late 2010s): Continuous active learning. More efficient training, better handling of edge cases.
  • Generative AI (2020s): Natural language understanding, summarization, synthesis, and conversational interfaces.

New Capabilities

| Function | Traditional E-Discovery | Generative AI |
|---|---|---|
| Review | Categorization, coding | Summarization, narrative generation |
| Analysis | Keyword hits, metadata patterns | Topic modeling, relationship extraction |
| Production | Bates numbering, format conversion | Contextual redaction, smart privilege |
| Strategy | Manual case assessment | Evidence synthesis, argument development |

Managing AI-Generated Data

Generative AI creates new data types that require special handling in e-discovery. Organizations and their counsel must be able to recognize AI-generated content and account for it in preservation, collection, and review.

Types of AI-Generated Data

  • AI assistant transcripts: Slack, Teams, and other platforms where AI assistants have participated in conversations
  • AI-generated drafts: Documents created by AI tools that may later be revised or used
  • AI summaries: Condensed versions of longer documents created by AI
  • Prompt/response logs: Records of interactions with AI systems
  • AI system outputs: Reports, analyses, or recommendations generated by AI

Preservation Considerations

  • ☐ Identify all AI tools used by custodians
  • ☐ Determine if AI assistant data is within scope
  • ☐ Issue preservation notices that reference AI tools
  • ☐ Collect AI system logs where relevant
  • ☐ Document AI usage in litigation hold acknowledgments
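The checklist above can be tracked as structured data rather than a spreadsheet. The sketch below is a minimal, hypothetical example (the custodian names, tool names, and field names are all illustrative, not from any real platform): it maps custodians to the AI tools they use and flags anyone who uses AI tools but has not received a preservation notice that references them.

```python
# Hypothetical custodian inventory: each custodian maps to the AI tools
# they use and whether an AI-aware preservation notice has been issued.
custodians = {
    "a.smith": {"ai_tools": ["Copilot", "Teams AI assistant"], "notice_issued": True},
    "b.jones": {"ai_tools": ["ChatGPT"], "notice_issued": False},
    "c.park": {"ai_tools": [], "notice_issued": True},
}

def preservation_gaps(inventory):
    """Flag custodians who use AI tools but lack an AI-aware preservation notice."""
    return [name for name, info in inventory.items()
            if info["ai_tools"] and not info["notice_issued"]]

print(preservation_gaps(custodians))  # ['b.jones']
```

Keeping the inventory in a machine-readable form also makes it easy to attach to litigation hold acknowledgments and to re-run the gap check as custodians adopt new tools.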

AI-Assisted Review: Best Practices

Generative AI enables hybrid workflows that combine machine efficiency with human judgment. The key is understanding when to rely on AI and when human review remains essential.

Workflow Integration Framework

  1. Early case assessment: Use AI to quickly understand case landscape, identify key themes, and prioritize custodians
  2. Processing and enrichment: Apply AI to extract entities, identify relationships, and generate summaries
  3. First-pass review: Use AI-assisted review for initial coding with human QC
  4. Deep-dive analysis: Apply AI synthesis tools for privilege review, redaction, and production preparation
  5. Trial preparation: Leverage AI for deposition preparation, exhibit organization, and witness preparation

Quality Assurance Protocol

  • Initial validation: Test AI on known documents before production use
  • Ongoing sampling: have humans re-review a random 5-10% sample of AI outputs throughout the review
  • Edge case review: All privilege and highly sensitive material reviewed by humans
  • Documentation: Record AI tools used, settings, and validation results
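The ongoing-sampling step above is easy to make reproducible. This is a minimal sketch (the function name and parameters are illustrative): it draws a seeded random sample of document IDs at the guide's suggested 5-10% rate, so the same sample can be regenerated later for the audit trail.

```python
import math
import random

def qc_sample(doc_ids, rate=0.05, seed=42):
    """Draw a reproducible random QC sample of AI-coded documents.

    rate: fraction of documents to re-review by humans (5-10% per the protocol).
    seed: fixed seed so the exact sample can be reproduced for documentation.
    """
    rng = random.Random(seed)
    n = max(1, math.ceil(len(doc_ids) * rate))  # always sample at least one doc
    return sorted(rng.sample(doc_ids, n))

sample = qc_sample(list(range(1, 10001)), rate=0.05)
print(len(sample))  # 500 documents for a 10,000-document population at 5%
```

Recording the seed, rate, and resulting sample alongside the review log satisfies the documentation step of the same protocol.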

Chain of Custody in AI Environments

Traditional chain of custody documentation must be extended to account for AI involvement. Courts and opposing counsel increasingly scrutinize how AI was used in discovery.

AI Documentation Requirements

  • Tool identification: What AI tools were used and what versions?
  • Prompt records: What queries or instructions were given to AI systems?
  • Output records: What outputs were generated and how were they used?
  • Human review: Who reviewed AI outputs and what decisions were made?
  • Validation data: What testing was performed to verify accuracy?

If opposing counsel challenges AI-assisted decisions, your ability to explain the AI's reasoning and document your review process will be critical to defending your approach.
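The five documentation requirements above can be captured as one structured record per AI interaction. This is a minimal sketch under stated assumptions: the class, field names, and the example tool "ReviewAssist" are hypothetical, chosen only to mirror the five bullets.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIAuditRecord:
    """One record per AI interaction, covering the five documentation points."""
    tool_name: str         # tool identification
    tool_version: str
    prompt: str            # prompt records
    output_summary: str    # output records
    reviewer: str          # human review
    review_decision: str
    validation_note: str   # validation data
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIAuditRecord(
    tool_name="ReviewAssist",  # hypothetical tool name
    tool_version="2.1",
    prompt="Summarize custodian emails re: contract negotiation",
    output_summary="Three-paragraph summary; flagged 2 potentially privileged threads",
    reviewer="J. Chen",
    review_decision="Summary accepted; privilege flags escalated to counsel",
    validation_note="Spot-checked against source emails",
)
print(json.dumps(asdict(record), indent=2))
```

Serializing each record to JSON gives you an append-only log you can produce if your AI use is challenged.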

Case Law Update

Courts are beginning to address AI use in discovery. Key developments to monitor:

Recent Decisions

| Issue | Key Holding | Implication |
|---|---|---|
| AI-generated briefs | Attorneys must review AI outputs; blind reliance is inadequate | Maintain human judgment in all AI-assisted work |
| AI privilege review | AI does not create new privilege; analysis is discoverable | Document AI review carefully; consider clawback provisions |
| AI hallucination | Submitting fabricated AI citations can draw sanctions | Always verify AI-generated citations before filing |

Vendor Selection Criteria for GenAI E-Discovery

Evaluate GenAI e-discovery vendors on:

  • ☐ Data security: Where is data processed? SOC 2 compliance?
  • ☐ Training data: Is data used to train AI models?
  • ☐ Output accuracy: What validation data demonstrates reliability?
  • ☐ Human oversight: How does tool support human review?
  • ☐ Audit trail: Does system log all AI interactions?
  • ☐ Integration: Works with existing e-discovery platforms?
  • ☐ Transparency: Can vendor explain how AI reaches conclusions?

This guide is part of the Decision&Law Practice Guides series.