
AI as Prosecutor: INDECOPI Fines Scotiabank 202.96 UIT

Elena Markov
May 7, 2026
13 min read
Tags: algorithmic-enforcement, automated-decision-making, AI-accountability, consumer-protection, statistical-sampling, human-in-the-loop, regulatory-compliance


Disclaimer

This analysis is for educational purposes only and does not constitute legal advice. The information provided is general in nature and may not apply to your specific situation. Laws and regulations change frequently; verify current requirements with qualified legal counsel in your jurisdiction.

Last Updated: May 7, 2026

The Machine That Filed Charges

On March 16, 2026, Peru's Consumer Protection Commission No. 3 (INDECOPI) fined Scotiabank Perú S.A.A. 202.96 UIT for violating Article 58.1.e of the Consumer Protection and Defense Code — specifically, placing promotional phone calls without prior, informed, express, and unequivocal consumer consent. The conduct itself was unremarkable. What makes this ruling a landmark is the method the authority used to detect and prove it: 1,207,166 audio files automatically transcribed using Faster-Whisper large-v3-turbo, classified by a Python lexical co-occurrence algorithm, and then subjected to human review of just 385 calls — roughly 0.058% of the algorithmically filtered universe.
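
The ruling describes the classification stage as a Python lexical co-occurrence algorithm run over the Whisper transcripts. The actual keyword lists and thresholds were not published; the sketch below is a minimal illustration of the technique, using Spanish marketing terms that are assumptions, not the DFI's lists.

```python
# Minimal sketch of lexical co-occurrence classification. The real keyword
# lists and thresholds were not disclosed in the ruling; the Spanish terms
# below are illustrative assumptions only.
PROMO_TERMS = {"oferta", "promoción", "tarjeta", "préstamo", "beneficio"}
CONTACT_TERMS = {"llamamos", "ofrecerle", "exclusivo", "aprobado"}

def looks_promotional(transcript: str) -> bool:
    """Flag a transcript when terms from both lists co-occur in it."""
    words = set(transcript.lower().split())
    return bool(words & PROMO_TERMS) and bool(words & CONTACT_TERMS)

print(looks_promotional("buenas tardes le llamamos para ofrecerle una tarjeta"))
```

A classifier this crude is exactly why the Word Error Rate and GIGO arguments matter: a mistranscribed keyword flips the co-occurrence test, and the flip propagates into the universe definition.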

Scotiabank's defense was technically sophisticated by any standard, and strikingly so for Latin American administrative practice. The bank invoked the Uber case (Amsterdam Court of Appeal, 2023), the SyRI case (The Hague District Court, 2020), Instruction 2/2026 of the Spanish General Council of the Judiciary, the EU AI Act, Council of Europe Recommendation CM/Rec(2020)1, and Peru's own AI Regulation Decree (DS 115-2025-PCM). It challenged the algorithmic black box, the "garbage in, garbage out" principle, the model's Word Error Rate, combined uncertainty propagation, and the absence of explicit legal authorization for using AI as evidence in a sanctioning proceeding. The Commission rejected every procedural argument and confirmed the fine. One commissioner dissented on the mitigating factor.

This ruling does not close four key doctrinal questions — it opens them.


AI as an Investigative Tool: The Line Between Support and Substitution

Scotiabank's core argument was that the Directorate of Inspection (DFI) had not conducted an investigation but had delegated the determination of imputable facts to an automated system. The distinction it drew is conceptually precise: there is a meaningful difference between using technology to manage information and using AI to pre-classify the legal fact at issue. The algorithm did not merely organize the audio files; it defined the universe from which the sample would be drawn and, by doing so, structurally conditioned the scope of the entire indictment.

The Commission responded with a three-stage architecture: (i) automated technical filtering to delimit the universe of potentially promotional communications; (ii) statistically representative sampling of that universe; (iii) direct human review of the sample. From this sequence, it concluded that AI operated as a "first technical filter" and that "at no point did these tools substitute human evaluation or determine, by themselves, the existence of an administrative infraction."

The thesis is defensible, but it demands qualifications the ruling does not fully develop. Human supervision that occurs after massive algorithmic filtering is not equivalent to an original verification of the unfiltered universe. If the system classifies 59% of audio files as "promotional" and human review operates only on a sample extracted from that 59%, human intervention comes after the bias, not before it. For human control to be effective in the sense demanded by the Uber standard — which the Commission itself acknowledges as an interpretive reference — it must have "a real capacity to influence or modify the final decision" based on all relevant data. A review that operates on an algorithmically pre-filtered universe does not satisfy that standard in strict terms.
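
The structural point can be made concrete with a toy simulation: when human review samples only from the algorithm's positives, calls the filter missed are, by construction, never seen by a reviewer. Every rate below is an illustrative assumption, not a figure from the case.

```python
import random

random.seed(42)

# Toy universe: 10,000 calls, 59% genuinely promotional
# (all rates here are illustrative assumptions, not case figures).
universe = [random.random() < 0.59 for _ in range(10_000)]

# Imperfect filter: 95% recall on promotional calls, 5% false-positive rate.
flagged = [i for i, promo in enumerate(universe)
           if random.random() < (0.95 if promo else 0.05)]
flagged_set = set(flagged)

# Human review draws its sample only from what the filter flagged...
sample = random.sample(flagged, 385)

# ...so promotional calls the filter missed lie outside review's reach.
missed = sum(1 for i, promo in enumerate(universe)
             if promo and i not in flagged_set)
print(f"flagged: {len(flagged)}, promotional calls never reviewable: {missed}")
```

No amount of diligence in reviewing the 385 sampled calls can detect the `missed` set — which is the core of Scotiabank's argument that filtering conditions the indictment before any human looks at the evidence.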

This is where the analysis grows complicated. Peruvian law, as the ruling acknowledges, does not require special legal authorization when technology operates in a support role and fact-finding remains under direct, autonomous human control. The real debate is not whether AI can help — no one disputes that — but at what point "support" becomes functional substitution. The Commission places that threshold at the human signature on the final act. Scotiabank places it earlier: at the moment the algorithm defines what evidence exists for the case file. Neither position is manifestly wrong, which is precisely why this ruling does not settle the question.

The most practically relevant statement in the ruling concerns methodological transparency: sharing Python code, prompts, and sample transcriptions with the respondent satisfies "the essential content of the right of defense." There is no requirement to disclose the model's internal parameters or to reproduce the computational environment. This transparency standard — lower than what European precedents demand for higher-criticality systems — will be contested in future proceedings.


Consent as an Element of the Offense: Two Regimes, One Verification

The second axis of doctrinal conflict concerned jurisdictional competence and the applicable consent standard. Scotiabank argued that the National Data Protection Authority (ANPDP) has exclusive competence to evaluate the quality, sufficiency, and validity of consent under the Personal Data Protection Law (LPDP), and that INDECOPI could only verify the presence or absence of a consent act, not its substantive attributes.

The Commission rejected this reading with a construction that is doctrinally sound. The object of the sanctioning proceeding was not to verify the legality of personal data processing under the LPDP, but to determine whether aggressive commercial methods were used under the Consumer Code. The protected legal interest is different: consumer tranquility and freedom of choice, not the fundamental right to personal data protection. Verifying whether a log entry, a contractual clause, or an audio recording proves authorization for promotional communications is a "minimum and indispensable examination" to configure the Article 58.1.e offense — not an encroachment on ANPDP's exclusive domain.

The seventh supplementary provision of the LPDP reinforces this conclusion by expressly recognizing that its provisions do not affect the competences of other entities in their respective material domains. The competences are complementary, not mutually exclusive.

That said, the ruling opens a tension it does not fully resolve. The Commission rejected Crediscotia clauses that authorized communications "through third parties" because they did not nominally identify Scotiabank. In doing so, the authority applies a provider specificity standard that is difficult to distinguish from the informed consent requirement under the LPDP. The argument that this is verification of the "enabling authorization" under the Code — and not assessment of "legal validity" under the LPDP — is formally correct but materially very close. The boundary between the two regimes will remain contested terrain.


Statistical Sampling and Material Truth: The Problem of Projected Violations

The third question is arguably the most novel for Latin American administrative sanctioning law. Scotiabank argued that probabilistic sampling — with a ±5% error margin and 95% confidence level — is incompatible with the principle of material truth in a sanctioning proceeding, because it substitutes full, individualized fact-finding with a statistical estimate. Each sample element represented approximately 805 universe communications, meaning the indictment was based on projections, not individual determinations.
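
The 385-call figure is consistent with the standard sample-size formula for estimating a proportion (often attributed to Cochran) at the stated parameters, assuming the worst-case variance p = 0.5 — a plausible reconstruction, since the ruling does not print the calculation:

```python
import math

def cochran_sample_size(confidence_z: float = 1.96,
                        margin_of_error: float = 0.05,
                        p: float = 0.5) -> int:
    """Minimum sample size to estimate a proportion (infinite population).

    p = 0.5 is the worst case (maximum variance), the usual default when
    the true noncompliance rate is unknown in advance.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n0)

print(cochran_sample_size())  # 385 for z=1.96, e=0.05, p=0.5
```

For a universe in the hundreds of thousands, the finite-population correction barely moves this number — which is why the sample size stays near 385 whether the universe is 600,000 or 6 million, and why each sample element ends up "representing" so many unexamined communications.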

The Commission rejected this argument by invoking simple random sampling, the point estimator doctrine grounded in the central limit theorem, and an internal INDECOPI working document establishing that a 95% confidence level and 5% error are "appropriate" for administrative consumer enforcement.
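
In sketch form, the estimation step projects the sample's noncompliance rate onto the universe with a normal-approximation confidence interval. The violation count used below is hypothetical — the ruling's actual sample results are not reproduced here:

```python
import math

def extrapolate(violations_in_sample: int, n: int, universe: int,
                z: float = 1.96) -> tuple[int, int, int]:
    """Point estimate and 95% CI for total violations in the universe,
    projected from a simple random sample (normal approximation)."""
    p_hat = violations_in_sample / n
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    low, point, high = (round(universe * q) for q in
                        (p_hat - half_width, p_hat, p_hat + half_width))
    return low, point, high

# Hypothetical: 290 noncompliant calls out of 385 sampled,
# projected onto a universe of 657,221 audios.
print(extrapolate(290, 385, 657_221))
```

The interval's width in absolute terms — tens of thousands of projected violations — is what gives Scotiabank's material-truth objection its bite: the point estimate, not any individually verified count, drives the fine.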

However, Scotiabank's argument has a dimension the ruling does not adequately address: the difference between using sampling as a preliminary enforcement tool — to decide whether to open a proceeding — and using it as evidentiary foundation — to calculate the number of violations and calibrate the fine. In the first use, sampling is reasonable and efficient. In the second, it creates genuine tensions with the presumption of lawfulness, because the respondent is held responsible for violations that were never individually established.

The algorithmic stage compounds the problem, as Scotiabank noted through the GIGO (garbage in, garbage out) principle: if the universe subject to sampling was pre-classified by a model with a recognized error rate of approximately 7.7%, uncertainties accumulate. The Commission rejected this accumulation by arguing that Whisper's model error is "not cumulative at the sample level" because the sample elements were reviewed manually. This is technically correct regarding the 385 audited recordings, but it does not answer the question about error in the 657,221 universe audios that no human reviewed individually and whose noncompliance rate was extrapolated from the sample.
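
Purely to illustrate the accumulation argument: a textbook root-sum-of-squares combination of independent relative errors would widen the effective margin well beyond ±5%. Nothing in the ruling suggests the Commission applied, or was required to apply, this rule — it is one standard way to reason about the problem:

```python
import math

def combined_error(classifier_error: float, sampling_error: float) -> float:
    """Root-sum-of-squares combination of two independent relative errors --
    a textbook propagation rule, shown only to illustrate the GIGO-based
    accumulation argument, not the method the Commission used."""
    return math.sqrt(classifier_error ** 2 + sampling_error ** 2)

# ~7.7% recognized model error combined with the ±5% sampling margin.
print(f"{combined_error(0.077, 0.05):.4f}")
```

Under these assumptions the combined margin comes out near ±9.2% — nearly double the sampling margin alone, which is precisely why confining the error analysis to the 385 manually reviewed calls leaves the extrapolated universe unaddressed.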

The definitive answer to whether a sanctioning proceeding can rest on statistically projected violations over an algorithmically classified universe is not provided here. The Commission validates it pragmatically for this case, but the underlying doctrine is not consolidated and will be challenged before the Specialized Chamber.


The Dissent and the Divisibility of Acknowledgment

Commissioner Héctor Ferrer Tafur dissented on one specific point: the rejection of the mitigating factor for partial acknowledgment of responsibility. The majority required "total, clear, and unequivocal" acknowledgment of all imputed facts to trigger the benefit under Article 257 of the General Administrative Procedure Law. Ferrer Tafur argued that this disincentivizes cooperation when the indictment aggregates autonomous and independent facts: if acknowledging 135 of 168 cases has the same sanctioning effect as acknowledging none, the system rewards obstruction and penalizes partial cooperation.

The dissent captures a genuine tension in the design of the mitigating factor. The majority position is coherent with a unified view of the sanctioning proceeding, where acknowledgment produces effects only when it closes all controversy. The minority position is coherent with an atomized view, where each call is an autonomous fact with its own evidentiary record and liability is divisible. Neither position is irrational; they reflect different conceptions of the nature of sanctioning proceedings facing mass and serial violations.

What the dissent reveals, more broadly, is that the procedural instruments of Peruvian administrative sanctioning law were not designed for proceedings of this scale — hundreds of thousands of audio files, more than three hundred thousand potentially affected consumers, statistical samples, classification algorithms. The ruling does not adapt its categories to the new reality; it applies them to it as best it can. That adjustment, for now, produces reasonable but doctrinally incomplete results.


Key Takeaways

  • Peru's INDECOPI has validated AI-assisted enforcement: using Faster-Whisper and Python to classify over 1.2 million audio files is lawful when human review of the final sample is preserved. The line between "support" and "substitution" remains undefined and will be litigated.
  • Sharing Python code, prompts, and sample transcriptions satisfies the respondent's right of defense. Disclosure of internal model parameters or computational environments is not required under current Peruvian law — a lower bar than European precedents demand.
  • INDECOPI can assess whether a provider had valid consent for promotional contact under the Consumer Code without encroaching on the data protection authority's competence, as long as the analysis is limited to establishing authorization and does not evaluate data processing legality under the LPDP. In practice, the distinction is razor-thin.
  • A "third parties" authorization clause does not automatically extend to other companies in the same corporate group. Promotional communications require express, nominally identified authorization from the specific provider making the contact.
  • Statistical sampling with a 95% confidence level and ±5% error is a valid evidentiary foundation for both the indictment and fine calculation in mass consumer enforcement. The unresolved question is how accumulated uncertainty — model error plus sampling error — should be propagated and reported.
  • Partial acknowledgment of responsibility across a proceeding that aggregates hundreds of autonomous facts does not trigger the Article 257 mitigating reduction under the majority view. The dissent's proportionality argument is strong and likely to resurface before the Specialized Chamber.
  • This is the first significant Latin American precedent on algorithmic enforcement in consumer sanctioning proceedings. Its criteria — and its gaps — will be a mandatory reference for regulators, counsel, and companies across the region.