In 2018, "cheating on an online assessment" was a niche concern with a familiar shape. A learner Googled the question. Maybe they pasted it into Stack Exchange or Quora. Maybe a classmate texted them the answer. Detection was occasional, manual, and the cost-benefit math for the cheater was real — the effort to find a credible answer wasn't trivial, and most learners on paid programs weren't motivated enough to systematically game them.

By 2026, that shape has changed beyond recognition. The shift isn't a worsening of the same problem; it's a new problem that looks superficially like the old one. Most of what online course platforms call "anti-cheating" in their marketing is calibrated for the 2018 threat model — block tab-switching, restrict copy-paste, watch for unusual timing patterns. These defenses are not useless. They're just defending a perimeter the threat no longer crosses.

This piece walks through what AI-assisted cheating actually looks like in practice, the four distinct surfaces it operates on, why most platform defenses miss the most common forms, and what architecturally serious defense actually requires.

The shift, in one paragraph

When ChatGPT crossed into mainstream use in late 2022, it didn't just give learners a faster way to find answers. It changed the psychology of using assistance during an assessment. Looking up an answer in 2018 felt like cheating; asking ChatGPT in 2024 increasingly feels like "checking your work", "clarifying the question", or "making sure I'm on the right track." Multiple studies of student behavior since 2023 have documented this normalization — the share of learners using AI assistance during nominally unsupervised assessments has risen sharply, and most of them don't characterize their behavior as cheating [1]. The category itself has softened. The result, from the assessment's perspective, is the same: what the test was meant to measure, it no longer measures.

This matters most for credentials that are supposed to mean something. A learner who uses AI to pass a quiz they don't actually understand still receives the same certificate. The certificate's signal value to a recruiter or certifying body collapses in proportion to how routinely this happens — which, by 2026, is most of the time on unproctored assessments.

The four surfaces of AI-assisted cheating

To defend against AI cheating, you have to know where it operates. There are four distinct surfaces, each requiring different defenses. Most platforms address one or two; almost none address all four.

Surface 1: The browser tab

The simplest form. The learner takes the assessment in one browser tab; ChatGPT, Claude, Gemini, or Copilot is open in another. They alt-tab between them, reading the question, asking the AI, transcribing the answer.

This is the surface most platform defenses are calibrated for. "Lockdown browser" features — preventing tab-switching, blocking certain keyboard shortcuts, detecting window-focus changes — are designed to detect this specific pattern. They work, partially, against this surface.
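To make Surface 1 defenses concrete, here is a minimal sketch of the browser events a lockdown-style feature can actually observe. The DOM APIs are standard; the "/integrity/events" endpoint and event names are hypothetical, and a real platform would score these signals server-side rather than trusting the client.

```typescript
// Minimal sketch of browser-level Surface 1 monitoring.
// The "/integrity/events" endpoint is a placeholder.

function report(kind: string): void {
  // A real platform would score these server-side; the client only observes.
  navigator.sendBeacon("/integrity/events", JSON.stringify({ kind, at: Date.now() }));
}

// Tab-switching: fires when the learner alt-tabs or changes tabs.
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") report("tab-hidden");
});
window.addEventListener("blur", () => report("window-blur"));

// Copy/paste restrictions: block the event and log the attempt.
for (const type of ["copy", "cut", "paste"] as const) {
  document.addEventListener(type, (e: ClipboardEvent) => {
    e.preventDefault();
    report(`blocked-${type}`);
  });
}
```

Note that everything in this sketch depends on the browser reporting honestly about itself, which is exactly the limitation the remaining surfaces exploit.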

But this is also the surface most cheaters have moved past. The constraint of tab-switching is annoying enough that learners default to other surfaces when the platform makes Surface 1 inconvenient.

Surface 2: The mobile second-screen

The learner takes the assessment on their laptop. Their phone sits next to it, screen on, ChatGPT app open. They read the question on the laptop, type or speak it into the phone, get the answer, type the answer back into the laptop.

No browser-level defense can detect this. The phone is a separate device, on a separate network in many cases, completely outside the platform's observation. From the platform's logs, the assessment looks normal — no tab-switching, no copy-paste, no unusual timing.

This is the most common form of AI-assisted cheating in 2026. It's also the most invisible. Browser-based proctoring tools cannot see the phone. Audio-based proctoring can sometimes detect the learner's voice if they read the question aloud, but learners increasingly type their queries silently. Camera-based proctoring can detect the phone if it's in frame — but learners angle the phone deliberately or hold it below the camera's line of sight.

Defending Surface 2 requires defenses that don't exist at the browser level at all — environmental analysis (camera surveillance of the room, not just the user), audio-pattern analysis, behavioral pattern recognition (eye movements that suggest reading off a second screen), and in some cases physical exam environments. Most platforms cannot do any of this.
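For the camera-based side, here is a sketch of what phone-in-frame detection can look like, using the off-the-shelf coco-ssd detector (whose COCO label set includes a "cell phone" class). The model choice, polling interval, and confidence threshold are illustrative assumptions, not a description of any specific platform's proctoring stack.

```typescript
// Sketch of phone-in-frame detection with the pre-trained coco-ssd model.
// Threshold and polling interval are illustrative choices.
import "@tensorflow/tfjs";
import * as cocoSsd from "@tensorflow-models/coco-ssd";

async function watchForPhone(video: HTMLVideoElement): Promise<void> {
  const model = await cocoSsd.load();
  setInterval(async () => {
    const detections = await model.detect(video);
    const phone = detections.find((d) => d.class === "cell phone" && d.score > 0.6);
    if (phone) {
      // A real proctoring layer would correlate this with gaze and timing
      // signals before raising a flag; here we just surface the detection.
      console.warn("possible phone in frame", phone.bbox, phone.score);
    }
  }, 2000);
}
```

Even this only catches a phone the camera can see, which is why learners who know the defense hold the phone below the lens.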

Surface 3: The voice assistant

The learner is wearing earbuds. The earbuds are connected to their phone. The phone runs an AI voice assistant — increasingly Siri, Google Assistant, or a custom GPT voice integration. The learner reads the question on screen, subvocalizes it into the earbud's microphone, and receives the answer through the earbud. The platform sees nothing — no tab-switching, no phone usage, no audible queries.

This surface emerged seriously in 2024-2025 as voice-AI integration improved. By 2026, it's a known technique that requires specific defenses: ear-canal detection, Bluetooth/wireless device monitoring, and behavioral anomaly detection (long pauses while listening to responses, mouth-movement patterns suggesting subvocalization).

Almost no course platform has any defense for Surface 3. It requires hardware-level integration with the device the assessment is being taken on — which, for browser-based platforms, simply isn't possible.
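About the closest a pure browser platform can get is a weak heuristic like the sketch below: enumerate the audio devices the OS reports and flag labels that look like wireless earbuds. The APIs are standard, but the keyword list is an illustrative guess, labels are only exposed after a permission grant, and a learner can simply pair the earbuds to the phone instead of the laptop, which is exactly the limit described above.

```typescript
// Heuristic sketch: flag audio outputs whose labels suggest wireless earbuds.
// Keyword list is an illustrative guess; labels require a permission grant.
const EARBUD_HINTS = ["airpods", "buds", "bluetooth", "wireless"];

async function flagWirelessAudio(): Promise<string[]> {
  // Request mic access so enumerateDevices() returns real device labels.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const devices = await navigator.mediaDevices.enumerateDevices();
  stream.getTracks().forEach((t) => t.stop()); // release the mic
  return devices
    .filter((d) => d.kind === "audiooutput")
    .map((d) => d.label.toLowerCase())
    .filter((label) => EARBUD_HINTS.some((hint) => label.includes(hint)));
}
```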

Surface 4: The desktop overlay

The most sophisticated surface. The learner runs an application — increasingly available as off-the-shelf software — that presents an AI assistant as an overlay on top of the assessment window itself. The overlay reads what's on screen, queries an AI in the background, and returns the answer in a small window that floats above the assessment.

These tools weren't widely available in 2022. By 2025-2026, they're commodity software with consumer pricing, marketed (not always honestly) as productivity tools. Learners install them once and use them across every assessment they take.

Browser-based defenses cannot see Surface 4 because the browser doesn't know the overlay exists. The overlay runs at the OS level, above the browser, invisible to anything the browser can detect. Defending Surface 4 requires the assessment platform to operate at that same level — controlling which applications can run during the assessment, blocking accessibility-API access for non-essential applications, monitoring the screen at the OS level rather than the browser level.
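Here is a minimal sketch of one such OS-level check: a managed assessment app listing running processes against a blocklist of known overlay tools. The tool names are hypothetical, the shell-out is illustrative (production systems would use platform APIs and accessibility-API restrictions), and Windows would need tasklist instead of ps.

```typescript
// Sketch of an OS-level process check a managed assessment app might run.
// Blocklist names are hypothetical; macOS/Linux only (Windows: tasklist).
import { execSync } from "node:child_process";

const OVERLAY_BLOCKLIST = ["overlayhelper", "answerpilot"]; // hypothetical tools

function findBlockedProcesses(): string[] {
  // "ps -axo comm" prints one command name per running process.
  const running = execSync("ps -axo comm").toString().toLowerCase();
  return OVERLAY_BLOCKLIST.filter((name) => running.includes(name));
}

const hits = findBlockedProcesses();
if (hits.length > 0) {
  console.error("assessment blocked, overlay tooling detected:", hits);
  process.exit(1);
}
```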

This is the surface where most platform defenses fail completely. "Lockdown browser" is a contradiction in terms when the threat operates above the browser.

Why most platform defenses fail

The pattern that emerges across these four surfaces is structural, not incidental. Most course platforms are built as web applications. They run in a browser. Their defenses operate at the browser level — the layer of the system they actually have access to. Browsers can detect browser events. They cannot detect what's happening on the phone next to the laptop, in the earbuds, or in the desktop overlay above them.

This means the deepest defenses available to most platforms are essentially calibrated for Surface 1. Some platforms add audio-based or camera-based proctoring as third-party add-ons — partial Surface 2 coverage. Almost none address Surfaces 3 and 4 at all.

The marketing language obscures this gap. "Secure assessments," "AI-resistant proctoring," "anti-cheating technology" — these phrases describe a category, not a level of defense. Two platforms can both claim "anti-cheating" while one defends only Surface 1 and the other defends all four. The buyer cannot tell from the marketing language alone.

A useful sanity check, when evaluating any platform's anti-cheating claims: ask which of the four surfaces it defends, specifically. The platforms that defend all four can answer this directly and operationally. The platforms that defend only the browser will pivot to vague language about "comprehensive integrity" or "advanced detection."

What architecturally serious defense actually requires

Defending against all four surfaces is not a feature; it's an architectural commitment. Three structural things have to be true:

1. The assessment environment has to operate at the operating-system level, not the browser level. This means a managed-environment application that the learner runs to take the assessment — not a webpage they navigate to. The application controls which other applications can run during the assessment, blocks overlay tools, prevents screen-sharing, and monitors the event stream; a minimal sketch of this kind of environment control follows this list. Skolarli Integrity Browser is the implementation of this for Skolarli's assessments. It's not a browser plugin; it's a standalone application that owns the entire assessment environment.

2. The integrity infrastructure has to extend beyond the device. Camera-based proctoring with environmental analysis (not just face detection), audio-pattern analysis, and behavioral-pattern recognition layered into the proctoring AI. These cover Surfaces 2 and 3 — the phone next to the laptop, the earbud feeding answers in. Skolarli's AI Proctoring layer is calibrated specifically for these patterns rather than treating proctoring as a generic surveillance feed.

3. The assessment design itself has to be AI-resistant by structure, not just by environment. Even with Surfaces 1-4 fully defended, an assessment whose questions an AI can solve in moments (memorization, single-answer recall, predictable formats) is partially compromised. Caselets and scenario-based assessment design — where the answer requires judgment across multiple steps, contextual reasoning, and decisions whose evaluation depends on the path taken rather than the final answer — are structurally harder for an AI to game even if a determined cheater bypasses every environmental defense; a sketch of this path-dependent scoring shape also follows this list. This is also why Skolarli's primary assessment format is caselet-based rather than MCQ-based: the format itself is part of the defense.
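First, the environment-control sketch referenced in point 1. It uses Electron purely as an illustrative shell; nothing in this post says Skolarli Integrity Browser is Electron-based. The point is the shape of control an OS-level application gets that a webpage cannot: kiosk mode, always-on-top, and content protection against screen capture.

```typescript
// Illustrative managed-environment window settings (Electron used only as
// a stand-in shell; not a claim about Skolarli Integrity Browser's stack).
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  const win = new BrowserWindow({
    kiosk: true,       // fullscreen, no window chrome, no easy escape
    alwaysOnTop: true, // keep the assessment above other windows
  });
  // Exclude the window's contents from screen capture on supported platforms.
  win.setContentProtection(true);
  win.loadURL("https://assessments.example.com/session"); // placeholder URL
});
```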
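And the path-dependent scoring shape referenced in point 3. Every type and field name here is hypothetical, not Skolarli's actual schema; the point is that scoring consumes the sequence of decisions rather than a single final answer.

```typescript
// Hypothetical shape for path-evaluated caselets; not Skolarli's schema.
interface CaseletStep {
  id: string;
  prompt: string;
  options: string[];
}

interface CaseletAttempt {
  steps: CaseletStep[];
  choices: number[]; // option index chosen at each step, in order
}

// Path-dependent scoring: the same choice can earn different credit
// depending on the decisions that led to it.
function scoreAttempt(attempt: CaseletAttempt, rubric: Map<string, number>): number {
  let score = 0;
  let path = "";
  attempt.steps.forEach((step, i) => {
    path += `${step.id}:${attempt.choices[i]};`;
    score += rubric.get(path) ?? 0; // credit keyed on the full path so far
  });
  return score;
}
```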

For coding assessments specifically, the defense problem is sharper still. AI tools have been extraordinarily effective at solving traditional algorithm-style problems since 2023. Kodr.run, Skolarli's coding-assessment product launching in the first week of June 2026, is built around problems where the candidate's reasoning across the assessment matters more than the final code being correct — moving coding evaluation closer to caselet logic than algorithm-puzzle logic. The format anticipates that AI will continue to solve any individual problem; the integrity comes from evaluating the process.
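Here is a minimal sketch of one process-level signal such a format might use, assuming (hypothetically) an editor that emits timestamped edit events. Flagging large single insertions as paste-like is one crude signal among the many a real evaluator would combine.

```typescript
// Hypothetical edit-event log and one crude process signal: large single
// insertions are consistent with pasting from an external source.
interface EditEvent {
  at: number;       // ms timestamp
  inserted: string; // text added by this edit
}

function pasteLikeBursts(log: EditEvent[], minChars = 120): EditEvent[] {
  // Human typing produces many small edits; one event inserting a large
  // block of code is a signal worth correlating with other evidence.
  return log.filter((e) => e.inserted.length >= minChars);
}
```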

What this means for evaluating platforms

A practical question framework, derived from the four surfaces:

  1. Surface 1 (browser tab): Does the platform prevent tab-switching, alt-tab, and copy-paste during assessment? (Almost all serious platforms do.)
  2. Surface 2 (mobile second-screen): Does the platform offer camera-based environmental analysis — not just face detection, but room/environment analysis that can detect a phone in the workspace? (Most platforms do not.)
  3. Surface 3 (voice assistant): Does the platform monitor for Bluetooth/wireless audio devices and behavioral patterns suggesting earbud-mediated assistance? (Few platforms do.)
  4. Surface 4 (desktop overlay): Does the platform control the assessment environment at the OS level, preventing external overlays from reading or injecting into what the candidate sees or inputs during the test? (Very few platforms do.)
  5. Assessment design: Beyond the environment, does the platform support assessment formats that are AI-resistant by structure — caselets, scenario-based reasoning, process-evaluated coding tasks? (Almost no general-purpose course platform does.)

A platform that can answer 1-5 with operational specifics is defending the actual threat. A platform that pivots to vague "comprehensive integrity" language is defending the marketing perimeter, not the assessment.

If your program's credentials need to mean something to a recruiter, certifying body, or employer in 2026, the answer to those five questions matters more than any feature comparison. Skolarli Kreator is built around this defense architecture as the core of the platform, not as an add-on. If the threats this post describes match what your assessments actually face, the defenses described here are what your platform needs to support.

Post 2 in this series, currently in development, examines how this assessment-integrity layer connects to the verifiable-credentials infrastructure that makes the resulting credentials defensible to third parties.


NOTE:

Illustration generated with Google nano banana (Gemini 2.5 Flash Image), curated by the Skolarli design team. AI-augmented illustration is part of our forthcoming Skolarli Marketplace integrations.