A real IDE with 30+ languages, sandboxed execution, and live test cases — running inside the most aggressive AI-defense layer in Indian tech hiring. Powered by the kodr(run) engine.
30+ languages · Live test cases · 95% AI tool block · DPDP 2023 compliant
kodr(run)--challenge backend_eng
42:18
Py
Python 3.12
.run
def find_duplicates(nums):
    seen = set()
    dupes = []
    for n in nums:
        if n in seen:
            dupes.append(n)
        seen.add(n)
    return dupes
Test Results · Output · Console
3/3 passed
✓ test_no_duplicates · 12ms
✓ test_single_duplicate · 14ms
✓ test_multiple_duplicates · 11ms
AI tool blocked
Cursor desktop · 41:52
30+
Programming languages supported
95%
Of popular AI tools blocked at OS level
100K+
Candidates assessed across cohorts
4.5/5
Average CSAT across hiring teams
What it is
A real IDE for engineering hiring — not a textarea with syntax highlighting.
Most coding assessment tools give candidates a stripped-down code editor that wouldn't pass for a hobby project. Skolarli runs the full kodr(run) engine — Monaco editor (the same engine that powers VS Code), 30+ languages on current versions, sandboxed execution with real test cases, and IDE features candidates expect from their actual day-to-day tools.
What that means in practice: candidates write code in the environment they're used to, not in a marketing screenshot. The signal you get back reflects how they actually code — not how well they fight your tool.
And in 2026, raw problem-solving isn't the only signal worth measuring. Skolarli runs in two modes: strict mode locks AI tools out for screening rounds where you need to see how a candidate thinks unaided, while AI Co-pilot mode hands them a Skolarli-provided AI assistant and surfaces telemetry on how productively they use it. You decide which mode fits each role.
Assessment types
Nine ways to test engineering signal. One platform.
Algorithm rounds, system design, bug fixes, technical knowledge — the full spectrum of what engineering teams actually test for, in one assessment session.
Core skill
Algorithm & Data Structures
LeetCode-style problems with hidden test cases, sandboxed execution, and time-bound complexity expectations. Built on the kodr.run engine.
Live test execution per submission
Hidden & visible test cases
Per-test runtime tracking
Custom problem creation
Range
Multi-language Coverage
30+ languages across modern, systems, JVM, scripting, functional, and academic categories. Current versions, not stale runtimes.
Debugging
Bug Fix Challenges
Buggy code with passing and failing tests. Tests how candidates debug under pressure — closer to real engineering work than greenfield problems.
Real-world style bugs, not artificial puzzles
Failing-test-driven feedback
Strong predictor of debugging skill
Configurable difficulty levels
Senior signal
System Design Scenarios
Open-ended architecture questions with structured written response. Pairs with caselet evaluation by SkoAI for human-decided scoring.
Scalability, tradeoff, and capacity questions
Free-form written response with rubric
SkoAI-assisted human evaluation
Best for senior & staff-level hires
Knowledge
Technical MCQ
Concept-level multiple choice for languages, frameworks, databases, networking, and OS fundamentals. Quick screening before deeper rounds.
Pre-built banks across 50+ topics
SkoAI Quiz can generate from your stack docs
Calibrated difficulty levels
Auto-scored, instant results
Backend
SQL & Database Queries
Live SQL execution against sample schemas. Tests JOIN reasoning, aggregation, and query optimisation — all with real result-set verification (see the sketch below).
PostgreSQL, MySQL, SQLite supported
Result-set comparison with expected output
Query plan inspection (advanced)
Sample schemas pre-loaded
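To make the result-set check concrete, here is a minimal sketch using Python's built-in sqlite3 module. It is illustrative only; the orders schema, the sample rows, and the check_query helper are invented for this example, not Skolarli's actual grader.

import sqlite3

# Illustrative only: a toy result-set check, not Skolarli's grader.
# The orders schema and sample rows are invented for this sketch.
def check_query(candidate_sql, expected_rows):
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (id INTEGER, customer TEXT, total REAL);
        INSERT INTO orders VALUES
            (1, 'acme', 120.0), (2, 'acme', 80.0), (3, 'zeta', 40.0);
    """)
    actual = conn.execute(candidate_sql).fetchall()
    conn.close()
    # Compare order-insensitively unless the problem demands ORDER BY
    return sorted(actual) == sorted(expected_rows)

# Candidate query: total spend per customer
passed = check_query(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer",
    [("acme", 200.0), ("zeta", 40.0)],
)
print("passed" if passed else "failed")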
Frontend
Frontend & UI Challenges
HTML / CSS / JavaScript challenges for frontend candidates. React, Vue, and vanilla JS supported with browser-based preview.
Live preview pane in candidate IDE
Component-style problem sets
Responsive & accessibility checks
Visual diff scoring
Take-home
Project Assessments
Multi-file, multi-hour project assignments delivered in a sandboxed environment. Tests how candidates structure code and reason across files — not just function-level skill.
Multi-file workspace
Configurable time windows
Auto-graded test suites
Replaces external take-home tooling
New · AI literacy
AI Co-pilot Assessments
For senior roles where AI literacy is the signal. Candidates use a Skolarli-provided AI assistant during the assessment — you see exactly how productively they used it.
Sandboxed in-IDE AI assistant
Prompt count and token usage telemetry
Full chat transcript on the dossier
Powered by Skolarli AI models on AWS — in your VPC
Why Skolarli Coding Assessments (Powered by kodr.run)
A coding platform built for 2026 hiring, not 2018.
Most coding test platforms were built before AI cheating and AI-assisted coding became daily practice. kodr.run was built for the world we're hiring in now.
OS-level AI tool blocking
Cursor, Copilot, ChatGPT desktop, Claude desktop, and other AI assistants blocked at the OS level via Skolarli Integrity Browser. Network-level domain blocking shuts down web AI tools simultaneously.
Real IDE, not a textarea
Monaco editor (the engine that powers VS Code) with syntax highlighting, IntelliSense-style autocomplete, multi-file workspaces, and the keyboard shortcuts engineers expect.
Current language versions
Python 3.12, Java 17, Node 22, Rust 1.85, GCC 14, TypeScript 5.6 — and counting. Most legacy platforms still run versions from 2019. Candidates write code in environments that match their actual workflow.
Voice fingerprinting
Speaker biometrics confirm the same candidate is taking the entire assessment. No coached cousins, no swapped chairs mid-test.
Trust score with every result
Every coding submission carries a 0–100 trust score with severity-weighted violation log. You see the code, the test results, and the integrity context — together.
Magic-link delivery
Send a link, candidate clicks, IDE loads. No sign-up, no install (unless the Skolarli Integrity Browser is required), no friction. Candidates start coding in under 30 seconds.
Common questions
Coding assessments, upfront.
How long does a typical coding assessment take?
Most screening rounds run 45–90 minutes (1–3 algorithm problems); take-home-style project rounds run around 2 hours; senior-level assessments combining algorithms, system design, and bug fixes run up to 4 hours. You configure the duration based on the role.
Which programming languages does kodr.run support?
30+ languages across modern web (Python, JavaScript, TypeScript, Java, Go), systems (Rust, C, C++, Assembly), JVM (Kotlin, Scala, Clojure, Groovy), scripting (Ruby, Perl, Lua, Bash), functional (Haskell, OCaml, Elixir, Erlang), and academic (COBOL, Fortran, Prolog, Pascal, BASIC). All languages run on current versions.
How does kodr.run prevent candidates from using ChatGPT, Cursor, or Copilot during the test?
Two layers. First, the Skolarli Integrity Browser locks the operating system during the assessment — Cursor, Copilot, ChatGPT desktop, Claude desktop, and other AI assistants are blocked at the process level. Second, network-level domain blocking shuts down access to web-based AI tools (chat.openai.com, claude.ai, perplexity.ai, etc.). Every blocked attempt is logged and surfaces in the candidate's trust score.
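For a feel of what process-level detection can look like, here is a toy Python sketch using the third-party psutil library. The blocklist entries and the scan loop are assumptions made for this example; the Skolarli Integrity Browser's actual enforcement mechanism is not public and is not shown here.

import psutil  # third-party: pip install psutil

# Toy sketch of process-level detection, not the actual Skolarli
# Integrity Browser. The blocklist entries below are assumptions.
BLOCKED_PROCESSES = {"cursor", "chatgpt", "claude", "copilot"}

def scan_for_ai_tools():
    """Return names of running processes that match the blocklist."""
    hits = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if any(blocked in name for blocked in BLOCKED_PROCESSES):
            hits.append(name)
    return hits

for name in scan_for_ai_tools():
    print(f"AI tool blocked: {name}")  # would be logged to the trust score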
Are test cases hidden from the candidate, or visible?
Both. Each problem can be configured with public test cases (visible to the candidate, run on submit) and hidden test cases (run after submission, used for scoring). This mirrors how production code is actually evaluated — public cases give candidates feedback, hidden cases prevent over-fitting to known examples.
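A minimal sketch of that public/hidden split, assuming a simple dict-based problem spec. The field names are invented for this example, not Skolarli's actual schema; the candidate solution is the one from the demo IDE above.

# Illustrative sketch of the public/hidden split, not Skolarli's
# actual grading schema. Field names here are assumptions.
def find_duplicates(nums):  # the candidate solution from the demo IDE
    seen, dupes = set(), []
    for n in nums:
        if n in seen:
            dupes.append(n)
        seen.add(n)
    return dupes

problem = {
    "public_tests": [  # visible to the candidate, run on submit
        {"input": [1, 2, 2], "expected": [2]},
    ],
    "hidden_tests": [  # run after submission, used for scoring
        {"input": [], "expected": []},
        {"input": [3, 3, 3], "expected": [3, 3]},
    ],
}

def grade(solution, spec):
    # Public cases give the candidate feedback; only hidden cases score.
    feedback = [solution(t["input"]) == t["expected"] for t in spec["public_tests"]]
    hidden = [solution(t["input"]) == t["expected"] for t in spec["hidden_tests"]]
    return feedback, sum(hidden) / len(hidden)

feedback, score = grade(find_duplicates, problem)
print(feedback, score)  # [True] 1.0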
Can we add our own custom problems and test cases?
Yes. Customers can author custom problems with full editor support — problem statement, starter code, sample inputs, public test cases, hidden test cases, time limits, and memory limits. Custom problems stay private to your tenant. SkoAI Quiz can also generate calibrated problems from your own engineering documentation or job descriptions.
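As a rough illustration, a custom problem definition might carry fields like the ones below. This is a hypothetical shape based on the capabilities listed in the answer above; Skolarli's real authoring schema and API are not public.

# Hypothetical authoring payload, shaped after the fields listed
# above. All names and values here are assumptions.
custom_problem = {
    "title": "Find duplicates",
    "statement": "Return every element that appears more than once.",
    "starter_code": "def find_duplicates(nums):\n    ...\n",
    "sample_inputs": [[1, 2, 2]],
    "public_tests": [{"input": [1, 2, 2], "expected": [2]}],
    "hidden_tests": [{"input": [3, 3, 3], "expected": [3, 3]}],
    "time_limit_ms": 2000,
    "memory_limit_mb": 256,
}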
How does code execution work — is it really running my candidate's code?
Yes. Candidate code runs in sandboxed containers with strict CPU, memory, and time limits. Each language has its own runtime environment with the standard library available. Results, including stdout, stderr, runtime, and memory usage, are captured and surfaced in the dossier. We never share candidate code across tenants or train AI on it.
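A simplified sketch of the idea, using Python's standard subprocess and resource modules on Linux. The limits, file name, and helper are illustrative; the production sandbox runs in containers, which this sketch does not replicate.

import resource
import subprocess

# Simplified Linux sketch of limited execution, not Skolarli's
# container runtime. Limits and the file name are illustrative.
def set_limits():
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                     # 2s CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MB

def run_submission(path):
    try:
        result = subprocess.run(
            ["python3", path],
            capture_output=True, text=True,
            timeout=5,               # wall-clock cap on top of the CPU cap
            preexec_fn=set_limits,   # apply limits inside the child process
        )
        return result.stdout, result.stderr, result.returncode
    except subprocess.TimeoutExpired:
        return "", "time limit exceeded", -1

stdout, stderr, code = run_submission("solution.py")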
How do scoring and exporting results work?
Each problem produces an automated score based on test cases passed, runtime efficiency, and memory usage. Combined with the candidate's trust score, the assessment outputs a complete dossier — code submissions, test results, violation log, AI-assisted summary, and a final recommendation. Dossiers export as PDF or push to your ATS via webhook or REST API.
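As a back-of-the-envelope sketch of how those inputs could combine, assuming made-up weights and thresholds (the real scoring model is Skolarli's own and is not shown here):

# Toy scoring sketch matching the description above. Weights and
# thresholds are assumptions, not Skolarli's actual model.
def problem_score(tests_passed, tests_total, runtime_ms, time_limit_ms):
    correctness = tests_passed / tests_total
    efficiency = max(0.0, 1.0 - runtime_ms / time_limit_ms)
    return round(100 * (0.8 * correctness + 0.2 * efficiency), 1)

def recommendation(code_score, trust_score):
    if trust_score < 50:
        return "review integrity log before proceeding"
    return "advance" if code_score >= 70 else "reject"

score = problem_score(tests_passed=3, tests_total=3, runtime_ms=14, time_limit_ms=2000)
print(score, recommendation(score, trust_score=92))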
""
Ready when you are
Hire engineers on actual code.
Not on what AI wrote for them.
Book a 30-minute walkthrough and we'll show you exactly how Skolarli's coding assessments — powered by kodr.run — fit into your engineering hiring funnel.