zero2sudo’s Mock SWE Interview Rubric
The tech interview rubric I use to conduct in-person mock interviews with intern and new-grad software engineering candidates
Understanding the Interview Rubric
This rubric is an evaluation tool modeled after assessment frameworks used by different tech companies during technical interviews. It is specifically calibrated for intern and new grad software engineering interviews that involve whiteboarding or shared virtual coding exercises.
What This Rubric Provides
A structured framework for evaluating performance across eight key dimensions of technical interviews
Clear descriptions of performance levels from “Exceptional” to “Deficient” for each dimension
Time management guidelines for both interviewers and candidates
Example decision-making criteria used by hiring committees
How to Use This Rubric for Practice
For Individual Practice:
Self-assessment: After completing practice problems, score yourself in each dimension to identify strengths and weaknesses
Time management: Use the suggested phase timings to practice pacing yourself under real interview conditions
Focus areas: Pay special attention to dimensions weighted more heavily (Algorithmic Approach, Coding Implementation, Communication)
For Mock Interview Practice with Partners:
Role-playing: Take turns as interviewer and candidate, using the rubric to provide structured feedback
Targeted feedback: Use the specific criteria in each dimension to offer concrete improvement suggestions
Red flag awareness: Be mindful of the red-flag patterns to avoid common pitfalls
For Interview Preparation Groups:
Standardized evaluation: Use the rubric to create consistent assessment across different practice sessions
Peer review: Have multiple observers rate the same performance to calibrate what “good” looks like
Progress tracking: Document scores over time to measure improvement in specific dimensions
Key Tips for Success
Problem Comprehension (Dimension 1): Restate problems and identify constraints before diving into solutions
Algorithm Selection (Dimension 2): Articulate trade-offs between different approaches
Code Quality (Dimension 3): Produce clean, readable code even under pressure
Testing Rigor (Dimension 4): Develop a systematic approach to testing solutions with diverse cases
Communication (Dimension 6): Think aloud clearly while solving problems
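The testing tip above can be made concrete. Here is a minimal sketch using a hypothetical warm-up problem (`two_sum`; the function and its tests are illustrative, not part of the rubric) showing the happy-path-plus-edge-case checklist a strong candidate walks through aloud:

```python
def two_sum(nums, target):
    """Return indices of two numbers summing to target, or None."""
    seen = {}  # value -> index of where we saw it
    for i, n in enumerate(nums):
        if target - n in seen:
            return (seen[target - n], i)
        seen[n] = i
    return None

# Happy path: the sample-style input.
assert two_sum([2, 7, 11, 15], 9) == (0, 1)
# Edge cases: duplicates, negatives, and no valid pair.
assert two_sum([3, 3], 6) == (0, 1)
assert two_sum([-1, 1], 0) == (0, 1)
assert two_sum([1, 2], 99) is None
```

Naming each category out loud ("happy path, duplicates, no solution") is itself a communication signal, not just a correctness check.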
Implementation Strategy
Select practice problems that match real interview complexity (LeetCode Premium's company-tagged problems are a good source!)
Set up a realistic environment (whiteboard, shared doc, or IDE without autocomplete)
Use a timer to enforce phase durations
Record sessions for later review (for example, use Samsung Notes voice recordings and transcriptions)
Track scores across dimensions to monitor improvement
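For the timer step, here is a minimal sketch of a phase tracker built on the phase durations suggested later in this rubric (the `current_phase` helper is illustrative; you could wire it to any clock):

```python
# Phase names and durations from the "How to Run the Interview" section.
PHASES = [
    ("Kick-off", 2), ("Clarify & Design", 7), ("Algorithm Deep-Dive", 8),
    ("Coding", 12), ("Testing & Analysis", 6), ("Behavioral Minute", 3),
    ("Wrap-up", 2),
]

def current_phase(elapsed_minutes):
    """Return which phase the interview should be in at a given elapsed time."""
    boundary = 0
    for name, minutes in PHASES:
        boundary += minutes
        if elapsed_minutes < boundary:
            return name
    return "Done"  # past the 40-minute mark
```

In a live session you would call this each minute (e.g., from a loop over `time.monotonic()`) and announce transitions out loud.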
This example rubric has been calibrated for intern and new-grad whiteboard interviews.
The Rubric
Dimension 1 · Problem Comprehension & Clarification
What’s Being Evaluated: How quickly and thoroughly the candidate frames the problem, elicits constraints, and restates examples
4 — Exceptional: Surfaces all constraints and edge-cases unprompted; proposes follow-up examples revealing hidden complexity
3 — Strong / Hire: Clarifies key inputs and outputs and at least one edge-case; asks logical questions
2 — Mixed / Leaning No-Hire: Needs multiple prompts to articulate basic constraints; misses obvious edge-cases
1 — Deficient: Starts coding without confirming understanding; significant misinterpretation persists
Dimension 2 · Algorithmic Approach & Trade-offs
What’s Being Evaluated: Quality of the high-level solution, evidence of CS fundamentals, exploration of alternatives
4 — Exceptional: Identifies optimal asymptotic solution and at least one plausible alternative; reasons clearly about trade-offs in time, space, and simplicity
3 — Strong / Hire: Arrives at an asymptotically optimal or near-optimal solution with coherent reasoning
2 — Mixed / Leaning No-Hire: Produces a workable but non-optimal approach; trade-off discussion is superficial or incorrect
1 — Deficient: Cannot articulate a complete approach; relies on brute force without justification
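A hypothetical illustration of the trade-off discussion this dimension rewards: two ways to compute Fibonacci numbers with very different cost profiles (the functions are illustrative examples, not part of the rubric):

```python
def fib_naive(n):
    # Simplest to state, but exponential time; O(n) call-stack space.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_iterative(n):
    # O(n) time, O(1) space; slightly more code for a large asymptotic win.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

A "4" answer names both options, states the costs, and justifies the choice; a "2" answer writes the naive version without noticing the trade-off exists.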
Dimension 3 · Coding Implementation
What’s Being Evaluated: Code correctness, readability, idiomatic style, use of data structures and APIs
4 — Exceptional: Produces nearly bug-free code on first pass; uses clear names, modular helpers, idiomatic constructs; layout is crisp and legible
3 — Strong / Hire: Correct solution after a few minor fixes; names, spacing, and structure are mostly clear
2 — Mixed / Leaning No-Hire: Compiles only after heavy interviewer support; repetitive, unclear, or non-idiomatic style
1 — Deficient: Does not complete core logic; syntax and structural mistakes dominate
Dimension 4 · Testing & Debugging
What’s Being Evaluated: Rigor of self-testing, ability to locate and fix issues methodically
4 — Exceptional: Designs thorough unit-style tests (happy path and edge-cases); self-identifies and fixes bugs without prompting
3 — Strong / Hire: Tests main path and one edge-case; finds and fixes most errors with light hints
2 — Mixed / Leaning No-Hire: Minimal tests; relies on interviewer to surface bugs; fixes are ad-hoc
1 — Deficient: No tests; cannot locate failures even with guidance
Dimension 5 · Complexity Analysis
What’s Being Evaluated: Ability to analyze and articulate Big-O for time and space
4 — Exceptional: Quickly derives and explains precise complexity, including dominating terms and memory overhead (In my experience as an intern candidate, interviewers tended to be quite lenient here with intern and new-grad candidates; most did not expect complexity analysis upfront at the start of the problem)
3 — Strong / Hire: Provides correct Big-O for time and space with brief explanation
2 — Mixed / Leaning No-Hire: Gives partially correct or hand-wavy analysis; omits space or mis-states key terms
1 — Deficient: Unable to analyze complexity
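As a worked example of "dominating terms", consider a hypothetical duplicate check (illustrative, not from the rubric) where the sort cost dominates the scan cost:

```python
def has_duplicate(nums):
    # Sort then scan adjacent pairs.
    # Time: O(n log n) sort + O(n) scan = O(n log n); the sort dominates.
    # Space: sorted() allocates a copy, so O(n) extra memory.
    s = sorted(nums)
    return any(s[i] == s[i + 1] for i in range(len(s) - 1))
```

A strong answer states both the combined bound and which term dominates, and does not forget the space cost of the copy.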
Dimension 6 · Communication & Collaboration
What’s Being Evaluated: Clarity of thought, structured narration, receptiveness to feedback
4 — Exceptional: Thinks aloud in a clean, logical story; diagrams data flow; checks alignment with interviewer
3 — Strong / Hire: Explains assumptions and next steps; adapts gracefully to hints
2 — Mixed / Leaning No-Hire: Disorganized narration, or defensive responses
1 — Deficient: Communication breakdown; interviewer cannot follow reasoning
Dimension 7 · Execution & Time Management
What’s Being Evaluated: Ability to pace through phases (understand, design, code, test) within 35–40 minutes
4 — Exceptional: Finishes complete cycle with headroom; priorities are explicit
3 — Strong / Hire: Completes critical path; minor scope is trimmed or rushed
2 — Mixed / Leaning No-Hire: Significant sections incomplete or rushed; loses track of time
1 — Deficient: Stuck in early phase; no runnable solution by end
Dimension 8 · Technical Foundations & Fluency
What’s Being Evaluated: Breadth of CS concepts shown organically (data structures, recursion, concurrency, language features)
4 — Exceptional: Demonstrates depth beyond role level; introduces advanced but relevant concepts correctly
3 — Strong / Hire: Employs appropriate APIs and data structures for the task; correct terminology
2 — Mixed / Leaning No-Hire: Misuses or confuses fundamental concepts; needs coaching to select structures
1 — Deficient: Fundamental gaps that block solution
Scoring & Decision Guide
Primary signal: Dimensions 1–6
Scale:
Average score of 3.25 or above with no dimension below 3 → Strong Hire.
Average 2.8–3.24 → Weak Hire (panel discussion).
Any dimension scored 1, or overall average below 2.8 → No Hire.
How to Run the Interview (Intern/New-Grad Focus)
The times allotted below are merely suggestions; many problems require far more time for coding and debugging.
Phase: Kick-off (2 minutes)
Set problem, clarify goal, share top three dimensions so expectations are explicit
Phase: Clarify & Design (7 minutes)
Listen for Dimension 1; nudge if constraints are unclear
Phase: Algorithm Deep-Dive (8 minutes)
Probe alternatives; watch Dimension 2 reasoning style
Phase: Coding (12 minutes)
Observe structure, variable names, incremental code; capture artifacts verbatim
Phase: Testing & Analysis (6 minutes)
Prompt for edge cases; ask about time and space
Phase: Behavioral Minute (3 minutes)
Optional: “Tell me about a bug you fixed recently”
Phase: Wrap-up (2 minutes)
Allow candidate questions; jot immediate scores and verbatim quotes
Red-Flag Patterns to Note Quickly
Starts implementing before agreeing on approach
Ignores hints or becomes argumentative
Cannot derive any test other than the sample input
