Take-Home Exercises for AI-Era Interviews


This page accompanies the article “Engineering Interviews and AI’s Philosophical Problem,” published on The General Partnership Blog.

As AI coding assistants become ubiquitous, engineering interviews need to evolve. The examples below come from research into how companies are already adapting their interview processes. They are high-level descriptions of assignment types rather than ready-to-use questions, offered as inspiration for designing exercises that test what matters in an AI-augmented world: judgment, architectural thinking, debugging skill, and the ability to work with ambiguity.

Assignment Examples

In-Memory Database Builder

Build a simple in-memory database with progressively more complex features.

What it tests: Progressive complexity handling, architectural decisions as requirements grow, prioritization under time pressure.
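
To make the shape of the exercise concrete, here is a minimal sketch of an early stage, assuming a progression that runs from basic key-value commands toward transactions (the actual stages would come from the interviewer):

```python
class InMemoryDB:
    """Stage 1: key-value commands. Hypothetical later stages might add
    TTL expiry, secondary indexes, or persistence."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}
        self._tx_stack: list[dict[str, str | None]] = []  # one undo log per open transaction

    def set(self, key: str, value: str) -> None:
        if self._tx_stack:
            # Record the prior value (None = key was absent) exactly once per transaction.
            self._tx_stack[-1].setdefault(key, self._data.get(key))
        self._data[key] = value

    def get(self, key: str) -> str | None:
        return self._data.get(key)

    def begin(self) -> None:
        self._tx_stack.append({})

    def rollback(self) -> None:
        for key, old in self._tx_stack.pop().items():
            if old is None:
                self._data.pop(key, None)
            else:
                self._data[key] = old

    def commit(self) -> None:
        undo = self._tx_stack.pop()
        if self._tx_stack:  # a nested commit folds its undo log into the parent
            for key, old in undo.items():
                self._tx_stack[-1].setdefault(key, old)
```

The evaluation signal is less the first version than how the candidate restructures it once nested transactions or expiry arrive.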

Debug Full-Stack AI Chat Application

Receive a codebase seeded with intentional bugs and fix them.

What it tests: Debugging over greenfield coding, ability to understand existing systems, documentation skills. AI tools explicitly encouraged.
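
To give a flavor of what “intentional bugs” can mean, the snippet below shows a classic Python pitfall of the kind such a codebase might hide (an invented example, not drawn from any real assignment):

```python
# BUG: the mutable default list is created once and shared across every call,
# so all conversations silently accumulate each other's messages.
def append_message(message: str, history: list[str] = []) -> list[str]:
    history.append(message)
    return history

# Fix: default to None and allocate a fresh list per conversation.
def append_message_fixed(message: str, history: list[str] | None = None) -> list[str]:
    if history is None:
        history = []
    history.append(message)
    return history
```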

Drag-and-Drop Calendar Functionality

Implement a drag-and-drop interface for a calendar application.

What it tests: UI/UX implementation skills, framework selection, judgment about AI assistance, technical communication.
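
Stripped of any particular UI framework, much of the exercise reduces to snapping logic like the sketch below; the function name and grid dimensions are illustrative assumptions, not part of any real spec:

```python
def drop_to_slot(drop_y_px: float, slot_minutes: int = 30, row_height_px: float = 24.0) -> str:
    """Snap a drop's vertical position to a calendar time slot, assuming a
    grid where each row_height_px of pixels represents slot_minutes of time."""
    slot = max(0, int(drop_y_px // row_height_px))
    minutes = min(slot * slot_minutes, 24 * 60 - slot_minutes)  # clamp to the day's last slot
    return f"{minutes // 60:02d}:{minutes % 60:02d}"
```

With the default 30-minute, 24-pixel grid, `drop_to_slot(50.0)` returns `"01:00"`.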

Feature Implementation with Easter Egg

Build a functional application that meets the provided requirements exactly, including a small “easter egg” detail buried in the spec.

What it tests: Attention to detail, instruction-following, ability to meet precise specifications.
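
As a hypothetical illustration of how small the buried detail can be, imagine a spec that quietly requires UTC timestamps with a literal trailing “Z” (the rule is invented here for illustration):

```python
from datetime import datetime, timezone

def format_timestamp(dt: datetime) -> str:
    # Satisfies the (hypothetical) buried requirement exactly: UTC,
    # second precision, ISO 8601, trailing literal 'Z'.
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
```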

Extend Existing Application

Receive a small existing codebase and extend it with new functionality.

What it tests: Working with existing code, respecting established patterns, iterative development process.

Real Dataset Analysis

Work with messy, real-world data: clean it, analyze it, and document the decisions made along the way.

What it tests: Data engineering skills, judgment with imperfect information, documentation of decision-making.
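
A sketch of the defensive parsing this tends to require, assuming a CSV-like input with inconsistent date formats and missing amounts (the column names and imputation rule are illustrative); note that each judgment call is logged for the write-up:

```python
from datetime import datetime

DATE_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"]  # formats observed in the (hypothetical) data

def parse_row(row: dict[str, str], decisions: list[str]) -> dict | None:
    raw_date = row.get("date", "").strip()
    raw_amount = row.get("amount", "").strip()
    for fmt in DATE_FORMATS:
        try:
            date = datetime.strptime(raw_date, fmt).date()
            break
        except ValueError:
            continue
    else:
        decisions.append(f"dropped row: unparseable date {raw_date!r}")
        return None
    if not raw_amount:
        decisions.append(f"imputed amount 0.0 for {date}")  # a judgment call worth defending
        raw_amount = "0"
    return {"date": date, "amount": float(raw_amount.replace(",", ""))}
```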

Product Feature with AI Integration

Build a product feature that requires integrating AI functionality.

What it tests: Understanding of AI capabilities and limitations, architectural judgment, metacognition about tooling.
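
One shape this can take, with the model call injected rather than tied to a vendor SDK; `call_model` is a stand-in for whatever client the candidate picks, and the interesting part is the failure handling:

```python
from typing import Callable

def summarize(text: str, call_model: Callable[[str], str]) -> str:
    """Summarize text via an injected LLM call, degrading gracefully on failure."""
    prompt = f"Summarize in one sentence:\n{text}"
    try:
        summary = call_model(prompt).strip()
    except Exception:
        summary = ""  # network errors, rate limits, etc. all land here
    if not summary:
        # Fall back to a truncated excerpt rather than failing the feature outright.
        return text[:140]
    return summary
```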

Multi-Tenant Rate Limiter

Design a multi-tenant rate-limiting system from scratch.

What it tests: System design skills, scalability thinking, handling of complex requirements.
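
A single-process token-bucket sketch of the kind a candidate might start from; a production version would need distributed state, per-tenant quotas, and a persistence story:

```python
import time

class TenantRateLimiter:
    """Token bucket per tenant: each tenant gets `burst` tokens that refill
    at `rate_per_sec`; a request spends one token or is rejected."""

    def __init__(self, rate_per_sec: float, burst: int) -> None:
        self.rate = rate_per_sec
        self.burst = burst
        self._buckets: dict[str, tuple[float, float]] = {}  # tenant -> (tokens, last_refill_ts)

    def allow(self, tenant: str) -> bool:
        now = time.monotonic()
        tokens, last = self._buckets.get(tenant, (float(self.burst), now))
        tokens = min(float(self.burst), tokens + (now - last) * self.rate)  # refill since last check
        allowed = tokens >= 1.0
        self._buckets[tenant] = (tokens - 1.0 if allowed else tokens, now)
        return allowed
```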

Recommendation Engine

Build a recommendation system and defend the algorithmic choices behind it.

What it tests: Algorithm selection, performance considerations, ability to explain technical choices.
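
A plausible baseline is simple co-occurrence scoring, sketched below under an assumed implicit-feedback dataset; the exercise is less about this code than about articulating when it stops being good enough:

```python
from collections import Counter

def recommend(history: dict[str, set[str]], user: str, k: int = 5) -> list[str]:
    """Score unseen items by how often they co-occur with the user's items
    in other users' histories (the simplest credible collaborative filter)."""
    seen = history[user]
    scores: Counter = Counter()
    for other, items in history.items():
        if other == user:
            continue
        overlap = len(seen & items)  # similarity = shared-item count
        if overlap:
            for item in items - seen:
                scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]
```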

Standard Requirements

Duration & Timeline

Expected Deliverables

  1. Working, runnable code
  2. Comprehensive README including:
    • Setup instructions
    • Design decisions and rationale
    • Trade-offs considered
    • How AI was used (if applicable)
  3. Clean commit history showing iterative work
  4. Tests or verification of functionality

Evaluation Criteria

What Makes These AI-Era Appropriate

These assignments share characteristics that make them effective even when candidates use AI coding tools:

  1. Cannot be solved with a single AI prompt — require iterative refinement and judgment calls
  2. Include deliberate ambiguity — candidates must make decisions or ask clarifying questions
  3. Test architectural judgment — not just implementation ability
  4. Include debugging or extension — not just greenfield coding
  5. Require personal context — e.g., “How would you scale this for your previous company?”
  6. Make AI usage visible — documentation requirements surface how tools were used

The goal isn’t to prevent AI usage; it’s to ensure the interview reveals the judgment, taste, and communication skills that matter for the role.