Take-Home Exercises for AI-Era Interviews

This page accompanies the article “Engineering Interviews and AI’s Philosophical Problem,” published on The General Partnership Blog.
As AI coding assistants become ubiquitous, engineering interviews need to evolve. The examples below are drawn from research into how companies are already adapting their interview processes. They are high-level descriptions of assignment types rather than ready-to-use questions, intended as inspiration for designing exercises that test what matters in an AI-augmented world: judgment, architectural thinking, debugging skills, and the ability to work with ambiguity.
Assignment Examples
In-Memory Database Builder
Build a simple in-memory database with progressively complex features:
- Start with basic SET/GET/DELETE operations
- Add filtered scan functionality
- Implement TTL (time-to-live) features
- Add file compression capability
What it tests: Progressive complexity handling, architectural decisions as requirements grow, prioritization under time pressure.
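To give a sense of scale, the first two bullets might begin with something like the following minimal sketch. All names here (`MiniDB`, the method signatures) are illustrative, not part of any actual assignment; TTL is handled lazily on read, which is one of several reasonable design choices a candidate could defend.

```python
import time

class MiniDB:
    """Minimal in-memory key-value store: SET/GET/DELETE, TTL, filtered scan."""

    def __init__(self):
        self._data = {}    # key -> value
        self._expiry = {}  # key -> absolute expiry timestamp (monotonic clock)

    def set(self, key, value, ttl=None):
        """Store a value, optionally expiring after ttl seconds."""
        self._data[key] = value
        if ttl is not None:
            self._expiry[key] = time.monotonic() + ttl
        else:
            self._expiry.pop(key, None)

    def _expired(self, key):
        exp = self._expiry.get(key)
        return exp is not None and time.monotonic() >= exp

    def get(self, key):
        """Return the value, or None if absent or expired (lazy expiry)."""
        if key not in self._data or self._expired(key):
            self.delete(key)
            return None
        return self._data[key]

    def delete(self, key):
        self._data.pop(key, None)
        self._expiry.pop(key, None)

    def scan(self, predicate):
        """Filtered scan: return live (key, value) pairs matching predicate."""
        return [(k, v) for k, v in self._data.items()
                if not self._expired(k) and predicate(k, v)]
```

The interesting conversations start once this exists: should expiry be lazy or driven by a background sweep, and what happens to `scan` performance as the store grows?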
Debug Full-Stack AI Chat Application
Receive a codebase with intentional bugs and fix them:
- Trace logic through the system to diagnose errors
- Fix bugs using AI tools to accelerate debugging
- Document what you found and how you fixed it
What it tests: Debugging over greenfield coding, ability to understand existing systems, documentation skills. AI tools explicitly encouraged.
Drag-and-Drop Calendar Functionality
Implement a drag-and-drop interface for a calendar application:
- Use existing frameworks/libraries as needed
- Make architectural decisions about complexity
- Show judgment about when to use vs. not use AI assistance
- Document trade-offs in implementation approach
What it tests: UI/UX implementation skills, framework selection, judgment about AI assistance, technical communication.
Feature Implementation with Easter Egg
Build a functional application per provided requirements:
- Include a specific Easter egg as instructed
- Follow provided code style guidelines
- Write a comprehensive README
What it tests: Attention to detail, instruction-following, ability to meet precise specifications.
Extend Existing Application
Receive a small existing codebase and extend it:
- Add new features or functionality as specified
- Integrate with existing architecture
- Maintain code style and patterns
- Show commit history with iterative development
- Write README explaining integration decisions
What it tests: Working with existing code, respecting established patterns, iterative development process.
Real Dataset Analysis
Work with messy, real-world data:
- Clean and process data appropriately
- Handle edge cases and data quality issues
- Build analysis or visualization
- Document data quality decisions
- Explain handling of ambiguous cases
What it tests: Data engineering skills, judgment with imperfect information, documentation of decision-making.
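In practice, the “edge cases and data quality issues” bullet often reduces to small normalizing functions plus a record of what was dropped. A sketch of that pattern follows; the placeholder values and the `parse_amount` name are assumptions for illustration, not taken from any specific dataset.

```python
def parse_amount(raw):
    """Normalize a messy currency/number field; return a float, or None.

    Handles common real-world issues: stray whitespace, thousands
    separators, currency symbols, and empty/placeholder values.
    Unparseable values come back as None so the caller can log them
    rather than silently coercing them.
    """
    if raw is None:
        return None
    cleaned = raw.strip().replace(",", "").lstrip("$€£")
    if cleaned.lower() in ("", "-", "n/a", "null"):
        return None
    try:
        return float(cleaned)
    except ValueError:
        return None

rows = [" 1,234.50", "$99", "N/A", "", "12.0", "abc"]
parsed = [parse_amount(r) for r in rows]
# Keep an audit of rejected values, per the documentation requirement.
dropped = [r for r, p in zip(rows, parsed) if p is None]
```

The audit list is the part that maps to “document data quality decisions”: a strong submission explains why each rejected value was rejected, not just that it was.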
Product Feature with AI Integration
Build a feature that requires AI functionality:
- Show judgment about AI versus traditional approaches
- Implement appropriate AI tool usage
- Document when and why AI was used
- Balance AI-generated code with custom logic
- Explain architectural decisions
What it tests: Understanding of AI capabilities and limitations, architectural judgment, metacognition about tooling.
Multi-Tenant Rate Limiter
Design a rate limiting system from scratch:
- Handle multiple tenants with different policies
- Implement various rate limiting strategies
- Consider scalability and performance
- Document design trade-offs
- Test edge cases
What it tests: System design skills, scalability thinking, handling of complex requirements.
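One common starting strategy is a token bucket kept per tenant, which the sketch below illustrates. The class and policy shapes are assumptions for this example; a real submission would also need to address concurrency and distributed state, which this single-process sketch deliberately ignores.

```python
import time

class TenantRateLimiter:
    """Token-bucket rate limiting with a separate (capacity, rate) per tenant."""

    def __init__(self, policies):
        # policies: tenant -> (bucket capacity, refill rate in tokens/second)
        self.policies = policies
        self.buckets = {}  # tenant -> (current tokens, last refill timestamp)

    def allow(self, tenant, now=None):
        """Return True and consume a token if the tenant is under its limit."""
        now = time.monotonic() if now is None else now
        capacity, rate = self.policies[tenant]
        tokens, last = self.buckets.get(tenant, (capacity, now))
        # Refill proportionally to elapsed time, capped at bucket capacity.
        tokens = min(capacity, tokens + (now - last) * rate)
        allowed = tokens >= 1.0
        if allowed:
            tokens -= 1.0
        self.buckets[tenant] = (tokens, now)
        return allowed
```

Candidates can then contrast this with fixed-window or sliding-window strategies and explain the burst-tolerance trade-off each one makes.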
Recommendation Engine
Build a recommendation system:
- Handle real-time updates
- Process user behavior data
- Implement appropriate algorithms
- Balance accuracy and performance
- Explain recommendation strategy choices
What it tests: Algorithm selection, performance considerations, ability to explain technical choices.
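A deliberately simple baseline for this kind of exercise is item-to-item co-occurrence: recommend items that frequently appear alongside what the user has already interacted with. The sketch below shows the idea; the function names and data shapes are illustrative assumptions.

```python
from collections import defaultdict

def build_cooccurrence(histories):
    """Count how often each pair of items appears in the same user history."""
    counts = defaultdict(lambda: defaultdict(int))
    for items in histories:
        unique = set(items)
        for a in unique:
            for b in unique:
                if a != b:
                    counts[a][b] += 1
    return counts

def recommend(counts, seen, k=3):
    """Rank unseen items by total co-occurrence with the user's seen items."""
    scores = defaultdict(int)
    for item in seen:
        for other, c in counts[item].items():
            if other not in seen:
                scores[other] += c
    # Break score ties alphabetically so results are deterministic.
    return sorted(scores, key=lambda i: (-scores[i], i))[:k]
```

The counts table can be rebuilt incrementally as new behavior data arrives, which is where the “real-time updates” bullet starts to bite: a candidate should be able to explain when recomputing from scratch stops being viable.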
Standard Requirements
Duration & Timeline
- Actual work time: 2-8 hours (3-4 hours most common)
- Calendar time: 7-10 days to complete
- Format: Open book (internet, any IDE, AI tools permitted)
Expected Deliverables
- Working, runnable code
- Comprehensive README including:
  - Setup instructions
  - Design decisions and rationale
  - Trade-offs considered
  - How AI was used (if applicable)
- Clean commit history showing iterative work
- Tests or verification of functionality
Evaluation Criteria
- Code quality and architecture
- Problem-solving approach (visible in commits)
- Communication in documentation
- AI usage judgment (when to use/not use)
- Handling of edge cases
- Attention to requirements
- Ability to explain decisions in follow-up
What Makes These AI-Era Appropriate
These assignments share characteristics that make them effective even when candidates use AI coding tools:
- Cannot be solved with a single AI prompt — require iterative refinement and judgment calls
- Include deliberate ambiguity — candidates must make decisions or ask clarifying questions
- Test architectural judgment — not just implementation ability
- Include debugging or extension — not just greenfield coding
- Require personal context — e.g., “How would you scale this for your previous company?”
- Make AI usage visible — documentation requirements surface how tools were used
The goal isn’t to prevent AI usage—it’s to ensure the interview reveals the judgment, taste, and communication skills that matter for the role.