---
title: 20 Technical Interview Questions and Answers [2026]
description: Master 20 technical interview questions and answers across coding, system design,
  and behavioral rounds. Each includes what interviewers check and red flags.
type: article
url: https://www.foundrole.com/blog/20-technical-interview-questions-answers
date: 2026-04-02T09:55:54Z
og_description: 20 technical interview questions with what interviewers actually check and the
  one red flag that sinks each answer. Coding, system design, and STAR method.
og_image: https://www.foundrole.com/img/pages/w2nv8o/20-technical-interview-questions-answers.png?v=2
breadcrumbs:
  - label: Home
    url: https://www.foundrole.com/
  - label: Blog
    url: https://www.foundrole.com/blog
  - label: Interview Tips
    url: https://www.foundrole.com/blog/category/interview-tips
---

**Author:** Alex Mercer
**Reading time:** 16 minutes
**Tags:** Soft Skills, Virtual Interview, Behavioral Interview, Technical Interview

Even strong engineers bomb technical interviews. [Interviewing.io's analysis of platform data](https://interviewing.io/blog/technical-interview-performance-is-kind-of-arbitrary-heres-the-data) found that candidates who consistently score well (mean \~3 out of 4) still fail roughly 22% of the time. Not because they lack skill, but because interview performance is volatile: a single bad session can erase months of preparation.

That volatility sits on top of a deeper problem. [The CoderPad and CodinGame 2025 hiring survey](https://coderpad.io/survey-reports/coderpad-and-codingame-state-of-tech-hiring-2025/) reported that 54% of developers cite lack of relevance to actual job roles as their top complaint about coding assessments. You know the format doesn't reflect real work. You still have to pass it.

Add the 2026 wrinkle: [HackerRank's 2024 Developer Skills Report](https://www.prnewswire.com/news-releases/hackerranks-2024-developer-skills-report-highlights-new-trends-in-the-hiring-and-upskilling-of-software-developers-302053236.html) found that 76% of developers use AI assistance to code daily. Interviewers know this too, and they're actively listening for answers that sound AI-generated rather than lived. Authenticity is the new differentiator.

This guide covers 20 questions grouped into five themes: coding and algorithms, system design, behavioral (STAR method), collaboration and conflict, and 2026-specific context. Each question includes what the interviewer actually checks, a strong answer example, the one red flag that sinks otherwise qualified candidates, and a format note on remote versus in-person delivery. Pick 8 to 10 questions based on your role level using the picker below, then prep those first.

## How to Use These 20 Questions by Role Level

The 20 questions in this guide span five distinct themes, and your role level determines which ones deserve most of your prep time.

**The five themes:** Coding and Algorithms (Q1-Q5), System Design (Q6-Q10), Behavioral/STAR (Q11-Q15), Collaboration and Conflict (Q16-Q18), and 2026 Role-Specific (Q19-Q20).

How the mix shifts by level: junior roles skew roughly 80% toward coding and algorithm problems, plus a behavioral round or two. Mid-level loops add system design. Senior and staff interviews flip the ratio entirely, with 50% or more of the loop devoted to system design and deep behavioral assessment. TPMs spend most of their time on collaboration, conflict resolution, and role-specific questions.

If you're switching into tech from another field, the good news is that collaboration and behavioral themes transfer directly. Your industry vocabulary changes, but the conflict structures don't. Map your existing stories to the STAR framework and focus your technical prep on the coding fundamentals. If you're looking for guidance on [how to ace your first tech interview](https://www.foundrole.com/blog/your-first-tech-interview-how-to-ace-it-with-no-experience) without prior experience, that companion guide walks through the basics.

One format note that catches people off guard: [InterviewQuery's analysis of 300,000+ interviews](https://www.interviewquery.com/p/ai-interview-trends-tech-hiring-2025) found that in-person interview rounds rebounded from 24% in 2022 to 38% in 2025, particularly for behavioral and design rounds. Prepare for both whiteboard and screen-share formats in the same loop.

Identify your role level, then write down your 8 to 10 priority question numbers. Work through those first and return to the rest after your interview is scheduled.

## Coding & Algorithm Questions (Q1-Q5)

Coding rounds test your problem-solving process more than your ability to produce syntactically correct code. The interviewer is watching how you think under constraint, not just whether you get the right answer.

[Underdog.io's 2025 analysis](https://underdog.io/blog/reality-of-tech-interviews-2025) found that successful junior FAANG candidates typically solve 150 to 200 coding problems before interviewing, and structured preparation with mock interviews increases pass rates by approximately 30% compared to self-directed study. Volume matters, but deliberate practice matters more.

### Q1: Reverse a linked list

**What the interviewer checks:** Pointer manipulation fundamentals and edge case awareness (empty list, single node).

**Strong answer:** State your approach before touching the keyboard. "I'll use three pointers: previous, current, and next. I'll iterate through the list, reversing the pointer direction at each node, then return the previous as the new head." Then code it, then test with an empty list input.

**Red flag:** Jumping straight to code without verbalizing your approach. Silence is the single most common failure mode in coding rounds, regardless of the problem.
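A minimal Python sketch of that three-pointer loop (the `ListNode` class here is an illustrative stand-in for whatever node type the interviewer supplies):

```python
class ListNode:
    """Minimal singly linked list node for illustration."""
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def reverse_list(head):
    """Iteratively reverse a linked list.

    Handles the edge cases naturally: an empty list returns None,
    a single node returns itself.
    """
    prev = None
    current = head
    while current:
        nxt = current.next      # save the next node before rewiring
        current.next = prev     # reverse the pointer at this node
        prev = current
        current = nxt
    return prev                 # prev is the new head
```

Narrate each pointer move as you write it; the loop body is exactly the three steps you stated up front.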

### Q2: Find two numbers in an array that sum to a target

**What the interviewer checks:** HashMap intuition and your ability to reason about time/space trade-offs.

**Strong answer:** "The brute-force approach checks every pair in O(n squared). I can improve to O(n) time and O(n) space with a hash set: for each number, check if (target minus current) exists in the set." Then discuss when the brute-force approach might actually be acceptable (tiny input size, memory constraints).

**Red flag:** Implementing the O(n squared) solution without acknowledging the trade-off or attempting to improve it.
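The one-pass hash-map version described above, as a sketch:

```python
def two_sum(nums, target):
    """Return the indices of two numbers that sum to target, or None.

    One pass with a hash map: O(n) time, O(n) extra space, versus
    O(n^2) time for the brute-force pair check.
    """
    seen = {}  # value -> index of where we saw it
    for i, n in enumerate(nums):
        complement = target - n
        if complement in seen:
            return seen[complement], i
        seen[n] = i
    return None
```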

### Q3: Return the level-order traversal of a binary tree

**What the interviewer checks:** BFS versus DFS decision-making and queue usage.

**Strong answer:** "Level-order means BFS. I'll use a queue, process nodes level by level, and collect results into a list of lists." The key is explaining *why* BFS fits this problem rather than just reaching for recursion by default.

**Red flag:** Reaching for recursive DFS without explaining why when the problem explicitly asks for level-order output.
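A sketch of the queue-based BFS, with a hypothetical `TreeNode` for illustration:

```python
from collections import deque

class TreeNode:
    """Minimal binary tree node for illustration."""
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def level_order(root):
    """BFS with a queue; returns a list of lists, one per level."""
    if not root:
        return []
    result, queue = [], deque([root])
    while queue:
        level = []
        for _ in range(len(queue)):   # drain exactly one level
            node = queue.popleft()
            level.append(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        result.append(level)
    return result
```

The `for _ in range(len(queue))` trick is worth calling out aloud: snapshotting the queue length is what keeps each level's nodes grouped together.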

### Q4: Implement an LRU cache

**What the interviewer checks:** Data structure design (doubly linked list plus hash map) and real-world applicability.

**Strong answer:** Walk through the two operations (get and put), explain why a hash map alone isn't enough (you need ordering for eviction), and then implement the eviction logic for when capacity is exceeded.

**Red flag:** Describing the solution conceptually but failing to implement the eviction logic. This is a design-and-build question, not a whiteboard sketch.
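A compact sketch using Python's `OrderedDict`, which bundles the hash map and the ordering in one structure; in an interview you would usually be expected to wire the doubly linked list and hash map yourself, but the eviction logic is the same:

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache sketch: O(1) get and put, evicts least recently used."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)      # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```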

### Q5: Design a rate limiter function

**What the interviewer checks:** Awareness of sliding window versus token bucket patterns and how they behave under load.

**Strong answer:** Start by clarifying requirements: per-user or global? Fixed window or sliding? Then implement one approach and explain when you'd choose the other. Mention what happens under concurrent requests.

**Red flag:** Ignoring concurrency and race conditions when the interviewer asks follow-up questions about production deployment.
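One way to sketch the token bucket variant, with a lock to address the concurrency follow-up (the class name and parameters here are illustrative, not a canonical API):

```python
import threading
import time

class TokenBucket:
    """Token bucket rate limiter sketch.

    Tokens refill continuously at `rate` per second, capped at
    `capacity` (the maximum burst). A lock guards the shared state
    against concurrent callers.
    """
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def allow(self):
        with self.lock:
            now = time.monotonic()
            # refill proportionally to elapsed time, capped at capacity
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False
```

In a distributed deployment you would move this state into something shared like Redis, which is exactly the production follow-up the interviewer is fishing for.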

**Before vs. After: The Narration Difference**

Weak approach: A candidate reads the problem, goes silent for three minutes, types a solution, says "Done." The interviewer has no insight into their reasoning and can't give partial credit for a sound approach with a small bug.

Strong approach: "I see this is asking for... My first instinct is brute force, which would be O(n squared). Let me think about whether a HashMap can improve that. Yes, here's my plan..." The interviewer can evaluate problem decomposition, trade-off reasoning, and communication skills even if the final code has a minor off-by-one error.

Pick one problem from this list, set a 25-minute timer, and solve it while narrating aloud. Record yourself if possible. Then review where you went silent.

## System Design Questions (Q6-Q10)

These rounds reveal how you reason about trade-offs at scale. The strongest signal you can send is driving the conversation with structured thinking rather than waiting for the interviewer to prompt each step.

Use this framework for every system design question:

1. **Scope:** Clarify scale, constraints, and SLAs
2. **Estimate:** Requests per second, data volume, read-to-write ratio
3. **Sketch:** Components and data flow at a high level
4. **Deep dive:** The most critical component in detail
5. **Trade-offs:** What you'd change with more time or different constraints

### Q6: Design a URL shortener (e.g., bit.ly)

**What the interviewer checks:** Whether you scope requirements before diving into architecture. Database choice rationale. Hashing strategy.

**Strong answer:** Start with "How many URLs per day? What's the read-to-write ratio?" Then discuss hash collision handling, database partitioning, and whether you'd use a relational DB or a key-value store, with reasoning for your choice.

**Red flag:** Jumping to architecture without clarifying scale. Designing for 100 users and 100 million users produces fundamentally different systems.
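For the hashing discussion, one common sketch is base62-encoding an auto-incrementing database ID, which sidesteps collisions entirely (one valid design among several, assumed here for illustration):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode_base62(n):
    """Encode a numeric row ID as a short base62 code."""
    if n == 0:
        return ALPHABET[0]
    chars = []
    while n:
        n, rem = divmod(n, 62)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))

def decode_base62(code):
    """Invert encode_base62 to look the original ID back up."""
    n = 0
    for ch in code:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

Seven base62 characters cover 62^7 (over 3.5 trillion) URLs, which is a useful number to produce during the estimation step.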

### Q7: Design a real-time notification system

**What the interviewer checks:** Pub/sub versus polling decision-making. WebSockets versus Server-Sent Events.

**Strong answer:** Scope the notification types (push, email, in-app), then discuss the message queue architecture. Explain why you'd choose WebSockets for bidirectional needs or SSE for one-way push. Address what happens when a notification fails to deliver.

**Red flag:** Ignoring failure modes and retry logic entirely.

### Q8: How would you design a distributed caching layer?

**What the interviewer checks:** Cache invalidation strategies and the consistency versus availability trade-off.

**Strong answer:** Discuss cache-aside versus write-through patterns, TTL-based expiration, and what happens during a cache stampede (thundering herd). Name your eviction policy (LRU, LFU) and justify it based on access patterns.

**Red flag:** Saying "just use Redis" without discussing eviction policy, invalidation strategy, or the thundering herd problem.
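A toy sketch of the cache-aside pattern with TTL expiration, with an in-process dict standing in for Redis and `fetch_from_db` as a hypothetical loader:

```python
import time

def make_cache_aside(fetch_from_db, ttl_seconds=60):
    """Cache-aside sketch: check the cache, fall back to the source
    on a miss, and store the result with a timestamp for TTL expiry."""
    cache = {}  # key -> (value, stored_at)

    def get(key):
        entry = cache.get(key)
        if entry is not None:
            value, stored_at = entry
            if time.monotonic() - stored_at < ttl_seconds:
                return value        # fresh cache hit
            del cache[key]          # expired entry, drop it
        value = fetch_from_db(key)  # cache miss: go to the source
        cache[key] = (value, time.monotonic())
        return value

    return get
```

Note what this sketch deliberately omits: under a stampede, many concurrent misses for the same key all hit the database at once. Naming that gap, and a mitigation like request coalescing or jittered TTLs, is the strong-answer move.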

### Q9: Design a search autocomplete feature

**What the interviewer checks:** Trie versus inverted index decision, latency budgets, and CDN/edge caching awareness.

**Strong answer:** Define the latency target (under 100ms for each keystroke), discuss how you'd store and rank suggestions, and explain whether you'd precompute results or query in real time based on the corpus size.

**Red flag:** No discussion of latency requirements. Autocomplete without latency constraints isn't autocomplete.

### Q10: Walk me through how you'd make a monolith more scalable

**What the interviewer checks:** Ability to diagnose bottlenecks based on evidence rather than defaulting to a religious preference for microservices.

**Strong answer:** "First, I'd profile to find the actual bottleneck: is it CPU, memory, I/O, or database? Then I'd consider vertical scaling, read replicas, caching, and async processing before recommending service extraction." Discuss the operational overhead microservices introduce.

**Red flag:** Recommending microservices as the first move without acknowledging deployment complexity, distributed tracing needs, and team size requirements.

Practice Q6 (URL shortener) end-to-end using the five-step framework above. It covers all the signals interviewers care about and is by far the most frequently asked system design question at the mid-level.

## Behavioral Questions Using the STAR Method (Q11-Q15)

STAR-structured answers (Situation, Task, Action, Result) separate candidates who tell compelling stories from those who recite rehearsed scripts. The key: spend roughly 20% of your answer on Situation and Task combined, then invest the remaining 80% in what you personally did and the measurable outcome.

A word on authenticity in 2026. [CoderPad's 2025 survey](https://coderpad.io/survey-reports/coderpad-and-codingame-state-of-tech-hiring-2025/) found that 40% of recruiters report experiencing candidate cheating during technical assessments, and among those who admitted cheating, nearly half used AI tools. AI-prepped STAR answers tend to follow the same cadence and use the same generic language. Differentiate with specific numbers, named colleagues (first names are fine), and honest admissions about what didn't go perfectly.

### Q11: Tell me about a time you disagreed with a technical decision

**What the interviewer checks:** Whether you can advocate a position while respecting the team's ultimate decision.

**Strong answer:** "Our team chose MongoDB for a new service. I believed the data relationships warranted PostgreSQL and presented a comparison showing join query performance differences. The team chose Mongo anyway for consistency with our existing stack. I committed to that decision and built the data access layer. Six months later, we did migrate one collection to Postgres for a specific reporting feature, which validated some of my original concerns."

**Red flag:** Framing the story as "I was right, they were wrong." Interviewers want evidence of disagree-and-commit, not vindication.

### Q12: Describe a project that failed or missed its deadline

**What the interviewer checks:** Self-awareness, accountability, and your learning posture.

**Strong answer:** Name the project, what went wrong, and what you personally contributed to the failure. "I underestimated the integration testing effort by two sprints because I scoped based on unit test coverage alone. I've since built integration test budgets into every project estimate I own."

**Red flag:** Blaming external factors (the PM changed requirements, the other team was slow) without any personal reflection on what you could have done differently. For more on handling vulnerability-based questions well, see how to [answer the greatest weakness question](https://www.foundrole.com/blog/what-is-your-greatest-weakness-15-best-answers-with-examples) with specific examples.

### Q13: Tell me about your most complex technical project

**What the interviewer checks:** Your ability to communicate complexity to a non-expert and demonstrate clear ownership.

**Strong answer:** Pick a project where you made key decisions. Describe the complexity in terms the interviewer can follow (data volume, latency constraints, team coordination), then zoom into the part you owned. Use "I" more than "we."

**Red flag:** Describing "we did X" for five minutes without ever saying "I specifically did Y." The interviewer can't assess your contribution if everything is attributed to the team.

### Q14: How do you keep up with fast-moving technology?

**What the interviewer checks:** Genuine curiosity versus buzzword-dropping.

**Strong answer:** "I spend about two hours a week on focused learning. Right now I'm working through a distributed systems course on \[platform\], and I recently shipped a side project using \[specific tool\] to understand its trade-offs firsthand." Name real things you've built or read, not tools you've "heard about."

**Red flag:** Listing AI tools or frameworks you've "been exploring" without a single concrete example of applying them.

### Q15: Tell me about a time you had to prioritize under pressure

**What the interviewer checks:** Decision-making framework and stakeholder communication.

**Strong answer:** Describe a situation with genuinely competing priorities (not just "I had a lot to do"). Explain how you decided what to cut or delay and how you communicated that trade-off to stakeholders. "I told the PM we could ship the core feature by Friday but the analytics dashboard would slip to the following Tuesday. She agreed because the dashboard didn't block the product launch."

**Red flag:** A story with no actual trade-off, where the resolution is "I just worked harder" or "I stayed late." That demonstrates effort, not judgment.

Most behavioral questions boil down to self-awareness and specificity. If you want to practice structuring your personal narrative before the interview, [nail your "tell me about yourself" answer](https://www.foundrole.com/blog/tell-me-about-yourself-best-answers-for-any-interview) first, since it usually opens the conversation and sets the tone for every STAR story that follows.

**Before vs. After: STAR Specificity**

Weak: "We improved performance on the project and the client was happy with the results."

Strong: "I refactored the three heaviest database queries, cutting p99 latency from 800ms to 120ms. Customer support tickets related to timeouts dropped 40% in the following two weeks."

Write out a STAR answer for Q12 (project failure). Pick a real story from your career. Time yourself speaking it aloud. Aim for 90 seconds. If you're over two minutes, cut Situation and Task.

## Collaboration & Conflict Questions (Q16-Q18)

These questions test whether you'll make the team stronger or cause friction that drags everyone down. At top companies, these signals carry weight equal to technical performance. "Culture fit" is often shorthand for "will this person escalate conflicts productively or let them fester?"

The pattern across all three questions below: interviewers want evidence of specific behavior, not your management philosophy. "I believe in open communication" scores zero. "I scheduled a 1:1 and said, 'I've noticed the last two PRs missed the deadline we agreed on. What's blocking you?'" scores high.

### Q16: How do you handle a teammate who consistently misses deadlines?

**What the interviewer checks:** Directness balanced with empathy, and your judgment about when to escalate.

**Strong answer:** "I pulled them aside after the second missed deadline and asked if something was blocking them. Turns out they were overcommitted on another project. I helped them surface the conflict to our manager, and we redistributed two tasks. The next sprint, they delivered on time."

**Red flag:** Jumping straight to HR or management before having a direct conversation. Also a red flag: "I just picked up the slack myself" without addressing the root cause.

### Q17: Tell me about a time you had to give difficult feedback

**What the interviewer checks:** Your ability to be constructive when the conversation is uncomfortable.

**Strong answer:** Name the person (first name), what the feedback was about, and how you delivered it. "I told Marcus that his code reviews were blocking the team because he was requesting stylistic changes that weren't in our style guide. I suggested we formalize a style guide together, which he ended up owning."

**Red flag:** A story where you avoided the feedback entirely, softened it so much the person didn't understand the issue, or delivered it publicly.

### Q18: Describe working with someone whose style was very different from yours

**What the interviewer checks:** Adaptability and communication across working styles.

**Strong answer:** Describe the style difference concretely (async vs. synchronous, big-picture vs. detail-oriented, fast-shipping vs. thorough-testing). Explain what you adjusted in your own approach. "I started writing more detailed PR descriptions because Priya processed information better in writing than in our standups."

**Red flag:** Framing the other person as the problem without showing any adaptation on your part.

Career switchers: these stories transfer directly from any industry. The organizational dynamics of missed deadlines, difficult feedback, and style clashes are universal. Use your real stories.

For each of Q16 through Q18, recall one specific story from your career. Write one sentence for each, summarizing what you did (not what the team did or what happened to you).

## Role-Specific & 2026 Context Questions (Q19-Q20)

These two questions are new additions to interview loops at companies that have updated their rubrics for 2026. They test whether you've integrated current tools and realities into your working approach rather than just listing them on your resume.

### Q19: How do you use AI tools in your day-to-day engineering work, and where do you draw the line?

**What the interviewer checks:** Pragmatic AI fluency versus uncritical dependence.

[HackerRank's 2024 report](https://www.prnewswire.com/news-releases/hackerranks-2024-developer-skills-report-highlights-new-trends-in-the-hiring-and-upskilling-of-software-developers-302053236.html) found 76% of developers already use AI assistance to code. The question isn't whether you use AI. It's whether you can articulate where AI helps and where you override it.

**Strong answer:** "I use GitHub Copilot for boilerplate code and test scaffolding. I use ChatGPT to draft documentation and explore unfamiliar APIs. I don't use AI for architecture decisions or security-sensitive code, because I need to understand every line in those contexts. Last month, Copilot suggested a regex that passed my test cases but would have catastrophically backtracked on certain inputs. I caught it during code review."

**Red flag:** Either extreme. "I don't use AI tools" sounds behind the curve. "AI handles most of my work" raises questions about your independent capability. The sweet spot is specific, bounded usage with clear judgment about limits.

### Q20: How do you approach remote collaboration and async communication on a distributed team?

**What the interviewer checks:** Written communication clarity, async discipline, and visibility without micromanagement.

**Strong answer:** "I default to async: detailed PR descriptions, Loom videos for complex explanations, and a daily standup update in Slack. For real-time decisions, I schedule focused 25-minute calls with an agenda. I over-document decisions in our wiki so the team in different time zones has full context without asking."

**Red flag:** "I prefer working in person" without qualifying how you handle distributed teams when required. Most companies now have at least some remote or hybrid component.

[InterviewQuery's data](https://www.interviewquery.com/p/ai-interview-trends-tech-hiring-2025) shows in-person rounds have climbed back to 38% in 2025. You may face both formats in the same interview loop, so prepare for whiteboard sessions in-person and collaborative coding tools remotely. Basic logistics matter: stable connection, camera at eye level, narrating your screen-sharing steps.

Meanwhile, [CoderPad's 2025 survey](https://coderpad.io/survey-reports/coderpad-and-codingame-state-of-tech-hiring-2025/) found that 84% of talent acquisition professionals expressed concern about AI-enabled plagiarism in assessments. Interviewers are specifically listening for authentic specificity. Generic, polished answers that could have been generated by a chatbot raise suspicion. Personal details, named tools, concrete numbers, and honest limitations don't.

Write a 60-second spoken answer to Q19 that names at least two specific AI tools and one task you explicitly don't offload to AI. Record yourself and listen back for anything that sounds like a press release instead of a real person.

## Your 2026 Tech Interview Prep Plan

These 20 questions span five themes, and each one tests something distinct. A gap in any single theme can sink an otherwise strong interview loop. The coding sections test your problem-solving process. System design tests your ability to reason about scale. Behavioral questions test your self-awareness. Collaboration questions test whether you'll make the team better. And the 2026 context questions test whether you're building with current tools or coasting on habits from two years ago.

The differentiator in this guide: every question includes what the interviewer checks. That framing is what separates a memorized answer from a persuasive one. When you understand the signal the interviewer is looking for, you can adapt your real experience to fit rather than reciting a scripted response.

Your next steps: choose your 8 to 10 questions by role level. Prep two STAR stories for each behavioral theme. Run at least one timed coding problem with full narration. Then ask a friend or colleague to run a mock behavioral round using Q11, Q12, and Q16 so you hear your answers out loud before the real thing.

Once your prep is solid, the next move is finding roles worth interviewing for. [Search for software engineering roles on FoundRole](https://www.foundrole.com/search?utm_source=blog&utm_medium=article&utm_campaign=top-20-tech-interview-questions-answers&utm_content=cta-conclusion), set alerts for your target stack and level, and [track your applications and interview rounds on FoundRole](https://www.foundrole.com/job-tracker?utm_source=blog&utm_medium=article&utm_campaign=top-20-tech-interview-questions-answers&utm_content=cta-tracker) so you know exactly which rounds you've completed and which are pending. Structured prep deserves structured execution. And once you land the offer, don't leave money on the table. FoundRole's guide to [tech salary negotiation](https://www.foundrole.com/blog/tech-salary-negotiation-base-equity-scripts-2026) covers base, equity, and signing bonus scripts you can use word for word.

## Frequently Asked Questions

### What are the most common technical interview questions for software engineers?

Coding and algorithm problems covering linked lists, arrays, trees, and dynamic programming appear in virtually every screening round. System design questions like designing a URL shortener or notification system are standard for mid-level and senior roles. Behavioral STAR questions appear in every interview loop regardless of level, as companies use them to assess culture fit and collaboration skills.

### How long should my answers be in a technical interview?

For coding problems, narrate your thinking continuously since silence beyond 60 seconds is a red flag. System design rounds typically run 40 to 50 minutes, split across scoping, high-level design, deep dive, and trade-offs. Behavioral STAR answers should be 90 to 120 seconds when spoken aloud, with Situation and Task under 30 seconds so most time goes to Action and Result.

### What is the STAR method and how do I use it for tech interviews?

STAR stands for Situation, Task, Action, Result. It structures behavioral answers so your story stays focused and evidence-based. Invest most of your answer in Action, describing what you personally did, and Result with a quantified outcome. Prepare two to three STAR stories per behavioral theme such as failure, conflict, and leadership so you can adapt them to different questions.

### Do tech interviews always include behavioral questions, or just coding?

Almost all tech interview loops include at least one behavioral round, even for purely individual contributor roles. At FAANG and high-bar startups, behavioral rounds carry equal weight to coding, and a weak behavioral score can eliminate an otherwise strong technical candidate. The ratio shifts with seniority: junior loops may have one behavioral round while staff-level loops often have two or three.

### How is AI changing technical interviews in 2026?

76% of developers now use AI tools to code daily, and interviewers expect you to articulate how you use AI rather than whether you use it. Meanwhile, 40% of recruiters report observing cheating in technical assessments, making AI-generated answers increasingly recognizable by their generic cadence. The strongest approach is demonstrating AI fluency with specific tools and use cases while differentiating through concrete numbers and personal decisions.

### What system design questions should a mid-level engineer know?

URL shortener, notification system, and distributed caching layer are the three most commonly tested at mid-level. Use the Scope, Estimate, Design, Deep Dive, Trade-offs framework and drive the conversation rather than waiting for prompts. Mid-level candidates are expected to handle follow-up questions on failure modes, retry logic, and consistency versus availability trade-offs without being guided there by the interviewer.

### What if I don't know the answer to a technical interview question?

Say so clearly and immediately, then reason out loud. Something like "I haven't solved this specific problem before, but here's how I'd approach it" works well. Interviewers generally prefer a candidate who transparently problem-solves over one who silently struggles or guesses incorrectly. For system design questions, asking clarifying questions buys legitimate thinking time since scoping is itself a scored signal.