
Tech interviews in 2026 - what works and what doesn't

Tech interviews. We've all been there. The stakes are high, you're locked in trying to impress your interviewers, and at the end of many grueling rounds, you receive a verdict that could potentially change your life.

In recent tumultuous times, I went through a great volume of interviews in search of a new role. Having seen a wide spectrum of how different companies run their interviews, ranging from startups to big tech to quant firms, I decided to write this post to share my observations and thoughts on tech interviews. This blog is largely split into two parts: my recent interview experiences and insights, and what I think interviews should look like moving forward.

Recent technical interview experience

I won't disclose how many interviews I have done in recent times, but let's just say the Central Limit Theorem applies. The datapoints I've collected are definitely enough to spot certain trends, although I'll do my best not to generalize unnecessarily. I was also once an interviewer at a previous company, so some of my opinions stem from having been on the other side of the interview table.

I've had a >50% success rate in my interviews. This means that, given that I managed to make it to a technical screen, I had a more than 50% chance of getting an offer from the company.

I have also been interviewing at mostly the L4-L5 level, so it's important to note that my perspectives are heavily skewed towards being a mid to senior level candidate. Most roles I apply to lie within the realm of Software Engineering for AI products or Machine Learning Infrastructure, both of which I've had experience building before. That said, I believe these interview loops are generalizable across all Software Engineering roles.

Do note that I will not zoom in on any particular interview experience in this blogpost (NDA, of course).

Final disclaimer: 95% of my interview experiences are for roles located in the San Francisco Bay Area, and the rest are in Singapore. Company types are roughly evenly split between startups, mid-tech and big-tech.

Types of interviews I got

Here is the list of interview types I went through over the past couple of months.

1. AI-assisted Coding Tests

A few months back, Meta rolled out a new interview format: AI-assisted coding tests in their interview loop. Other companies have slowly started to follow this trend (or had done so even before Meta did). These coding tests manifest in different forms; here are the ones I have experienced:

a. Build a full service/product with assistance from an AI coding tool

These interviews typically last 1 hour. You are expected to build an end-to-end service, which could either be a full-stack web app or a backend worker service that fulfills a certain task like parsing a CSV or ingesting files.

You can typically pull up an AI coding tool of your choice - Claude Code, Cursor, or whatever you are used to. I always maintain my Claude Code subscription for building my own side projects, so it comes in handy during such interviews. If you do not happen to have a subscription on hand, I believe the companies will provide API keys and setup instructions for you.

b. Half AI-assisted/Half non-AI-assisted interviews

This is a much more interesting format that I have seen practiced by smaller companies. The interview still typically lasts 1 hour. For the first 30 minutes, you are expected to build a full-stack web app using AI coding tools. For the last 30 minutes, you are asked to switch off all AI coding tools (screen shared, of course) and build a feature on top of the vibecoded app you just made.

I have also seen the reverse arrangement - building a small-scale OOP-based system in the first 30 minutes without AI coding tools (e.g. a Connect-4 game using a class-based implementation), then spending the last 30 minutes vibecoding a full-stack app on top of whatever has been implemented thus far.
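To give a feel for what the no-AI half can look like, here's a minimal class-based Connect-4 sketch. This is my own toy version with hypothetical names (`ConnectFour`, `drop`, `_wins`), not the implementation from any specific interview:

```python
class ConnectFour:
    ROWS, COLS, WIN = 6, 7, 4

    def __init__(self):
        # None means an empty cell; otherwise the cell holds a player marker.
        self.grid = [[None] * self.COLS for _ in range(self.ROWS)]

    def drop(self, col, player):
        """Drop a piece for `player` into `col`; return True if that move wins."""
        for row in range(self.ROWS - 1, -1, -1):  # pieces fall to the lowest empty row
            if self.grid[row][col] is None:
                self.grid[row][col] = player
                return self._wins(row, col, player)
        raise ValueError(f"column {col} is full")

    def _wins(self, row, col, player):
        # Check the horizontal, vertical, and two diagonal lines through the
        # last move, counting contiguous pieces in both directions.
        for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
            count = 1
            for sign in (1, -1):
                r, c = row + sign * dr, col + sign * dc
                while (0 <= r < self.ROWS and 0 <= c < self.COLS
                       and self.grid[r][c] == player):
                    count += 1
                    r += sign * dr
                    c += sign * dc
            if count >= self.WIN:
                return True
        return False
```

In an interview setting, the point is less the win-check math and more showing clean class boundaries that the vibecoded second half can build on.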

c. Debug existing codebase and/or add features using AI coding tools

This one seems to be used more frequently by bigger companies, and I hypothesize that this is because the format is much more standardized and structured, yet it still tests a candidate's proficiency with AI coding tools.

In this format, you typically start with an existing codebase consisting of multiple files and classes interacting with each other. You are then asked to prompt your way to new features or bug fixes on that codebase.

What I'd recommend

In 2025, we've already seen a ton of companies adopt some form of AI-assisted coding test in their interview loop. As more companies adopt this interview structure, it's important to start learning and practicing vibecoding workflows to succeed in such interviews.

One of the biggest factors that helped me succeed in such interviews is knowing the right amount of context to provide to these AI coding tools. While it is important to assess how much context is enough for each prompt, it is equally important not to spend so much time providing context that you run out of time to finish all the requirements of the interview.

However, one big caveat: in a typical loop of at least 6 interviews (of which 4 are usually technical), there will typically be only 1 interview that allows AI coding tools. All the others conform to your standard leetcode/system design structures. So invest as much time into AI-assisted coding practice as you deem fit, but my prime advice is to not neglect preparation for the other interview types.

2. Coding Test on Standard Practices in Production Codebases

I have also seen a large volume of questions that test your understanding of production codebases. These questions typically start out as: "Write a class that has a method that makes an API call. Now that you are done with the core logic, what would you add before pushing this code to production? Go ahead and implement it."

For such questions, your standard exception handling, status code handling and retry logic knowledge gets put to the test.
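To illustrate, here's a minimal sketch of the kind of retry-with-backoff wrapper these questions tend to probe for. The names and defaults (`call_with_retries`, `RetryError`, the retryable exception set) are my own illustrative choices, not from any specific interview:

```python
import time


class RetryError(Exception):
    """Raised once every retry attempt has been exhausted."""


def call_with_retries(fn, max_attempts=3, base_delay=0.1,
                      retryable=(TimeoutError, ConnectionError)):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retryable as exc:
            if attempt == max_attempts:
                # Wrap the last failure so callers see one consistent error type.
                raise RetryError(f"gave up after {attempt} attempts") from exc
            # Back off exponentially: base_delay, then 2x, 4x, ...
            time.sleep(base_delay * 2 ** (attempt - 1))
```

In a real answer you'd also branch on HTTP status codes - retrying 429/5xx responses but failing fast on 4xx client errors - and mention jitter to avoid thundering-herd retries.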

3. Coding Test on Concurrency/Parallelism Concepts

For many companies out there, at least one interview drills deep into your knowledge of parallelism/concurrency. These questions typically follow a variant of the following train of thought:

  1. Implement a class method/function that calls an API, processes the data, and stores it in memory or in a JSON object
  2. What bottlenecks are you facing? (i.e. I/O bottleneck or compute bottleneck; typically the former)
  3. For an I/O bottleneck, what are the different methods to make these API calls parallel/concurrent?
  4. What are the tradeoffs between the different methods you proposed above?
  5. Edit the code you wrote in step 1 to implement your concurrent/parallelized solution

In such cases, your knowledge of the asyncio/multithreading/multiprocessing libraries gets put to the test. You can typically refer to API docs, but usually no AI coding tools are allowed for this section. Your ability to implement a working, sensible solution under time pressure will then be assessed.
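The train of thought above, taken to step 5 with asyncio, might look something like this sketch. The `fetch` function is a hypothetical stand-in (a real answer would call an actual API client), and the concurrency cap is my own illustrative addition:

```python
import asyncio


async def fetch(item):
    # Stand-in for a real API call; asyncio.sleep simulates I/O latency,
    # which is exactly the bottleneck identified in step 2.
    await asyncio.sleep(0.01)
    return {"item": item, "status": "ok"}


async def fetch_all(items, max_concurrency=10):
    # A semaphore caps the number of in-flight requests so the upstream
    # API isn't overwhelmed - a tradeoff interviewers often ask about.
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(item):
        async with sem:
            return await fetch(item)

    # gather runs all calls concurrently and preserves input order.
    return await asyncio.gather(*(bounded(i) for i in items))


results = asyncio.run(fetch_all(range(20)))
```

For CPU-bound processing you'd reach for multiprocessing instead, and for blocking client libraries a `ThreadPoolExecutor` - being able to articulate that choice is usually worth as much as the code itself.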

4. Low-Level Design (LLD)

Compared to 2 years back, more companies seem to be testing LLD questions. These questions test your knowledge of object-oriented programming and design - for instance, designing a parking lot system in a language of your choice. The interviewer wants to see your ability to design a system given vague requirements, gather more context about the system you are designing, and handle the curveball questions they will ask along the way.
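As a rough sketch of where a parking-lot LLD answer might start, here's a minimal skeleton. The classes, spot sizes, and the "compact car can overflow into a large spot" rule are all my own illustrative assumptions - in a real interview, those requirements are exactly what you'd clarify with the interviewer first:

```python
from dataclasses import dataclass
from enum import Enum


class SpotSize(Enum):
    COMPACT = 1
    LARGE = 2


@dataclass
class Vehicle:
    plate: str
    size: SpotSize


class ParkingLot:
    def __init__(self, compact_spots, large_spots):
        self.free = {SpotSize.COMPACT: compact_spots, SpotSize.LARGE: large_spots}
        self.parked = {}  # plate -> size of the spot actually occupied

    def park(self, vehicle):
        # Assumed rule: a compact vehicle may overflow into a large spot,
        # but a large vehicle only fits in a large spot.
        candidates = ([SpotSize.LARGE] if vehicle.size is SpotSize.LARGE
                      else [SpotSize.COMPACT, SpotSize.LARGE])
        for size in candidates:
            if self.free[size] > 0:
                self.free[size] -= 1
                self.parked[vehicle.plate] = size
                return True
        return False

    def leave(self, plate):
        # Return the exact spot the vehicle occupied to the free pool.
        self.free[self.parked.pop(plate)] += 1
```

The curveballs then land on extensions: add motorcycles and EV charging spots, compute fees by duration, make it thread-safe - so keeping the initial design small and extensible is the real signal.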

5. Leetcode

Dread it, run from it, leetcode still arrives. Some startups drop this entirely, but any mid-to-big tech company is almost guaranteed to have a few rounds of leetcode as part of their interview gauntlet. Solving 2 medium questions in 45 minutes seems to be the new baseline expectation, but I have also gotten 1 medium and 1 hard in 45 minutes from the big blue tech company, iykyk.

What I'd recommend

Leetcode premium is one of the MOST helpful and worth-it subscriptions I've gotten for interview preparation. My typical preparation revolves around solving and re-solving Blind 75, then doing company-specific questions on Leetcode.

6. System Design

In the age of AI-everything, system design seems to have become one of the most important interview components for all candidates, at least based on the recruiter feedback I have gotten. The types of questions are still the usual ones, and you will find them through online resources too.

What I'd recommend

This component hasn't changed - just eat, live and breathe Hello Interview for a few days. It has become my favorite over time because I always manage to learn new things about building production technology through watching and reading new system design resources. If you are just as eager to learn, or if you are one of those who preach "these companies aren't testing actual skills used on the job, it's bs", then this is your interview to ace.

7. Build a Full Product for One-Day Onsite (Only Startups)

Some startups will want you to come in for a full-day onsite and build a product, either zero-to-one or on top of an existing codebase. These interviews will pretty much always allow AI coding tools, so go equipped with your Claude Code or equivalent. Some will let you expense an API subscription, and I managed to get Claude Max for a month this way :)

8. Culture Fit

Standard questions once again. You can find question banks for these everywhere online.

What I'd recommend

Please prepare for this. Don't slack off and think you get this for free just because you are generally a sociable yapper. Research the company and the role you are applying to and understand where there's strong alignment. Look up possible culture fit questions and prepare for them.

Also, look up the STAR format for answering culture fit questions.

Observations

1. Startup interviews can be extremely demanding, especially Founding Engineer roles

These interview loops can last up to 10 rounds, and I've easily invested up to 20 hours in a single startup before. From take-home tests to 4-hour OAs to full-day onsites to week-long work trials, interview experiences vary greatly between startups. Ensure you are sinking time into startups you truly want to join, or else it's probably not worth the investment.

On the flip side, mid-tech to big-tech interviews are usually fairly standard: phone screen, onsite, hiring manager rounds, decision. The variance isn't usually that huge.

2. Companies are still figuring out how best to do coding interviews without AI assistance

With AI tools, it's become way too easy to cheat, and companies are still in the midst of figuring out anti-cheat methods. Some resort to full screen-sharing (not just the browser, but the entire screen) to prevent cheating. Some companies insist on physical on-site interviews, which makes a ton of sense to me. I've also, interestingly, had interviewers explicitly say that no AI tools were allowed but that Google searching implementation methods (e.g. for a multithreading API) was fine - only for Google's AI Overview to immediately return a fully workable answer upon search. Everyone is still figuring it out.

3. Trend towards AI-assisted coding interviews

As AI tools grow rampant across the workforce, companies are starting to embrace the use of AI tools in their coding interviews.

How I think interviews should be done in 2026

Having done a ton of interviews recently, and having once been an interviewer myself, I reflected on which interview structure I would choose if I were a company looking for top-tier engineers. I went back through all the interviews I sat through and thought about the signals I would have gotten if I were the interviewer instead. From a candidate's perspective, I also considered which interviews felt great in terms of giving me the freedom to express my knowledge and think through problems, and which allowed me to best exhibit my skills and efficiency as an engineer. In this section, I detail the insights from this retrospection.

Firstly, if I got the chance to decide how to structure my interview gauntlet, here's how I think interviews should be done in 2026:

1x Phone Screen - Technical Coding Round
1x Phone Screen - System Design (generic system design)

If the candidate passes these two, they progress to the physical onsite, where they get to meet the team and ideally have a meal scheduled in between the onsite interviews. Here's what the onsite would look like:

1x Onsite - Coding Test on Production Codebases + Concurrency/Parallelism
1x Onsite - AI-assisted Coding Round
1x Onsite - System Design (ideally based on a relevant project)
1x Onsite - Hiring Manager Interview for Past Project Walkthrough + Culture Fit

The interview loop I propose above covers as much ground as possible, testing candidates across the breadth of Software Engineering - from general algorithms, to iterating quickly with AI coding tools, to understanding and optimizing concurrency and parallelism, to rigorous system design.

Notice that there are two phone screens in this loop. Some companies gave me this structure, and I found it to be the most effective arrangement from a company's perspective. The technical coding round ensures the candidate is sound in data structures & algorithms (standard things), while the system design round ensures the candidate has a good high-level architectural understanding.

In my opinion, having a system design phone screen is extremely important for signalling whether a candidate has a proficient understanding of technology. It is also much harder to cheat in system design than in a leetcode round, which helps with getting a more accurate signal. The goal here is to protect your employees' time and make sure you don't spend >5 hours just to find out the candidate doesn't have a basic understanding of system design (which, in my opinion, is a much higher bar than leetcode proficiency).

The onsite interviews are the most important. First off, a coding test on a candidate's proficiency with writing production code is extremely important. That's where error handling, retry/backoff logic, and concurrency/parallelism concepts really come into play - things we encounter often in production systems - so having an interview that covers this ground is imperative.

Next, the AI-assisted coding round is really helpful for testing the candidate's development velocity, as well as their ability to work with AI tools, which I believe is a vital skill in today's workplace. I particularly liked interview type 1(b) introduced above - the half AI-assisted/half non-AI-assisted interview. Spending the first 30 minutes vibecoding lets interviewers judge a candidate's ability to provide enough context to an AI assistant from nothing, navigating vague requirements and creating a product zero-to-one. The last 30 minutes, where the candidate turns off AI assistance and builds a new feature, lets interviewers judge how the candidate reacts to an unfamiliar, vibecoded codebase - whether they can quickly understand it and build on top of it.

The remaining rounds are fairly standard. In my opinion, a System Design round based on a relevant project from the Hiring Manager's team helps the Hiring Manager understand whether the candidate can ramp up and get a high-level understanding of the system quickly upon starting. I also think past project walkthroughs are important, as they directly show you whether the candidate is just exaggerating their resume (a tactic people often confess to in online forums), or whether they really did the work they claimed and understood what they were building.