Microsoft SDE-2 Interview Experience | Selected
I recently interviewed for the Microsoft SDE-2 role, and honestly it was a mix of DSA, some low-level/system design, and a lot of focus on problem-solving. I'm sharing my round-by-round experience here, hoping it helps anyone who's preparing for something similar.
Round 1 – Technical (DSA Problem)
- Level: Medium
- Duration: 60 mins
- Mode: Remote
Question Asked: Shortest Distance of Open Cells to Nearest Mine
The interviewer gave me a maze represented as a grid of characters:
- O → Open cell
- X → Blocked cell
- M → Mine
The task was to find the shortest distance of every open cell from its nearest mine.
Rules
- Movement allowed only in 4 directions (up, down, left, right).
- Distance of a mine to itself = 0.
- Blocked and unreachable cells should return -1.
My Approach
The problem is essentially asking for the shortest distance from every open cell to the nearest mine.
A naive approach would be to start from each open cell and search for the closest mine, but that would be very inefficient.
Instead, the trick is to reverse the perspective:
Key Idea
Instead of starting from open cells, start the search from all the mines simultaneously.
This is where multi-source BFS comes into play.
Approach
1. Initialization
- Scan the entire grid once.
- For every mine (M), set its distance to 0 and push it into a queue.
- For every blocked cell (X), mark it as -1 since it's not traversable.
- For all open cells (O), initialize them as -1 (unvisited).
2. BFS Propagation
- Perform a Breadth-First Search (BFS) starting from all mines at once.
- At each step:
- Pop a cell from the queue.
- Look at its 4 neighbors (up, down, left, right).
- If a neighbor is an open cell and hasn’t been visited yet:
- Update its distance to current_distance + 1.
- Push it into the queue.
3. Result
- Because BFS explores in layers, the first time an open cell is reached, it is guaranteed to be from the nearest mine.
- Every open cell automatically gets the shortest distance to its nearest mine.
- Any cell that was never reached remains -1 (unreachable).
Why Multi-Source BFS Works Here
- BFS naturally finds the shortest path in an unweighted grid.
- By pushing all mines into the queue initially, we spread out from them simultaneously, so each open cell gets updated with the minimum distance to the closest mine in a single pass.
- This avoids redundant searches and brings the complexity down to O(N × M) for an N × M grid.
Time Complexity
- O(n × m), since each cell is visited at most once.
Code Snippet (what I wrote in the interview)
#include <bits/stdc++.h>
using namespace std;

void findShortestDistance(vector<vector<char>> &adj) {
    if (adj.empty() || adj[0].empty()) return; // edge case: empty grid
    int n = adj.size();    // rows
    int m = adj[0].size(); // columns
    vector<vector<int>> dis(n, vector<int>(m, -1));
    queue<pair<int, int>> q;

    // Push all mines to the queue with distance 0
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            if (adj[i][j] == 'M') {
                q.push({i, j});
                dis[i][j] = 0;
            }
        }
    }

    // 4-directional movement
    int di[] = {0, 0, 1, -1};
    int dj[] = {1, -1, 0, 0};

    while (!q.empty()) {
        auto [row, col] = q.front();
        q.pop();
        for (int k = 0; k < 4; k++) {
            int newi = row + di[k];
            int newj = col + dj[k];
            if (newi >= 0 && newj >= 0 && newi < n && newj < m &&
                dis[newi][newj] == -1 && adj[newi][newj] == 'O') {
                dis[newi][newj] = dis[row][col] + 1;
                q.push({newi, newj});
            }
        }
    }

    // Print the distance matrix as asked by the interviewer
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            cout << dis[i][j] << " ";
        }
        cout << endl;
    }
}

int main() {
    vector<vector<char>> adj = {
        {'O', 'M', 'O'},
        {'O', 'X', 'O'},
        {'O', 'O', 'O'}
    };
    findShortestDistance(adj);
    return 0;
}
Edge Cases I Discussed
- Empty grid
- Grid with no mines
- Grid with all blocked cells
- Multiple mines close to each other
Round 2 – DSA + Low-Level System Design
Level: Medium
Duration: 60 mins
Mode: Remote
Question Asked
The scenario was from a startup called BookNest, facing issues due to frequent DB hits for book previews.
I had to design and implement a cache system that:
- Stores a fixed number of book records.
- Supports get(bookID) and put(bookID, bookData) operations.
- Evicts the least recently used (LRU) book if the cache is full.
- Both operations must run in O(1).
My Approach
The problem was essentially to design an LRU (Least Recently Used) Cache that can store book records and support get and put in constant time.
The key challenge was not just implementing caching, but also ensuring O(1) efficiency for both operations while handling evictions properly.
Approach
1. Identifying the Data Structures
I immediately thought of the classic LRU Cache design.
- A HashMap would help in quickly locating any book record (bookID -> node) in constant time.
- A Doubly Linked List would help in maintaining the order of usage, with the most recently used book at the head and the least recently used at the tail.
This combination ensures O(1) get/put while maintaining usage order.
2. How Operations Work
- get(bookID)
  - Look up in the HashMap.
  - If found, move the node to the head of the list (marking it as most recently used) and return the book.
  - If not found, return -1.
- put(bookID, bookData)
  - If the book already exists, update its data and move it to the head.
  - If it's a new entry:
    - If the cache is full, evict the node at the tail (least recently used).
    - Insert the new book at the head and update the HashMap.
3. Complexity
Both operations (get and put) take O(1) because:
- HashMap gives instant access.
- Doubly Linked List allows quick node removals and insertions.
4. Abstraction & Clean Design
I wrapped this logic into a BookService interface and BookServiceImpl class to show abstraction, which the interviewer appreciated. This showed how caching could be integrated cleanly with a real-world service layer.
Why This Approach Works
- Efficiency: Meets the strict O(1) requirement.
- Scalability: Can handle large datasets as long as memory permits.
- Real-world Relevance: Caching book previews reduces frequent DB hits, which was exactly the pain point in the BookNest scenario.
Code
#include <bits/stdc++.h>
using namespace std;

class Book {
public:
    int bookId;
    string author;
    Book(int id, const string &info) {
        bookId = id;
        author = info;
    }
};

class LRUCache {
private:
    int capacity;
    // Most recently used at the front, least recently used at the back
    list<pair<int, Book>> accessList;
    // bookId -> position of that book's node in accessList
    unordered_map<int, list<pair<int, Book>>::iterator> cacheMap;

public:
    LRUCache(int cap) : capacity(cap) {}

    Book* get(int bookId) {
        if (cacheMap.find(bookId) == cacheMap.end()) return nullptr;
        // Move the accessed node to the front (most recently used)
        accessList.splice(accessList.begin(), accessList, cacheMap[bookId]);
        return &(cacheMap[bookId]->second);
    }

    void put(int bookId, const string &bookData) {
        if (cacheMap.find(bookId) != cacheMap.end()) {
            // Existing entry: update the data and mark as most recently used
            cacheMap[bookId]->second = Book(bookId, bookData);
            accessList.splice(accessList.begin(), accessList, cacheMap[bookId]);
            return;
        }
        if ((int)accessList.size() >= capacity) {
            // Cache full: evict the least recently used entry at the back
            auto last = accessList.back();
            cacheMap.erase(last.first);
            accessList.pop_back();
        }
        accessList.push_front({bookId, Book(bookId, bookData)});
        cacheMap[bookId] = accessList.begin();
    }
};
Round 3 – System Design (Rate Limiter)
Level: Medium
Duration: 60 mins
Mode: Remote
Problem
In this round, I was asked to design a Rate Limiter, which is basically a system to control how many requests a user or client can make within a given time, so that APIs or services don't get abused. It is a classic system design problem.
Design a Rate Limiter that:
- Limits number of API requests per user (e.g., 100 requests/min).
- Works efficiently under high load.
- Supports multiple users and scales well.
My Approach
I began by talking about the different rate-limiting techniques:
- Fixed Window Counter
- Sliding Window Log
- Sliding Window Counter
- Token Bucket Algorithm
Finally, I chose Token Bucket for its ability to handle burst traffic more smoothly.
Key Points I Discussed
- Use a HashMap to store token counts per user.
- Refill tokens at a fixed rate.
- Reject requests if tokens are exhausted.
- In distributed setups, Redis can be used to sync tokens across nodes.
Trade-offs & Scalability
- Accuracy vs Performance: Sliding Window gives better accuracy, but Token Bucket provides better performance under high load.
- Scalability: Extend across multiple servers using Redis or distributed coordination.
Round 4 – System Design (Bus Reservation Service)
Level: Medium
Duration: 60 mins
Mode: Remote
Problem
In this round, I had to design a bus reservation system for Microsoft employees. There were 25 fixed routes, each with a set of stops. In the morning, buses only picked up employees from their home stops and dropped them at the Microsoft campus, while in the evening the flow was reversed, boarding started from the campus and drop-offs happened at home stops.
Requirements
The problem requirements were to design a database schema that could handle passengers, drivers, bookings and routes. On top of that, I also had to design some basic APIs that a frontend application for employees could use.
My Approach
Database Design (Key Entities)
- Passenger → passenger_id, name, start_stop, end_stop, date_of_travel
- Driver → driver_id, name, assigned_bus_id
- Bus → bus_id, route_id, capacity, assigned_driver_id
- Route → route_id, list_of_stops
- Stop → stop_id, location_name, geo_coordinates
- Booking → booking_id, passenger_id, bus_id, status, trip_type
APIs
- POST /register → Passenger signup
- POST /login → Login
- GET /routes/{route_id}/stops → View route stops
- POST /booking → Create booking
- GET /booking/active → View active bookings
- GET /stops/nearby?lat=..&lng=.. → Nearby stops
Discussion Areas
- Morning/evening rules validation.
- Seat availability handling.
- Extensibility for future features (e.g., notifications, live bus tracking, cancellations).
Round 5 – Hiring Manager Round
Level: Medium
Duration: 60 mins
Mode: Remote
This round was completely different from the coding and design ones. It wasn't about writing code or drawing system diagrams; it was more about me as a professional: how I think, how I handle pressure, and how I approach situations that don't have straightforward answers.
What Happened
The manager literally walked through my resume line by line. Every project, every achievement, even the small impact statements were questioned.
For each point, I had to explain:
- Why I did something the way I did
- How exactly I achieved it
- What happened after — was it successful, did it create an impact, did I learn something?
On top of that, I was given hypothetical scenarios:
- “What if a key project is already failing and you’re brought in late — how would you handle it?”
- “What if a senior architect strongly pushes back against your design — what would you do?”
- “You join a brand-new domain/team with no prior experience — what’s your 30-day plan?”
These weren’t trick questions. The intent was clear: they wanted to see how I think under pressure, how I communicate and how I deal with ambiguity when there's no obvious right answer.
What They Were Really Testing
- My ability to stay calm and think clearly under stressful situations.
- How adaptable I am to new teams and unfamiliar domains.
- Whether I can communicate ideas simply and effectively.
- Most importantly, whether I take ownership — not just doing tasks, but driving impact and taking responsibility for outcomes.
