gitGood.dev
Perplexity

Software Engineer Interview Prep

SWE / Senior / Staff / Principal (~2-12+ YOE)

Prep for Perplexity's engineering loop - AI-native search, retrieval and RAG depth, LLM serving at scale, and the velocity expected at one of the fastest-growing AI products.

414 Practice MCQs · 152 Coding challenges · 7 Interview rounds

About this loop

Perplexity is an AI-native answer engine: the user asks a question, the system retrieves relevant sources from the open web (and increasingly from premium and proprietary indexes), runs a retrieval-augmented generation pipeline with LLMs to synthesize an answer, and presents it with cited sources. The interview reflects what the company actually builds: a hybrid system where classical IR (web crawling, indexing, retrieval, ranking) meets modern LLM serving at consumer-internet scale.

The level ladder runs SWE (mid-level, 2-5 YOE) through Senior, Staff, and Principal Engineer. As one of the fastest-growing AI products of the post-ChatGPT era (with a consumer subscription, Perplexity Pro, and an enterprise product line), the engineering culture prizes velocity - shipping product surfaces, model upgrades, and infrastructure improvements at a pace that surprises engineers from larger companies.

Coding rounds skew Medium-to-Hard with applied framing; many problems come from real Perplexity engineering challenges (chunking documents for retrieval, scoring snippets against a query, handling streaming LLM responses). System design rounds frequently center on AI-native search problems Perplexity engineers actually solve: a retrieval pipeline that combines web search, custom indexes, and LLM-driven reformulation; LLM serving at scale with cost and latency budgets that work for a freemium product; evaluation systems for measuring answer quality and grounding; and the ranking and routing decisions that determine which model serves which query.

The cultural anchor is AI-product velocity - Perplexity ships product surfaces and model upgrades weekly, and engineers are expected to operate with high autonomy in a fast-moving environment. Behavioral rounds screen for ownership, comfort with ambiguity (the AI landscape evolves faster than the product roadmap), and pragmatism about shipping in a domain where the underlying technology changes month over month.
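One of the applied coding themes mentioned above is chunking documents for retrieval. As a rough illustration only (not Perplexity's actual implementation), the baseline most candidates start from is a fixed-size sliding window over words with overlap:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping word-window chunks for retrieval indexing.

    Overlap keeps sentences that straddle a chunk boundary retrievable
    from at least one chunk. Sizes here are illustrative defaults.
    """
    words = text.split()
    step = size - overlap
    # max(..., 1) ensures at least one chunk for short texts.
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]
```

In an interview, the follow-up is usually about tradeoffs: larger chunks give the LLM more context per retrieved passage; smaller chunks make retrieval more precise.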

The interview loop

  1. Recruiter screen
    30 minutes. Background, level calibration (Senior vs Staff is the most contested call), team alignment - Perplexity recruits across search (crawling, indexing, retrieval, ranking), LLM serving (model routing, latency optimization, cost management), product surfaces (consumer app, Pro features, enterprise), evaluation and quality (answer grounding, hallucination measurement, ranking quality), and platform infrastructure (data, observability, scaling).
  2. Technical phone screen
    60 minutes. One coding problem at Medium difficulty. Most teams accept any modern language - Python and TypeScript most common. Some interviewers include a domain probe (retrieval, embedding similarity, prompt design) if you've been matched to an AI-heavy team.
  3. Onsite: Coding round 1
    60 minutes. Algorithmic problem with attention to clean implementation. Trees, graphs, heaps, hash maps, and string processing common. Some loops include a problem with retrieval flavor (e.g., 'rank these snippets by relevance to a query using these signals').
  4. Onsite: Coding round 2
    60 minutes. Often more applied - debug a working snippet, extend an existing retrieval or LLM-serving service, implement a small piece of RAG pipeline logic. Working code with tests expected. For AI-team candidates, may involve embedding similarity, prompt versioning, or streaming response handling.
  5. Onsite: System design
    60-75 minutes. AI-native search flavored. Common prompts: design a RAG pipeline that combines web search + custom indexes + LLM synthesis with sub-3-second latency, design LLM serving infrastructure that routes queries across model tiers based on complexity and cost budgets, design evaluation infrastructure that measures answer quality and grounding at scale, design real-time crawling for breaking news that can surface in answers within minutes. Depth on retrieval, LLM serving, latency budgets, and cost tradeoffs expected.
  6. Onsite: AI / ML domain depth (most teams)
    60-75 minutes. Team-specific. Search / retrieval: embedding strategies, hybrid retrieval (lexical + semantic), reranking, chunking strategies, freshness vs quality tradeoffs. LLM serving: model routing, prompt caching, speculative decoding, context window management, cost optimization. Quality / evaluation: hallucination measurement, grounding evaluation, ranking quality metrics, A/B test design for AI products.
  7. Onsite: Hiring manager / behavioral
    45-60 minutes. AI-product velocity focused. Stories about shipping fast in environments where the underlying technology is changing month-over-month, navigating ambiguity, owning end-to-end product outcomes, operating with high autonomy. Generic narratives fail - Perplexity wants engineers who get genuinely energized by the AI product velocity.
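The retrieval-flavored coding prompts mentioned above ("rank these snippets by relevance to a query using these signals") usually reduce to a weighted scoring function plus a sort. A minimal sketch, where the signal names and weights are invented for illustration:

```python
from collections import Counter

def score_snippet(query: str, snippet: str,
                  freshness: float = 0.0,
                  overlap_weight: float = 1.0,
                  freshness_weight: float = 0.3) -> float:
    """Score a snippet by weighted query-term overlap plus a freshness signal.

    Both weights and the freshness signal are hypothetical examples of the
    kind of signals such a problem might hand you.
    """
    q_terms = Counter(query.lower().split())
    s_terms = Counter(snippet.lower().split())
    overlap = sum(min(q_terms[t], s_terms[t]) for t in q_terms)
    return overlap_weight * overlap + freshness_weight * freshness

def rank_snippets(query: str, snippets: list[dict]) -> list[dict]:
    """Return snippets sorted by descending relevance score."""
    return sorted(
        snippets,
        key=lambda s: score_snippet(query, s["text"], s.get("freshness", 0.0)),
        reverse=True,
    )
```

The interview signal here is less the scoring math and more the clean decomposition: score one item, then rank the collection.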

What Perplexity actually evaluates

  • AI product velocity - shipping product surfaces, model upgrades, and infrastructure improvements at the pace AI products require
  • Retrieval and RAG sophistication - hybrid retrieval, chunking, reranking, grounding measurement
  • LLM serving depth - model routing, latency budgets, cost optimization, prompt caching, streaming responses
  • Ownership - end-to-end product outcomes, not just tasks; Perplexity's small-team culture rewards engineers who own scope
  • Evaluation discipline - measuring answer quality, hallucination, grounding, and ranking quality at scale
  • Comfort with ambiguity - the AI landscape evolves faster than any roadmap; engineers who freeze in ambiguous environments struggle

Topics tested

System Design

Core · 68 MCQs · 2 coding challenges

AI-native search flavored. Practice RAG pipelines, LLM serving architecture, evaluation infrastructure, hybrid retrieval (lexical + semantic), real-time crawling for freshness, and the specific cost/latency tradeoffs of running an AI product at consumer scale. Knowing how AI search products actually work gives you concrete vocabulary to draw on.
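Hybrid retrieval, as referenced here, typically blends a lexical score with embedding similarity. A toy sketch assuming precomputed per-document lexical scores and embedding vectors; the blend weight `alpha` is illustrative, and real systems normalize the two score distributions before blending:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query_vec: list[float], docs: list[dict], alpha: float = 0.5) -> list[dict]:
    """Rank docs by a linear blend of lexical score and embedding similarity.

    alpha=1.0 is pure lexical, alpha=0.0 pure semantic.
    """
    def score(doc: dict) -> float:
        return alpha * doc["lexical"] + (1 - alpha) * cosine(query_vec, doc["vec"])
    return sorted(docs, key=score, reverse=True)
```

A common design-round follow-up: lexical retrieval wins on rare entities and exact phrases; semantic retrieval wins on paraphrase - which is why production systems run both and fuse.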

Algorithms

Core · 77 MCQs · 80 coding challenges

Medium-to-Hard difficulty. Cleanliness and explicit narration matter. Trees, graphs, heaps, hash maps, and string processing all common. Some problems carry retrieval flavor - ranking, scoring, similarity.

Python

Core · 36 MCQs

Dominant on Perplexity's backend, especially for ML, retrieval, and LLM-serving teams. Familiarity with modern Python (async patterns, type hints, performance-aware idioms) helps for these teams.
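For the async-pattern fluency mentioned here, a common shape on LLM-serving teams is consuming a token stream incrementally. A self-contained sketch - `fake_token_stream` is a stand-in invented for illustration, playing the role of tokens arriving from an upstream model server:

```python
import asyncio
from typing import AsyncIterator

async def fake_token_stream() -> AsyncIterator[str]:
    """Stand-in async generator for tokens arriving from an upstream model."""
    for tok in ["Perplexity ", "streams ", "answers."]:
        await asyncio.sleep(0)  # yield control, as real network I/O would
        yield tok

async def collect_stream(stream: AsyncIterator[str]) -> str:
    """Consume tokens incrementally, as a streaming response handler would."""
    parts: list[str] = []
    async for tok in stream:
        parts.append(tok)  # in a real service, each token is forwarded to the client here
    return "".join(parts)

result = asyncio.run(collect_stream(fake_token_stream()))
```

The same `async for` shape is what you'd use to forward tokens to a client connection as they arrive, rather than buffering the whole answer.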

Data Structures

Important · 44 MCQs · 30 coding challenges

Heaps, queues, hash maps, tries, graph structures. Choosing the right structure under retrieval and ranking constraints is the insight Perplexity cares about.
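A canonical example of "the right structure under ranking constraints": keeping the top-k results from a scored stream with a size-k min-heap instead of sorting everything - O(n log k) rather than O(n log n), and you never hold more than k items. A sketch:

```python
import heapq

def top_k(scored_docs: list[tuple[float, str]], k: int) -> list[tuple[float, str]]:
    """Keep the k highest-scoring docs using a size-k min-heap.

    The heap root is always the weakest of the current top-k, so any
    new score that beats it displaces it in O(log k).
    """
    heap: list[tuple[float, str]] = []
    for score, doc in scored_docs:
        if len(heap) < k:
            heapq.heappush(heap, (score, doc))
        elif score > heap[0][0]:
            heapq.heapreplace(heap, (score, doc))
    return sorted(heap, reverse=True)  # best first
```

This is exactly the shape of a reranker's candidate cutoff, which is why it shows up with retrieval framing.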

Databases

Important · 49 MCQs · 25 coding challenges

Comes up in system design. Vector databases (pgvector, Pinecone, Vespa) and traditional indexes both surface; sharding strategies for embeddings at scale, hybrid retrieval architectures, freshness vs precision tradeoffs all show up.

Behavioral

Important · 63 MCQs

AI-product velocity focused. Specific stories about shipping fast in ambiguous environments, owning end-to-end product outcomes, operating with high autonomy, navigating tradeoffs in fast-moving technology landscapes.

Networking

Occasional · 48 MCQs

Surfaces in LLM-serving design - HTTP semantics for streaming responses, server-sent events, retry/backoff for upstream model providers. Useful background.
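Retry/backoff for upstream model providers usually means capped exponential backoff with jitter. A minimal sketch - the parameter values are illustrative, and a production version would retry only on retryable errors (timeouts, 429/503) rather than any exception:

```python
import random
import time

def call_with_backoff(fn, max_retries: int = 4, base: float = 0.5, cap: float = 8.0):
    """Retry fn with capped exponential backoff and full jitter.

    Delay before retry n is uniform in [0, min(cap, base * 2**n)];
    jitter spreads out retries so clients don't hammer a recovering
    upstream in lockstep.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the error
            delay = min(cap, base * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

The design-round follow-up is usually about the interaction with latency budgets: with a 3-second answer SLO, you get at most one or two cheap retries before falling back to another provider or model tier.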

TypeScript

Occasional · 29 MCQs · 15 coding challenges

Used heavily on the frontend and on Node-based product surfaces. Familiarity helps for full-stack and frontend roles.

System design topics tested in this loop

Curated walkthroughs for the bounded designs that show up in Perplexity's system design rounds. Capacity estimation, architecture, deep-dives, and trade-offs.

Behavioral themes tested in this loop

Sample STAR answers, common prompts, pitfalls, and follow-up strategies for the behavioral themes that decide Perplexity's loop.

Curated practice questions

414 MCQs and 152 coding challenges, grouped by topic. Free preview shows question titles - premium unlocks full content.

Sign up free to start practicing. Premium unlocks every question across all packs.

System Design · 68 MCQs

Browse all in System Design
CAP Theorem · Medium
Load Balancer Algorithms · Easy
Database Sharding Strategy · Hard
Cache Invalidation Strategy · Medium
Microservices Communication · Medium
Content Delivery Network · Medium
Rate Limiting Strategies · Medium
Event Sourcing Pattern · Hard
+ 60 more System Design MCQs

Algorithms · 77 MCQs

Browse all in Algorithms
Sorting Algorithm Stability · Easy
Dynamic Programming Recognition · Medium
Shortest Path Algorithm Selection · Medium
Time Complexity Analysis · Hard
Binary Search Application · Medium
Two Pointer Technique · Easy
Recursion vs Iteration · Medium
Greedy vs Dynamic Programming · Hard
+ 69 more Algorithms MCQs

Python · 36 MCQs

Browse all in Python
Dynamic Typing · Easy
Mutable vs Immutable Types · Easy
is vs == · Easy
Pass by Object Reference · Medium
Global Interpreter Lock · Medium
Memory Management · Medium
List vs Tuple · Easy
Dictionary Implementation · Medium
+ 28 more Python MCQs

Data Structures · 44 MCQs

Browse all in Data Structures
Hash Table Collision Resolution · Easy
Binary Tree Traversal · Easy
Implementing Queue with Stacks · Medium
Heap Operations Complexity · Medium
Trie Data Structure · Medium
LRU Cache Implementation · Hard
Bloom Filter · Hard
Graph Representation · Medium
+ 36 more Data Structures MCQs

Databases · 49 MCQs

Browse all in Databases
ACID Properties · Easy
Database Indexing · Medium
NoSQL Database Selection · Medium
Transaction Isolation Levels · Hard
Database Normalization · Medium
Database Replication · Hard
SQL Join Types · Easy
Query Optimization · Hard
+ 41 more Databases MCQs

Behavioral · 63 MCQs

Browse all in Behavioral
Handling Disagreements · Easy
Learning from Failure · Medium
Task Prioritization · Medium
Handling Ambiguity · Hard
Tell Me About Yourself · Easy
Greatest Strength · Easy
Greatest Weakness · Easy
Why This Role? · Easy
+ 55 more Behavioral MCQs

Networking · 48 MCQs

Browse all in Networking
TCP vs UDP · Easy
HTTP Status Codes · Easy
DNS Resolution · Medium
TLS/HTTPS Handshake · Hard
WebSocket vs Server-Sent Events · Medium
Cross-Origin Resource Sharing · Medium
TCP Three-Way Handshake · Easy
REST vs GraphQL · Medium
+ 40 more Networking MCQs

TypeScript · 29 MCQs

Browse all in TypeScript
Type vs Interface · Easy
unknown vs any · Easy
The never Type · Medium
Type Narrowing · Easy
Generic Constraints · Medium
Mapped Types · Medium
Conditional Types · Hard
The infer Keyword · Hard
+ 21 more TypeScript MCQs

System Design - Coding challenges · 2 challenges

Browse all coding challenges →
Token-Bucket Rate Limiter · Hard
Design Twitter · Hard

Algorithms - Coding challenges · 80 challenges

Browse all coding challenges →
Maximum Subarray · Medium
Binary Search · Easy
Climbing Stairs · Easy
Move Zeroes · Easy
+ 72 more Algorithms coding challenges

Data Structures - Coding challenges · 30 challenges

Browse all coding challenges →
Contains Duplicate · Easy
Merge Two Sorted Lists · Easy
Intersection of Two Arrays II · Easy
First Unique Character in a String · Easy
Group Anagrams · Medium
Number of Islands · Medium
Course Schedule · Medium
+ 22 more Data Structures coding challenges

Databases - Coding challenges · 25 challenges

Browse all coding challenges →
SQL: Customers Who Placed Orders (INNER JOIN) · Easy
SQL: Customers Without Orders (LEFT JOIN ... IS NULL) · Easy
SQL: Employees Earning More Than Their Manager (Self Join) · Easy
SQL: Reconcile Two Sources (FULL OUTER JOIN) · Medium
SQL: Date x Product Matrix (CROSS JOIN) · Medium
SQL: Order Count Per Customer (GROUP BY) · Easy
SQL: Big Spenders (GROUP BY + HAVING) · Medium
SQL: Average Order Value by Month (DATE_TRUNC) · Medium
+ 17 more Databases coding challenges

TypeScript - Coding challenges · 15 challenges

Browse all coding challenges →
Frontend: Counter Component (React useState) · Easy
Frontend: Accordion Component (Single vs Multi Open) · Medium
Frontend: Modal with Focus Trap (Tab Order Logic) · Medium
Frontend: Debounced Search Input (Cancellation) · Medium
Frontend: Tabs with Arrow-Key Navigation · Medium
Frontend: useFetch Custom Hook (Loading/Error/Data State Machine) · Medium
Frontend: useDebounce Hook (Trailing Edge Behavior) · Medium
Frontend: useLocalStorage Hook (SSR-safe + Cross-Tab Sync) · Medium
+ 7 more TypeScript coding challenges

Practice in mock interview format

Behavioral and system design rounds reward practice with a live AI interviewer that probes follow-ups, not silent reading.

Start an AI mock interview →

Frequently asked questions

Do I need ML / AI experience to interview at Perplexity?

Depends on the team and level. Search / retrieval, LLM serving, and quality / evaluation teams expect substantive familiarity - knowing how embeddings, retrieval, RAG, and modern LLM serving work is a real differentiator. Product surfaces, infrastructure, and growth teams have a softer bar - general curiosity about AI is sufficient if you bring strong systems engineering depth. Senior+ candidates across all teams increasingly face questions about AI integration, and hands-on experience integrating LLMs into a product (streaming UX, prompt versioning, eval systems, RAG architectures) stands out at every level.

What does the RAG system design round actually look like?

Concrete framing: 'design the system that takes a user query and produces a cited answer in under 3 seconds. The query may need web search, an internal Perplexity index, and an LLM synthesis step. The answer must include citations to specific sources. The system must handle 10K queries per second at peak with cost budgets that work for a freemium product.' Expected components: a retrieval pipeline (lexical + semantic + custom indexes), reranking, chunking strategy, LLM routing across model tiers based on query complexity, prompt construction with retrieved context, streaming response generation, citation tracking, evaluation hooks. Perplexity engineers solve this shape of problem daily.
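The component list in that framing can be sketched as an orchestration skeleton. Every stage below is an injected stub showing the shape of the pipeline, not real Perplexity code, and a production version would fan out retrieval concurrently and stream the synthesis step:

```python
def answer_query(query: str, retrievers, rerank, synthesize, k: int = 8) -> dict:
    """Orchestration shape of a cited-answer RAG pipeline (all stages injected).

    retrievers: callables query -> list of {"url", "text", ...} candidates
    rerank:     callable (query, candidates) -> candidates, best first
    synthesize: callable (query, context) -> answer string
    """
    # 1. Fan out retrieval across sources (web search, internal index, ...).
    candidates = [doc for retrieve in retrievers for doc in retrieve(query)]
    # 2. Rerank and keep the top-k passages as LLM context.
    context = rerank(query, candidates)[:k]
    # 3. Synthesize an answer from query + context; the docs actually used
    #    as context are what citation tracking hangs off.
    answer = synthesize(query, context)
    return {"answer": answer, "citations": [d["url"] for d in context]}
```

Walking an interviewer through this skeleton first, then deepening each stage (latency budget per stage, caching, fallbacks), is a reliable structure for the round.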

How does Perplexity manage LLM cost at consumer scale?

Aggressively. The high-level techniques: routing queries across model tiers (cheaper models for simple queries, larger models for complex ones), prompt caching for common patterns, streaming responses to allow early termination, careful context window management, speculative decoding where applicable, and (where the math works) running custom-trained smaller models for specific query classes. System design rounds frequently probe whether you can reason about cost-latency-quality tradeoffs at consumer scale. Engineers from environments where LLM cost wasn't a major constraint sometimes underestimate how much engineering effort goes into this.
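The tier-routing idea can be sketched with a toy complexity heuristic. The signals, thresholds, and tier names below are invented for illustration - a real router would use learned classifiers, user tier, and live cost/latency telemetry:

```python
def complexity_score(query: str) -> float:
    """Crude, illustrative complexity signals: length and analytical phrasing."""
    words = query.split()
    score = min(1.0, len(words) / 20)
    # Hypothetical signal: analytical verbs suggest multi-step synthesis.
    if any(w in query.lower() for w in ("compare", "explain", "analyze")):
        score = min(1.0, score + 0.4)
    return score

def route_model(query: str) -> str:
    """Map a complexity score to a model tier (thresholds are illustrative)."""
    s = complexity_score(query)
    if s < 0.3:
        return "small"   # cheap tier for simple lookups
    if s < 0.7:
        return "medium"
    return "large"       # reserve the expensive tier for complex queries
```

The point the round probes is the economics: if most traffic is simple lookups, routing even a majority of it to a cheap tier dominates the cost curve.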

How does evaluation work for AI products like Perplexity?

It's hard, and Perplexity invests heavily in it. The challenges: there's no single ground truth for 'correct answer' (multiple answers can be valid), hallucination measurement requires careful grounding evaluation against retrieved sources, ranking quality is hard to A/B test because user feedback is sparse and noisy, and the underlying model performance shifts when you upgrade models. Senior+ candidates often face questions about evaluation system design. Specific experience with eval frameworks, LLM-as-judge patterns, or human-in-the-loop evaluation is a real differentiator.
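At its simplest, grounding can be proxied by checking whether each answer sentence's tokens are covered by some retrieved source. Real systems use entailment models or LLM-as-judge rather than token overlap, but a toy proxy shows the shape of the metric:

```python
def grounding_score(answer_sentences: list[str], sources: list[str],
                    threshold: float = 0.5) -> float:
    """Fraction of answer sentences 'covered' by at least one source.

    Coverage here is a toy token-overlap proxy; the threshold is illustrative.
    """
    def covered(sentence: str, source: str) -> bool:
        sent_tokens = set(sentence.lower().split())
        src_tokens = set(source.lower().split())
        return (len(sent_tokens & src_tokens) / len(sent_tokens) >= threshold
                if sent_tokens else True)

    grounded = sum(
        1 for sent in answer_sentences
        if any(covered(sent, src) for src in sources)
    )
    return grounded / len(answer_sentences) if answer_sentences else 1.0
```

The interview-relevant insight is the structure, not the overlap math: grounding is measured per claim against the retrieved context, then aggregated - which is exactly where LLM-as-judge or entailment models slot in as a better `covered`.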

How does the velocity culture compare to other AI labs?

Faster than research labs (OpenAI, Anthropic), comparable to AI product startups. Perplexity ships product surfaces and model upgrades weekly, and the engineering culture explicitly rewards working fast in ambiguous environments. Engineers from research-heavy backgrounds sometimes underestimate how product-shipping the role is; engineers from consumer-product backgrounds sometimes underestimate how much AI infrastructure depth is required. The intersection (AI-fluent engineers who like shipping consumer products fast) is rare and is what Perplexity is selecting for.

What is comp like at Perplexity?

Aggressive on equity, competitive on cash at senior+. SWE targets ~$200-300K total comp, Senior ~$320-450K, Staff ~$450-700K, Principal $700K-1.2M+. Perplexity is private with private-company stock; the equity upside depends on continued growth trajectory (Perplexity has had multiple valuation step-ups in 2024-2026). Cash is competitive with FAANG at mid-levels and lags slightly at staff+ where FAANG equity refresh is large; Perplexity equity can lead for engineers who joined before significant valuation increases. Recruiters share ranges relatively early.

Other prep packs