
Customer Obsession (Amazon Leadership Principle)

The most-asked Amazon LP. Interviewers screen for evidence that you reasoned about end-user impact, not just that you shipped a feature.

About this theme

Customer Obsession is the first of Amazon's 16 Leadership Principles and the most-tested in interviews. The principle states that leaders start with the customer and work backward, work vigorously to earn and keep customer trust, and pay attention to competitors but obsess over customers. In interviews, Customer Obsession surfaces in two ways: as a direct prompt ("Tell me about a time you went above and beyond for a customer") and as an embedded thread in any other LP question (when they ask about a project, they're listening for whether you mention customer impact unprompted). Strong candidates weave customer reasoning into their answers without being asked.

What interviewers are evaluating

  • Did you start the project from a customer problem, or from internal preferences / shiny tech?
  • Did you talk to actual customers, or assume what they needed?
  • Did you measure customer outcomes, or just ship output?
  • When the customer wanted something different from your roadmap, what did you do?
  • Did you push back on internal stakeholders to protect customer experience?
  • Did you think about long-term customer trust, not just the immediate request?

Common prompts

Variations on these are asked at every level. Have a story pre-loaded for at least three of them.

  • Tell me about a time you went above and beyond for a customer.
  • Describe a situation where you had to balance customer needs against business constraints.
  • Tell me about a time you disagreed with a customer's request. How did you handle it?
  • Walk me through how you've used customer feedback to drive a product decision.
  • Describe a time you advocated for the customer when no one else was.
  • Tell me about a time you had to deliver bad news to a customer.
  • How have you measured whether you're actually serving the customer?

Sample STAR answers

Both strong and weak examples, with notes on what makes each work (or fail). Read the weak examples carefully - the patterns they exhibit are the ones interviewers are trained to spot.

Strong: API redesign driven by customer pain

Prompt: "Tell me about a time you used customer feedback to drive a product decision."
Situation
Six months into running our payments API at a fintech, we noticed support tickets had a clear pattern: about 30% of integration questions came from a single endpoint, /transactions/list. Customers found pagination confusing and the response shape inconsistent with our other endpoints.
Task
I was the lead engineer on the API. The team was about to start a new feature project, but I felt this consistent customer pain was higher-leverage than any new feature.
Action
I pulled support ticket data for the prior 90 days and read 80 individual tickets myself. I tagged them by root cause and found three: pagination tokens that changed format mid-integration, an undocumented quirk where 'created_at' returned UTC but 'completed_at' returned local time, and an inconsistent error response shape. I also called five customers from the highest-volume integrations to verify what I'd seen. Three of them said they'd worked around the issues but didn't trust the API for high-value flows. I wrote a one-page proposal with the customer quotes, the support cost data ($45K/year of support engineer time on this endpoint alone), and a redesign that fixed all three issues with a deprecation path. Got buy-in from the PM and EM. Spent the next sprint shipping a v2 endpoint with the same response shape as our other endpoints, normalized timestamps, and consistent error codes. Wrote a migration guide and proactively reached out to the 12 customers with the highest support ticket volume on this endpoint.
Result
Within 60 days of launch, ticket volume on this endpoint dropped 80%. Support engineer time freed up was redirected to a customer success program. Two of the customers I'd called specifically thanked us in writing. The proactive outreach pattern became a template the team used twice more in the next year.
Why this works

What makes this strong: (1) Started with customer pain, not internal preferences. (2) Measured the cost concretely (dollars and tickets) before pitching. (3) Talked to actual customers, not just stakeholders. (4) Followed through with a deprecation plan, not just a band-aid. (5) The result is quantitative (80% drop, $45K) and human (customer thank-yous). (6) Closed the loop by templating the pattern.

Strong: Pushing back on stakeholders

Prompt: "Describe a time you advocated for the customer when no one else was."
Situation
At an e-commerce company, the merchandising team wanted to add an interstitial ad after every third product browse. Models predicted +3% revenue from the ads. The launch was approved up the chain.
Task
I was the engineer on the search and discovery team. I'd seen our funnel data and didn't believe the model. I had to decide whether to flag the issue or just ship.
Action
I dug into the analytics. The model assumed users would tolerate the interstitial because they tolerated banner ads. But banner ads were 40px tall; the proposed interstitial was full-screen. I pulled qualitative session recordings from our user research repo and found a clear pattern: users who saw similar full-screen interruptions in past A/B tests dropped off at 2.5x the rate of users who didn't. The model hadn't used that data because it came from a different surface. I scheduled a meeting with the merchandising PM and the eng director. I shared the recordings and the drop-off data, and proposed a smaller A/B test (1% of traffic) to validate before launch. The merch PM was initially defensive - their roadmap depended on this revenue. I framed it as 'If the model is right, we lose nothing by validating; if the model is wrong, we save the customer experience.' That landed.
Result
We ran the A/B at 1%. Revenue went up 0.7% (not 3%) and 30-day retention dropped 2.1%. LTV math turned negative. Launch was killed. The merch team rebuilt the campaign as a smaller persistent banner with personalization, which shipped 6 months later and hit +2% revenue with no retention impact.
Why this works

What makes this strong: (1) The candidate had no formal authority over the launch decision. (2) They did real homework (session recordings, dropout math) before pushing back. (3) They proposed a path that respected the stakeholder's goals. (4) The result rewards both customer trust and business outcomes - showing you're not anti-revenue, you're anti-bad-revenue.

Weak: Generic 'I listen to feedback'

Prompt: "Tell me about a time you used customer feedback to drive a product decision."
Situation
We had a product feature that customers were complaining about.
Task
I needed to figure out what to do.
Action
I read some customer feedback and talked to my team. We decided to fix the issue and shipped a fix.
Result
Customers were happier and complaints went down.
Why this is weak

Why this is weak: (1) No specifics - what feature, what complaints, how many customers, what fix. (2) No mention of trade-offs - what else was on the roadmap, why was this prioritized. (3) No measurement of outcome - 'happier' is not a metric. (4) The candidate is the passive subject, not the driver. Interviewers are listening for ownership and concrete impact; this answer reveals neither.

Common pitfalls

  • Talking about 'the customer' abstractly without naming a specific user, segment, or persona.
  • Confusing internal stakeholders with customers. The PM is not the customer.
  • No measurement. 'Customers were happier' without a metric tells the interviewer you didn't care to verify.
  • Heroic story without acknowledging trade-offs. Real customer-obsession decisions involve saying no to other things.
  • Failing to mention you talked to actual customers. If you only have second-hand info, say so explicitly.
  • Confusing customer obsession with customer agreement. Sometimes the right answer is to disagree with what the customer asked for.

Follow-up strategies

Interviewers will probe. Be ready for the follow-up questions that test the depth of your story.

  • If asked 'How did you measure customer impact?' - have specific metrics ready (NPS, retention, support ticket volume, conversion). Generic answers signal you didn't measure.
  • If asked 'How would you do this differently next time?' - have a real reflection. The right answer is usually 'I'd talk to customers earlier' or 'I'd set up the measurement instrumentation first.'
  • If asked 'What if leadership disagreed?' - your story should already include navigating internal disagreement. If it doesn't, acknowledge that and describe how you'd handle it.
  • If asked 'What's the next step?' - real customer-obsession is iterative. Have a follow-on plan, even if you didn't get to execute it.
  • If asked about a different customer segment ('what about enterprise vs SMB') - know the segments you're actually serving. Vague segment-talk reveals you don't know your customer.

Practice these stories live

Reading STAR answers is the floor. The interview signal is in delivering them out loud, with follow-ups, under pressure. The AI mock interview probes your stories the way real interviewers do.
