gitGood.dev
Amazon LP · Advanced · Premium

Bias for Action (Amazon Leadership Principle)

Speed matters. But the principle is about reversible-vs-irreversible reasoning, not 'I work fast.' Get this distinction wrong and your answer reads as reckless.

About this theme

Bias for Action is one of Amazon's most-misunderstood Leadership Principles. The principle states that speed matters in business; many decisions and actions are reversible and do not need extensive study. The implicit pair is the concept of "two-way doors" (reversible decisions) vs "one-way doors" (irreversible). Bias for action is about taking calculated risk on two-way doors, not about being reckless. In interviews, weak candidates tell stories about working fast. Strong candidates tell stories about explicitly reasoning about reversibility, taking informed risk, and being willing to be wrong on cheap-to-undo decisions. The Bar Raiser is specifically listening for whether you understood the risk you took, not just whether you moved fast.

What interviewers are evaluating

  • Did you reason about reversibility before acting?
  • Did you take real risk - or was the 'fast' action actually low-stakes?
  • Were you willing to be wrong, and what's your evidence?
  • When the action turned out to be wrong, how did you correct course?
  • Did you involve others appropriately, or use 'speed' as cover for skipping consultation?
  • Did you balance speed with quality - or did one-time speed create lasting tech debt?

Common prompts

Variations on these are asked at every level. Have a story pre-loaded for at least three of them.

  • Tell me about a time you made a decision quickly with limited information.
  • Describe a situation where you took a calculated risk.
  • Tell me about a time you decided to act without complete data, and the action turned out to be wrong. How did you handle it?
  • Walk me through how you balance speed with quality in your day-to-day work.
  • Describe a decision you reversed. Why did you make it in the first place?
  • Tell me about a time you decided not to wait for consensus.
  • Describe a time you regretted moving too fast. What did you learn?
  • Tell me about a time you decided NOT to take action and waited instead. How did you decide?

Sample STAR answers

Both strong and weak examples, with notes on what makes each work (or fail). Read the weak examples carefully - the patterns they show are the ones interviewers are trained to spot.

STRONG

Strong: Two-way door at the right level

Prompt: "Tell me about a time you made a decision quickly with limited information."
Situation
I was the on-call engineer for our payments service when an external dependency (a card-network API) started returning intermittent 500s during peak shopping hours. About 3% of card authorizations were failing, affecting roughly 2K transactions per minute.
Task
I had two options. (1) Page the senior engineers, gather context, develop a fix - probably 30-45 minutes. (2) Implement a known retry pattern on the failing endpoint with exponential backoff, which I'd seen the team apply on other endpoints. The retry pattern was a one-line config change in our payments gateway. The risk: aggressive retries could amplify the upstream issue if the card network was actually down, not just degraded.
Action
I read the retry pattern docs (3 minutes). I checked the card network's status page - degraded, not down. I checked our recent metrics - the 500s were transient, not sustained. I judged this was a two-way door: if retries made things worse, I'd see it within 30 seconds and could revert. If they helped, I'd absorb the bulk of the failures while the network recovered. I shipped the config change with a 30-second hold-and-revert plan. I posted in the #payments channel describing what I'd done and why. I watched the metrics for 5 minutes. Failure rate dropped from 3% to 0.4%. I left the change in place and paged the on-call senior to brief them when they came online. They reviewed the change and signed off. The card network recovered within an hour.
Result
We recovered roughly 1,800 transactions per minute that would otherwise have failed during the 1-hour incident, worth ~$120K. The change was made permanent. The post-incident review called out the decision-making as exemplary. I wrote a one-pager on the reversibility framework I used; it became part of our on-call runbook.
Why this works

What makes this strong: (1) Explicit two-way door reasoning - 'if it makes things worse, I'll see it within 30 seconds and revert.' (2) Did the cheap homework (status page, metrics check) before acting. (3) Communicated transparently in real time. (4) Looped in the senior afterward, not for permission, for visibility. (5) The result is quantitative and the post-incident write-up shows reflection. This is bias for action in the precise sense Amazon means.
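The retry pattern in the story above can be sketched as a small helper. This is illustrative only - the story's actual change was a one-line gateway config flag, and names like `TransientError` are stand-ins for whatever exception signals a retryable upstream 500:

```python
import random
import time


class TransientError(Exception):
    """Stand-in for a retryable upstream failure (e.g. an intermittent HTTP 500)."""


def call_with_backoff(fn, max_attempts=3, base_delay=0.1, max_delay=2.0):
    """Retry `fn` on transient failure with exponential backoff and jitter.

    Jitter and a delay cap keep aggressive retries from amplifying an
    upstream outage - the exact risk weighed in the story.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of retries: surface the failure to the caller
            # Exponential backoff: base * 2^attempt, capped, with jitter
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.0))
```

The cap plus jitter is what makes this a two-way door in practice: if the upstream is truly down rather than degraded, the bounded retry budget means failures surface within seconds and the change can be reverted.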

STRONG

Strong: Acted wrong and recovered cleanly

Prompt: "Tell me about a time you decided to act without complete data, and the action turned out to be wrong."
Situation
I was leading the implementation of a new feature flag system. The vendor we chose offered an SDK with built-in caching. My team was split: half wanted to build a thin wrapper for testability and observability, half wanted to use the SDK directly to ship faster.
Task
Decision needed within the week to keep the project on track. I judged it was a two-way door - if the SDK proved insufficient, we could wrap it later. Decided to use the SDK directly.
Action
We shipped 4 weeks later. Within two weeks of going to production, we hit two issues: (1) the SDK's logging was incompatible with our observability stack, and (2) we couldn't unit test code that depended on flags without spinning up a full SDK instance. Both issues were predictable in retrospect; my judgment of the SDK's fit was wrong. I owned the decision in a team retro - explicitly said 'I made the call to use the SDK directly, and I underestimated the testability cost. Let's wrap it.' We scoped a 1-week wrapper project. I led it personally, since the original decision was mine. After it shipped, testability and observability were both clean. I added a 'what would change my mind' section to my one-page proposal template - the original proposal hadn't articulated what evidence would have made me choose the wrapper, and that was the gap.
Result
Wrapper was in production within 2 weeks of the retro. No additional incidents from the SDK. The one-pager template change spread to two other teams via word of mouth. I learned to articulate kill-criteria for two-way door decisions, not just describe the decision.
Why this works

What makes this strong: (1) The candidate's decision was wrong, and they're telling that story. Bias for action stories that always work out are suspect. (2) Took ownership of the reversal cleanly - no 'the SDK was bad' or 'the team should have caught it.' (3) Real reflection (the kill-criteria gap) led to a process improvement. (4) Acted fast on the recovery, not just the original decision. The interview signal here is 'this person knows how to take and recover from calculated risk.'
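The thin-wrapper fix from this story can be sketched as a minimal flag-provider interface. All names here are hypothetical (`variation` stands in for whatever the real SDK exposes); the point is that application code depends only on the interface, so tests use an in-memory fake instead of a full SDK instance:

```python
from typing import Protocol


class FlagProvider(Protocol):
    """Minimal interface the application codes against."""

    def is_enabled(self, flag: str, default: bool = False) -> bool: ...


class SdkFlagProvider:
    """Thin wrapper around a hypothetical vendor SDK client.

    Logging and metrics hooks can be added here, keeping the
    observability stack decoupled from the SDK's own logging.
    """

    def __init__(self, sdk_client):
        self._client = sdk_client

    def is_enabled(self, flag: str, default: bool = False) -> bool:
        return bool(self._client.variation(flag, default))


class FakeFlagProvider:
    """In-memory provider for unit tests - no SDK instance required."""

    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, flag: str, default: bool = False) -> bool:
        return self._flags.get(flag, default)
```

This is exactly the testability cost the story's original decision underestimated: with the wrapper in place, flag-dependent code is unit-testable with `FakeFlagProvider` alone.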

WEAK

Weak: 'I worked fast'

Prompt: "Tell me about a time you took a calculated risk."
Situation
We had a tight deadline and limited information.
Task
I had to make a decision.
Action
I went with my gut and shipped quickly to meet the deadline.
Result
It worked out fine.
Why this is weak

Why this is weak: (1) No reasoning about reversibility - 'went with my gut' is the opposite of bias for action. (2) No specifics about the risk taken or the alternatives considered. (3) 'Worked out fine' suggests luck, not judgment. (4) The story doesn't show that the candidate distinguishes between two-way and one-way doors. Bar Raisers will probe relentlessly on a vague answer like this.

Common pitfalls

  • Conflating bias for action with 'I work fast.' Speed without reversibility reasoning reads as reckless.
  • Telling stories where the action was actually low-risk. If everyone would have made the same call, it's not a bias-for-action story.
  • Stories where the action worked out by luck. Interviewers can tell, and they'll probe.
  • Skipping the part where you explained your reasoning to others. Bias for action is not unilateral; it's calculated.
  • Failing to mention what would have made you choose differently. Strong candidates articulate kill-criteria.
  • Treating one-way doors as two-way doors. 'I shipped without backup, and we lost data' is a bias-for-action red flag, not a success story.

Follow-up strategies

Interviewers will probe. Be ready for the follow-up questions that test the depth of your story.

  • If asked 'What was your kill-criterion?' - have one ready. The strongest answer names the specific signal that would have caused you to revert.
  • If asked 'Why didn't you wait for more info?' - your story should answer this in the action, but be ready to articulate the cost of waiting (lost time, ongoing impact, expiring opportunity).
  • If asked 'What if it had gone wrong?' - your story should already include either an actual reversal or a contingency plan. If neither, acknowledge the gap honestly.
  • If asked 'Would you do this again?' - the strongest answer is 'I'd take the same kind of risk, but I'd articulate the kill-criterion more explicitly upfront.' Pure yes/no answers under-deliver.
  • If asked about a one-way door situation, describe explicitly how your decision-making changes. Bias for action does not extend to irreversible decisions.

Practice these stories live

Reading STAR answers is the floor. The interview signal is in delivering them out loud, with follow-ups, under pressure. The AI mock interview probes your stories the way real interviewers do.
