Learning from Failure
A core theme of Microsoft's Growth Mindset culture. Also tested at Google, Anthropic, and any company that screens for self-awareness. The signal is whether you actually changed.
What interviewers are evaluating
- Was this an actual failure, or a rebranded success?
- Did you take responsibility, or attribute it to circumstances and other people?
- Did you understand why it happened at a deeper level than 'I should have done X'?
- Did you change your behavior afterward, or just acknowledge the lesson?
- Are you self-aware enough to recognize the ongoing risk of repeating the failure?
- Can you tell the story without being either defensive or performatively self-critical?
Common prompts
Variations on these are asked at every level. Have a story pre-loaded for at least three of them.
- Tell me about a time you failed.
- Describe a project that didn't go well. What did you do?
- Tell me about a decision you made that turned out to be wrong.
- Walk me through a mistake you made early in your career and what you learned.
- Describe a situation where you didn't meet expectations.
- Tell me about a time you got feedback that was hard to hear. How did you respond?
- Tell me about a project you regret leading. What would you do differently?
Sample STAR answers
Both strong and weak examples, with notes on what makes each work (or fail). Read the weak examples carefully - the patterns they show are exactly the ones interviewers are trained to spot.
Strong: Real failure with sustained behavior change
- Situation
- Three years ago, I was tech-leading a 6-month project to migrate our monolithic billing system to a new architecture. I owned the technical design and the schedule. The migration shipped 4 months late and caused two weeks of billing-side incidents in the first month after launch.
- Task
- I'll tell you specifically what I did wrong, why, and what I do differently now.
- Action
- What I did wrong: (1) I underestimated the migration testing surface area. I designed for the happy path of new transactions but didn't account for the long tail of edge cases in 5 years of historical billing data - things like grandfathered pricing tiers, partial refunds, and currency conversion quirks. (2) I let schedule pressure push us to skip a parallel-run validation phase. The original plan had us run the old and new systems in parallel for 4 weeks; I cut it to 1 week to keep the timeline. (3) I didn't escalate uncertainty. I knew the testing was thinner than I wanted, but I didn't surface the risk to my manager because I felt I should have a handle on it.
- Why it happened: I was treating 'completeness' as a binary - either done or not - when I should have been treating it as a probability distribution. I was also afraid that escalating uncertainty would look like I didn't have command of the project. That fear was the deeper root cause.
- What I do differently now: For any migration or large change, I (a) explicitly enumerate the migration testing surface area as part of the design doc, with edge-case scoping; (b) treat the parallel-run / shadow-mode duration as non-negotiable - if schedule pressure threatens it, that's the signal to push back, not to compromise; (c) use a simple framework I built called 'red/yellow' status updates - if any subsystem is yellow on testing or rollout confidence, my manager hears about it within 24 hours. I've now led 4 more migrations using this framework. None has shipped late or caused production incidents. I also coach junior engineers on the same patterns - the 'don't compress the parallel-run window' rule has caught problems on two of their projects.
- Result
- The original failure cost the company an estimated $200K in customer credits and lost about 3 weeks of engineering productivity to incident response. The behavioral change has paid back the cost many times over via the four subsequent successful migrations and the coaching. The deeper lesson - that I was hiding uncertainty out of self-image - is something I check for in myself regularly. It's not solved; it's something I'm conscious of.
What makes this strong: (1) Real, costly failure - $200K and 3 weeks of incidents. (2) Specific failures with specific causes, not 'I should have managed time better.' (3) Identified the deeper psychological root cause (hiding uncertainty) without performative self-criticism. (4) Concrete, sustained behavior change demonstrated across 4 subsequent projects. (5) Acknowledges the lesson is ongoing, not solved. (6) The candidate is the actor - not 'we failed,' but 'I underestimated, I cut corners, I didn't escalate.' This is what real Growth Mindset looks like.
Strong: Hard feedback received and acted on
- Situation
- About 18 months ago, my manager gave me feedback in our 1:1 that my code reviews were creating friction with the team. Specifically: I was leaving long, detailed reviews that read as condescending, even when I didn't mean them that way. Several junior engineers had started avoiding asking me for review.
- Task
- My initial reaction was defensive - I felt my reviews were technically thorough and that 'thorough' shouldn't read as condescending. I had to decide whether to push back or actually engage with the feedback.
- Action
- I asked my manager for specific examples. He pulled up two recent reviews and read me sections. I could see immediately why they read poorly - I'd written things like 'this is wrong because' and long lectures explaining well-known concepts. The intent was thoroughness; the impact was condescension. I asked him to introduce me to one of the junior engineers who'd been avoiding me. I had a 1:1 with her and apologized specifically for two reviews she could remember; I asked her how the reviews had landed and she said 'I felt stupid for not knowing.' I sat with that.
- I changed my review style over the next month. New rules I gave myself: (a) If I'm explaining a concept the author probably knows, delete it - they don't need a lecture. (b) Replace 'this is wrong' with 'I'd suggest X because Y, what do you think?' (c) Cap review comments per PR - if I'm leaving more than 6, the PR is too big or my feedback is too granular. (d) Once a month, ask one of my recent review recipients explicitly: 'How did my reviews land for you?' I did this for 6 months as a calibration loop.
- The junior engineer started asking me for review again about 3 months in. I asked her at our next 1:1 what had changed - she said 'you ask me what I think now.'
- Result
- Team-level signal: in our next quarterly survey, the 'feedback culture' score on my team improved 0.6 points (5-point scale). I was specifically called out as having improved by two of the engineers I'd previously alienated. I now use the 'how did my feedback land' calibration loop with peers and direct reports as a default practice. The deeper lesson: my self-image as 'thorough' was actively hurting the people I was trying to help. I had to update both the behavior and the underlying self-concept.
What makes this strong: (1) Specific feedback received, with the candidate's initial defensive reaction admitted. (2) Active investigation - asked for examples, talked to affected colleague, sat with the discomfort. (3) Sustained behavior change with concrete rules. (4) Calibration loop ('how did my feedback land') shows ongoing self-awareness, not one-time fix. (5) Surfaces the deeper self-concept issue, not just the surface behavior. (6) Quantitative outcome at team level. This story would land at any level interview.
Weak: 'I worked too hard'
- Situation
- Early in my career, I was working on a project and I tried to do too much.
- Task
- I needed to deliver everything I'd committed to.
- Action
- I worked extra hours, but I realized I needed to ask for help. I learned to communicate better and ask for help when I'm overwhelmed.
- Result
- I delivered the project and learned an important lesson about not taking on too much.
Why this is weak: (1) The 'failure' is a rebranded success - the candidate delivered the project. (2) The lesson ('ask for help') is generic and doesn't show real growth. (3) No specifics about what they took on, why it was too much, or what specific behavior changed. (4) No evidence the change persisted. (5) The framing is performative - 'I worked too hard' is the classic interview-coaching answer. Bar Raisers and senior interviewers see through this immediately. Real failures hurt; this one didn't.
Common pitfalls
- 'Fake failures' - rebranded successes. 'I worked too hard,' 'I cared too much,' 'I took on too much and worked nights to deliver.' Interviewers identify these immediately.
- Failures attributed to circumstances or others. 'My team didn't deliver,' 'the requirements changed' - even if true, this is below bar.
- Surface lessons without behavior change. 'I learned to communicate better' is a non-answer if you can't show what specifically changed and how.
- Over-performative self-criticism. 'I'm so flawed' reads as fishing for sympathy. Real failure stories are matter-of-fact.
- Skipping the deeper root cause. The strongest stories identify why the failure happened beyond the surface mistake.
- No quantitative cost. Real failures have costs (money, time, relationships). If you can't name a cost, it might not have been a real failure.
Follow-up strategies
Interviewers will probe. Be ready for the follow-up questions that test the depth of your story.
- If asked 'how do you know you actually changed?' - have evidence. Subsequent projects, calibration loops, feedback from peers.
- If asked 'what's the deeper lesson?' - go beyond the surface tactical lesson. The strongest answers name a self-concept or psychological pattern.
- If asked 'have you failed in the same way again?' - be honest. Some patterns recur; saying 'I'm conscious of it but it's not solved' is more credible than 'I never fail like that anymore.'
- If asked 'what's a current weakness?' - have one ready that's specific and current. 'I'm working on giving direct feedback faster' is concrete.
- If asked 'how do you handle hard feedback?' - have a process. The strongest answers describe a specific technique (e.g., 'I ask for examples and try to argue the other side').
Related behavioral themes
Dive Deep
Amazon LP. Leaders operate at all levels. The interviewer is testing whether you actually understand your own systems - or whether you summarize what your team built.
Ownership
Amazon LP. Tested at every level, scored harder at senior. Did you take responsibility for outcomes - or just for tasks?
Googleyness
Google. Not a soft round. Structured questions about collaboration, ambiguity, learning, and motivation - scored against rubrics, not vibes.
Practice these stories live
Reading STAR answers is the floor. The interview signal is in delivering them out loud, with follow-ups, under pressure. The AI mock interview probes your stories the way real interviewers do.