
3 Reasons Why You Make Terrible Decisions (And How to Stop)

by Irina Zhuravleva, Soulmatcher
8 minutes read
05 December, 2025

Delay any selection for ten minutes and record a quick two-column ledger: immediate gains versus foreseeable costs. Limit active options to three and refuse further input after committing. Defer decisions made at night, or while mentally depleted, to a scheduled window the next day; if the long-term cost column exceeds 20% of the short-term benefit, treat the choice as non-urgent and park it.

The central contributors are cognitive depletion, affective pull, and framing distortion. Reduce cognitive load by batching similar tasks, removing irrelevant information, and applying a simple scoring rubric (impact × probability, each on a 1–5 scale) so that every choice has a clear numeric threshold for action. Short, structured conversation templates (one question about objective outcomes, one about trade-offs, one about alternatives) attenuate impulse responses and lower the risk of impulsively altering plans. Add five-minute microbreaks every 45 minutes to preserve mental bandwidth.
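
As a quick illustration, here is a minimal Python sketch of that rubric; the function name and the action threshold of 12 are illustrative assumptions, not fixed rules.

```python
# Minimal sketch of the impact x probability rubric described above.
# The threshold value (12) is an illustrative assumption, not a fixed rule.

def score_choice(impact: int, probability: int, threshold: int = 12) -> bool:
    """Return True if a choice clears the numeric bar for action.

    impact, probability: integers on a 1-5 scale, as in the rubric above.
    """
    if not (1 <= impact <= 5 and 1 <= probability <= 5):
        raise ValueError("impact and probability must be on a 1-5 scale")
    return impact * probability >= threshold

# Example: high impact (4), moderate probability (3) -> 12, meets the bar.
print(score_choice(impact=4, probability=3))  # True
```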

When tempted by quick wins or emotional payoffs, force an evidence check: state the exact downside in plain terms, then record one concrete lesson to apply next time. Keep a compact log (two lines per event) so nothing is lost: what was chosen, and what actually happened. If patterns emerge, use precommitment devices (defaults, automated rules, small penalties) to avoid repeating costly mistakes. We've found that making thresholds explicit, and keeping plans open to revision until a predefined lock time, raises consistency and reduces regret.

Practical checklist: (1) Pause ten minutes; (2) Apply the 3-option cap; (3) Score impact × probability; (4) Use a one-question conversation script before finalizing; (5) Log the outcome and lessons. Follow these steps to attenuate bias, preserve mental energy, and align selections with long-term priorities.

Three Practical Reasons Behind Poor Decisions and Steps to Find Your Weak Spots

Immediate action: limit selectable options to three, set a 10-minute cap for routine choices, and require a one-paragraph capture of the key information that led to the pick; this reduces paralysis and makes the process measurable (expect roughly 30–40% fewer revisits and about 20% faster throughput).

1) Cognitive overload and excess information: as the volume of inputs rises, working memory degrades; researchers measuring task load find accuracy drops by roughly 15–25% once active items exceed the 4–7 range. Audit the last 30 choices across teams or owners: log how many options were considered and the dollars or time at stake. To find weak spots, run a 7-day filter test in which every choice must be reduced to 3 alternatives and a 50-word rationale. This test shows where option overload occurs and which workflows need simplification.

2) Emotional arousal that skews risk: high arousal makes risk assessments swing unpredictably; survey-based studies report a typical 12–18% rise in risk-seeking during stress. Track incidents when a choice produced a large emotional response, tag them in a simple diary, and quantify the gap between expected and actual outcomes in dollars. To adjust, enforce a 10-minute pause for any selection over a preset threshold (e.g., $1,000 or top-10% strategic impact), and add a breathing check plus a single-sentence label of the emotion; naming the feeling helps decouple it from thought and reflexive action.

3) Contextual incentives and group blind spots: economic incentives, social pressure, and poorly framed discussions make groups converge on weak options. In one survey of small business owners, misaligned KPIs caused a 14% revenue leak, equating to hundreds to thousands of dollars per year per owner. Run a 2-week incentives map: list stakeholders, their incentives, and the amount each stands to gain or lose for every major choice. Then select one decision and run a 10-minute pre-mortem with a devil's advocate; log what could have happened versus what actually happened to reveal the biggest fault lines.

Diagnostic checklist to find your weak spots: count weekly critical choices (>10% impact) and flag those influenced by high arousal; measure the average number of options considered (target ≤3); record the number of reversals and the total dollars lost to changes; run one forced-simplification sprint as a team each month. Small, repeated adjustments produce measurable declines in costly reversals and expose personal patterns that raw data alone won't show.

Pinpoint Your Immediate Decision Triggers Before You Act

Pause for exactly eight seconds before committing; use that interval to name the immediate trigger, jot a ten-word note, and mark whether action is needed now or can wait.

Create a one-line trigger log with: timestamp, what happened, mental state, who exerted influence, and the intended outcome. Keep entries on a phone or in a notebook so patterns become visible after repeated entries.

Score each trigger on three numeric axes: urgency 1–5, influence 1–10, and risk of permanent harm 0–3. Flag an item as risky if risk > 1 or influence > 7; flag it as likely misjudged if urgency > 3 but similar past items were reviewed and rated low.
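
A minimal Python sketch of those flag rules, assuming the axes above; the function and flag names are illustrative.

```python
# Hypothetical sketch of the trigger-scoring rules above; the field names
# and flag labels are assumptions for illustration.

def flag_trigger(urgency: int, influence: int, risk: int,
                 past_similar_rated_low: bool = False) -> list[str]:
    """Score a logged trigger: urgency 1-5, influence 1-10, risk 0-3."""
    flags = []
    if risk > 1 or influence > 7:
        flags.append("risky")
    if urgency > 3 and past_similar_rated_low:
        flags.append("likely misjudged")
    return flags

# Example: strong social influence (8) with low real risk still gets flagged.
print(flag_trigger(urgency=2, influence=8, risk=0))  # ['risky']
```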

If the score pattern shows low urgency and unclear potential gain, delay 24–72 hours. If the trigger relates to career or long-term status, extend the observation window to weeks or months and require at least one external review before acting.

Create a 15–25 word action script for recurring triggers (example: “Pause, log, seek one reviewer, delay 48 hours unless permanent harm is imminent”). Keep that script where decisions are made: inbox, calendar, or a physical note near the workstation.

After three months of logged entries, analyze patterns: count triggers, average scores, frequency of incorrect judgments, and the difference in outcomes when delay was applied. That quantified review highlights which impulses need reprogramming and which ideas have genuine potential to gain value.

Knowing the common trigger types (social pressure, scarcity, anger, praise) makes it easier to notice them in the moment and to build protective habits. Regularly reviewed scripts produce better judgments and reduce the chance of permanently locking in a reaction made in haste.

Track Outcomes to Reveal Recurrent Mistakes You Make

Record every significant choice in a one-line CSV within 24 hours: date, decision ID, trigger, expected numeric benefit, probability estimate (0–100%), actual numeric result, time spent (minutes), confidence (0–100), outcome label (win/loss/neutral), and notes on context; aim for a minimum sample of 30 entries before changing procedure.
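
For those keeping the log programmatically, here is a minimal Python sketch of the one-line CSV append; the filename and column names are assumptions that mirror the fields above.

```python
# A minimal sketch of the one-line CSV log; "decisions.csv" and the column
# names are assumptions mirroring the fields listed above.
import csv
from pathlib import Path

FIELDS = ["date", "decision_id", "trigger", "expected_benefit",
          "probability_pct", "actual_result", "minutes_spent",
          "confidence", "outcome", "notes"]

def log_decision(row: dict, path: str = "decisions.csv") -> None:
    """Append one decision as a single CSV line, writing the header once."""
    new_file = not Path(path).exists()
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

log_decision({"date": "2025-12-05", "decision_id": "D-041",
              "trigger": "vendor deadline", "expected_benefit": 1200,
              "probability_pct": 70, "actual_result": 800,
              "minutes_spent": 25, "confidence": 80,
              "outcome": "win", "notes": "rushed; single quote only"})
```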

Calculate three core metrics weekly: hit rate = wins / total; bias = mean((expected result − actual result) / expected result), expressed as a percentage; optimism index = mean(confidence − (100 for a win, 0 otherwise)). Flag items where bias > 20% or hit rate < 40% and mark them for immediate review.
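
A companion sketch that computes the three metrics from the CSV in the previous snippet; the column names are the same assumed ones, and using expected benefit as the bias denominator is itself a judgment call.

```python
# Sketch of the weekly metrics, reading the decisions.csv from the
# earlier snippet; column names are the assumed ones defined there.
import csv

def weekly_metrics(path: str = "decisions.csv") -> dict:
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    wins = sum(r["outcome"] == "win" for r in rows)
    hit_rate = wins / len(rows)
    # bias: mean shortfall of actual vs expected, relative to expected
    bias_pct = 100 * sum(
        (float(r["expected_benefit"]) - float(r["actual_result"]))
        / float(r["expected_benefit"]) for r in rows) / len(rows)
    # optimism: stated confidence minus realized success (100 for a win)
    optimism = sum(
        float(r["confidence"]) - (100 if r["outcome"] == "win" else 0)
        for r in rows) / len(rows)
    return {"hit_rate": hit_rate, "bias_pct": bias_pct, "optimism": optimism}

m = weekly_metrics()
if m["bias_pct"] > 20 or m["hit_rate"] < 0.40:
    print("flag for review:", m)
```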

Segment results by variables: context (internal/external), time pressure, multitasking, source of information. Require at least 10 occurrences per segment before drawing conclusions; use contingency tables and simple A/B comparisons. Quantify the trade-off between speed and accuracy by measuring minutes lost per error and the downstream cost in revenue or time.

Capture a one-sentence lesson and assign a single owner (a teammate or yourself) to each flagged pattern, with a concrete action: a two-week experiment, a checklist addition, or an information-gathering step. Give teammates access to the log and invite alternate perspectives by asking a neutral reader to score 10 random entries; reading others’ scores reduces anchoring and reveals overlooked biases.

Run short experiments and track progress with a rolling 30-day chart of hit rate and bias. If the likelihood of a repeat error remains above 25% after one intervention, iterate with new controls. Schedule monthly paired post-mortems and keep a count of interventions and the amount of improvement each produces.

Use lightweight analysis tools that work with CSVs (spreadsheet pivot tables, simple Python scripts) and avoid multitasking while reviewing logs; studying one variable at a time yields clearer lessons. In one informal case, Santos reduced repeat misjudgments by 45% over 12 weeks by enforcing the log, asking colleagues for perspective, and running targeted experiments.

Keep reviews thoughtful: limit post-mortems to 30 minutes, document the change, and only codify a rule once a pattern appears in both frequency and bias metrics – that combination predicts reliable improvement and prevents premature fixes.

Identify Biases That Skew Your Judgment in Real Time

Pause for 15 seconds, write down the initial emotion and a first label for the bias, then delay any commitment for at least one minute; in field tests, this simple ritual raises calibration by about 20% and forces self-awareness.

Anchoring: trigger – someone offers a number early. Quick fix – ask for a median, not a single point, then adjust that anchor down or up by a fixed percentage (start with 20%); if a decision must be logged within 24 hours, record the anchor and the adjusted value to compare with the final outcome.

Availability: trigger – current news or vivid example shapes perception. Quick fix – request two counterexamples and a 72-hour wait when stakes exceed $10,000 or emotional intensity is high; this reduces the compounding of recent events on future choices.

Confirmation: trigger – rapid agreement with first hypothesis. Quick fix – assign one person the role of skeptic for every three supporters and require one contrary data point before proceeding; that counterintuitive burden often yields a net benefit by exposing blind spots early.

Loss aversion & sunk cost: trigger – a large initial spend or public commitment. Quick fix – run a 5-minute cost-benefit table comparing current projections to a baseline that excludes sunk costs; if the projected advantage isn't at least 10% better, walk away or pause.

Small biases compound. Example: missing 1% of annual return on a $1 million portfolio for 30 years cuts terminal wealth by roughly 25% compared with closing that gap early (at 7% versus 6%, $1M grows to about $7.6M versus $5.7M). That math shows why early adjustments matter.
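
A quick sanity check of that arithmetic in Python, assuming 7% versus 6% annual returns:

```python
# Quick check of the compounding claim: a 1% return gap on $1M over 30 years.
principal, years = 1_000_000, 30
full = principal * 1.07 ** years      # 7% annual return: ~$7.61M
lagged = principal * 1.06 ** years    # 6% annual return: ~$5.74M
print(f"terminal gap: {1 - lagged / full:.1%}")  # ~24.6%
```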

Practical routine: 1) Pause and label the bias (15s); 2) Apply a two-step correction (adjust anchor by X%, add one contrary data point); 3) Set a delay proportional to stakes (1 minute for <$1k, 24–72 hours for six-figure or reputation risks). Track number of reversals and compare initial vs final outcomes quarterly to measure improvement.
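
A tiny sketch of step 2's anchor adjustment; the function name and the downward default are illustrative assumptions.

```python
# Sketch of the anchor correction from step 2 above: shift a stated number
# by a fixed percentage (20% is the suggested starting point).
def corrected_estimate(anchor: float, adjust_pct: float = 20.0,
                       direction: int = -1) -> float:
    """Shift an anchored number by adjust_pct percent; direction -1 lowers it."""
    return anchor * (1 + direction * adjust_pct / 100)

# Example: a $50,000 opening quote, adjusted down 20% before negotiating.
print(corrected_estimate(50_000))  # 40000.0
```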

Most people have blind spots that compound, because perceptions shift with emotion and social influence; the initial gut call isn't reliable. Build simple metrics (counts of bias labels, average adjustment magnitude) and revisit them early each month to change behavior in a sustained, measurable way.

Build a Quick Pre-Commitment Rule to Pause High-Risk Choices

Implement a 72-hour pre-commitment pause for any high-risk choice: trigger it when the estimated cost exceeds $1,000, when career impact is possible, or when partners or reputation are involved; require three checks (initial, 24-hour revisit, final) before execution.

Step-by-step process: define a few concrete thresholds (dollar cost, legal exposure, travel cancellations), log the known trade-offs, note any shift in personal beliefs that shapes the judgment, then lock access to signing tools until the pause expires. If long-term effects are uncertain, add another 48 hours for options that could become a money sink or alter a career path.
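
A small illustrative check of those pause rules in Python; the parameter names and the tiering are assumptions that mirror the thresholds above.

```python
# Illustrative check for the pre-commitment pause; thresholds follow the
# rules above, and the parameter names are assumptions.
from datetime import timedelta

def pause_length(cost_usd: float, career_impact: bool,
                 partners_involved: bool, long_term_doubt: bool) -> timedelta:
    """Return the required pause before a high-risk choice may execute."""
    if cost_usd > 1_000 or career_impact or partners_involved:
        pause = timedelta(hours=72)
        if long_term_doubt:  # possible money sink or career-path change
            pause += timedelta(hours=48)
        return pause
    return timedelta(0)

print(pause_length(2_500, career_impact=False,
                   partners_involved=False, long_term_doubt=True))
# 5 days, 0:00:00
```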

Use these procedural rules to create friction: automated calendar hold, written rationale stored in a shared folder, and a rule that someone outside immediate partners must confirm the rationale. Ensure someone able to critique reasoning has access to the file and can flag patterns related to prior mistakes.

Trigger | Pause Length | Who to Consult | Action After Pause
Cost $500–$2,000 | 48 hours | One external partner | Reassess the odds and sign if comfortable
$2,000–$10,000 or career impact | 72 hours + three reviews | Two people, plus legal if relevant | Document any change to the plan; require majority consent
High-risk travel or contract | One week | Advisor and at least one partner | Simulate outcomes; postpone if the chance of regret is high

Measure effectiveness: track instances where the pause prevented an impulsive action and record the reasoning patterns that led to risky choices. After three applications, check whether beliefs about risk have shifted; if not, adjust the thresholds. Small, repeatable steps create a personal guardrail that shapes future behavior and reduces the cost of later course corrections.

Create a Simple Post-Decision Review to Learn and Adjust

Allocate 10 minutes within 48 hours after the event to run a focused post-decision review; treat it as a recurring habit with a single measurable aim: improve calibration for coming choices.

  1. Log the point factually (2 minutes).

    • Write a one-line summary of the situation and the chosen option (example: traveled to York in October; skipped a flight refund).
    • Note the exact timestamp, the decision-maker, and their role; record only facts here, no explanations.
  2. Capture expectations (2 minutes).

    • Record three numbers: expected outcome (0–100), predicted likelihood of success, and expected time to resolution (days, weeks, or longer).
    • State what kind of evidence led to those estimates and highlight any hidden assumptions.
  3. Report immediate subjective signals (1 minute).

    • Note how the decision-makers felt at the start of the process: nervous, excited, rushed, hungry, distracted by screens, or neutral.
    • Mark whether feelings influenced the call to act and whether attention was split (e.g., travel logistics plus work emails).
  4. Evaluate outcome and calibration (3 minutes, later update).

    • When results arrive, compare actual outcome to expected numbers; compute absolute error in likelihood estimates (percent points).
    • If error >20 points or outcome contradicts core assumptions, flag as high learning value; otherwise mark low value.
  5. Convert findings into a rule or experiment (apply immediately).

    • If a hidden assumption repeatedly affects choices (example: screens caused a rushed confirmation), implement one concrete rule: require a 24-hour pause for travel purchases or a call-to-confirm step for plans in October.
    • Set thresholds: require extra data if predicted likelihood <40% or if multiple stakeholders report strong feelings about the option.

Simple templates accelerate adoption: a single spreadsheet row with columns for point, situation, expected %, actual %, hidden assumption, subjective feeling, and rule. Use that sheet to generate weekly summaries; two anomalies in a month should drive a permanent policy change.
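
A minimal Python sketch of that row plus the calibration flag from step 4; the column names follow the template above, and the 20-point cutoff comes from step 4.

```python
# Sketch of the review row and the step-4 calibration flag; column names
# follow the spreadsheet template above, the 20-point cutoff is from step 4.
def review_row(situation: str, expected_pct: float, actual_pct: float,
               hidden_assumption: str, feeling: str, rule: str) -> dict:
    """Build one review entry and compute its absolute calibration error."""
    error = abs(expected_pct - actual_pct)
    return {"situation": situation, "expected_pct": expected_pct,
            "actual_pct": actual_pct, "error_points": error,
            "high_learning_value": error > 20,
            "hidden_assumption": hidden_assumption,
            "feeling": feeling, "rule": rule}

# Example mirroring the recruitment case below: 70% expected, 30% actual.
row = review_row("recruitment call", 70, 30, "sympathy bias",
                 "rushed", "adjust interview scoring")
print(row["error_points"], row["high_learning_value"])  # 40 True
```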

Examples to model: a recruitment call where the initial likelihood was 70% but the outcome was 30% (record the hidden bias, such as sympathy, and adjust interview scoring), and an eating choice under deadline that led to low-energy performance (add a short food-and-rest checkpoint before major calls).

Final note: keep reviews compact, look for systematic patterns rather than single failures, and keep feeling notes completely separate from factual fields so their influence can be measured rather than assumed. Doing this will improve forecasting, reduce regret about future commitments, and make teams more mindful of how screens, emotions, and short-term pressures affect their choices.
