The Productive Role of Recurring Mistakes: Pattern Recognition in Professional Practice

In most professional contexts, mistakes are viewed as events to be avoided, corrected, or ideally prevented altogether. Yet within the realities of complex work—whether in software development, product design, or team management—mistakes often serve a more constructive purpose than we typically acknowledge.

While isolated errors may seem like noise, recurring mistakes offer valuable insight. They reveal patterns, expose flawed assumptions, and challenge mental models. Through recurrence, professionals can move beyond surface-level corrections and begin to engage in deeper, systemic improvement.


Beyond the One-Off Error

In simple systems, mistakes may have obvious causes and equally obvious fixes. But complex systems—those involving multiple actors, shifting variables, or layered abstractions—rarely provide such clarity. When a problem arises once, it may be dismissed as a fluke. When it recurs, however, it demands attention.

Recurrence is not necessarily a sign of incompetence or negligence. Instead, it may indicate that something within the system is consistently misaligned: a process misunderstood, a design misinterpreted, or an assumption invalid under real-world conditions.

These repeated breakdowns serve a diagnostic function. They point not just to what went wrong, but to why it continues to go wrong. This shift in perspective—from treating errors as isolated events to seeing them as symptoms of broader patterns—is essential for developing mature, adaptive systems.


The Legacy of Negative Framing

From an early age, many people are taught that mistakes are personal failings. In most traditional school settings, the focus is on right answers, correct methods, and avoiding error. Mistakes are highlighted, marked in red, or used as evidence of falling short. Rarely are they treated as the essential raw material for learning.

This mindset often carries into adult life. Many professionals feel shame or fear when making mistakes—even small, everyday ones—because they have internalized the idea that getting things wrong signals weakness, stupidity, or lack of preparedness. This cultural framing is not just unhelpful; it’s actively harmful.

When we treat human error as a defect of character, we create environments where people feel the need to hide their mistakes, avoid risk, or retreat from learning opportunities. In contrast, organizations that cultivate psychological safety—where mistakes can be discussed openly and constructively—consistently outperform those that don’t.

Professional growth depends on being wrong in the open, and learning why. Reframing mistakes as informative rather than shameful is not only more humane—it’s more effective.


The Cost of Dismissing Recurrence

In some environments, recurring mistakes are met with little more than impatience or punitive action. People are shuffled out, replaced, or quietly sidelined. New workers arrive, only to find themselves navigating the same landscape—and making the same missteps.

Over time, I’ve observed what can only be described as a conveyor belt of talent, where individuals are discarded not because they are incapable, but because they are placed into flawed systems that do not evolve. Instead of investigating the recurring issues, or asking what the system might be failing to support or reveal, responsibility is shifted solely onto the individual.

This cycle is counterproductive for both the worker and the organization. For the worker, an opportunity for growth and mastery is denied. For the company, institutional memory is lost, lessons go unlearned, and the same inefficiencies repeat under a different name.

When recurring mistakes are met with rotation instead of reflection, everyone loses—and the system stays broken.


The Limits of Front-Loaded Knowledge

A common belief, especially in hierarchical or rigidly structured organizations, is that mistakes should be preemptively eliminated through training, education, or process documentation. While these are undoubtedly important, they cannot account for the unpredictable nature of real-world environments.

No amount of formal education can fully prepare someone for the nuance and variability of live systems, evolving teams, or human behavior. Professionals must often construct knowledge through interaction with the system itself, encountering its boundaries, surprises, and contradictions firsthand.

Mistakes—especially those that recur—play a key role in that construction process. They surface information that was previously unknown, untested, or misunderstood. Through that surfacing, the system teaches us how it really works, as opposed to how we thought it did.


Recurrence as a Tool for Design and Process Improvement

In design disciplines, recurrence often manifests through repeated user confusion, friction, or misinterpretation. These are not just isolated lapses, but indicators that something in the system invites misunderstanding. Rather than correcting users, designers must recognize that the system is not communicating clearly.

Similarly, in development or operations, repeated failures may reflect a disconnect between the intent of the system and its actual behavior under variable conditions. Recurring faults in implementation, integration, or deployment highlight points of fragility that are not always visible during initial planning or isolated testing.

In management, repeated miscommunication, unmet expectations, or process breakdowns often trace back to assumptions that were never aligned. Addressing these issues reactively treats only the symptoms. Allowing patterns to emerge over time makes it possible to identify structural causes and introduce meaningful change.


A Case for Slower Correction

There is a tendency in fast-paced environments to prioritize immediate correction. While this can be useful for triage, it often short-circuits learning. Not all problems should be solved at first sight. In some cases, allowing a mistake to recur—safely and observably—provides the data needed to address it at the level of principle rather than procedure.

This is not about tolerating negligence or accepting harm. It is about recognizing that premature fixes may obscure patterns, and that meaningful correction often depends on the emergence of those patterns over time.


Toward Systemic Learning

This idea aligns with established models in organizational learning, such as double-loop learning, where the aim is not simply to change behavior but to challenge the underlying assumptions and rules that govern behavior.

Recurring mistakes prompt this kind of reflection. They demand not just a better answer, but a better question: What system dynamics are producing this result, and what needs to change for the result itself to disappear?

Seen this way, recurrence is not a failure of professionalism. It is a pathway to greater insight—a signal that the system is offering a lesson we have not yet learned.


Conclusion: Rethinking Professionalism

The idea that professionals should not make mistakes—or that all relevant mistakes should have been “learned from at school”—reflects a flawed and outdated understanding of how expertise develops.

In both professional and personal contexts, human error is not a weakness—it’s a window. It reveals where we are still growing, where systems are still opaque, and where opportunities for improvement remain hidden.

By embracing recurrence, acknowledging the legitimacy of everyday mistakes, and creating environments where error is met with curiosity instead of shame, we can build more adaptive teams, better systems, and more humane workplaces.

After all, not everything worth learning can be taught in advance.
Some lessons are only revealed through experience—mistake by mistake.

Learning Is the Work: Why “You’re Not Here to Learn” Is the Wrong Mindset in Game Development

In game development, I have heard a recurring line from producers and managers:
“You’re not here to learn, you’re here to make games.”

At face value, this may sound like a call for focus and productivity. But in practice, it reveals a fundamental misunderstanding of the craft. Game development is learning. To deny this is to deny the very process that keeps both people and projects alive.


The False Divide Between Learning and Making

Game development is not assembly-line work. It is a creative-technical discipline that demands constant adaptation. Every project brings new problems:

  • A new console forces a rethink of performance budgets.
  • A gameplay mechanic demands physics that feel fun, not just “real.”
  • An art direction shift requires new rendering techniques.

To pretend we can “just make games” without learning is to pretend these problems don’t exist. In reality, making and learning are inseparable. We make games by learning, and we learn by making games.


The Limits of “Knowing”

One of the most limiting mindsets is believing you “already know everything.” That belief is not intelligence — it is fragility.

  1. The world moves, but you don’t. Engines, hardware, and player expectations evolve constantly. If you stop learning, you fall behind.
  2. You shut down curiosity. Without questions, you never discover new approaches.
  3. You repeat old mistakes. Assuming you “know it all” means carrying bad patterns from project to project.
  4. You block others’ growth. A leader who believes they know it all discourages juniors from asking questions and seniors from innovating.

Knowledge is always partial:

  • From school and books: useful, but limited. No textbook prepares you for debugging a live multiplayer desync at 3 a.m.
  • From doing: powerful, but also limited. Your experience is only as wide as the projects you’ve touched.

Neither theory nor practice is the “golden egg.” Real growth comes from combining the two, and from admitting there is always more to learn.


Case Example 1: The Stagnant Studio

At one studio, curiosity was treated as waste. Developers were told to stop asking why and just ship.

  • The physics engine ran on outdated code nobody dared to touch.
  • Designers were stuck with clumsy tools that slowed them down.
  • The same bugs haunted every project because no one was allowed to rethink the system.

The outcome? Every game felt like a reskin. Morale dropped, innovation stalled, and players noticed the sameness.


Case Example 2: The Learning Culture

In another studio, learning was part of the workflow. Time was set aside for “tech spikes” and experimentation.

When an animator struggled with blending, an engineer researched state machines, prototyped a new system, and shared it across the studio. A small learning exercise became a studio-wide improvement.

The result? Games that felt fresher, stronger systems, and a team motivated by curiosity rather than drained by repetition.


Learning as a Competitive Advantage

The industry moves fast. Engines deprecate features. Hardware generations change performance budgets. Players expect new experiences.

A team that does not learn falls behind — not in a decade, but in a year. Research supports this: organizations that foster continuous learning are more innovative, adaptive, and resilient (Garvin, Edmondson, & Gino, Harvard Business Review, 2008).


The Principle

Rejecting learning is not efficiency. It is stagnation.

If you’re not thinking, you’re not evolving. If you’re not evolving, you’re not growing.

Game development demands growth because each game is different. The smartest teams are not the ones that “already know” — they are the ones willing to learn.


Closing Thought

“You’re not here to learn” belongs to a production-line mentality that does not fit creative industries. In reality, learning is the work. The best games — and the best teams — are those that never stop growing.

The Originality Trap: Why Coherence Matters More Than Novelty

When many of us first begin creating—whether in writing, coding, or game design—we are haunted by the pursuit of originality. We fear tropes, clichés, and “borrowed” mechanics, convinced that unless our work is radically new, it lacks value. This is what I call the originality trap: the belief that innovation requires absolute novelty.

Ironically, this obsession often stifles creativity rather than liberating it. Early drafts become incoherent as we avoid familiar structures. Games sprawl into messy experiments as we try to reinvent mechanics wholesale. The result is work that is different but rarely better.

The Shared Building Blocks of Story and Game Design

In literature, Joseph Campbell famously identified recurring patterns across myths in The Hero with a Thousand Faces (1949). The “monomyth” demonstrates that most stories recycle the same archetypes: the call to adventure, the mentor, the ordeal, the return. Yet Star Wars, Harry Potter, and The Odyssey remain distinct because of how these shared elements are arranged.

Games mirror this principle. Mechanics such as resource management, progression systems, and combat loops appear in countless titles. Sid Meier once defined a game as “a series of interesting decisions” (Meier, 2012). What distinguishes Civilization from StarCraft or Age of Empires is not the raw mechanics—many are functionally identical—but the way they are woven into a coherent player experience.

Features vs Narratives

This is where the difference between features and narratives emerges. Features are discrete capabilities: crafting systems, weather effects, skill trees. Narratives are the integrated whole: the way features interact, reinforce each other, and create meaning.

As in writing, where tropes become narrative arcs only when coherently arranged, so in game development features become compelling when unified into a systemic story. Originality is not the invention of new atoms, but the arrangement of existing molecules.

A Universal Analogy

Physics reinforces this lesson. Everything in the observable universe—stars, planets, humans—is built from the same fundamental particles: quarks, electrons, and the forces that bind them (Greene, 2004). The uniqueness of a star versus a human arises not from the invention of new matter, but from the particular arrangement of shared components.

So it is with creative work. Stories borrow tropes, games reuse mechanics, code repurposes structures. Their individuality emerges through coherence, not invention.

Conclusion

Originality is overrated; craft is underrated. By escaping the originality trap, we allow ourselves to focus on the true challenge: weaving familiar pieces into a unity that feels inevitable, complete, and compelling. In writing, coding, and game design alike, how we arrange the parts is what makes a work truly its own.

References

  • Campbell, J. (1949). The Hero with a Thousand Faces. Princeton University Press.
  • Greene, B. (2004). The Fabric of the Cosmos: Space, Time, and the Texture of Reality. Alfred A. Knopf.
  • Meier, S. (2012). Sid Meier’s Memoir!: A Life in Computer Games. W. W. Norton.

Artificial Stupidity in Practice: Designing Flaws in AI Logic for Tempus Inceptum

Design techniques and C# examples for intentionally flawed AI that feels human—and is often tougher to beat.

Introduction

When developers think about AI in games, the focus is often on making it smarter: efficient resource managers, tactically sharp combatants, or hyper-optimized decision systems. But there’s a paradox here—sometimes, the most effective AI isn’t the one that always makes the best choice. It’s the one that doesn’t.

The Paradox of Perfection

A flawless, rules-driven AI can become predictable. Once players understand its logic, they can “solve” it—exploiting patterns to guarantee victory. Unpredictability, by contrast, breaks the player’s script.

The Martial Arts Analogy

“I’d rather spar with someone trained than a beginner. The trained fighter is predictable. The beginner is dangerous precisely because they’re unpredictable.”

That insight maps directly to game AI. A perfectly rational opponent may be impressive, but an opponent that occasionally overcommits, hesitates, or makes a suboptimal move can throw the player off balance. Unpredictability becomes a weapon.

Artificial Stupidity ≠ Randomness

“Artificial stupidity” is not arbitrary failure. Poor randomness feels unfair or immersion-breaking. Instead, it’s the purposeful introduction of believable, context-aware flaws that mimic human imperfection.

Design Patterns

  • Probability-Weighted Choices: Favor good actions without guaranteeing them.
  • Fuzzy Evaluation & Noise: Add small perturbations to utility scores.
  • Hesitation & Delay: Build in indecision to create tension.
  • Risk Profiles: Personalities that bias toward cautious or reckless errors.
  • Contextual Mistakes: Flaws that make narrative sense, not dice rolls.

Implementation Examples (C# / Unity)

1) Probability-Weighted Decisions

Instead of picking the max utility every time, choose from a weighted distribution.

using System;                      // MathF
using System.Collections.Generic;  // IReadOnlyList, List

public static class AIDecisionHelper
{
    // Picks one option at random, with probability proportional to its (non-negative) weight.
    public static T ChooseWithWeights<T>(IReadOnlyList<(T option, float weight)> options)
    {
        float total = 0f;
        for (int i = 0; i < options.Count; i++) total += MathF.Max(0f, options[i].weight);
        float roll = UnityEngine.Random.Range(0f, total);
        float acc = 0f;

        for (int i = 0; i < options.Count; i++)
        {
            acc += MathF.Max(0f, options[i].weight);
            if (roll <= acc) return options[i].option;
        }
        return options[^1].option; // fallback
    }
}

Usage in Tempus Inceptum (resource choice):

var resourceChoice = AIDecisionHelper.ChooseWithWeights(new List<(string, float)>
{
    ("Crop",   0.70f), // best
    ("Lumber", 0.20f), // sub-optimal
    ("Fur",    0.10f)  // weak
});
// Proceed to plan around 'resourceChoice'

2) Fuzzy Evaluation with Noise

Inject small noise into utility values to simulate human inconsistency without chaos.

float EvaluateResource(string resource, float basePriority)
{
    // ±20% jitter; clamp to keep within sane bounds
    float noise = UnityEngine.Random.Range(-0.2f, 0.2f);
    float score = basePriority * (1f + noise);
    return Mathf.Clamp(score, 0f, basePriority * 1.4f);
}

3) Hesitation & Delay

Coroutines that introduce decision latency; great for build/commit timing.

IEnumerator DelayedAction(System.Action action, float maxHesitationSeconds)
{
    yield return new WaitForSeconds(UnityEngine.Random.Range(0f, maxHesitationSeconds));
    action?.Invoke();
}

// Example:
// StartCoroutine(DelayedAction(() => PlaceProductionBuilding(plan), 3.0f));

4) Risk Profiles (AI Personalities)

Profiles bias the chance and magnitude of non-optimal decisions.

public enum RiskProfile { Cautious, Balanced, Reckless }

float GetDecisionModifier(RiskProfile profile)
{
    // Returns a multiplier the AI applies to a chosen action's utility,
    // sometimes skewing toward a mistake based on personality.
    float r = UnityEngine.Random.value;

    switch (profile)
    {
        case RiskProfile.Cautious:
            // 10% chance to undervalue aggressive plays
            return (r < 0.10f) ? 0.6f : 1f;

        case RiskProfile.Balanced:
            // 20% chance to slightly misjudge either way
            return (r < 0.20f) ? (r < 0.10f ? 0.8f : 1.2f) : 1f;

        case RiskProfile.Reckless:
            // 30% chance to overvalue risky plays
            return (r < 0.30f) ? 1.5f : 1f;
    }
    return 1f;
}

5) Contextual Mistakes (Believable Flaws)

Use state and goals to justify occasional errors that still “fit” the faction’s character.

void BuildFoodOrTimber(FactionState faction)
{
    bool needsFood = faction.NeedsFoodNow();
    float mistakeChance = 0.15f; // tune per difficulty & personality

    if (needsFood && UnityEngine.Random.value < mistakeChance)
    {
        // Misjudges urgency; narrative: "short-term cash need" or "misread"
        Build("TimberYard");
    }
    else
    {
        Build("Farm");
    }
}

Why This Works in Tempus Inceptum

  • Unpredictability: Disrupts “solved” player strategies.
  • Believability: Flaws resemble human error and faction personality.
  • Challenge: Players must adapt in real time, not memorize patterns.
  • Variety: Different runs feel different, extending replayability.

Conclusion

Artificial stupidity is not the opposite of artificial intelligence; it’s a design tool that makes AI feel human and—crucially—harder to beat. By layering weighted choices, fuzzy utilities, timed hesitation, risk profiles, and contextual mistakes, Tempus Inceptum creates opponents that are intelligent, fallible, and formidable.
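
To make that layering concrete, here is a minimal sketch of how the pieces above might combine into a single build decision. It assumes a MonoBehaviour context, reuses the helpers from the earlier examples, and invents ChooseNextBuild, “TrapperHut”, and the specific priorities purely for illustration.

// Illustrative only: one decision step layering fuzzy utilities,
// personality bias, a weighted pick, and hesitation.
void ChooseNextBuild(RiskProfile profile)
{
    // Fuzzy utility per option, skewed by the faction's personality
    float cropScore   = EvaluateResource("Crop",   1.0f) * GetDecisionModifier(profile);
    float lumberScore = EvaluateResource("Lumber", 0.6f) * GetDecisionModifier(profile);
    float furScore    = EvaluateResource("Fur",    0.3f) * GetDecisionModifier(profile);

    // Weighted pick instead of a hard argmax: the best option is favored, not guaranteed
    string choice = AIDecisionHelper.ChooseWithWeights(new List<(string, float)>
    {
        ("Farm",       cropScore),
        ("TimberYard", lumberScore),
        ("TrapperHut", furScore)
    });

    // Hesitate briefly before committing, so the timing also reads as human
    StartCoroutine(DelayedAction(() => Build(choice), 2.0f));
}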

Perfect AI can be mastered. Flawed AI keeps players on edge.

The Fractal Trap: A Visual Model for Premature Complexity in Software Architecture

“Duplication is far cheaper than the wrong abstraction.”
— Sandi Metz, “The Wrong Abstraction” (2016)

In the world of software design, there is a constant tension between planning for the future and addressing the present. Most developers have experienced the creeping urge to abstract, modularize, and generalize early—before actual needs have emerged. This often comes from a well-meaning desire to future-proof a system, to make it extensible and professional.

But premature architecture rarely does what we hope it will. It often adds complexity before the design has had time to stabilize, before the system has revealed its true shape. The result is a fragile lattice of assumptions—hard to modify, harder to understand.

This post offers a metaphor that helped me understand and communicate this challenge: a fractal. And although I first used it as a junior developer—and was largely dismissed for doing so—I’ve come to believe it’s a valuable mental model for anyone working with evolving systems.

Fractals as a Metaphor for Architectural Expansion

Fractals are recursive geometric patterns that generate complexity by repeating a simple rule at increasing levels of detail. A classic example is the Koch snowflake, which begins with an equilateral triangle. With each iteration, the middle third of every existing edge is replaced by a smaller, outward-pointing triangle.

  • Iteration 0: Simple triangle.
  • Iteration 1: Triangle with bumps.
  • Iteration 2+: Increasingly complex, spiky shape.

At first glance, this seems like a beautiful and scalable process—complexity emerging from simplicity. But here’s the key insight: as the complexity grows, the foundational triangle becomes increasingly difficult to change.

If you later decide that the base triangle should be a square—or even slightly stretched—every recursive addition depends on the original angles and lengths. Changing it breaks the whole system. You’ve painted yourself into a geometric corner.

Figure 1: The Koch snowflake begins with a triangle and recursively adds complexity—mirroring how over-architecture can evolve from simple systems.

Applying the Fractal Analogy to Code

In software, a similar phenomenon occurs when we apply layers of abstraction and modularity before they’re needed. For example:

  • Creating interfaces for every class, even when there’s only one implementation.
  • Building dependency injection frameworks for systems without actual dependencies yet.
  • Designing for plugin support or extensibility without any clear use case.

These are architectural equivalents of adding detail to the triangle’s edges long before we know whether the triangle itself is shaped correctly.

And just like in a fractal, each abstraction layer or interface becomes a “recursive bump”—increasing surface area, but also increasing fragility. By the time you realize that the core concept was slightly off, you’re surrounded by an ornate, interconnected structure that depends on the original being right.
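
To make this concrete, here is a small, deliberately exaggerated C# sketch of the premature version; the names and the scoring rule are hypothetical, invented purely for illustration.

// Hypothetical example of premature abstraction: there is exactly one
// implementation, yet an interface and a factory already wrap it.
public interface IScoreCalculator
{
    int Calculate(int kills, int secondsAlive);
}

public sealed class DefaultScoreCalculator : IScoreCalculator
{
    public int Calculate(int kills, int secondsAlive) => kills * 100 + secondsAlive;
}

public static class ScoreCalculatorFactory
{
    // Always returns the same concrete type; the indirection buys nothing yet,
    // but every caller now depends on the extra layer.
    public static IScoreCalculator Create() => new DefaultScoreCalculator();
}

The deferred alternative is simply the concrete class on its own, which can still earn its interface later if a second calculator ever appears.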


The Allure (and Danger) of Early Generalization

Premature abstraction often feels productive. It looks sophisticated. It demonstrates technical vocabulary and forethought. But as Kent Beck famously said:

“Make it work, make it right, make it fast.”
— Kent Beck, Extreme Programming Explained

In other words, get the triangle right first.

Martin Fowler echoes this in Refactoring:

“If you only need one implementation, don’t create an interface.”

Interfaces are powerful tools. They enable polymorphism, inversion of control, and testability. But when applied prematurely, they also create unnecessary indirection, slow comprehension, and foster rigidity. They are not inherently good or bad—they are context-dependent.

The same applies to architectural patterns. A pattern is a proven solution to a recurring problem. But if the problem hasn’t recurred—or hasn’t even appeared—then the pattern is not yet justified.


Simplicity as Strategic Patience

In systems thinking, simplicity is not a lack of intelligence—it’s a form of restraint. Choosing to build only what is needed is a discipline that comes with experience, not inexperience.

This is why many experienced engineers practice evolutionary design: allowing the system to grow organically, refactoring as real-world demands emerge. You refactor when patterns repeat. You introduce interfaces when variability appears. You abstract when extension becomes inevitable.

This is not laziness or short-sightedness. It’s intentional deferral of complexity, in the name of long-term flexibility.

Rich Hickey, the creator of Clojure, gave a talk titled Simple Made Easy where he distinguishes between simplicity (lack of interleaving, independence) and ease (how close something is to your current knowledge). Simplicity is a design goal. And it often means resisting the urge to add layers “just in case.”
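
As a rough sketch of what that deferral can look like in C# (the types and the cloud requirement are hypothetical), you start concrete and extract the interface only once a second, genuinely different implementation forces the issue:

// Step 1: start concrete; this is all the system needs today.
public sealed class SaveSystem
{
    public void Save(string slot, string json) =>
        System.IO.File.WriteAllText(slot + ".json", json);
}

// Step 2 (later): cloud saves become a real requirement, so the interface
// is extracted from observed variability rather than speculation.
public interface ISaveSystem
{
    void Save(string slot, string json);
}

public sealed class LocalSaveSystem : ISaveSystem
{
    public void Save(string slot, string json) =>
        System.IO.File.WriteAllText(slot + ".json", json);
}

public sealed class CloudSaveSystem : ISaveSystem
{
    public void Save(string slot, string json)
    {
        // Placeholder: a real implementation would call a cloud storage API.
        System.Console.WriteLine($"Uploading save '{slot}' ({json.Length} bytes)");
    }
}

The abstraction ends up in the same place, but only after the system itself has shown that two shapes genuinely exist.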


Complexity is Not Free

In architectural terms, every layer—every triangle added to the edge—has a cost:

  • Cognitive cost: more things to understand, even if unused.
  • Maintenance cost: abstractions must be preserved, even when they’re not helping.
  • Inflexibility cost: deeply nested abstractions are harder to refactor.
  • Testing cost: interfaces demand mocks, which may not add value at early stages.

The assumption is that this upfront cost will pay off later. Sometimes it does. But very often, the future arrives differently than expected, and the extra scaffolding becomes dead weight.


What This Means in Practice

This metaphor isn’t a hard rule—it’s a mental model. Not all systems behave fractally. Sometimes early abstraction is necessary: in large distributed systems, shared libraries, or platforms with multiple consuming teams.

But for the vast majority of early-stage codebases or features:

  • Favor concrete over abstract.
  • Build only what is needed today, and design in a way that welcomes change.
  • Let complexity emerge from real use cases, not imagined ones.

If you find yourself designing five layers deep for a need that might never appear, consider starting with a single triangle.


A Closing Thought

When I first shared this idea as a junior developer, it was dismissed as oversimplified, even patronizing. “Just use an interface” was the advice I got in return.

But over time, I’ve learned that many of the best developers do not build more than is necessary. They listen to the codebase. They treat abstraction as a tool, not a reflex. They respect simplicity—not because it’s easy, but because it’s fragile, valuable, and hard-won.

So the next time you feel the urge to design for everything up front, ask yourself:

“Am I reshaping the triangle, or am I adding triangles to its edges?”

And perhaps… wait until the system tells you it’s time.


Further Reading

  • Kent Beck, Extreme Programming Explained
  • Sandi Metz, Practical Object-Oriented Design in Ruby
  • Martin Fowler, Refactoring
  • Rich Hickey, Simple Made Easy (Talk, 2011)
