Complexity gets most of the attention when attempting to explain or prevent the failure of big projects—including IT projects. This makes sense: It’s impossible for any one person to grasp all the knock-on effects any big project will have. Preparation and planning are key, but some factors will always be missed.
Poor project management compounds the problem: the team fails to tackle these factors proactively. That complexity makes projects difficult is common knowledge. What may not be as well known is how cognitive bias can trap IT leaders and project teams in counterproductive ways of thinking.
Humans handle complexity with mental models. We have brain maps of our SharePoint or Salesforce instances and conceptions of what our most important data sets look like. The statistician George Box famously said, “All models are wrong, but some are useful.” However, the model that was most useful for the old implementation may not be the best one for the new.
Notions of how things “ought to work” or “have always worked” can blind project teams to a better way and can even lead to the inadvertent accumulation of new technical debt when a clean slate was possible. Such unmotivated bias can happen at the requirements level, as when users attempt to recreate legacy features in a new system rather than adopting the newer best practices that come out of the box. It can also happen at the process level, as when an insurer new to the Agile methodology has a leadership team that still expects a finalized list of requirements before a project begins.
Different mental models held by the project team, the vendor, and the systems integrator, or even differences across or within teams inside an organization, can also leave those working on a project with a sense of “talking past each other.” This miscommunication has obvious detrimental effects on implementation. These incompatible and unconscious ways of seeing things are an example of unmotivated bias.
On top of these biases, internal and external pressures introduce what is known as “motivated bias.” The deterrence theorist Ned Lebow argued, “When [decision] makers become convinced of the necessity to achieve specific [objectives], they become predisposed to see those objectives as obtainable.” When a statement like “failure is not an option” reflects actual pressures rather than platitudes, motivated bias is likely to occur.
When motivated biases are present, objectives may end up determining “reality” rather than the other way around. Choices are rationalized rather than reasoned. Imagine a situation in which a team is told a core system replacement must be completed in 18 months; compensation and even careers may depend on it. The team assesses the types of policies on the old system and the effort required for conversion.
Any reasonable interpretation of the result would suggest that the 18-month timeline is all but impossible. Because delaying the timeline is off the table, the team is highly motivated to focus on the “all but” rather than “impossible” in “all but impossible.” They thus proceed as quickly as possible, foreclosing further discussion in the interests of efficiency. Important pieces of the puzzle are missed, and the implementation ultimately fails.
Much bias can be mitigated through thorough planning and the inclusion of diverse voices and viewpoints (including those of outsiders) to ensure all avenues are explored and groupthink is avoided. Honest initial assessments can help teams deal with complexity and mitigate unmotivated biases.
At the same time, only courageous realism from the top can avoid motivated bias. When the achievement of “Plan A” is deemed likely to be “extremely challenging,” leaders would do well to consider whether a more achievable “Plan B” might align with the company’s overarching goals (e.g., the avoidance of catastrophe) even if it fails to meet more immediate goals.
Leaders must be careful to make the right comparisons when weighing options: “cost” and “benefit” are meaningless unless weighed against risks as well. Plan B might bring far fewer benefits and perhaps higher long-term costs than Plan A—but only if Plan A is successful. A successful Plan B might be considerably more valuable than a failed Plan A.
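This comparison can be made concrete with a simple risk-adjusted calculation. The sketch below uses entirely hypothetical figures and probabilities (none come from the text) to show how a modest plan with a high chance of success can carry a higher expected value than an ambitious plan that is likely to fail:

```python
# Hypothetical expected-value comparison of two implementation plans.
# All figures are illustrative assumptions, not data from any real project.

def expected_value(benefit, cost, p_success, failure_cost):
    """Expected net value of a plan: with probability p_success the benefit
    is realized; otherwise the spend is lost and a failure cost (e.g.,
    remediation, delay, reputational damage) is absorbed on top of it."""
    return p_success * (benefit - cost) - (1 - p_success) * (cost + failure_cost)

# Plan A: large payoff, but success is doubtful under the fixed timeline.
plan_a = expected_value(benefit=10.0, cost=4.0, p_success=0.3, failure_cost=5.0)

# Plan B: smaller payoff and narrower scope, but far more likely to succeed.
plan_b = expected_value(benefit=6.0, cost=3.0, p_success=0.9, failure_cost=2.0)

print(f"Plan A expected value: {plan_a:.2f}")  # negative under these assumptions
print(f"Plan B expected value: {plan_b:.2f}")  # positive under these assumptions
```

Under these assumed numbers, Plan A's expected value is negative despite its larger headline benefit, while Plan B's is positive; the ranking flips only if Plan A's probability of success is much higher than the team's honest assessment suggests.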
For concrete advice on successful policy administration system implementations, see our report PAS Lessons Learned: Implementation Best Practices for P/C Insurers.