The Need for this Change

We bemoan the fact that our legacy systems grow ever more entrenched while our silos grow ever more numerous. Astute observers note that big data, the cloud, and Software as a Service, while making things better for individuals, very small companies, and startups, are actually making things worse for enterprises, whose information footprints become ever more fractured.

Decades of ‘best practices’ in implementing application systems, and as much time spent cost-justifying each new application, have led us to believe that what we’ve been doing adds value. It is anything but.

Many solutions have been mooted to these problems, from better project management, to application portfolio theory, to service-oriented architecture. Each, as currently being implemented, is making the situation worse.

The problem, at its core, is that we have allowed applications exclusive control over the data they manipulate. At first blush this seems necessary and desirable: the validation, integrity management, security, and even the meaning of most of the data are tied up in the application code, as is the ability to consistently traverse the complex connections between the various relational tables that we euphemistically call ‘structured data.’ But this arrangement is not merely unnecessary; it is the problem.
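
A minimal sketch may make this concrete. Everything below, the table names, column names, and status codes, is hypothetical, invented purely for illustration; the point is that the join path and the business rules exist only in the application code, so the data is nearly meaningless without it:

    import sqlite3

    # Hypothetical schema and rules: nothing in the database itself records
    # what the 'stat_cd' values mean or how the tables relate to one another.
    VALID_STATUS_CODES = {"A", "P"}  # active, pending; 'X' (cancelled) is excluded

    def open_orders_for_customer(conn: sqlite3.Connection, cust_id: int) -> list:
        # The join path (ord_hdr.ref_no -> cust_mst.id) lives only in this code,
        # as does the business rule that status 'X' marks a dead order.
        rows = conn.execute(
            """
            SELECT o.ord_no, o.stat_cd, c.cust_nm
            FROM ord_hdr AS o
            JOIN cust_mst AS c ON c.id = o.ref_no
            WHERE c.id = ? AND o.stat_cd IN ('A', 'P')
            """,
            (cust_id,),
        ).fetchall()
        return rows

Multiply that pattern across the thousands of applications in a large enterprise, and the data cannot be understood, validated, or safely shared without the applications that created it.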

The Startup’s Advantage

The zero-legacy startups have a 100:1 cost and flexibility advantage over their established rivals. Witness the speed and agility of the Pinterests, the Instagrams, the Facebooks and the Googles. What do they have that their more established competitors don’t? It’s more instructive to ask what they don’t have: They don’t have their information fractured into thousands of silos that must be continually “integrated” at great cost.

In large enterprises, simple changes that any competent developer could make in a week typically take months to implement. Often, change requests are relegated to the “shadow backlog,” where they are ignored until the requesting department does the one thing guaranteed to make the situation worse: launch another application project.

How often have we seen multi-million (even multi-hundred-million) dollar projects justified on the basis of a handful of requirements that, were it not for the urge to make wholesale change for its own sake, would be fairly simple incremental additions? We’ve seen a $50 million HR project justified on the basis of a requirement to support collective bargaining, only to see it arrive too late to meet the very requirement that justified it.

Building large systems from scratch is hard. Making small changes to large systems is frequently much harder and often entirely out of reach. Here are some examples illustrating the problem:

  1. A survey of 40,000 projects reveals that only one in three succeeds.
  2. The so-called successful projects pay off only 20% of the time.
  3. In early 2013, California cancelled its $208 million DMV Modernization project.
  4. In late 2012, the Air Force cancelled a six-year-old modernization program in which it had invested $1 billion, after realizing that it would cost another $1 billion to obtain just 25% of the originally planned capabilities.
  5. By mid-2014, Healthcare.gov had cost $800 million. At its heart it is a very simple system; a functionally equivalent system was built and released by healthsherpa.com in two person-months of effort, presumably for well under $800,000.
  6. Data integration typically consumes 35–65% of a company’s IT budget.