SOA in the Teen Years
Service-oriented architecture (SOA) is now a teen. In the early 2000s, SOA was just becoming the leading-edge discussion point for architects. Now SOA has passed its awkward, gangly stage and is fully into its formative teen years, maturing nicely. And, just like any teen, SOA is developing its own language and approach. As I speak to people, some are confused by the new terms.
The notion behind SOA was to break big, monolithic applications into smaller chunks so that portions could be reused and updating an application would be less painful. The services would be hosted in data centers and, as part of the design process for a new application, you would find which existing services you could reuse. This would cut development time, lower time-to-value for the business, and leave less code to maintain. With less code to maintain, the business should be able to spend more money on new capability than on maintaining existing capability. How has all of this manifested in the IT environment we operate in 2016, and what other changes have occurred?
Service orientation is firmly in place. Organizations are chipping away at existing monolithic applications and environments, updating them to reflect SOA principles. As new IT capabilities are created, they embrace the SOA model and reside in virtual environments. Virtual machines (VMs) and virtual operating environments are the infrastructure and networking instantiation of SOA. With VMs, you spin up a virtual environment to support a new service or capability; as adoption of the service warrants, the VM resources can expand to support the additional need. Prior to VMs, if you were starting a new project you would need to scope the hardware, order it, and get it installed in time to support releasing the software. We have moved from VM-only environments to hybrid environments that also include cloud-based capability, or that are operated completely in the cloud. As understanding of (and faith in) cloud-based service providers grows, businesses are moving all of their applications to the cloud and closing data centers. The IT operations team maintains service catalogs that define the internal and cloud-based services available for use. Microservices are the latest evolution of SOA: each microservice is self-contained, including its own infrastructure, platform, and service. As for applications, most now consider an application to be an API that can be accessed.
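To make the "application as an accessible API" idea concrete, here is a minimal sketch of a self-contained microservice using only the Python standard library. The service name, endpoint, and data are hypothetical; a production service would add its own datastore, configuration, and deployment packaging.

```python
# Hypothetical sketch: a self-contained "stock" microservice whose entire
# application surface is one HTTP API endpoint.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STOCK = {"widget": 42}  # stand-in for the service's own datastore


class StockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "application" is nothing more than this accessible API.
        if self.path == "/stock":
            body = json.dumps(STOCK).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet


def serve(port=8000):
    """Run the service; other applications consume it only via its API."""
    HTTPServer(("127.0.0.1", port), StockHandler).serve_forever()
```

Because the service owns its state and exposes only an API, consumers never depend on its internals, which is what allows it to be replaced or scaled independently.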
Silos are coming down. We have shifted from an environment with silos based on role and position in the Plan/Build/Run paradigm to one where teams fund, build, deliver, and maintain the solution or service they create. The term that has emerged for this is DevOps, where the development and operations roles are combined. There is still an IT operations team, and developers may sit in the business unit or on a centralized team, but the idea is that whoever builds it fixes it. For the most part, testing is automated, and both testing and security are the responsibility of the development team. For security, there is still a security team or security center of excellence, but it provides the guidance and process that development teams need to follow before releasing their applications. This is giving rise to the term DevSecOps: different from DevOps, but the same idea. As for architecture and architects, both are distributed. While infrastructure, software, information, and business architecture have long been either centralized or distributed across the organization, the enterprise architecture function is now distributed as well. Waterfall approaches to design and architecture are a thing of the past, since we can be wrong with our architecture and course correct as we gain more insight. As a result, architecture is quickly becoming an agile process.
We have shifted our IT costs from 75% of the budget being used to maintain existing IT capability (infrastructure, power, cooling, licenses, bug fixes, etc.) to 50/50, or, in some cases, as low as 25% of the budget spent on maintenance. We've also changed how we budget and incur costs. For organizations that use cloud-based services rather than buying a server or other hardware, the cost moves from CAPEX to OPEX. Planning can be shorter-term, since no big outlay of cash is needed when a new project starts. With decomposition and reuse, services can be maintained through a centralized budget while the business lines fund new capability. With cloud-based solutions, the exact cost of a capability can be reported (or billed back) directly to the business unit using the solution. This alone makes IT more transparent and more of a partner to the business unit.
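The chargeback mechanics described above amount to rolling up metered cloud usage per business unit. Here is a small hedged sketch of that roll-up; the record shape, unit names, and costs are hypothetical, standing in for whatever billing export a cloud provider supplies.

```python
# Hypothetical sketch of cloud cost chargeback: metered usage records are
# summed per business unit so IT can report (or bill back) exact costs.
from collections import defaultdict


def chargeback(usage_records):
    """Sum metered cloud costs per business unit.

    usage_records: iterable of (business_unit, service, cost) tuples.
    Returns a dict mapping business unit -> total cost.
    """
    totals = defaultdict(float)
    for unit, _service, cost in usage_records:
        totals[unit] += cost
    return dict(totals)


records = [
    ("marketing", "vm-small", 120.0),
    ("marketing", "object-storage", 30.0),
    ("finance", "vm-large", 400.0),
]
# chargeback(records) -> {"marketing": 150.0, "finance": 400.0}
```

Because every record is already metered by the provider, the per-unit totals are exact rather than allocated estimates, which is what makes the billing transparent to the business unit.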
At least, that is how it looks from here…