Don’t need Application Release Automation for your Oracle SOA? Think again.

Whether you like it or not, organisations worldwide are using customisable commercial off-the-shelf software products to deliver the Enterprise Integration solutions that underpin the systems we rely on in business every day. Whether you're filling out a pre-populated web form or receiving an automated notification email, that data securely crossed the network at some point, likely originating from one or more disparate systems. Middleware, Integration and APIs are the invisible glue: unknown to many, yet a constant source of change for any organisation.

In this blog, we will look at the Release Management challenges organisations may face for Middleware, Integration and APIs in general, followed by some of the specifics for Oracle SOA Suite and related Middleware. Finally, we will introduce the driving principles behind Application Release Automation using MyST, the market-leading DevOps software for Oracle Middleware, and show how organisations are using it to release valuable business outcomes while controlling delivery cost and risk.

But first...

What's in an integration release?

How is it different to the release of a standalone web application?

Integration releases most commonly require different settings on a per-environment basis. For instance, in a development environment, a Customer Relationship Management (CRM) Integration may point to a cloud-based Siebel backend. In production, it may point to an on-premise Siebel backend. We may also have monitoring and security settings which differ in each environment. We are, in a sense, deploying a slightly customised version of our code with different endpoints and settings for each environment, so...

  • How do we ensure that we are promoting and testing the same codebase from Development through to Production?
  • How do we ensure that a developer doesn't accidentally promote an incomplete piece of new code to a controlled environment when intending to only change the endpoints?
  • How do we ensure a release engineer understands what endpoints to customise and which version of the code to promote?
  • And most importantly, how do we ensure that we are testing in a way that leaves no nasty surprises by the time we get to production?

But first, let's dig a little deeper into how Integration Release Management is typically done within an Enterprise.

Integration Delivery in an Enterprise

The evolution of a piece of integration code, from conception to release will likely involve interactions and collaboration among a number of individuals, or even a number of teams. For instance:

  • Solution Architects and Business Analysts (BAs) may work with Business Stakeholders to formalise the business-case, produce requirements and ideally develop an initial set of acceptance test cases.
  • Testers will work with the BAs and Developers to flesh out the Testing Strategy and execute the Exploratory and Automated Testing according to the plan.
  • Developers will write the source code for the solution based on the requirements and in alignment with the acceptance criteria. If they are not the ones responsible for Operations, they should at least be working with those who are to ensure the solution is production-ready. They may also write a suite of unit tests, and potentially system and integration tests, to:
    • prove their code is working
    • protect against regression
    • document the code's behaviour
    • provide design guidance
    • support future refactoring
  • Operations (Ops) will require an understanding of the deployment process and the operational settings within the codebase so that endpoints, security and monitoring can be applied at deploy-time and so that the solution can be supported and maintained in production. It goes without saying that they should be working closely with Developers.
  • At release-time, Managers and other so-called "Gate Keepers" will want to ensure the preconditions are met and that the release was successful. They may ask questions before a release like "What components are affected by this change?" and "Are all the tests passing?".
  • After the release, the Stakeholders will most likely want to make changes. So who should be informed to make the change? Is it an Ops change, a code fix or a new set of requirements to be driven by the BAs?

When the necessary inter-personal interactions are unclear and performed in an ad hoc manner, the release process can become painful and error-prone even for small, close-knit teams. When we add the Enterprise dimension, we may be talking about large, bureaucratic and siloed teams, so the problem doesn't get any easier, and we're no longer talking about a simple web application. And what about all of the end-system owners? We can't leave them out of the picture.

It has been said and shown by many startups and enterprises that DevOps culture can play a part in addressing the communication gaps within an organisation (in particular, Dev, QA and Ops) to avoid wasteful (non-value added) tasks in the delivery lifecycle.

But of course, to be completely effective, there is no "DevOps" without automation, and for the highest business value, automation should span the whole delivery chain: not just development and release, but everything in between. We should be using automation to help form the contract of understanding between developers and operations, and to visualise everything that is important to stakeholders in the delivery lifecycle. Automation is not the solution in and of itself, but the enabler for better-informed and collaborative teams to be more productive. This results not just in personal fulfilment for employees but in deep organisational productivity through the elimination of avoidable waste.

The automation underpinning a Continuous Delivery Pipeline can help teams to make decisions faster by providing them with the insight they need and reducing the reliance on guesswork, snooping and politics.

  • An Automated Pipeline can visualise a single change from a developer's commit to a version control system through to its deployment to a production system, while managing the approvals in between.
  • Automation can empower the release engineer by auto-discovering the operational variables to be applied per deployable artifact and prompting for them on release if they are not set.
  • Automation can help to discover implicit and explicit dependencies between components within large Enterprise Integration landscapes to allow change impact to be well defined.

Build and Deploy Concepts

SOA, Microservice and general Middleware solutions tend to wire services together at deploy-time rather than bundling them as libraries in a monolithic application. To support this notion of customising endpoints and other operational settings at deploy-time each technology has a concept of a "plan" or "customisation" file.

Typically, there will be one file per environment, and it will contain all of the required customisations to be applied to an artifact for the given environment. In the Oracle Fusion Middleware stack, the corresponding plan concepts for the various artifact types have different names but fundamentally achieve the same objective.

  • SOA Composites use per-environment Configuration Plans
  • Oracle Service Bus Projects use per-environment Customization Files
  • Java and WebLogic Applications use per-environment Deployment Plans
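To make the idea concrete, here is a minimal sketch of a SOA Suite Configuration Plan for a hypothetical Customer composite. The composite, reference and endpoint names are illustrative, not taken from any real project; the plan rewrites a reference endpoint and performs a search-and-replace across WSDLs and schemas when the composite is deployed to a given environment:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical configuration plan applied at deploy-time.
     Composite, reference and host names are illustrative. -->
<SOAConfigPlan xmlns="http://xmlns.oracle.com/soa/configplan">
  <composite name="CustomerComposite">
    <reference name="SiebelCustomerService">
      <binding type="ws">
        <!-- Point the reference at this environment's backend -->
        <attribute name="location">
          <replace>http://prd-siebel.example.com:8001/CustomerService</replace>
        </attribute>
      </binding>
    </reference>
  </composite>
  <!-- Rewrite hostnames embedded in WSDLs and schemas -->
  <wsdlAndSchema name="*">
    <searchReplace>
      <search>dev-siebel.example.com</search>
      <replace>prd-siebel.example.com</replace>
    </searchReplace>
  </wsdlAndSchema>
</SOAConfigPlan>
```

The same artifact is built once; only the plan applied at deploy-time differs per environment.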

Without these solutions, developers would need to build a unique artifact for each environment, which would be highly error-prone due to its complexity and the higher risk of making a mistake such as forgetting to replace a property. However, it should be said that even the per-environment, per-artifact customisation strategy has its own weaknesses.

Welcome to Configuration Plan Hell

If we consider a Customer service capability that needs to be deployed to DEV, TST, UAT, STG and PRD, we may end up with a number of configuration files for each environment, looking like this:

Customer  
├── java
│   ├── DeploymentPlan_dev.xml
│   ├── DeploymentPlan_prd.xml
│   ├── DeploymentPlan_stg.xml
│   ├── DeploymentPlan_tst.xml
│   ├── DeploymentPlan_uat.xml
│   ├── pom.xml
│   └── src
├── osb
│   ├── OSBCustomizationFile_dev.xml
│   ├── OSBCustomizationFile_prd.xml
│   ├── OSBCustomizationFile_stg.xml
│   ├── OSBCustomizationFile_tst.xml
│   ├── OSBCustomizationFile_uat.xml
│   ├── pom.xml
│   └── src
└── sca
    ├── configplan_dev.xml
    ├── configplan_prd.xml
    ├── configplan_stg.xml
    ├── configplan_tst.xml
    ├── configplan_uat.xml
    ├── pom.xml
    └── src

What happens when we evolve our code base? Do we regenerate each plan from scratch? What happens if we forget to do this for one environment? Now it's out of sync. Or what if we need a new environment? Do we take a copy of the production plan and edit it for the new environment? We may do that, but we must be very careful, as forgetting to change a setting may result in a non-production environment pointing directly at production!

What happens if you have to do something as simple as changing a single endpoint? If it is referenced by multiple artifacts, that could be a lot of files to change and keep consistent.

These kinds of patterns, or dare I say anti-patterns, of per-environment, per-artifact customisation can lead to undesirable workarounds. The most common are:

  • Avoiding customisation files altogether
  • Custom coding workarounds

Let's look at these.

Anti-pattern #1: Avoiding configuration plans altogether

Here, Developers and Administrators ditch Configuration Plans altogether and change the code with the environment differences on a per-deploy basis, deploying directly from the JDeveloper IDE. This is a slippery slope, as it leaves no way of repeating what was done consistently across multiple releases to an environment. The runtime solution may become unpredictable, as it may accidentally point to the wrong endpoints or environments.

Anti-pattern #2: Custom coding workarounds

It's no surprise that coders like to solve problems with coding. In an attempt to avoid configuration plan hell, developers may build a custom solution which constructs a unique artifact per environment. They may create their own properties file per environment and take the environment name as a parameter. This approach may work for a while, until someone makes a mistake such as deploying the dev artifact to the test environment. Whatever happened to promoting a single artifact to each environment?

The MyST antidote

By now it should be clear that building an artifact per environment can cause major application release risk due to inconsistency and complexity. MyST release management avoids the need to build artifacts per environment and guarantees release certainty, consistency and reliability. This is achieved through a number of driving principles:

  • Every artifact built is a potential release candidate.
  • Every artifact is packaged with operational variables to be defined at deploy-time on a per-environment basis.
  • Every artifact should be connectable to other integration endpoints at deploy-time without a need to rebuild the artifact per-environment.
    • Other unique settings per environment such as monitoring, security and performance tuning parameters should also be definable without a need to rebuild the artifact.
  • Common environment-specific values are version controlled centrally and looked up at deploy-time.

MyST utilises the customisation file type available to each technology (e.g. Config Plans for SOA Suite, Customization Files for OSB). The file can be packaged within the artifact itself so it doesn't drift from the code it is designed to support. The file can contain references to operational variables.

Customer  
├── java
│   ├── DeploymentPlan.xml
│   ├── pom.xml
│   └── src
├── osb
│   ├── OSBCustomizationFile.xml
│   ├── pom.xml
│   └── src
└── sca
    ├── configplan.xml
    ├── pom.xml
    └── src

Below is an example of a generic OSB Customization file with ${app.stock.host} and ${app.stock.port} property references that are replaced at deploy-time.
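As a sketch of what such a file might look like (the service path and project names are illustrative, not from a real project), the generic Customization File assigns a Service URI built from the ${app.stock.host} and ${app.stock.port} operational variables, which are resolved per environment at deploy-time:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical OSB Customization File; owner path is illustrative.
     ${app.stock.host} and ${app.stock.port} are operational variables
     resolved at deploy-time. -->
<cus:Customizations
    xmlns:cus="http://www.bea.com/wli/config/customizations"
    xmlns:xt="http://www.bea.com/wli/config/xmltypes"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <cus:customization xsi:type="cus:EnvValueCustomizationType">
    <cus:description/>
    <cus:envValueAssignments>
      <!-- Retarget the business service endpoint -->
      <xt:envValueType>Service URI</xt:envValueType>
      <xt:location xsi:nil="true"/>
      <xt:owner>
        <xt:type>BusinessService</xt:type>
        <xt:path>Stock/BusinessServices/StockService</xt:path>
      </xt:owner>
      <xt:value xsi:type="xs:string">http://${app.stock.host}:${app.stock.port}/StockService</xt:value>
    </cus:envValueAssignments>
  </cus:customization>
</cus:Customizations>
```

Because the variables, not the values, live in the file, the same customization file travels with the artifact to every environment.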

Where the out-of-the-box customisation technologies fall short, you can define XPath, Property Reference or Find-and-Replace rules against file patterns through a powerful MyST customisation plan. Can't change a service account with an OSB customization file? No problem: create an XPath rule and apply it at deploy-time.

Containerized. Releasable. Portable.

Now that any given source artifact can be deployed to any environment by decoupling the runtime settings (such as external endpoints, security and monitoring) from the code, it's easy to compose portable solutions that will run on-premise or in the cloud, on our local development environment or in production. We get away from the "works on my machine" mentality and ensure we are not wasting our time testing anything less than a production release candidate. We can consistently test and promote our code artifacts from Development through User Acceptance Testing and ship to Production at any time, without a unique and untested release ceremony every time we do so.

In this post, we've shown first-hand how the MyST antidote for Oracle Middleware can reduce risk and cost in the delivery lifecycle, and how you can benefit from those principles today.

If you'd like to learn more about MyST for Application Release Automation (ARA), you can check out our summary of the MyST 5.5 release here.

Thanks for reading and happy releasing :)

Craig Barr

I am a Software Engineer with a decade of experience empowering Enterprises in Banking, Logistics and Manufacturing with Service-Oriented Architecture, Microservices and Cloud Computing.

Brisbane, Australia https://twitter.com/craigbarrau
