What your DevOps team will look like in 3 years
DevOps continues to gain momentum. This isn’t exactly news: the term itself was coined about eight years ago, and prominent companies like Google, Amazon, Microsoft, and Netflix have made meaningful contributions to its development.
Many IT industry influencers believe it’s no longer a question of if your organization will adopt these practices, but when. And yet the DevOps methodology of software delivery is still evolving, as are the tools and technologies that support it. So the question also involves what you’ll be adopting and who will adopt these new tools and workflows. Let’s try to imagine what your DevOps team will look like in 3 years, so you will be better able to adapt to the future today.
How DevOps came to be
At first, the software engineer (or system administrator) was expected to be a jack of all trades: installing, configuring, and patching software, writing the scripts and macros that supported business IT workflows, even fixing the faulty printer. However, it soon became obvious that such an approach was not feasible, and the great specialization began.
Software engineers were forced to specialize in various professions:
- Front-end developers,
- Back-end developers,
- QA and testing engineers,
- Website markup developers,
- Designers, etc.
Sysadmins had to specialize as well:
- Database administrators,
- Customer support,
- Level 2 and level 3 support,
- Operations engineers, etc.
However, such siloed responsibilities and tasks led to a significant increase in the complexity of the software delivery process. When the teams grew too big, pinpointing which team member was responsible for a missed deadline became too complicated. This is why the DevOps concept came to be.
The original idea of the DevOps methodology was simply to de-silo Devs and Ops to overcome the bottlenecks in the software development and deployment process, mostly on the Ops side. The resulting DevOps engineers had to be able to fix minor code issues during deployment, readjust the CI/CD pipelines to use the automated unit test codebase, handle customer support requests, be proficient in monitoring, react rapidly to incidents, and much, much more.
At first blush, it seems as though the merger would affect only Devs and Ops, but multiple other areas of expertise had to be included, like:
- Testing and QA,
- Data security objectives,
- System and data protection,
- AI implementation for predictive analytics,
- Efficient system monitoring.
These are top-level concerns of management, and they have become part of the DevOps picture. In other words, when you hear “DevOps” today, you should probably be thinking DevSecQATestInfoAIOps.
When Agile is not enough
The DevOps approach underpins the Agile software delivery methodology, but it goes far beyond a concentration on cost reduction and shorter time-to-market for new products. Forrester proclaimed 2018 the year of enterprise DevOps, as more than 50% of enterprises worldwide had adopted DevOps practices to enhance their technology value streams. As value delivery today is largely based on technology, using the available systems and tools in the most efficient way is the key to success.
How can you ensure this works? By removing managerial barriers, so the people who actually have to introduce the change do not have to ask permission from managers whose main concern is stability, not improvement. This way the DevOps team has a say in company strategy and can become a true driver of technological and cultural innovation in the enterprise. Below are the main prerequisites for enterprise DevOps success today:
- A cultural environment of high trust, where DevOps principles flourish
- An Infrastructure as Code approach that makes experimentation cheap, because the cost of a mistake is low
- Extensive use of continuous system performance assessment, continuous feedback gathering, continuous value implementation, and continuous code deployment pipelines to ensure smooth system operations and proactive issue resolution
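The continuous pipelines listed above are usually expressed as code themselves. A minimal declarative Jenkinsfile sketch, where the stage names and `make` targets are illustrative assumptions rather than a prescribed setup:

```groovy
// A hedged sketch of a continuous deployment pipeline: build and test on
// every push, deploy automatically only from the main branch.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }   // assumes a Makefile-driven build
        }
        stage('Test') {
            steps { sh 'make test' }    // automated unit test suite
        }
        stage('Deploy') {
            when { branch 'main' }      // gate deployment on the main branch
            steps { sh 'make deploy' }  // push the ready code to production
        }
    }
}
```

Because this file lives in the repository alongside the application code, changes to the pipeline itself go through the same review and versioning process as any other change.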
DevOps today
Today, with the evolution of DevOps, the goal is to support a continuous delivery pipeline. Operations departments have adopted so many of the techniques used by developers to support more agile and responsive processes that we are seeing a kind of DevOps evolution aimed at further differentiation of the main DevOps roles:
- Site Reliability Engineer or SRE — the DevOps talent specializing in designing resilient, easily scalable and cost-effective infrastructures for your products and services
- Dev+ — a developer able to create and configure Continuous Delivery pipelines, where the build, testing, and staging servers are provisioned automatically, so ready code is pushed to production without even filing a request with an Ops engineer
- Ops+ — an Ops engineer who knows enough about the app in development and understands its architecture well enough to execute minor fixes mid-deployment, minimizing release downtime or avoiding it altogether
- DevSecOps — an Ops engineer specializing in weaving the security layers, features, and practices into the code, workflows, and pipelines of software delivery
- AIOps — an Ops engineer specializing in building AI algorithms into the system monitoring frameworks to enable anomaly discovery and self-healing IT infrastructures, etc.
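To make the AIOps role less abstract, here is a toy sketch of the core idea: flag metric samples that deviate sharply from the baseline, instead of waiting for a human to spot the spike on a dashboard. The metric values and threshold are illustrative assumptions; a real AIOps pipeline would use trained models over streaming monitoring data.

```python
import statistics

def detect_anomalies(samples, threshold=2.5):
    """Return indices of samples whose z-score against the series mean
    exceeds the threshold. A toy stand-in for the statistical models an
    AIOps pipeline would apply to monitoring data."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    if stdev == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

# Hypothetical CPU load series with one obvious spike at index 5
load = [0.31, 0.28, 0.35, 0.30, 0.33, 4.90, 0.29, 0.32, 0.27, 0.30]
print(detect_anomalies(load))  # → [5]
```

The interesting part of real AIOps is not the detection itself but what happens next: the anomaly can trigger an automated remediation (restart, scale-out, rollback) before users notice.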
What about DevOps tools?
The DevOps methodology of Agile software development was greatly empowered by the introduction of Docker containers: essentially, packages of code bundled with all the runtime components required to launch an app. This technology effectively revolutionized the software delivery process, as containers make it possible to run any app on any type of infrastructure, from public cloud to on-prem virtual machines or bare-metal clusters.
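The "package of code with its runtime" idea is easiest to see in a Dockerfile. A minimal sketch for a hypothetical Python app (the base image, file names, and entry point are illustrative assumptions):

```dockerfile
# Bundle the app and its runtime into one image that runs anywhere a
# container engine is available.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define how to start it
COPY . .
CMD ["python", "app.py"]
```

The same image can then be run unchanged on a laptop, a CI server, or a production cluster, which is what makes the "any app on any infrastructure" claim hold in practice.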
The second pillar of the DevOps transformation was the introduction of two immensely powerful tools: Kubernetes and Terraform. Kubernetes is used to create and manage the clusters that run containerized apps, while Terraform enables the simple creation, adjustment, and deletion of the underlying cloud infrastructure.
That said, a Gartner report states that DevOps workflows “demand a linked chain of tools to enable efficient system operations.” This toolkit grows steadily, so DevOps engineers have to constantly expand their knowledge and become versatile with various tools to remain competitive. Luckily, this is essentially what the DevOps approach fosters: an environment of high trust, where talented people keep getting better.
The environment of high trust
The DevOps methodology of software development does not concentrate on blaming whoever is responsible for each particular incident, post-release downtime, or database failure that requires a backup restoration. Such issues are merely considered indicators of flaws in the system’s structure, highlighting room for improvement.
When the team can discuss a failure freely, it finds the roots of the problem and improves system performance, instead of identifying a guilty person and blaming them. This is why in high-trust DevOps environments team members exchange information to help discover and solve issues faster, which leads to highly motivated teams determined to deliver an ever-increasing stream of value. In short, the main question is not “whose fault is this?” but “what did we learn today?”, so teams concentrate on improvement instead of being paralyzed by fear of punishment.
Using Infrastructure as Code
This kind of scenario is possible only when the server infrastructure is versioned as code: Terraform and Kubernetes manifests, Ansible playbooks, Jenkins pipelines, etc. Long gone are the days when servers had to be installed, configured, and maintained manually according to long checklists.
Nowadays, infrastructure states can be described in code, stored on GitHub, and versioned, so that the required environment can be rapidly provisioned in the cloud. With this approach, the cost of an error is negligible in comparison to manually configured dedicated servers, and the team is free to experiment and build better infrastructure without being slowed at every step by server provisioning costs.
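In Terraform terms, "describing the infrastructure state in code" looks roughly like the fragment below. The provider, region, AMI ID, and instance size are all illustrative placeholders, not a recommended configuration:

```hcl
# A hedged sketch: the whole environment lives in a versioned file, so
# standing it up or tearing it down is a single terraform apply/destroy.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "staging" {
  ami           = "ami-0abcdef1234567890"  # placeholder AMI ID
  instance_type = "t3.micro"               # small, cheap experiment box

  tags = {
    Name = "staging-experiment"
  }
}
```

Because the file is in version control, a failed experiment is reverted with a `git revert` and a re-apply, which is exactly why the cost of a mistake stays low.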
The evolution of DevOps practices
Essentially, the IT industry is repeating its previous path. A single DevOps engineer today cannot cover all the needs of multi-faceted IT operations, hence the differentiation. At the same time, DevOps teams must evolve their workflows and practices to remain competitive. Simply automating the majority of mundane operations with Jenkins jobs, Kubernetes manifests, and Ansible playbooks is no longer enough. The way IT services evolve today is through predictive and prescriptive analytics powered by AIOps, the benefits of ChatOps, and the design and implementation of self-healing IT infrastructure. Below we explain what these trends are and what your DevOps team will look like in three years.
- Predictive and prescriptive analytics with AIOps. AIOps is the next level of IT services, where predictive and prescriptive analytics based on AI algorithms are used to predict and prevent issues, rather than merely monitoring the systems and fixing incidents after they occur. This is a much more cost-effective approach than staring at a monitoring dashboard, waiting for hell to break loose.
- Leveraging the benefits of ChatOps. ChatOps is another emerging trend, where important system notifications are delivered not by email or SMS but to popular messengers like Slack, WhatsApp, Viber, or Telegram via API integrations with popular open-source DevOps tools. Such messages contain links to the code branch, GitHub repo, pull request number, automated test, or Jenkins job in question. This context greatly simplifies and shortens incident resolution and the software delivery routine at scale.
- Design and implementation of self-healing IT infrastructure. This is where the most advanced IT service companies are heading. From manually configured servers to virtualization; from virtual machines to provisioning infrastructure for containerized apps with Terraform manifests; from predictive analysis of system performance controlled by humans to fully autonomous self-healing infrastructure maintained by AI algorithms and ML models. When an environment is faster to recreate than to repair after a failure, and the machine learns to prevent such failures over time, that is a whole other level of DevOps workflow.
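The ChatOps pattern described above boils down to enriching an alert with links to every artifact involved. A minimal sketch of assembling such a payload; the repository, URLs, and field names are illustrative assumptions, and a real integration would POST the resulting JSON to a messenger webhook:

```python
import json

def build_chatops_alert(repo, branch, pr_number, ci_job_url):
    """Assemble a messenger-style payload that links an incident to the
    code and CI artifacts involved. URLs below are illustrative."""
    return {
        "text": f"Build failed on {repo}@{branch} (PR #{pr_number})",
        "links": {
            "pull_request": f"https://github.com/{repo}/pull/{pr_number}",
            "branch": f"https://github.com/{repo}/tree/{branch}",
            "ci_job": ci_job_url,
        },
    }

alert = build_chatops_alert("acme/webshop", "feature/checkout", 42,
                            "https://ci.example.com/job/webshop/128/")
print(json.dumps(alert, indent=2))
```

The value is in the context: whoever sees the message in the channel can jump straight to the failing job or pull request instead of reconstructing that trail by hand.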
Final thoughts: DevOps services enter another spin of the spiral
As of now, the variety of roles in the IT industry is steadily growing. Software engineers are beginning to identify themselves as experts in domains that did not exist a couple of years ago, like serverless computing or app containerization. This is backed by the constant introduction of new services and features, such as managed Kubernetes services, serverless computing, distributed cloud databases, and AI-powered products, from AWS, Google Cloud, Microsoft Azure, and other providers.
However, in about three years the DevOps industry will have to combine multiple areas of expertise into more versatile, all-around capable specialists who will be able to fully utilize the power of AI to enhance their IT operations. This will happen exactly the way it happened when the DevOps culture itself emerged in 2009, and for the same reasons. All the diverse specialties will have to mix and melt into a new type of software engineer.
To sum it up, it is hard to predict what your DevOps team will be called in three years, but we can predict what tools and practices it will have to use to remain competitive. It is high time to invest in ML training for your DevOps engineers, so that in a year they will be able to engage in AIOps and ChatOps and begin implementing self-healing systems for your business. This is the way to future-proof your business and make sure it remains competitive and lucrative.