DevOps is just a fad, right? Wrong. We at Synergex may have only been talking about DevOps for a couple of years, but DevOps has been a practice in the industry for over a decade, and it isn’t going away anytime soon. It’s a prescription of practices that improves development and organizational performance, which in turn increases the value an organization produces. Researchers have identified 24 key capabilities of DevOps that drive software delivery and organizational performance. If a list that long is enough to give you pause, consider that the world is becoming increasingly digital, and the demand for rapid development of high-quality software is growing with it. If an organization refuses to rise to meet that demand, the competition will, and they’re probably already doing it. Trying to tackle the whole list at once is ill-advised, even by the staunchest DevOps advocates. Instead, the best place to initiate an organization’s DevOps transformation is with a few of the basic capabilities a team can control.
Version control systems have existed since the ’70s and ’80s and have evolved into integral tools for modern development teams, even those not engaging with DevOps. The key feature in today’s popular systems (e.g., Git and Subversion) that DevOps teams lean on is branching, which enables developers to test and iterate on their work without affecting production source, a core component of increasing product quality. Teams also get full source change history, so developers can see what was changed and why (without those massive routine headers), and the ability to access product source on any machine, which makes it easy to deploy source to automated build and test environments. Furthermore, distributed version control systems paired with a remote repository like GitHub or Azure DevOps give the organization disaster recovery: with respect to source loss, the potential cost of a local system failing (due to anything from natural disasters to hardware failures) drops to almost zero. Version control is so useful it’s even been incorporated into tools unrelated to software development, like Microsoft Office and Google Drive. Unfortunately, some teams and organizations still do not use a version control system to manage source code, and those teams will incur increased costs attempting DevOps operations or find DevOps impossible to achieve.
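To make the branching idea concrete, here is a minimal sketch of the workflow in Git, driven from Python as a build-automation script might do. Everything here is illustrative: the repository, the `app.dbl` file, and the `feature/new-report` branch name are made up, and the script assumes the `git` command-line tool is installed.

```python
import pathlib
import subprocess
import tempfile

def run(args, cwd):
    # Thin wrapper around the git CLI; raises if the command fails.
    subprocess.run(["git", *args], cwd=cwd, check=True,
                   capture_output=True, text=True)

# Set up a throwaway repository standing in for the product source.
repo = pathlib.Path(tempfile.mkdtemp())
run(["init", "-q"], repo)
run(["config", "user.email", "dev@example.com"], repo)
run(["config", "user.name", "Dev"], repo)
(repo / "app.dbl").write_text("main line\n")
run(["add", "app.dbl"], repo)
run(["commit", "-q", "-m", "Initial commit"], repo)

# Branch to iterate on a change in isolation.
run(["checkout", "-q", "-b", "feature/new-report"], repo)
(repo / "app.dbl").write_text("main line\nreport logic\n")
run(["commit", "-q", "-am", "Add report logic"], repo)

# Back on the original branch, production source is untouched.
run(["checkout", "-q", "-"], repo)
print((repo / "app.dbl").read_text())  # still just "main line"
```

The same isolation is what lets an automated build environment check out exactly the branch (or commit) it needs without disturbing anyone else’s work.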
Automated tests have long been a staple of modern software development, and the need for them only increases in a DevOps environment. Running automated tests per change (feature, bug fix, etc.) is core to DevOps and builds quality into the product during development rather than leaving it as an afterthought. That requires tests that are both fast and repeatable, which is why automated unit tests are the tool of choice. While it is possible to construct unit tests for routines that run via console apps or the traditional runtime, Synergy/DE Language Integration for Visual Studio (SDI) is the preferred tool for the job. SDI has supported .NET unit testing for some time and added support for unit testing traditional Synergy code in 11.1.1d. Not only do unit tests in Visual Studio integrate seamlessly with automated DevOps environments, but source builds complete significantly faster than scripted build commands thanks to dependency analysis and parallel component builds, and developers get an enhanced code editing and debugging experience for code running on all platforms. Packaged together, these features make SDI a core component of Synergy DevOps development.
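Whatever the framework, unit tests follow the same arrange/act/assert pattern. As a language-neutral sketch (in Python rather than Synergy, with a hypothetical `calculate_discount` routine standing in for a business routine under test):

```python
import unittest

def calculate_discount(order_total):
    # Hypothetical routine under test: 10% discount at 100 or more.
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

class CalculateDiscountTests(unittest.TestCase):
    # Each test is fast and deterministic, so it can run on every change.
    def test_discount_applied_at_threshold(self):
        self.assertEqual(calculate_discount(100), 10.0)

    def test_no_discount_below_threshold(self):
        self.assertEqual(calculate_discount(99.99), 0.0)
```

A CI job runs the whole suite on every push and fails the build on the first failing assertion, which is what keeps defects from drifting downstream to QA or users.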
Experimentation is a cornerstone of improvement in every occupation, and software development is no different. A development team is the best judge of the tools and practices it uses to produce value for the organization. If the team finds something lacking or problematic, the team should be able to devise a solution and make the change. Measurements of success must also be developed to keep the team from going astray. Does a modified change approval process result in more bugs discovered by QA or users? Does changing build tooling result in faster builds? The only way to know is to try and then report on the results, because “don’t fix what isn’t broken” doesn’t mean ignoring the deflated tire on your car. After these initial steps, it’s time to integrate with other teams in the organization. That integration comes in the form of deployment automation, continuous integration, and a lot of communication between teams. A wide range of tools and services enables the various automated components to work together seamlessly, but none of them can provide increased value without teams and team members being on the same page. For insight into the obstacles and pressures teams experience during this process, and how to overcome them, consider reading The Phoenix Project (a fictional depiction of a DevOps transformation) and The DevOps Handbook (its non-fiction counterpart). For analytics on the benefits of implementing DevOps, read the State of DevOps reports published annually by Puppet. Once all that reading is done, get to work on making changes that benefit your team and organization!
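The measurement step above doesn’t require heavy tooling to get started. A few lines comparing results before and after a change often suffice; here is a minimal sketch for the build-tooling question, using made-up build durations (in practice the numbers would come from your CI system’s logs):

```python
from statistics import mean

# Illustrative build durations in minutes, recorded before and after
# a hypothetical build-tooling change.
before = [12.5, 13.1, 12.8, 14.0, 12.9]
after = [8.2, 7.9, 8.5, 8.1, 8.4]

def percent_improvement(old, new):
    # Percentage reduction in mean duration after the change.
    return (mean(old) - mean(new)) / mean(old) * 100

print(f"Mean before: {mean(before):.1f} min, after: {mean(after):.1f} min")
print(f"Improvement: {percent_improvement(before, after):.0f}%")
```

Reporting even a simple number like this gives the team evidence to keep, adjust, or roll back the experiment, which is the whole point of measuring.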