Why Infrastructure Security Is Key to DevSecOps
Making the move to DevSecOps is essential, even if it will be painful.
January 9, 2019
By Don MacVittie
There’s been a lot written about DevSecOps of late, both in a general sense and in terms of MSSPs. We have a pretty good idea of how to start integrating security into DevOps, and of the challenges the transition entails. Less has been written for service providers, but much of what is written for enterprises applies to service providers as well.
In short, it’s a tall order and a long path. Even so, InfoSec has been short of people (let alone quality candidates) for years, so the increasing burden that agile methodologies and DevOps have placed on traditional security shops means making the move to DevSecOps is essential, even if it will be painful.
But there is a catch. There are precursors to DevSecOps that far too many shops – particularly MSSPs, because of the breadth of their business – are behind on. Locking down the infrastructure that houses DevOps functionality is imperative. While the frequency of deploys has grown, so has the number of targets being deployed to. Rapid changes present a security challenge, particularly when they change the underlying infrastructure by requiring new ports to be opened or a new version of a library/module. And new targets have new security requirements. Locking down public cloud instances may share a little bit with locking down containers, but not much. Additional bells and whistles available with cloud services (be they public, private or internal) add further confusion.
If an organization can’t protect its source code management (SCM) repositories, and can’t protect its continuous integration (CI) or release management tools, then protecting applications through DevSecOps is moot. This is a thousand times true for service providers, where it’s not just the organization’s own systems at risk, but — through shared code — each client’s systems as well.
Illegal access to an SCM is a disaster-level event for a service provider, and one we all recognize well. But awareness of the same issue with regard to CI systems seems to lag behind. Simply put, if a ne’er-do-well can load modules in Jenkins, everything that passes through that CI system is suspect, so extreme steps are warranted to block unauthorized access to Jenkins in any environment. In a service provider environment, one code insertion into the self-service app could undermine the entire customer base.
Potential Service Provider Fixes
What can be done? Well, there are some quick-hit easy steps that service providers should take to start down the path of ensured infrastructure.
First, if you are one of the shops (a small number, but unfortunately growing thanks to agile/DevOps) that allow changes in production, stop. Emergencies will always leave the door open to a change in production, but in general these should be extremely limited.
Next, do a check-out, build and compare to production on a regular (at least daily) basis. Bowker Identifier Services recently had a code-insertion issue that would have been detected on day one had this simple process been in place (yet another disclosure: It is possible the author knows about the Bowker compromise very firsthand). Instead, customers’ data was stolen for six months and customer credit cards were sold on the dark web. Validating that the code running is the code intended, and reporting the differences, allows valid in-production changes to be captured for insertion back into the DevOps process, and allows security to sleep a little better at night knowing that one possible source of tomfoolery is blocked. Yes, in complex environments like service providers’, this is easier said than done, but it is worth the effort if it ensures unauthorized changes are caught quickly.
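The daily build-versus-production comparison can be as simple as hashing every file in a fresh build and in the deployed tree, then reporting what differs. A minimal sketch (the directory layout and function names here are illustrative, not from any particular tool):

```python
import hashlib
from pathlib import Path


def tree_hashes(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    hashes = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(root))
            hashes[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return hashes


def diff_trees(build_root: Path, prod_root: Path) -> dict:
    """Classify drift between a fresh build and what is running in production."""
    build, prod = tree_hashes(build_root), tree_hashes(prod_root)
    return {
        # Same file exists in both, but contents differ.
        "modified": sorted(f for f in build.keys() & prod.keys()
                           if build[f] != prod[f]),
        # In the build but absent from production (failed/partial deploy).
        "missing_in_prod": sorted(build.keys() - prod.keys()),
        # In production but not in the build -- possible unauthorized insertion.
        "unexpected_in_prod": sorted(prod.keys() - build.keys()),
    }
```

Anything in `unexpected_in_prod` is either a legitimate in-production change to feed back into the pipeline or exactly the kind of insertion the Bowker incident involved; either way, it gets looked at the same day instead of six months later.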
For public cloud implementations, three words: automate, automate, automate. There are toolsets out there, like VMware’s Secure State, that can help set standards across the variety of cloud services and cloud vendors (Disclosure: The author worked with the Secure State team before they became part of VMware). By automating the process, and monitoring for compliance with standards set by the company, systems like Secure State create consistency in cloud environments. Of course, the rules must be set up and maintained to reflect the newest developments in security, but this is better than searching through cloud subsystem offerings to find just the right switch to flip.
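At its core, this kind of automation is a set of company-defined rules checked continuously against resource configurations. The sketch below is not Secure State’s API — it is a generic illustration, with made-up rule names and configuration fields, of what rule-driven compliance monitoring looks like:

```python
from typing import Callable

# A rule is a predicate over one cloud resource's configuration dict.
Rule = Callable[[dict], bool]

# Company-set standards; these names and fields are illustrative only.
RULES = {
    "no-public-ingress": lambda cfg: "0.0.0.0/0" not in cfg.get("ingress_cidrs", []),
    "encryption-at-rest": lambda cfg: cfg.get("encrypted", False),
    "no-default-admin": lambda cfg: cfg.get("admin_user") != "admin",
}


def audit(resources: dict) -> dict:
    """Return, per resource name, the list of rules it violates."""
    findings = {}
    for name, cfg in resources.items():
        failed = [rule for rule, check in RULES.items() if not check(cfg)]
        if failed:
            findings[name] = failed
    return findings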
If you build it, you must store it. For most MSPs, storage is also a big deal. Customers are storing things on your systems, and any attack that might allow a criminal to modify customer files is huge. Some environments are easier to handle in that regard than others — a storage/backup provider can offer encryption on the customer site before transfer, for example, and protect files that way, while a hosting provider has to protect servers/subnets because web hosts must have executables stored on them. While worthy of mention here, this particular problem is best solved vertical by vertical.
The big thing is to lock down centralized touches-your-code systems like Git and Jenkins as much as possible in your environment. The potential harm for a service provider is many times larger if one of these apps gets hacked than if a website is hacked. Particularly in Jenkins, remove unused plug-ins. They are doing no good if they are unused, and offer an attack surface that security might not be paying attention to … because they’re unused and security is perennially understaffed.
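Finding removal candidates can itself be scripted. Jenkins exposes its installed plug-ins at the `/pluginManager/api/json?depth=1` REST endpoint; which plug-ins your jobs actually reference has to be gathered separately (from job configs or an audit). Given both, a sketch like this — the helper name and the `referenced` set are assumptions, not a Jenkins API — flags what to review:

```python
def removal_candidates(installed: list, referenced: set) -> list:
    """Plug-ins that are disabled, or never referenced by any job.

    `installed` mimics the plugin entries returned by Jenkins'
    /pluginManager/api/json?depth=1 endpoint (each has a "shortName"
    and an "enabled" flag); `referenced` is the set of shortNames
    your jobs actually use, gathered separately.
    """
    return sorted(
        p["shortName"] for p in installed
        if not p.get("enabled", True) or p["shortName"] not in referenced
    )
```

Review the list before uninstalling (some plug-ins are dependencies of others), but every entry on it is attack surface that nobody is watching.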
DevSecOps is a big deal, but locking down the systems that do DevOps and supporting infrastructure before attempting to lock down the product of DevOps is required. Security of output cannot be achieved without security of systems creating the output. Your customers are counting on it — take the extra steps and reward their confidence. You won’t get any thank-you notes, but you likely will avoid the negative effects of not locking things down.
Don MacVittie is the founder of Ingrained Technology, and has worked in every facet of IT from entry-level programmer to CIO, from network operations to storage and database analyst. He currently works in DevOps while running a successful technical evangelism consultancy. Don has contributed to projects his company worked on for organizations in DevOps, DevOps leadership, data protection, network security, global file systems, and non-IP communications spaces, along with several international publications and PR firms. His MSSP background is in both communications and utilities. Follow him on LinkedIn or Twitter @dmacvittie.