If you’re a developer today, it’s hard not to love microservices. By adding agility and resiliency to applications, microservices architectures make it easier to build high-performing apps.
But a microservices strategy only pays off if you effectively manage the risks that go hand-in-hand with microservices. In certain key ways, microservices are fundamentally more challenging from a security perspective than less complex monolithic architectures. If you fail to manage the security risks of microservices, you may find yourself with an application that doesn’t perform well at all because it has been compromised – a problem that no amount of agility will solve.
That’s why it’s important to identify and address the security risks that accompany microservices. Here’s a list of the top five most common security issues developers should think about when writing microservices-based apps, along with tips on addressing them.
If you’ve ever written or managed a microservices app, you know that microservices architectures bring complexity to a whole new level.
They make it more complex to write applications because developers have to ensure that each microservice can find and communicate with other microservices efficiently and reliably. And they make management harder because admins have to contend with service discovery, distributed log data, instances that constantly spin up and down, and so on.
Both of these challenges translate to security risks in the sense that, when it’s hard to keep track of everything happening in an environment, it’s more difficult to detect vulnerabilities. In order to conquer the complexity, developers and security teams need stronger tools for managing source code and monitoring runtime environments than they would require when dealing with monolithic apps.
Depending on how you deploy microservices, you may have limited control over the runtime environment.
For example, if you use serverless functions to host microservices, you will have little to no access to the host operating system. You get only the monitoring, access control, and other tooling that the serverless function platform provides to you.
From a security perspective, this makes matters significantly more challenging because you can’t rely on OS-level tools to harden your microservices, isolate them from one another, or collect data that might reveal security issues. You have to handle all of the risks within the microservice itself. That can certainly be done, but here again, it requires more coordination and effort than developers of monolithic apps are accustomed to.
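To make that concrete, here is a minimal Python sketch of a serverless-style handler that does its own input validation and security logging, since there is no host OS tooling to lean on. The event shape, service name, and allowed actions are all hypothetical, not tied to any particular platform:

```python
import json
import logging

# With no access to the host OS, validation and security logging
# must live inside the function itself.
logger = logging.getLogger("orders-service")  # hypothetical service name

ALLOWED_ACTIONS = {"create", "read"}

def handle(event: dict) -> dict:
    """Validate the request and log rejects before doing any work."""
    action = event.get("action")
    if action not in ALLOWED_ACTIONS:
        # Rejected requests are a security signal worth recording,
        # since you can't rely on OS-level audit logs here.
        logger.warning("rejected request: action=%r", action)
        return {"status": 400, "body": json.dumps({"error": "invalid action"})}
    # ... real business logic would go here ...
    return {"status": 200, "body": json.dumps({"result": f"{action} ok"})}
```

The point isn’t the specific checks; it’s that every control you would normally get from the OS has to be rebuilt at the application layer.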
Part of the reason everyone loves microservices is that they can scale so easily because new instances can be launched in just seconds.
That’s great when you actually want your microservices to scale. But what if someone with malicious intent gets hold of your environment and massively scales up your microservices in such a way that they consume enormous amounts of cloud resources?
You end up as the victim of a so-called Denial-of-Wallet attack: an attack designed to waste the victim’s money, even if it never actually disrupts service.
So far, Denial-of-Wallet attacks remain largely theoretical; none has yet been reported in the wild. Still, the risk is real, especially for businesses with poorly secured cloud accounts or few safeguards in place to detect anomalous spending.
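One practical guard is to cap how fast any single caller can drive your services, so a hijacked client can’t trigger runaway scaling. Here’s a minimal token-bucket rate limiter sketch in Python; the class and parameter names are illustrative, not from any particular platform:

```python
import time

class TokenBucket:
    """Per-caller rate limiter: each request spends one token, and
    tokens refill at a fixed rate, capping sustained request volume."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if the caller may proceed, False if throttled."""
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Pair a limiter like this with billing alerts on your cloud account, so that even traffic that slips past it shows up as a spending anomaly rather than a surprise invoice.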
In a monolithic application, data is usually stored in a simple and straightforward way. It probably lives on the local file system of the server that hosts the data, or possibly in network-connected storage that is mapped to the server’s local storage. This data is easy to encrypt and lock down with access controls.
Microservices typically use an entirely different storage architecture. Because microservices are usually distributed across a cluster of servers, you can’t rely on local storage and OS-level access controls. Instead, you most often use some kind of scale-out storage system that abstracts data from underlying storage media.
These storage systems can usually be locked down with access controls. But those access controls are often more complex than file-system permissions, which makes it easier for developers to make mistakes that invite security breaches.
On top of this, the complexity of ensuring that each microservice has exactly the access it needs can tempt some developers into an easy but irresponsible shortcut: skipping granular storage policies altogether and letting every microservice access all data.
Either way, you end up with storage that is not as secure as that of a conventional, monolithic app.
The answer here is to ensure that you take full advantage of granular access control within storage systems, while also scanning access configurations for potential misconfigurations.
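A misconfiguration scan can start as simply as flagging wildcard grants. The policy shape below is hypothetical; adapt the keys to whatever your storage system actually emits:

```python
def find_overly_broad_grants(policies: list[dict]) -> list[str]:
    """Flag policy entries that grant every principal access, or access
    to every resource. The 'principal'/'resource' keys are an assumed
    schema for illustration, not any vendor's real policy format."""
    findings = []
    for policy in policies:
        if policy.get("principal") == "*" or policy.get("resource") == "*":
            findings.append(f"over-broad grant: {policy}")
    return findings
```

Running a check like this in CI catches the “just let everything access everything” shortcut before it reaches production.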
Securing the network is critical for any type of application that connects to the network – which means virtually every application today.
When you’re dealing with microservices, however, network security assumes a whole new level of complexity. That’s because microservices don’t just communicate with end-users or third-party resources over the Internet, as a monolith would. They also usually rely on a complex web of overlay networks to share information among themselves.
More networks mean more opportunities for attackers to find and exploit vulnerabilities. They can intercept sensitive data that microservices exchange with each other, for example, or use internal networks to escalate breaches from one microservice to others.
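A standard mitigation for both risks is mutual TLS on those internal networks, so service-to-service traffic is encrypted and each service must prove its identity with a certificate. Here’s a sketch using Python’s standard `ssl` module; the certificate file paths are deployment-specific and left commented out:

```python
import ssl

def make_mtls_server_context() -> ssl.SSLContext:
    """Build a server-side TLS context that *requires* a client
    certificate, so only services holding a cert signed by your
    internal CA can connect."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject peers without a valid client cert
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # ctx.load_cert_chain("server.crt", "server.key")   # this service's identity
    # ctx.load_verify_locations("internal-ca.pem")      # trusted internal CA
    return ctx
```

In practice a service mesh usually handles this for you, but the underlying guarantee is the same: intercepted traffic is unreadable, and a compromised service can’t silently impersonate another.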
All of these risks can be managed. To say that developers should avoid microservices because they’re too complex and challenging would be like saying we should return to the age of horse-drawn buggies because cars are too dirty and dangerous.
But that doesn’t mean it’s not important to manage the risks of microservices. Just as no responsible driver would drive a car without taking the reasonable precaution of buckling up first, no developer should deploy microservices without taking steps to manage their inherent risks.
Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure, and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, was published in 2017.