The Security Challenge: Mapping and Securing Your Distributed Data
Here’s a quick-fire question: do you know where all your sensitive data is? As businesses of all sizes generate, accumulate, store, and process more data records in more places than ever, it’s increasingly challenging to classify and track all that data – not to mention make use of it.
On the one hand, enterprises rush into digital transformation dragging their isolated data silos and outdated legacy code along with them. On the other hand, 86% of developers admit they do not consider application security a top priority when coding. Somewhere in between are CISOs facing burnout as they attempt to embed code security best practices, privacy regulations, and compliance standards into the chaotic process that is the software development lifecycle.
In this article, I’ll look at why mapping your distributed data is necessary, what challenges you’ll face along the way, and how you can overcome them.
Why is data scattered in the first place?
Whether you like it or not, most data produced, stored, and processed by business applications is distributed by nature. Both logical and physical data distribution is necessary for any application to scale in functionality and performance. Organizations store different data types across different files and databases for various purposes.
The classic example of data distribution within a company is buyer and client data. A single SME can have data on leads, warehouse orders, CRM records, and social media monitoring spread across dozens of internally developed and third-party SaaS applications. These applications read and write data at different intervals, and in different formats, to owned and shared repositories. In many cases, each also uses its own schema and field names to store the exact same data.
Application development processes distribute a significant portion of data within the application architecture itself, especially in serverless and microservice-based architectures, APIs, and third-party (open source) code integrations. So, the critical question isn’t why we distribute data in our applications. Instead, it’s how we can manage it effectively and securely throughout its lifecycle.
Mapping distributed data: is the effort worth the reward?
Shift-left application security, big data security, code security, and privacy engineering are not new concepts. However, software engineers and developers are only beginning to adopt tools and methodologies that ensure their code and data are safe from malefactors, mainly because, until recently, security tools were designed and built for information security teams rather than developers.
Privacy by design is nothing new either, but in today’s hectic, delivery-driven developer culture, data privacy still tends to be neglected. It often remains ignored until regulatory standards (like GDPR, PCI, and HIPAA) become business priorities. Alternatively, in the aftermath of a data breach, the C-suite may demand that all relevant departments take responsibility and introduce preventative measures.
It would be great if all software services and algorithms were developed with privacy by design principles. We’d have systems planned and built in a way that makes data management a breeze, which would streamline access control throughout the application architecture and bake compliance and code security into the product from day one. In short, it’d be absolutely fantastic. But that’s not the case in most development teams today. Where do you even start if you want to be proactive about data privacy?
The first step in protecting data is knowing where it resides, who accesses it, and where it goes. This seemingly simple process is called data mapping. It involves discovering, assessing, and classifying your application’s data flows.
Data mapping entails using manual, semi-automated, and fully automated tools to survey and list every service, database, storage, and third-party resource that makes up your data processes and touches data records.
Mapping your application data flows will give you a holistic view of your app’s moving parts and help you understand the relationships between different data components, regardless of storage format, owner, or location (physical or logical).
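To make the idea concrete, a data map doesn’t have to start as anything fancier than a structured inventory of stores and the flows between them. The sketch below is a minimal, hypothetical illustration (the store names, fields, and `kind` values are invented for the example, not part of any real tool):

```python
from dataclasses import dataclass, field

@dataclass
class DataStore:
    name: str
    kind: str                 # e.g. "postgres", "s3", "saas-api"
    location: str             # region, host, or "vendor-hosted"
    sensitive_fields: list[str] = field(default_factory=list)

@dataclass
class DataFlow:
    source: str               # name of the originating DataStore
    destination: str          # name of the receiving DataStore
    fields: list[str]         # which fields travel along this flow

# A toy data map for an imaginary order-processing application
stores = [
    DataStore("orders-db", "postgres", "eu-west-1", ["email", "address"]),
    DataStore("crm", "saas-api", "vendor-hosted", ["email", "phone"]),
    DataStore("metrics", "s3", "eu-west-1"),  # no sensitive fields
]
flows = [DataFlow("orders-db", "crm", ["email"])]

# Even this trivial map can answer the first compliance question:
# which stores hold sensitive data at all?
sensitive = [s.name for s in stores if s.sensitive_fields]
print(sensitive)  # ['orders-db', 'crm']
```

Once the inventory exists in a machine-readable form like this, keeping it current becomes a matter of automation rather than archaeology.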
Don’t expect an easy ride
Mapping your data for compliance, security, interoperability, or integration purposes is easier said than done. Here are the hurdles you can expect to face.
Mapping a moving target
Depending on your application’s overall size and complexity, a manual data mapping process can take weeks or even months. Since most applications that require data mapping are thriving, growing projects, you’ll often find yourself chasing an expanding codebase and the additional data stores being deployed across microservices and distributed data processing tasks. However you spin it, your data map is obsolete as soon as it’s complete.
The ease of data distribution
Why do new data stores pop up faster than you can map them? Because it’s so easy to deploy new data-based features, microservices, and workflows using cloud-based tools and services. As your application grows, so does the number of data-touching services. Furthermore, since developers love to experiment with new technologies and frameworks, you may find yourself dealing with a complex containerized infrastructure (with Docker and Kubernetes clusters) that may have been a breeze to deploy, but is a nightmare to map.
The horrors of legacy code
As enterprises undertake digital transformation of their legacy systems, they must address the data used and created by those systems. In many cases, especially with established enterprises, whoever originally wrote and maintained the legacy code is no longer with the company. So it’s up to you to explore the intricacies of service interconnectivity and data standardization in an outdated environment with limited visibility or documentation.
Integrating security and privacy engineering in your applications
It’s no secret that data is stolen every day. So much so that you can pretty much guarantee that your email address is included in one or more datasets for sale on the dark web. What can you do to protect your application and data from the greed of cyber criminals and the scrutiny of regulators?
Scan your code to map your data
Modern CI/CD pipelines and processes employ Static Application Security Testing (SAST) tools to identify code issues, security vulnerabilities, and code secrets accidentally pushed to public-facing repositories. You can employ a similar static code analysis technique to discover and map out data flows in your application.
This approach maps out the code components that can access, process, and store the data, thus mapping out the data flows without fully crawling the content of any database or data store.
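A minimal sketch of this static-analysis idea is shown below. It scans source text for identifiers that typically hold personal data; the pattern list is a small, assumed example of what a real tool’s catalog would contain, and `app.py` with its sample line is invented for illustration:

```python
import re

# Hypothetical catalog of field names that typically hold personal data.
# A production scanner would use a much richer, configurable ruleset.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(email|ssn|phone|date_of_birth|credit_card)\b", re.IGNORECASE),
]

def scan_source(path: str, text: str) -> list[tuple[str, int, str]]:
    """Return (file, line_number, matched_field) for each sensitive identifier."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SENSITIVE_PATTERNS:
            for match in pattern.finditer(line):
                hits.append((path, lineno, match.group(0).lower()))
    return hits

# Example: a line of application code that touches a sensitive field
sample = 'user = {"email": request.form["email"], "name": name}\ndb.save(user)'
for hit in scan_source("app.py", sample):
    print(hit)
```

Running a scanner like this across a repository yields a list of code locations that touch sensitive fields, which is exactly the raw material a data-flow map is built from.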
Enforce clear boundaries for microservices
In a microservice architecture, each microservice should ideally be autonomous, for better or worse. But where does one microservice end and another begin when it comes to sensitive data?
You can identify the boundaries of each microservice by focusing on the application’s logical domain models and the data each one owns. Then, attempt to minimize the coupling between those microservices.
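The sketch below illustrates the boundary principle with two invented services: each owns its data store privately, and the only coupling between them is a narrow, explicit interface rather than a shared database. The service and method names are illustrative assumptions, not a prescribed design:

```python
class OrdersService:
    """Owns order data; never exposes its store directly."""

    def __init__(self):
        self._orders = {}  # private store: order_id -> order record

    def place_order(self, order_id: str, customer_id: str, total: float) -> None:
        self._orders[order_id] = {"customer_id": customer_id, "total": total}

    def order_total(self, order_id: str) -> float:
        # Narrow query interface: callers get a number, not the raw record
        # (so sensitive fields like customer_id never leak across the boundary)
        return self._orders[order_id]["total"]

class BillingService:
    """Owns invoice data; talks to OrdersService only through its public API."""

    def __init__(self, orders: OrdersService):
        self._orders = orders
        self._invoices = {}  # private store: order_id -> billed amount

    def invoice(self, order_id: str) -> float:
        amount = self._orders.order_total(order_id)  # no shared database access
        self._invoices[order_id] = amount
        return amount

orders = OrdersService()
orders.place_order("o-1", "c-42", 99.5)
billing = BillingService(orders)
print(billing.invoice("o-1"))  # 99.5
```

With boundaries drawn this way, mapping sensitive data becomes tractable: each record type has exactly one owning service, and every cross-service flow is visible in the interface.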
Secure your sensitive data
Your organization’s data is its most precious asset, and Data Security Posture Management (DSPM) solutions are the key to safeguarding it. These solutions are able to pinpoint sensitive data stored in the cloud, determine who is allowed to access it, and analyze the overall security posture of the data.
Shift left for privacy in a distributed world
Data security and privacy are rarely a priority for application developers. So it’s no surprise that application data can float around your cloud assets and on-premises devices uncatalogued and unmanaged. However, in 2023 you can’t afford to neglect data privacy laws and potential data security threats lurking in your code. Mapping the data flows in and out of your application is the first step to shifting privacy left and integrating privacy engineering, compliance, and code security in your CI/CD pipeline.