
Ulf Mattsson, CTO Protegrity: Database Security for Cloud and Outsourced Environments

February 2011 by Ulf Mattsson, CTO, Protegrity Corporation

One of the biggest concerns about the cloud is the threat of data being stolen. The cloud is a high-risk environment that reduces database administrators’ ability to control the flow of sensitive data, and in that environment database encryption keys become particularly vulnerable to exposure.

Data security in today’s business world is a classic Catch-22. We need to protect both data and the business processes that rely on that data, but in order to do so we need to move from a reactive, fear (or compliance) driven mode to a proactive data security plan.

Enterprises are currently on their own in deciding how to apply the principles of PCI data protection (e.g. segregation of regulated data) when reducing costs with virtualization or cloud computing, or when reducing PCI exposure with tokenization and encryption. Tokenization eliminates keys by replacing sensitive data with random tokens, mitigating the chance that thieves can do anything with the data if they get it. The transparency inherent in random tokens also reduces remediation costs for the applications, databases and other components where sensitive data lives.

It’s not easy to secure any relational database, let alone one as enormous and feature-rich as Oracle. The product’s massive and diverse deployments and legacy installations make it virtually impossible to identify and defend against every potential threat. What’s more, the product’s extensive feature set, while serving some extremely valuable business needs, means more room for trouble. The database’s connectivity to Web apps brings open-source and third-party variables into the mix, making the end-user organization even more vulnerable. And end users are inconsistent at best when it comes to patching.

Not too long ago, many security experts believed that the best way to defend data was to apply the strongest possible technological protections to all of the data, all of the time. While that plan may work perfectly in theory, in the real world of business this model creates unacceptable costs, performance and availability problems. What works from both IT and management standpoints? Risk-adjusted data security. Protecting data according to risk enables organizations to determine their most significant security exposures, target their budgets towards addressing the most critical issues, strengthen their security and compliance profile, and achieve the right balance between business needs and security demands.

Legacy Oracle Installations and the Threat Surface

Not every Oracle customer uses version 11 of the database. In fact, a majority of customers use older versions. Large percentages of Oracle customers still use versions 8 and 9. The database is functionally stable, so customers are not in a hurry to make the investment in upgrading. But these versions were designed and built before most people had ever heard of buffer overflow attacks or remote exploits. Many of the known security threats have been addressed with patches — provided they have been back-ported — but these older versions lack some of the advancements in password management, encryption, separation of DBA roles, and auditing.

Similarly, legacy applications — control systems, homegrown applications, mainframe connectors, SAP R3, and so on — that use older versions of Oracle don’t have security built in. They have interlocking dependencies between the application and databases, and rely on external security services to detect and protect against threats. The sheer number of features and options provided by Oracle creates a larger threat surface, with far more targets of opportunity for attackers. Oracle comes standard with many features that a lot of businesses rarely use. And just about every Oracle package has been compromised at one time or another. Because Oracle serves so many different use cases, there is no such thing as a secure default configuration. The default Oracle configuration is insecure, and users must take the time to remove features they don’t need, and to verify that user, platform, and application security measures are in place before the database goes into production.

Oracle Investing in New Technologies

Oracle has been working hard and investing in new technologies to address these concerns, particularly in light of the bad press about its lack of responsiveness to the database’s vulnerabilities. The result: a security makeover of sorts. The vendor now offers an extensive range of optional security tools that reaches far beyond the basic security items in the Oracle Database Vault.

The company’s Transparent Database Encryption, for instance, automatically encrypts information as it’s stored in the database; it’s designed so you can retrofit it easily into your existing Oracle setup. Likewise, integration of the Secerno firewall technology is designed to detect database attacks in real time; Oracle and Secerno together can now block SQL injection attacks and other types of queries that appear dubious, including remote code exploits. You’ll have to develop your own rules, policies and reports for each of the Oracle security tools.

Reaching Beyond a Point Solution

Some organizations will need to reach beyond a point solution for one database brand to address new threats to data across their IT environment. The nature of these breaches calls for a different security approach, particularly in outsourced environments. We need to understand how to deal with the threats conceptually before we jump into the more complex technical and operational issues that can confuse your choices. Transparent Encryption won’t protect sensitive content in the database if someone has access to it through legitimate credentials, but it will protect the information on storage and in archives, and it provides a significant advantage because it is deployed independently of your business applications. If you need to protect things like credit card numbers, where you need to restrict even an administrator’s ability to see them, this option isn’t for you. If you are only worried about lost media, stolen files, a compromised host platform, or insecure storage, then Transparent Encryption is a good option. Because it avoids mucking around with internal database structures and application logic, it often provides huge savings in time and investment over more involved techniques.

Transparent Database Encryption

The term Transparent Encryption is used by many vendors to describe the capability to encrypt data stored in the database without modification to the applications using that database. We also add "External" to the term to cover encryption applied outside the database engine, at the file or media level. If you have a database then you already have access controls that protect that data from unwanted viewing through database communications. The database itself screens queries and applications to make sure that only appropriate users or groups are permitted to examine and use data. The threat we want to address here is protecting data from physical loss or theft (including some forms of virtual theft) through means that are outside the scope of access controls. Keep in mind that even though the data is "in" a database, that database maintains permanent records on disk drives, with data being archived to many different types of low-cost, long-term storage. There are many ways for data to be accessed without credentials being supplied at all. These are cases where the database engine is bypassed altogether — for example, examination of data on backup tapes, disks, offline redo log files, transaction logs, or any other place data resides on storage media, as pointed out at http://securosis.com/tag/database .

Transparent/External Encryption for protecting database data uses the following techniques & technologies:

• Native Database Object (Transparent) Encryption: Database management systems, such as Oracle, Microsoft SQL Server, and IBM DB2, include capabilities to encrypt either internal database objects (tables and other structures) or the data stores (files). These encryption operations are managed from within the database, using native encryption functions built into the database, with keys stored internally by default. This is a good overall option in many scenarios as long as performance meets requirements. Depending on the platform, you may be able to offload key management to an external key management solution. The disadvantage is that it is specific to each database platform, and isn’t always available.

• External File/Folder Encryption: The database files are encrypted using an external (third-party) file/folder encryption tool (a minimal sketch of this approach appears just after this list). Assuming the encryption is configured properly, this protects the database files from unauthorized access on the server, and those files typically remain protected as they are backed up, copied, or moved. Keys should be stored off the server with no access granted to local accounts, which protects against the server being compromised by an external attacker. Some file encryption tools, such as Protegrity, Vormetric and BitArmor, can also restrict access to the protected files based on application. Thus only the database processes can access the file, and even if an attacker compromises the database’s user account, they will only be able to access the decrypted data through the database itself. File/folder encryption of the database files is a good option as long as performance is acceptable and keys can be managed externally. Any file/folder encryption tool supports this option (including Microsoft EFS), but performance needs to be tested since there is wide variation among the different tools. Remember that any replication or distribution of data handled from within the database won’t be protected unless you also encrypt those destinations.

• Media encryption: This includes full drive encryption or SAN encryption; the entire storage media is encrypted, and thus the database files are protected. Depending on the method used and the specifics of your environment, this may or may not provide protection for the data as it moves to other data stores, including archival (tape) storage. For example, depending on your backup agent, you may be backing up the unencrypted files or the encrypted storage blocks. This is best suited for high performance databases where the primary concern is physical loss of the media (e.g., a database on a managed SAN where the service provider handles failed drives potentially containing sensitive data). Any media encryption product supports this option.
Which option to choose depends on your performance requirements, threat model, existing architecture, and security requirements. Unless you have a high-performance system that exceeds the capabilities of file/folder encryption, we recommend you look there first. If you are managing heterogeneous databases, you will likely look at a third-party product over native encryption. In both cases, it’s very important to use external key management and not allow access by any local accounts.
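
To make the file/folder option more concrete, below is a minimal sketch in Python using the cryptography library (Fernet) to encrypt a database backup file with a key loaded from a location off the database host. The paths and the file-based key store are assumptions for illustration; a real deployment would call a proper key management service and is not tied to any particular vendor’s tool.

# Minimal sketch of external file-level encryption for a database backup file.
# Assumptions: the symmetric key lives outside the database host (here a file
# standing in for a key management service) and the paths are illustrative.
from cryptography.fernet import Fernet

def load_key_from_external_manager(key_path: str) -> bytes:
    # In practice this would be a call to a key management service; local
    # database accounts should have no access to this location.
    with open(key_path, "rb") as f:
        return f.read()  # a Fernet key: 32 url-safe base64-encoded bytes

def encrypt_backup(plain_path: str, encrypted_path: str, key: bytes) -> None:
    fernet = Fernet(key)
    with open(plain_path, "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open(encrypted_path, "wb") as f:
        f.write(ciphertext)

if __name__ == "__main__":
    key = load_key_from_external_manager("/mnt/keyserver/db_backup.key")
    encrypt_backup("/backups/orders.dmp", "/backups/orders.dmp.enc", key)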

Data Breaches and the Cloud

The Verizon Business RISK team, in cooperation with the United States Secret Service (USSS), publishes an annual Data Breach Investigations Report. The purpose of the report is to study the common elements and characteristics found in data breaches. Over six years, the combined Verizon Business RISK team and USSS dataset has grown to span more than 900 breaches and over 900 million compromised records.

As in previous years, the 2010 report showed that nearly all data was breached from servers and online applications: 98% of compromised data came from servers, with hacking and malware the most dominant attack methods. Financial services, hospitality, and retail comprised the “Big Three” industries, accounting for 33%, 23%, and 15% of all data breaches, respectively. Targeting of financial organizations is hardly shocking, as financial records represent the nearest approximation to actual cash for the criminal. An astounding 94% of all compromised records (note: records differ from breaches) in 2009 were attributed to financial services.

Financial firms hold large volumes of sensitive consumer data for long periods of time and therefore fall under very stringent regulatory requirements: they must submit remediation validation records if data is found to be vulnerable, as well as regular compliance reports proving that they are adequately securing the data they have access to. Despite being under such stringent compliance standards, 79% of financial firms whose data had been breached failed to meet PCI DSS compliance, the minimum security measure. Thus, organizations have been searching for a solution that protects the business from endpoint to endpoint while efficiently meeting compliance.

In addition to the constantly evolving security threats that must be mitigated, enterprises are quickly adopting cloud computing practices that add a new element to the data security conundrum. According to Gartner forecasts, worldwide revenue from use of cloud services will increase nearly 17% this year to $68.3 billion and will approach $150 billion in 2014, a 20.5% compound annual growth rate over the next five years.

While its growing popularity is undeniable, the cloud also has serious data security issues. In the cloud, data moves at a faster pace and frees up on-premise network bandwidth, which is what makes it attractive. Unfortunately, those performing the data breaches recognize the cloud’s vulnerabilities and are quickly capitalizing on them. At DEFCON 2010, one of the largest hacker conferences in the world, 100 attendees who had already hacked, or had tried to hack, the cloud participated in an in-depth survey; 96% of the participants believed that the cloud would open up more hacking opportunities for them. Given its rapid adoption rate, enterprises need a solution that will secure the cloud today and tomorrow.

Encryption, First Generation and Next Generation Tokenization

Recognizing the vulnerabilities that the cloud faces, we must establish a way to secure data that does not hinder the benefits of the cloud, including remote data access from anywhere with an Internet connection, quick content delivery, easily sharable content, and better version control. Two options long used in on-premise data security are now at the center of the debate over how best to secure data in the cloud: encryption and tokenization. While there is no silver bullet for the data security and compliance woes of large enterprise organizations, all eyes are on tokenization right now.

The difference between end-to-end encryption and tokenization

End-to-end encryption encrypts sensitive data throughout most of its lifecycle, from capture to disposal, providing strong protection of individual data fields. While it is a practical approach on the surface, encryption keys remain vulnerable to exposure, which can be very dangerous in the riskier cloud environment. Encryption also lacks versatility: applications and databases must be able to handle the specific data type and length of the encrypted value, and if they are incompatible with it, the data is rendered unreadable.

Tokenization solves many of these problems. At the basic level, tokenization is different from encryption in that it is based on randomness, not on a mathematical formula, meaning it eliminates keys by replacing sensitive data with random tokens to mitigate the chance that thieves can do anything with the data if they get it. The token cannot be discerned or exploited since the only way to get back to the original value is to reference the lookup table that connects the token with the original encrypted value. There is no formula, only a lookup.

A token by definition looks like the original value in data type and length. These properties enable it to travel inside applications, databases, and other components without modifications, resulting in greatly increased transparency. This also reduces remediation costs for applications, databases and other components where sensitive data lives, because the tokenized data matches the data type and length of the original.
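
The core mechanics can be sketched in a few lines of Python. The in-memory dictionary below stands in for whatever protected token vault or token server a real deployment would use, and is an assumption for illustration only; the point is that the token is random, format-preserving, and recoverable only through the lookup table.

# Sketch of random tokenization: replace a sensitive value with a random
# token of the same type and length, and record the mapping in a lookup
# table. The in-memory dicts stand in for a real, protected token vault.
import secrets

token_vault = {}    # token -> original value
reverse_index = {}  # original value -> token (so repeat values reuse a token)

def tokenize(value: str) -> str:
    if value in reverse_index:
        return reverse_index[value]
    while True:
        # Random digits, same length as the original, so downstream
        # applications and database columns need no schema changes.
        token = "".join(secrets.choice("0123456789") for _ in range(len(value)))
        if token not in token_vault and token != value:
            break
    token_vault[token] = value
    reverse_index[value] = token
    return token

def detokenize(token: str) -> str:
    # The only way back to the original is the lookup table; there is no
    # formula or key that could be stolen and applied offline.
    return token_vault[token]

card_token = tokenize("4111111111111111")
print(card_token, detokenize(card_token) == "4111111111111111")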

First generation tokenization

There are compelling arguments that question the validity of this emerging technology, like those explained in Ray Zadjmool’s article “Are Tokenization and Data Field Encryption Good for Business?” that appeared in November’s ISSA Journal. Zadjmool pointed out that “some early adopters are quietly discarding their tokenization and data field encryption strategies and returning to more traditional card processing integrations.” He also mentioned that there are no standards to regulate and define exactly what is and is not tokenization. What he failed to do is acknowledge that there are different forms of tokenization. It is no surprise to me that companies that have tried first generation methods have not seen the results they were promised. Here’s why.

Currently there are two forms of tokenization available: “first generation” and “next generation.” First generation tokenization comes in two flavors: dynamic and static.

Dynamic first generation tokenization is defined by large lookup tables that assign a token value to the original encrypted sensitive data. These tables grow dynamically as they accept new, un-tokenized sensitive data. Tokens, encrypted sensitive data and other fields containing ‘administrative’ data expand these tables, increasing their already large footprints.

A variation of first generation tokenization is the pre-populated token lookup table – static first generation. This approach attempts to reduce the overhead of the tokenization process by pre-populating lookup tables with the anticipated combinations of the original sensitive data, thereby eliminating the runtime token-generation step. But because the token lookup tables are pre-populated, they also carry a large footprint.

While these approaches offer great promise, they also introduce great challenges:

• Latency: Large token tables are not mobile. The need to use tokenization throughout the enterprise will introduce latency and thus poor performance and poor scalability.

• Replication: Dynamic token tables must always be synchronized, an expensive and complex process that may eventually lead to collisions. Complex replication requirements impact the ability to scale performance to meet business needs and to deliver high availability.

• Practical limitation on the number of data categories that can be tokenized: Consider the large lookup tables that would be needed to tokenize credit cards for a merchant. Now consider the impact of adding social security numbers, e-mail addresses and any other fields that may be deemed sensitive. The use of dynamic or static first generation tokenization quickly turns into an impractical solution.
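
A rough back-of-envelope calculation, sketched below, shows why pre-populated tables become unwieldy. The figures are assumptions for illustration only (one six-digit BIN range, roughly 100 bytes per row once the token, the encrypted value and administrative fields are counted), not measurements of any product.

# Back-of-envelope size of a pre-populated (static) token lookup table.
# Assumptions, for illustration only: in a 16-digit card number the first
# 6 digits (the BIN) are fixed and the last digit is a check digit, leaving
# 9 free digits per BIN range, and each row takes ~100 bytes for the token,
# the encrypted value and administrative fields.
pans_per_bin = 10 ** 9   # 9 free digits per BIN range
bytes_per_row = 100
bin_ranges = 20          # a merchant typically sees many BIN ranges
table_size_gb = pans_per_bin * bytes_per_row * bin_ranges / 10 ** 9
print(f"~{table_size_gb:,.0f} GB before adding SSNs, e-mail addresses, ...")
# That is roughly 2,000 GB; each additional data category adds another table.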

Next generation tokenization

Like first generation tokenization, next generation tokenization is built around the same concept of replacing sensitive data with random tokens. However, a key differentiator of next generation tokenization is that it employs small-footprint token servers, which free the process from many of the challenges faced by first generation tokenization.

Here are the key features of next generation tokenization:

• Distributed: Token servers with small footprints enable the distribution of the tokenization process so that token operations can be executed closer to the data. Thus, latency is eliminated or greatly reduced, depending on the deployment approach used.

• Scalable: The smaller footprint also enables farms of token servers, built on inexpensive commodity hardware, to deliver whatever scaling the business requires, without the need for complex or expensive replication.

• Versatile: Any number of different data categories, ranging from credit card numbers to medical records, can be tokenized without the penalty of an increased footprint, and more data types can benefit from the transparent properties that tokens offer.

• Increased performance: Users benchmarked next generation tokenization at approximately 200,000 tokens per second – performance metrics that are hard to achieve with first generation tokenization or encryption.

When next generation tokenization is applied strategically to enterprise applications, confidential data management and PCI audit costs are reduced and the risk of a security breach is minimized. Because authentic primary account numbers (PANs) are required only at authorization and settlement, security is immediately strengthened by the decrease in potential targets for would-be attackers. Simultaneously, PCI compliance costs are significantly decreased because tokenization brings data out of scope and eliminates the annual re-encryption that PCI requires with encryption strategies. Because they all need the high availability, high performance, scalability and quick response times it offers, next generation tokenization is well suited to the financial, retail, healthcare and telecommunications industries.
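
To make the “PAN only at authorization and settlement” point concrete, here is a minimal sketch of how an order pipeline might talk to a token service. The REST endpoints and field names are hypothetical and are used only to illustrate the pattern; they do not describe any specific vendor’s API.

# Hypothetical token-service client: the PAN is exchanged for a token at
# capture, downstream systems store only the token, and the real PAN is
# requested again only at settlement. Endpoints and field names are
# invented for illustration.
import requests

TOKEN_SERVICE = "https://tokens.example.internal"  # hypothetical endpoint

def capture_order(pan: str, amount: int) -> dict:
    resp = requests.post(f"{TOKEN_SERVICE}/tokenize", json={"value": pan}, timeout=5)
    resp.raise_for_status()
    token = resp.json()["token"]
    # Only the token is persisted; the PAN never reaches the order database,
    # reports or analytics systems.
    return {"card_token": token, "amount": amount}

def settle_order(order: dict, send_to_acquirer) -> None:
    resp = requests.post(f"{TOKEN_SERVICE}/detokenize",
                         json={"token": order["card_token"]}, timeout=5)
    resp.raise_for_status()
    pan = resp.json()["value"]
    # The PAN is used only to build the settlement message, then discarded.
    send_to_acquirer(pan, order["amount"])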

As Zadjmool pointed out, standards have yet to be developed for tokenization, but the PCI Security Standards Council is in the process of creating guidance and validation documents to help provide clarity on this emerging technology. In the meantime, Visa’s “Best Practices for Tokenization” Version 1.0, published on July 14, can provide some clarity until the Council releases its own standards. But be careful: the draft implies a “one-size-fits-all” architecture that is open enough to allow botched implementations, including encryption masquerading as tokenization without adequate security requirements, whereas random-based tokenization is the only true end-to-end solution.

Next generation tokenization examples

Here are a few examples of next generation tokenization in practice. One major retailer recently migrated from a traditional tokenization approach to a next generation approach to meet operational performance needs; its tokenization solution now performs 200,000 tokenization operations per second on a single small commodity server, which meets its current and planned needs for tokenizing cardholder data and PII/PHI data across the enterprise. Another major retailer recently migrated from an encryption solution to next generation tokenization to reduce cost. Finally, a major US organization, encouraged by the scalability and small footprint of its new tokenization approach, is planning to put a hardened tokenization server at several hundred of its sites to meet performance and availability requirements. This new distributed tokenization is a fully distributed solution that does not require replication between servers and completely eliminates the severe issue of token value collisions.

Outsourcing PCI Compliance

A few service providers can help you outsource PCI compliance; some of them are also active members of the PCI Security Standards Council. Their physical security, policies and procedures should meet or surpass PCI requirements and be audited annually by a Qualified Security Assessor (QSA). The Payment Card Industry Data Security Standard (PCI DSS) is an evolving set of security requirements designed for entities that store, process, or transmit cardholder data. These entities must maintain a secure Cardholder Data Environment (CDE). Compliance with PCI DSS is a sound business practice that also serves to keep sensitive data secure. As a business grows and conducts an increasing number of annual credit card transactions, it is subject to increasingly complex compliance requirements. PCI validation requirements are currently organized into four levels, which are explained in detail on Visa’s web site (http://usa.visa.com/merchants/risk_management/cisp_merchants.html).

Achieving compliance in-house requires a significant level of expertise, and maintaining compliance quickly becomes resource intensive. As a result, outsourcing PCI compliance solutions to managed service providers has become a popular business trend in recent years. An increasing number of businesses outside the payment card industry are also deploying PCI DSS solutions across the enterprise to meet a range of industry requirements.

However, not all PCI DSS service providers deliver a comprehensive compliance solution. In fact, many solutions fail to meet a majority of compliance requirements. For example, some third-party service providers offer services that meet basic security requirements such as ASV scans and antivirus, but their compliance offerings lack advanced security controls such as Web Application Firewalls (WAF), log management, and two-factor authentication.

A comprehensive solution is more than just hardware or software. Achieving and maintaining PCI DSS compliance requires specialized skills and experience that only a few providers can deliver. The right provider will also offer the resources and expertise necessary to accommodate company growth and scale a PCI DSS solution. When selecting a certified service provider, look for one with a deep understanding of the specific requirements of PCI DSS, demonstrated expertise in secure network architecture (including proper network segmentation to reduce the number of system components considered in-scope), and experience in security service design and implementation.

It is essential that the service provider deliver a transparent service agreement which clearly defines both the client’s and the provider’s responsibilities. This ensures that all aspects of the standard are addressed and there are no gaps in responsibility. The organization or merchant that is storing, processing, or transmitting the cardholder data is ultimately liable for any gaps in compliance, which makes it imperative to clearly delineate who is responsible for each aspect of meeting the PCI DSS standard. Companies often underestimate the time, resources, and efforts required to continuously and rigorously maintain compliance in-house. Security systems require significant capital investments in hardware and software and are costly to implement, maintain, and monitor. Outsourcing to the right provider enables businesses to achieve and maintain compliance while controlling costs.
Other companies turn to outsourcing because their current business volume has outgrown existing compliance resources. Merchants are subject to increasingly complex compliance requirements as the annual number of credit card transactions processed grows, and meeting those requirements becomes extremely resource intensive, which makes outsourcing an attractive option.

Managed services providers offer a range of PCI solutions and services for businesses seeking to outsource compliance. These providers offer a deeper level of expertise and experience in PCI compliance than most businesses typically have in-house. Depending on their clientele, a service provider may have experience working with Level 3 or Level 4 validation requirements only, or they may have a stronger range of experience from serving clients who are subject to Level 1 or Level 2 audits. Expertise and security credentials will also vary among providers. While most service providers can deliver basic security controls, many lack the specific knowledge or advanced expertise required to architect and properly manage a complex PCI DSS compliant solution.

PCI solution offerings will vary depending on the service provider. Some just offer the tools necessary to meet a few aspects of compliance rather than a complete, fully managed solution that achieves and maintains PCI compliance. These differences are not always readily apparent, but often come to light upon close examination of the provider’s service agreement. If the service provider fails to clearly define which PCI requirements are being met and who is responsible for meeting them (the provider and/or the client), a business may falsely believe it is compliant simply because it purchased a PCI toolbox. Although automated tools are one step toward meeting compliance, several PCI DSS requirements demand careful vigilance and analysis to continuously interpret logged data. One example of a service provider for PCI data can be found at http://www.datapipe.com/solutions-compliance-pci-dss.htm .

Outsourcing Tokenization

Standard tokenization is available directly from gateway payment providers. But utilizing standard tokenization still requires that credit cards first be handled and stored on the merchant’s infrastructure prior to being tokenized. Newer integration options and next generation tokenization handle the token exchange at the edge, before any data has entered the merchant’s infrastructure. This newer approach plays a key role in helping to reduce, and potentially remove, ecommerce and online transaction activity from PCI scope. Other tokenization options for off-loading PCI transactions include re-directing the traffic to externally hosted and processed web pages, or inserting third-party fields into existing ecommerce workflows. Both methods require merchants to outsource this business-critical transaction to uncontrolled, non-customized, and unreliable infrastructures. With newer approaches to integrating tokenization, customers do not need workflow changes, externally hosted sites, or form fields.

How to Develop and Deploy a Risk-adjusted Data Security Plan

Protecting data according to risk enables organizations to determine their most significant security exposures, target their budgets towards addressing the most critical issues, strengthen their security and compliance profile, and achieve the right balance between business needs and security demands. Other issues that risk-adjusted security addresses are the unnecessary expenses, availability problems and system performance lags that result when data is over-protected. And cloud-based technologies, mobile devices and the distributed enterprise require a risk-mitigation approach to security, focused on securing mission critical data, rather than the now-unachievable ‘protect all the data at all costs’ model of years past.

Step 1: Know Your Data

Begin by determining the risk profile of all relevant data collected and stored by the enterprise, and then classify that data according to its designated risk level. Data that is resalable for a profit — typically financial, personally identifiable and confidential information — is high risk data and requires the most rigorous protection; other data protection levels should be determined according to the value of the information to your organization and the anticipated cost of its exposure — would your business be impacted? Would it be difficult to manage media coverage and public response to the breach?

There are several models that a business can use to classify data. Larger enterprises will likely want to rely on policy-driven automated tools. Smaller businesses can use the simplest model: assign a numeric value for each class of data; high risk = 5, low risk = 1.
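
As a small sketch of the simple numeric model just described, the classes and scores below are examples only; each organization would substitute its own categories and thresholds.

# Simple data classification model: assign a numeric risk score per class
# of data (high risk = 5, low risk = 1). Classes and scores are examples.
RISK_SCORES = {
    "payment_card": 5,      # resalable, regulated (PCI DSS)
    "ssn": 5,               # personally identifiable, high resale value
    "health_record": 4,
    "email_address": 3,
    "internal_report": 2,
    "public_marketing": 1,
}

def risk_score(data_class: str) -> int:
    # Unknown classes default to high risk until someone reviews them.
    return RISK_SCORES.get(data_class, 5)

def protection_required(data_class: str) -> str:
    score = risk_score(data_class)
    if score >= 4:
        return "tokenization or end-to-end encryption"
    if score >= 2:
        return "access controls and monitoring"
    return "baseline controls"

print(protection_required("payment_card"))  # tokenization or end-to-end encryption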

Step 2: Find Your Data

Data flows through a company, into and out of numerous applications and systems. A complete understanding of the high risk data flow is essential to the risk-adjusted process. You can’t protect data if you don’t know where it is, and assigned risk levels will change depending on how data is being collected, used and stored. High risk data residing in places where many people have access is obviously data that needs the strongest possible protection.

Locate all of the places that data resides, including applications, databases, files, and all the systems that connect these destinations (such as data transfers across internal and external networks), and determine where the highest risks reside and who has, or can gain, access to the data (see “Understand Your Enemy” below).
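
Simple discovery tooling can help with this step. The sketch below walks a directory tree looking for strings that resemble card numbers and filters them with the Luhn check; the scan root is an assumption for illustration, and a real discovery effort would also cover databases, backups, logs and file shares.

# Minimal data discovery sketch: scan text files for candidate card numbers
# (13-16 digit strings) and keep only those that pass the Luhn check.
import os
import re

CANDIDATE = re.compile(r"\b\d{13,16}\b")

def luhn_valid(number: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

def scan(root: str) -> None:
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            hits = [m for m in CANDIDATE.findall(text) if luhn_valid(m)]
            if hits:
                print(f"{path}: {len(hits)} possible card number(s)")

scan("/var/exports")  # illustrative path; repeat for every suspect location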

Other areas to examine for data stores include your outsourcing partnerships as well as data that is being used for nonproduction purposes such as third-party marketing analysis or in test and engineering environments. It’s not uncommon for organizations to invest in protecting production systems and data centers yet have live data sitting unprotected on the systems of application developers and other outsourced parties. If live production data is being used in a less controlled environment there has to be attention paid to regulatory compliance and security threats. Here, too, data de-identification technologies like Format-Controlling Encryption and tokenization can help.

Step 3: Understand Your Enemy

The next step is conducting an end-to-end risk analysis on the high risk data flow to identify the highest risk areas in the enterprise ecosystem and the points where data might be exposed to unauthorized users.

Currently web services, databases and data-in-transit are at high risk. The type of asset compromised most frequently is online data. Exploiting programming code vulnerabilities, subverting authorized user credentials, and malware targeting the application layer and data (rather than the operating system) are the attack methods being used most frequently. These vectors change, so keep an eye on security news sites to stay abreast of current threats.

Most data breaches are caused by external sources, but breaches attributed to insiders, though fewer in number, typically have more impact than those caused by outsiders. Nearly three-quarters of the breaches examined in the Verizon report were instigated by external sources. Unauthorized access via default credentials (usually third-party remote access) and SQL injection (against web applications) were the top types of hacking; access to a network was often followed by malware being planted on the system.

Step 4: Choose Your Defenses

Look for multi-tasking solutions that protect data according to its risk classification level, support business processes, and can change with the environment, so that you can easily add new defenses for future threats and integrate with other systems as necessary.

High risk data is best secured using end-to-end encryption or tokenization of individual data fields. Tokenization removes sensitive data from the information flow at the earliest possible point in the process, replacing it with a token that acts as an alias for the protected data. By associating original data with an alias, high-risk data can systematically be removed and protected from malicious hackers over its lifecycle under a fully auditable and controllable process. This practical protection method is perfectly suited for securing high risk data like payment card information and social security numbers.

Newer solutions provide targeted protection for data in use and don’t interfere with business processes. For example, Data Format Controlling Encryption retains the original format, on a character-by-character basis, of encrypted data, putting an end to the data re-formatting and database schema changes required by other encryption techniques. It’s especially well suited to protecting data that’s being used for testing or development in a less controlled environment. Partial encryption can then be applied to encrypt selected parts of a sensitive data field based on policy rules. Policy-Based Masking provides the ability to mask selected parts of a sensitive data field. Implemented at the database level rather than the application level, policy-based Data Masking provides a consistent level of security across the enterprise without interfering with business operations, and it greatly simplifies data security management chores.
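
As a small illustration of policy-based masking, the sketch below reveals only as much of a card number as a role’s policy allows; the roles and rules are invented for the example and are not drawn from any specific product.

# Sketch of policy-based masking: how much of a card number a user sees
# depends on a policy keyed by role. Roles and rules are illustrative.
MASK_POLICY = {
    "customer_service": {"show_last": 4},  # only the last four digits
    "fraud_analyst": {"show_last": 8},
    "marketing": {"show_last": 0},         # fully masked
}

def mask_pan(pan: str, role: str) -> str:
    show = MASK_POLICY.get(role, {"show_last": 0})["show_last"]
    if show == 0:
        return "*" * len(pan)
    return "*" * (len(pan) - show) + pan[-show:]

print(mask_pan("4111111111111111", "customer_service"))  # ************1111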

Step 5: Deployment

Risk-Adjusted data protection enables enterprises to stage their security roll-out. Focus your initial efforts on hardening the areas that handle critical data and are a high-risk target for attacks. Then continue to work your way down the risk-prioritized list, securing less critical data and systems with appropriate levels of protection.

Security is an ongoing process not a series of events. The level of protection required by data may change according to how it is being collected, transmitted, used and stored. Reevaluate risk levels annually and on an as-needed basis if business processes change.

Step 6: Crunch the Numbers

Risk-adjusted data security plans are cost effective. Among the typical benefits of a risk-adjusted plan is the elimination of the all too common and costly triage security model which is ineffective whether you’re triaging based on compliance needs or the security threat of the moment. Replacing triage with a well thought-out logical plan that takes into account long-range costs and benefits enables enterprises to target their budgets toward addressing the most critical issues. By switching focus to a holistic view rather than the all too common security silo methodology, an enterprise will also naturally move away from deploying a series of point solutions at each protection point, which results in redundant costs, invariably leaves holes in the process, and introduces complexity that will ultimately cause significant and costly rework. Additionally, an understanding of where data resides usually results in a project to reduce the number of places where sensitive data is stored. Once the number of protection points has been reduced, a project to encrypt the remaining sensitive data with a comprehensive data protection solution provides the best protection while also giving the business the flexibility it needs.

Conclusion

A holistic solution for data security should be based on centralized data security management that protects sensitive information throughout the entire flow of data across the enterprise, from acquisition to deletion. While no technology can guarantee 100% security, tokenization and encryption are proven to dramatically reduce the risk of credit card data loss and identity theft. Next generation tokenization in particular has the potential to help businesses protect sensitive data in the cloud in a much more efficient and scalable manner, allowing them to lower the costs associated with compliance in ways never before imagined.


