Ulf Mattsson: Demystifying Data Security On The Mainframe

November 2009 by Marc Jacob

Securing critical data cost-effectively and without disrupting essential business processes is always a challenge, especially in high-volume environments. In this Interview with the CTO, Protegrity’s Chief Technology Officer Ulf Mattsson discusses tried and true best practices as well as new technologies — such as Format Controlling Encryption, Data Masking, Tokenization and Database Activity Monitoring — to protect data on the mainframe and successfully manage data security across the enterprise. Mattsson also details a multilayered, risk-adjusted approach that enables businesses to choose the right procedures and technologies for their data security needs and provides criteria that enterprises can use when evaluating solutions.

Are mainframes still in wide use in the business world? Haven’t they mostly been phased out?

Ulf Mattsson: The death of the mainframe has been over-reported. According to a September 2009 report from IDC, many enterprises that use mainframes aren’t planning to replace them anytime soon — in fact "nearly one-half of respondents indicated they plan to increase annual spending on mainframe hardware and software." Mainframes are very reliable and easy to manage, offer high capacity, take up a smaller footprint, and are light on cooling. They satisfy all of the factors that make a system cost-effective from a total cost of ownership perspective.
And since most of the businesses that do use mainframes are large enterprises, these high-capacity machines are typically home to particularly sensitive data, and a lot of it — smaller systems process a couple of thousand transactions per second while mainframes handle millions. Plus they have huge data stores, with many data warehouse applications hosting a ton of information.

What security issues should businesses be aware of when using mainframes?

Ulf Mattsson: In the old days, mainframes tended to be used within very secure environments. You used to have a dumb terminal connecting into the mainframe so that the mainframe could control the access to the data stores. It knew the user and where the user came from.
In the open networks of our time, the mainframe cannot really trust the user. The use of clients has evolved: first direct-connect terminals, then client-server, and now typically web-based systems. Today you have a service-oriented architecture where you basically have a chain of computers touching the request before it hits the back-end system. The user could be several tiers away.
So, the back-end system, the mainframe, is not really sure who the real users are and where they identified and authenticated themselves. In that highly distributed world, when you open up access to mainframes’ data, the security situation gets really scary.
Another issue is that it is very hard to find people who know mainframes inside and out these days. The mainframe people are mostly retired now. So it is hard to find people who know the applications, and it’s hard to find people who know the mainframe security products. Learning the mainframe environment typically takes ten or fifteen years. From a security point of view you need to know the operating system, the file system, the database system, the security system, and the storage management system — a lot of different products, each one complicated.

What sorts of threats are the most prevalent?

Ulf Mattsson: Apart from the issues described above, attacks are getting more and more sophisticated. Attempts to steal information have largely replaced the sabotage, denial-of-service and defacing attacks we saw a couple of years ago. Now you have organized crime with very skilled hackers, and they are inventing more attack vectors than anyone can patch. Mainframes are a very fruitful target for these professionals. They typically tunnel in under a protocol to reach them. So the mainframe is looking at a protocol like FTP or HTTP and basically saying, “Oh, this looks nice. I approve this.” It could be a SQL injection attack and the mainframe can’t do anything about it.

How can an organization properly address this convergence of security problems facing their mainframes?

Ulf Mattsson: I think in general you need to put infrastructure functions in place, because you cannot rely on fixing the mainframe applications. That is not feasible for many reasons, including that you can’t find the people to fix them. So you want to put security functions “outside”, on your databases, your web servers and so on. Adding additional security layers is very effective and can provide a cost-effective, largely transparent approach. This additional layer will fill the gaps where authentication, for example, will fail. If someone steals a password and can bypass the authentication system, then you want to have another security layer right next to the data. So adding deeper security lines and defense in depth is critical.
I also think we need to start inspecting things that security systems did not inspect before, such as looking at the data level, at who is trying to read the credit card column in your databases, for example. I also strongly believe in looking at how much data is being accessed per user. If a user ID is compromised, you should have a layer of defense that prevents the attacker from getting too much data too quickly. You slow the hacker down and you can also detect the intrusions through this type of data usage control.
Protegrity has patents on behavior-based intrusion response technology that manages exactly this type of behavior, controlling the number of records per hour that a user can read. It is a very natural thing in the physical world. You go to a pharmacy with a prescription that will give you so many pills, and you will not get more. You go to an ATM and you can only get $400 at a time. I think that we need to apply the same sort of general thinking that we apply in physical security to IT security as well; it’s the missing layer in data-driven security.
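The idea of capping how many records a user can read per hour can be sketched as a rolling-window counter. This is a toy illustration only; the class, method and parameter names are assumptions of mine, not Protegrity’s actual API:

```python
import time
from collections import defaultdict, deque

class RecordRateLimiter:
    """Toy sketch of behavior-based intrusion response: cap the number
    of records each user may read per rolling hour. A real product would
    also alert on and audit denied reads."""

    def __init__(self, max_records_per_hour):
        self.limit = max_records_per_hour
        self.reads = defaultdict(deque)  # user_id -> timestamps of recent reads

    def allow_read(self, user_id, n_records=1, now=None):
        now = time.time() if now is None else now
        window = self.reads[user_id]
        # Drop reads that fell out of the one-hour rolling window.
        while window and window[0] <= now - 3600:
            window.popleft()
        if len(window) + n_records > self.limit:
            return False  # deny: the user is reading data too quickly
        window.extend([now] * n_records)
        return True
```

A compromised user ID can then still read data, but only slowly, which both limits the damage and gives monitoring a chance to flag the anomaly.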

Looking at the threats chart (above) it appears that insider attacks are also a concern?

Ulf Mattsson : For many years external security threats have received a great deal of attention while the potential for a trusted individual with access to steal or modify data was often overlooked. That’s changing, but too many companies still focus their attention and the bulk of their security budget on thwarting outsider attacks. A recent study by the Ponemon Institute found that nearly 60% of U.S. businesses and government agencies still can’t adequately deal with insider threats to their network, and 58% rely on manual controls to audit and control user access to critical systems and databases. It should be obvious that protecting the network perimeter isn’t enough anymore – in fact it was never enough. Without an encryption plan in place, teamed with access controls and good management policies, we aren’t adequately protecting the critical information stored in enterprise databases.
While viruses, worms and hack attacks conducted by outsiders are serious, attacks perpetrated by people with trusted insider status — employees, ex-employees, contractors and business partners — pose a far greater threat to organizations in terms of potential cost per occurrence and total potential cost than attacks mounted from outside. In general, users and computers accessing resources on the company’s local area network are deemed trusted. Sadly, some insiders can and will take advantage of trust and physical access. Typically we do not firmly restrict their activities, because an attempt to control these trusted users too closely would impede the free flow of business information. Unfortunately, once an attacker has physical control of an asset, that asset can no longer be protected from the attacker.

What best practice approaches can a business use to defend the mainframe?

Ulf Mattsson : No single approach to securing systems will be able to defeat each and every new and innovative intrusion attempt. Security experts have long advocated deploying layers of protection for the same reason that every urban dweller uses multiple locks to secure an apartment. If one lock fails, another will likely withstand the attack – or at least slow down the criminal, who is likely to give up and ransack a more vulnerable target. Many crimes, including network attacks, are crimes of opportunity – easy in and easy out is the thief’s preferred modus operandi.
But while hackers much prefer easy targets, they will work very hard to steal important data. And the more we create concentrations of valuable data, the more worthwhile it is for malware manufacturers to put the effort into customizing a “campaign” to go after specific targets. So, if you are charged with securing a mainframe that processes or stores critical data (or you partner with or outsource to a business that is a major target), you need to ensure that the level of due diligence you apply to data security equals or exceeds that expended by malicious hackers.

How does an enterprise protect their data without interfering with necessary business processes?

Ulf Mattsson: Risk management works well here, enabling you to protect data according to its value to the organization and to others, and according to the risk of it being stolen or otherwise misused.

Data that is resalable for a profit — typically financial, personally identifiable and confidential information — is high-risk data and requires the most rigorous protection; other data protection levels should be determined according to the data’s value to your organization and the anticipated cost of its exposure — would business processes be impacted? Would it be difficult to manage media coverage and public response to the breach? You also need to consider data volumes, connectivity, physical security, HR aspects, geography, compensating controls — and more.
Once you have data classified you can look at enterprise-class end-to-end encryption solutions that offer a complete set of protection options, such as tokenization, Format Controlling Encryption, and Database Activity Monitoring.

Can you tell me more about these newer technologies?

Ulf Mattsson: Format Controlling Encryption preserves the format of the original, unencrypted data set. Data encrypted using traditional methods has a different format than it did before encryption: an encrypted Visa credit card number, for example, no longer consists of 16 numeric digits. So Format Controlling Encryption can make the process of retrofitting encryption into legacy application environments simpler, and it provides protection while the data fields are in use or in transit. This aspect is similar to the tokenization approach (where an alias of the original data points to real data, or to a secondary database from which the real data can be derived): you’re moving the problem to another place where it may be more cost-effective to address.
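The format-preserving idea can be illustrated with a toy Feistel construction over the digits of a 16-digit card number: the ciphertext is again 16 digits, so legacy schemas and validation rules keep working. This is a sketch of the concept only, not a vetted format-preserving mode (such as NIST’s FF1), and must never be used for real card data:

```python
import hmac, hashlib

def _round(key, r, half):
    # Keyed round function: HMAC of (round number, right half), reduced mod 10**8.
    msg = f"{r}:{half}".encode()
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big") % 10**8

def fpe_encrypt(key, pan, rounds=8):
    """Toy Feistel-based format-preserving encryption of a 16-digit PAN.
    Output is also 16 digits. Illustrative only -- NOT a secure FPE mode."""
    left, right = int(pan[:8]), int(pan[8:])
    for r in range(rounds):
        left, right = right, (left + _round(key, r, right)) % 10**8
    return f"{left:08d}{right:08d}"

def fpe_decrypt(key, ct, rounds=8):
    """Invert the Feistel rounds to recover the original 16-digit value."""
    left, right = int(ct[:8]), int(ct[8:])
    for r in reversed(range(rounds)):
        left, right = (right - _round(key, r, left)) % 10**8, left
    return f"{left:08d}{right:08d}"
```

Because the ciphertext still looks like a card number, it can flow through applications, files and interfaces that were never designed for encrypted data.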

Businesses are increasingly turning to tokenization to secure the highest-risk data, such as credit card and Social Security numbers. Tokenization removes sensitive data from the information flow at the earliest possible point in the process, replacing it with a token that acts as an alias for the protected data. Associating original data with an alias allows high-risk data to be protected from malicious hackers over its lifecycle under a fully auditable and controllable process.

An enterprise tokenization strategy also reduces the overall risk to the enterprise that results from the ability of many persons to have access to confidential data, often beyond what can be justified by business needs. Tokenization, applied strategically to business data and applications, can also reduce ongoing confidential data management costs as well as ease compliance with data protection regulations.
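A minimal sketch of a token vault illustrates the alias idea. The in-memory dictionaries here stand in for what would, in practice, be a hardened, access-controlled and audited data store; the class and method names are mine, not any vendor’s API:

```python
import secrets

class TokenVault:
    """Minimal tokenization sketch: sensitive values are swapped for random
    same-length digit tokens; the real values live only inside the vault."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value):
        # Reuse the existing token so one value always maps to one alias.
        if value in self._value_to_token:
            return self._value_to_token[value]
        # Format-preserving alias: random digits of the same length,
        # regenerated on the (unlikely) collision with an existing token.
        token = "".join(secrets.choice("0123456789") for _ in range(len(value)))
        while token in self._token_to_value or token == value:
            token = "".join(secrets.choice("0123456789") for _ in range(len(value)))
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token):
        # In a real deployment this call would be strictly authorized and audited.
        return self._token_to_value[token]
```

Downstream systems then store and process only tokens; a breach of those systems yields aliases with no mathematical relationship to the real data.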

Technologies like Transparent Database Field Encryption can also make the process of retrofitting encryption into legacy application environments a lot simpler, but they cannot provide fully transparent protection while the data fields are in use or in transit. The data field in transit will either be in the clear or encrypted into a non-transparent binary form.

Database activity monitoring in combination with file level encryption of databases can provide appropriate solutions for lower risk data. There is a need to discuss data risk factors and to use a risk adjusted methodology for determining appropriate solutions.

These approaches can be very important in the data warehouse, for example, where aggregation on the clear text values is frequent. Format Controlling Encryption may require 10 times more CPU cycles than Transparent Database Field Encryption during the decrypt and search operations. File level encryption of the database can provide fast search operations since data and index information is in the clear inside the database.

With tokenization or Format Controlling Encryption, high-risk data is systematically protected from hackers and criminals over its lifecycle under an auditable and controllable process. At the same time, these techniques solve the challenge of separation of duties for employees and administrators who need to manage data but perhaps don’t always need to see live data like SSNs. PCI is a good example of a regulation that specifically calls for this kind of separation of duties.

The key thing to remember here is that one-size-fits-all security solutions are never the best fit. For example, a company that manages dynamic information such as payment transactions or customer data used in billing systems will have data that is almost constantly in use; it is rare to find databases full of transaction data "offline”. Obviously this is an issue for retailers and companies that process payments, but in a world that is trending toward "always on" network-based services, with criminals who have discovered that data theft is a highly profitable, recession-proof business, any good risk-based data protection strategy has to provide end-to-end security for sensitive data.

What about key management?

Ulf Mattsson: There are some best-practice rules to abide by for key management on the mainframe:

• Keys should be cached or stored on the mainframe.
• In a mature solution, the DB2 subtask should not expose the key in a dump of the DB2 master task or User Task.
o DB2 V8+ native column-level encryption has the encryption key in a dump of the DB2 master task.
o The subtask should only have the key-label, which is not enough to decrypt the data and is meaningless on its own since it’s only the name of the key.
o The IBM Data Encryption Tool uses DB2 EDITPROC, and a key label is stored in the EDITPROC.

• Key files (ICSF CKDS and other sensitive data sets) should be RACF protected and encrypted.
• Define userids for the started tasks and prevent almost every other id from accessing the DB2 data sets.
• There are some exceptions for administrators who must manage the logs or work with the DSN1* utilities.
o Having separate IDs for each subsystem is the standard recommendation.

What are some best data security management practices for the mainframe?

Ulf Mattsson: No matter what solution you opt to use, best practices begin with the centralization of data security management, enabling a consistent, enforceable security policy across the organization. From a centralized console, the Security Administrator defines, disseminates, enforces, reports and audits security policy, realizing gains in operational efficiency while reducing management costs. An effective security policy should protect sensitive data from all ‘reasonable’ threats. Administrators who have access to all data are a reasonable threat, no matter how much we trust them - liking someone is not a viable security policy. Implementing a separation of duties between security definition and database operations provides a checks-and-balances approach that mitigates this threat.
Controlling access to the data is a critical element of any security policy. Defining a security policy that allows centralized definition of data access, down to the data field level, on an individual-by-individual basis across the entire enterprise is best practice. Putting limits on authorized usage of data is necessary to avoid breaches from the inside. Much like an ATM will limit the amount of money a person can take out of their own account, it is important to be able to set limits on authorized use as part of data security policy. If the use of sensitive data should be limited to 9-5, Monday-Friday, then any attempt to access that data outside of those boundaries should be denied.
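The field-level, time-bounded access rule described above can be sketched as a simple default-deny policy check. The user IDs, field names and hours here are illustrative assumptions, not a real product’s policy schema:

```python
from datetime import datetime

# Toy centralized policy: which user may read which field, and when.
# weekday() is 0 for Monday, so range(0, 5) means Monday-Friday;
# range(9, 17) allows access from 9:00 up to 16:59.
POLICY = {
    ("alice", "credit_card"): {"days": range(0, 5), "hours": range(9, 17)},
}

def access_allowed(user, field, when):
    rule = POLICY.get((user, field))
    if rule is None:
        return False  # default deny: no explicit rule means no access
    return when.weekday() in rule["days"] and when.hour in rule["hours"]
```

Any read attempt outside the permitted window is simply denied, and in a real system it would also be logged and alerted on.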
While mainframes do present some unique security challenges, virtually all can be addressed with commonsense security policies and the proper tools.

SIDEBAR: Evaluating Mainframe Encryption Solutions
A mature encryption solution should deliver:
1: Focused Protection
The solution should allow you to encrypt only the data that your organization needs to protect (credit card numbers, Social Security numbers, salaries, etc.). It should provide individual protection for each column through individual keys, to gain an extra layer of protection in case of a security breach.
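The “individual keys per column” idea can be sketched by deriving a distinct key for each protected column from a single master key. HMAC-SHA256 is used here as a simple key-derivation function for illustration; this is an assumption of mine, not any vendor’s actual key hierarchy:

```python
import hmac, hashlib

def column_key(master_key: bytes, table: str, column: str) -> bytes:
    """Derive a distinct 256-bit encryption key per protected column from one
    master key, so compromising one column's key does not expose the others.
    Sketch only: HMAC-SHA256 over the column's fully qualified name."""
    return hmac.new(master_key, f"{table}.{column}".encode(), hashlib.sha256).digest()
```

Because derivation is deterministic, only the master key needs hardened storage, while each column is still encrypted under its own key.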

2: Strong Key Management
A secure system is only as good as the protection and management of its keys. Protegrity delivers patented, integrated key management systems that control where keys are stored, who has access to them, and ensures they are encrypted and protected.

3: Protecting Policy Changes
Changes to security policy are critical events that need to be protected. It is a best practice to require more than one person to approve such changes. Protegrity delivers this by assigning the Master Key to more than one individual. This system ensures that one person cannot independently make and deploy policy changes.

4: Separation of Roles
Privacy and data protection regulations stipulate that a data security system must provide “reasonable protection from threats.” Having the ability to log and review the activities of both the Security Administrator and the Database Administrator provides a checks-and-balances approach that protects from all reasonable threats.

5: Reporting and Monitoring
Reporting and monitoring your security policy and generating protected audit logs are fundamental best practices, and required by regulations. Protegrity has the most comprehensive and efficient reporting provisions of any solution on the market:

• Column-level and row-level encryption using FIELDPROC and EDITPROC
• Application protector for VSAM files
• Encryption utility for flat files
• Protegrity takes advantage of direct access to CPACF hardware (not using ICSF)
• Protegrity takes performance advantage of direct access to crypto logic (not using IBM LE – Language Environment)
• The enterprise (cross-platform) positioning is a strong differentiator
• Enterprise Key Management is a strong differentiator (including local caching/storing of keys)
• Data flow protection is a strong differentiator (including FCE)

6: Evidence-quality Audit
This is an obvious must in today’s regulatory environment. Protegrity delivers evidence-quality auditing that tracks not only all authorized activity but also unauthorized attempts and any changes to security policies – it even tracks the activities of the database administrator (DBA) and provides a complete audit report of all these activities.

7: Selective Auditing Capabilities
A reporting system should be highly selective, allowing your security administrators to examine only the information most critical to their job.
