Audits, by nature, are backward-looking. In many cases, that may be fine (e.g., income tax audits and process audits), but in the world of cybersecurity and data security, reliance on an external audit poses a significant business risk.
Data security and data governance, risk and compliance (GRC) goals never aligned until GDPR. Data GRC focuses on demonstrating (reporting) the controls over who accessed in-scope data, what they accessed and when, not primarily on securing it. Organizations needed to demonstrate compliance and focus on passing an audit, not on preventing data breaches. Before GDPR, monetary fines for breaches were minor; it was more important to find ways to pass the audit and keep operating the business than to reduce the risk of a data breach.
GDPR has shifted this paradigm by imposing substantial monetary fines in the case of a breach. As a result, organizations now focus on minimizing data loss risks rather than passing an audit. After all, there is no GDPR compliance audit like with International Organization for Standardization (ISO). The only mention of an audit within the GDPR regulation is for data processing. Compliance is self-imposed by the threat of a stiff fine that compels organizations to start thinking about compliance and security with a unified goal: to protect data.
Previous compliance standards and regulations such as ISO, Payment Card Industry (PCI), Sarbanes-Oxley (SOX), and Service Organization Control (SOC 2), to list a few, have focused on the audit.
For these regulations, organizations put in place the minimum processes and controls necessary to pass the audit. The controls may have little to no impact on data protection and privacy; the organization is simply attempting to gain compliance via a passing audit. The certificate acts as a get-out-of-jail-free card. If anything goes wrong, the organization says, "But we passed our audit. It's not our fault."
Data protection solutions are finally evolving to the current state of data: distributed, cloud-centric and always-on. Data used to only exist within the corporate network on devices that never left the physical protection of the company.
Data loss prevention (DLP) has been the default solution for protecting data. It's literally in the name. What countless organizations have determined is that DLP doesn't stop breaches, but it does generate extremely high operational overhead. The same is true for other legacy solutions such as Pretty Good Privacy (PGP) and information rights management (IRM).
DLP is only as good as the classification rigor enforced by the organization. Classification is always too rigid and can't keep up with fluid data movement. For DLP to prevent data egress, data must be classified correctly, and classification is complicated and fragile: what is sensitive today is not sensitive tomorrow, and vice versa. Classification turns into an endless battle of users trying to manage the classification of data, and ultimately, classification and DLP deteriorate over time. DLP adds extremely high operational overhead, as it requires users to be classification superstars, and even then, mistakes will happen. Desjardins Group, a Canadian financial institution, recently made news when a malicious insider obtained information on 2.7 million customers and over 170,000 businesses. The exact details of the breach haven't been made public yet, but DLP solutions are standard in all financial institutions.
A New Approach to Data Protection
A new wave of solutions has appeared on the market that significantly shifts the focus of data protection. Here are five criteria to measure data protection in the solutions you're currently considering:
Data protection has followed the same paradigm for years: discover, classify and protect. That paradigm exists because years ago, protection solutions were extremely painful to implement. Administrative overhead was high. The end-user impact was high.
The only way organizations would consider implementing protection tools without a riot was to execute protection on a small amount of data. Historically, organizations wanted to discover all the locations of data first. Then they decided which data was essential to protect by classifying the data. This paradigm creates a small, manageable amount of data to protect.
Again, the legacy paradigm exists because protection solutions such as file encryption, information rights management (IRM) and data loss prevention (DLP) were too complicated to deploy, administer and operate. Many DLP deployment guides run to thousands of pages.
Protection solutions like DLP are too fragile. They rely on classification, which is always changing: what is critical to protect today is not sensitive tomorrow, and, more troubling, what organizations don't consider important today becomes vital in the future. Classification is also very user-dependent. Users make mistakes, and malicious users are hard to identify.
A new category of data-centric data protection is now available that works in the background; users only see notifications when they access files they don't have permission to open. It's a similar approach to antivirus and anti-malware tools: users are only interrupted when something needs attention.
5 Reasons Why Organizations Are Switching
1. Manage by exception
DASB manages by exception
DASB persistently and transparently protects data, with no impact on end-user experience, applications or business workflows. DASB flips the traditional data protection model from opting into the least amount of data to protect to an expansive, opt-out model. This opt-out model enables organizations to protect any and all data and manage exceptions around collaboration.
DLP manages by rule
DLP requires rules to be written for every scenario. Whether the scenarios are trying to identify every possible exfiltration pathway or map to acceptable business use, these rules need to be continuously tuned to decrease alerts, false positives and false negatives.
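The contrast between the two models can be sketched in a few lines of hypothetical policy-evaluation code. This is only an illustration of opt-out versus rule-based egress decisions, not either product's actual logic; the filenames and rules are invented:

```python
# Opt-out model (manage by exception): everything is protected by default,
# and only explicitly listed exceptions may leave unprotected.
EXCEPTIONS = {"public-press-release.pdf"}

def opt_out_allows_egress(filename: str) -> bool:
    return filename in EXCEPTIONS

# Opt-in model (manage by rule): everything flows freely unless a rule
# written for that specific scenario matches and blocks it.
BLOCK_RULES = [
    lambda name: name.endswith(".sql"),   # database dumps
    lambda name: "payroll" in name,       # HR data
]

def opt_in_allows_egress(filename: str) -> bool:
    return not any(rule(filename) for rule in BLOCK_RULES)

# A sensitive file nobody wrote a rule for: blocked by default under the
# opt-out model, but it leaks freely under the rule-based model.
print(opt_out_allows_egress("unreleased-roadmap.docx"))  # False
print(opt_in_allows_egress("unreleased-roadmap.docx"))   # True
```

The rule-based model fails open for any scenario the security team did not anticipate, which is why its rules require continuous tuning.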
2. Identify data by DNA
DASB expands its protection through dDNA matching
DASB’s patented similarity detection engine understands the DNA of the data (dDNA) and looks for a match to dDNA that is already protected. If there is a match, Magic Derivative applies protection to this data automatically, with the same access controls as the originally protected data. This means that even if you have not discovered or classified all your sensitive data, or if your colleagues create or import new sensitive data down the road, DASB will automatically recognize this “unknown” data as sensitive and protect it.
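To illustrate how similarity detection can catch derivative data that exact matching misses, here is a generic sketch using character shingling and Jaccard similarity. This is a common, textbook similarity technique, not DASB's patented dDNA engine, and the documents below are invented:

```python
def shingles(text: str, k: int = 5) -> set:
    # Normalize case and whitespace, then take overlapping k-character shingles.
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a: str, b: str) -> float:
    # Jaccard similarity of the two shingle sets: |A & B| / |A | B|.
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

protected = "Q3 revenue forecast: Acme Corp projects 12% growth in cloud services"
derivative = "Q3 revenue forecast. Acme Corp projects 12% growth in cloud service offerings"
unrelated = "Lunch menu: soup, salad and a sandwich special on Fridays"

# A lightly edited copy still scores high, so it could inherit the original's
# protection automatically; unrelated text scores near zero.
print(round(jaccard(protected, derivative), 2))
print(round(jaccard(protected, unrelated), 2))
```

An exact-match fingerprint would treat the edited copy as brand-new data, while a similarity score lets protection follow the content as it is rephrased or excerpted.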
DLP’s data identification is like using a fingerprint
DLP might encounter this telephone number (819661820893) and identify it as a credit card number, a false positive. An outgoing email attachment containing this telephone number might be blocked, causing a slowdown in the business where none is warranted. This interference with normal business operations is one of many major downsides of DLP. The more aggressively the security team adds and updates rules, the more often false positives occur. Employees are measured on their productivity; when security tools slow them down, they complain and try anything they can to circumvent the blocker. DLP also fails to detect sensitive information that has been slightly altered, allowing it to pass freely as a false negative. For credit cards, a classic exfiltration bypass is to spell out the number ("eight one nine six..."), change it to an unreadable font like Wingdings, or rewrite it as Roman numerals. It is easy to think up ways to get past DLP's pattern matching.
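The pattern-matching weakness is easy to demonstrate. Below is a hypothetical, deliberately simplified DLP-style rule: a digit-run regex plus a Luhn checksum filter (a standard validity check for card numbers), not any vendor's actual engine. The regex flags the phone number from the example, and a spelled-out number evades detection entirely:

```python
import re

def naive_dlp_match(text: str) -> list:
    # Naive DLP-style rule: any run of 12-19 digits looks like a card number.
    return re.findall(r"\b\d{12,19}\b", text)

def luhn_valid(number: str) -> bool:
    # Luhn checksum: double every second digit from the right, sum, check mod 10.
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

phone = "Call me at 819661820893 tomorrow."
spelled = "Card: eight one nine six six one eight two zero eight nine three"

hits = naive_dlp_match(phone)
print(hits)                                    # ['819661820893'] - false positive
print([n for n in hits if luhn_valid(n)])      # [] - checksum filters the phone number
print(naive_dlp_match(spelled))                # [] - spelled-out digits evade the pattern
```

Even with checksum filtering layered on top of the regex, trivially transformed data slips through, which is exactly the false-negative problem described above.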
3. Protect First
DASB protects any and all data
DASB protects any data transparently. This allows organizations to protect data first and then work on discovery and classification. DASB’s methodology for discovery and classification enables organizations to identify and administer the appropriate access controls for unknown data. This includes all the information your employees are creating every day and all the unknown data stored in locations (on-prem, cloud, endpoints, etc.) across your enterprise.
DLP requires tedious discovery and classification
DLP’s obtrusive nature requires discovery and classification as a necessary crutch to achieve even the most basic protection scenarios. Manual classification can depend on every employee in the company filling out a small form every time they are about to send an email or save a file, a major drain on employee time. Worse, your colleagues are not security professionals, and their incentive is to get their work done, so the accuracy of their classification is in doubt. Insiders are known to be the largest threat vector, so giving employees the power to classify whether data is sensitive or not is a critical flaw.
Discovery is known to be highly ineffective, as discovery tools are not equipped for the volume of data and the varied locations (public or private cloud, on-prem) in which this data is stored. Automated discovery is also highly error-prone, leading to the wrong policies being applied to the wrong data.
4. Expansive Protection
DASB data protection is expansive
DASB takes an expansive approach to data protection. We recognize that most, if not all, enterprise data contains sensitive or valuable information and this data should not be allowed to leak. DASB continuously discovers, classifies and protects previously unknown data. DASB achieves zero-trust, persistent protection that is completely transparent to end users. DASB protects any and all data without impact to the end-user experience.
DLP data protection is reductive
Contrary to DASB, DLP's approach to data protection is reductive. DLP depends on discovering and classifying data, with the goal of opting into only the smallest subset of data to protect. By default, DLP allows a file to flow freely unless it has been specifically identified as sensitive and a rule exists that dictates how users can interact with that file. This is an ongoing, tremendously time-consuming, never-ending effort for security teams. It is nearly impossible to devise every possible rule to block exfiltration pathways while aligning with the business and acceptable business use cases. Managing by rules is also a huge burden on employees, as more and more restrictions are imposed on their daily workflows. Given the effort required of the security team to devise rules that detect sensitive data, and the overhead incurred by employees classifying their own data and using only prescribed applications and file types, with workflow pop-ups, errors and overhead along the way, the DLP approach ends up opting in to the least amount of data possible.
5. Time to Value in Hours
DASB is implemented in hours
With DASB, deploy the agent, target a location, and you are transparently protecting data. DASB is implemented enterprise-wide, or in a phased approach, selecting the most important use cases first (source code, CRM, trade secrets, finance, PCI/PHI, etc.) and protecting all data related to those use cases. DASB imposes no limits on applications, versions, file types, file sizes, repositories, developer tools, workflows, or anything else in the environment, no matter how complex or enterprise specific.
DLP takes months, if not years to implement
DLP requires a comprehensive discovery and classification program, with buy-in and assistance from the business, before rule writing can even start. Because the discovery and classification program is continuous and manually conducted, rules must be written and false positives and false negatives constantly tuned. Once the discovery and classification programs are underway and tuning progress has been made, the organization can move to monitor or test mode to see how the DLP program will impact the end-user experience. Once the business and security sign off on acceptable impact, and staff have been trained on the manual classification and data usage policies, DLP might finally be ready to start protecting data.
DLP is the old paradigm. DASB is the New New. Based on the Zero Trust philosophy, DASB allows all data to be protected transparently, without impacting workflows or applications.
Download our whitepaper, The Rise of DASB, to learn how to protect your organization's data against breaches and insider threats.