Data protection solutions are finally evolving to match the current state of data: distributed, cloud-centric and always-on. Data used to exist only within the corporate network, on devices that never left the physical protection of the company.
Data loss prevention (DLP) has been the default solution for protecting data. It's literally in the name. Yet countless organizations have found that DLP doesn't stop breaches; it does, however, generate extremely high operational overhead. The same is true for other legacy solutions such as Pretty Good Privacy (PGP) and information rights management (IRM).
DLP is only as good as the classification scheme the organization enforces, and classification is inherently rigid: it can't keep up with fluid data movement. For DLP to prevent data egress, data must be classified correctly, but classification is complicated and fragile. What is sensitive today is not sensitive tomorrow, and vice versa. Classification turns into an endless battle of users trying to manage the classification of data, and both classification and DLP deteriorate over time. DLP adds extremely high operational overhead because it requires users to be classification superstars, and even then, mistakes will happen. Desjardins Group, a Canadian financial services cooperative, recently made news for a malicious insider who obtained information on 2.7 million customers and over 170,000 businesses. The exact details of the breach haven't been made public yet, but DLP solutions are standard in all financial institutions.
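The fragility described above can be illustrated with a minimal sketch of a rule-based content check, the kind of pattern matching many DLP products rely on. This is a hypothetical example, not any vendor's actual implementation; the SSN rule and function names are assumptions for illustration only:

```python
import re

# Hypothetical DLP-style content rule: block egress of any text that
# matches a pattern classified as sensitive (here, a US SSN format).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def allow_egress(text: str) -> bool:
    """Return True if no sensitive pattern is found, i.e. egress is permitted."""
    return SSN_PATTERN.search(text) is None

# Data matching the rule exactly is caught...
blocked = not allow_egress("Customer SSN: 123-45-6789")

# ...but a trivial reformatting (spaces instead of dashes) slips past
# the rigid rule, even though the data is just as sensitive.
leaked = allow_egress("Customer SSN: 123 45 6789")
```

Here `blocked` and `leaked` are both true: the rule stops only the exact format it was written for, which is why classification-driven DLP demands constant rule maintenance and still misses data that has merely changed shape.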
A New Approach to Data Protection
A new wave of solutions has appeared in the market that significantly shifts the focus of data protection. Here are four criteria for evaluating the data protection capabilities of the solutions you're currently considering: