Classifying data is such a given that it's often one of the first things that security professionals recommend when launching a program. If you don't know the criticality of your data and where it's located, the conventional wisdom goes, then how can you assess the risk and decide how to mitigate it? And if you don't know what's most critical, then how can you prioritize your finite resources appropriately?
Don't get me wrong: I think data classification forms the basis for an important conversation about risk management, particularly with top executives. If executives aren't used to thinking about data as an asset to be defended (or lost), they certainly won't support security efforts. But for organizations below a certain level of maturity, I think classification is useful only for that limited purpose, and then it should be set aside.
For one thing, most people who work with data don't want to spend their time thinking meta-thoughts about its security levels. If they have to choose from a drop-down every time they get ready to send an email, they'll default to whatever gets the email through. Professionals in the defense sector live with classification every day, but that fluency takes practice (and you CISSPs, raise your hands if you can still describe the difference between the Bell-LaPadula and Biba models).
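For those who put their hands down: Bell-LaPadula protects confidentiality (no read up, no write down), while Biba protects integrity with the mirror-image rules (no read down, no write up). Here's a minimal Python sketch of the contrast; the level names and ordering are my own illustration, and it ignores real-world complications like compartments and categories.

```python
# A toy contrast of the two mandatory access control models.
# Levels are ordered low -> high; names here are illustrative only.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def blp_allows(subject: str, obj: str, op: str) -> bool:
    """Bell-LaPadula protects confidentiality: no read up, no write down."""
    s, o = LEVELS[subject], LEVELS[obj]
    return s >= o if op == "read" else o >= s  # op is "read" or "write"

def biba_allows(subject: str, obj: str, op: str) -> bool:
    """Biba protects integrity: no read down, no write up."""
    s, o = LEVELS[subject], LEVELS[obj]
    return o >= s if op == "read" else s >= o

# A "secret"-cleared subject may read down but not write down under BLP...
assert blp_allows("secret", "internal", "read")
assert not blp_allows("secret", "internal", "write")
# ...while Biba flips both rules to keep low-integrity data out of
# high-integrity objects.
assert not biba_allows("secret", "internal", "read")
assert biba_allows("secret", "internal", "write")
```

If keeping those two straight takes a mnemonic even for professionals, imagine asking the average email user to apply them on the fly.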
Another problem is that many different factors go into a data classification, and some of them are dynamic. Data elements that are innocuous on their own can be combined or selected so that the result is more confidential. Business decisions can create "data events" that change classifications -- a press release, for example, is strictly confidential right up until it's published, when suddenly it's the opposite. Expecting non-security employees to keep up with these changes is not realistic.
Finally, I think we put too much emphasis on classification and prioritization, because the breach data tells us that many intrusions start through a forgotten or lower-classified system. When organizations believe they don't have to do as much for a system that doesn't handle confidential data, they end up not doing enough. For example, I've seen developers use SSL only on the pages of an application that handled confidential data; the session cookie still traveled in cleartext over every other page, leaving the whole application vulnerable to session hijacking. The result of doing more in some spots and less in others is that attackers can move laterally through the network, exploiting trusted relationships and other vulnerabilities to get to the good stuff.
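To make that SSL example concrete, here's a minimal Flask sketch; the routes, names, and key are hypothetical. Without the Secure flag, the browser attaches the session cookie to every request, including the plain-HTTP pages, where anyone on the network can sniff it. The flags below at least keep the cookie off the unencrypted pages; the real fix, of course, is to serve the whole application over TLS.

```python
# A hypothetical app that serves /account over HTTPS but leaves the
# rest of the site on plain HTTP -- the mixed deployment described above.
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder for illustration

# Without these flags, the session cookie set on the HTTPS page rides
# along in cleartext on every HTTP request -- one passive sniff and the
# session is hijacked.
app.config["SESSION_COOKIE_SECURE"] = True    # only send cookie over HTTPS
app.config["SESSION_COOKIE_HTTPONLY"] = True  # hide cookie from page scripts

@app.route("/account")
def account():
    session["user"] = "alice"  # set on the "confidential" HTTPS page
    return "account page"

@app.route("/help")
def help_page():
    # If this page is served over HTTP and the Secure flag is unset,
    # the cookie from /account leaks here in cleartext.
    return "help page"
```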
Unfortunately, it's hard to sell the idea of treating all systems exactly the same; businesspeople love the pragmatism of prioritization. But what enterprises can do is be very conscious about the ramifications of treating applications, networks, and systems differently. If some are not getting the extra layers of protection, scrutiny, and monitoring, then they need to be treated as sacrificial and segregated as far as possible. In other words, if you're going to use levels of classification, take them seriously: create domains with appropriate controls and policies, and watch the cross-domain traffic (a sketch of what that could look like follows below). Otherwise you're just working with the equivalent of one large perimeter with a lot of weak spots in it, all leading to that soft, gooey center.
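As one way to picture "watching that cross-domain traffic," here's a toy Python sketch that labels flows by the security domain of their subnets and flags anything crossing a boundary. The domain names and address ranges are invented for illustration; real enforcement and monitoring belong in the firewalls and network sensors, not an application script.

```python
# A toy cross-domain traffic flagger. Domains and subnets are invented
# for illustration; real deployments do this in the network gear.
from ipaddress import ip_address, ip_network

DOMAINS = {
    "confidential": ip_network("10.1.0.0/16"),
    "sacrificial":  ip_network("10.9.0.0/16"),
}

def domain_of(addr: str) -> str:
    """Map an IP address to its security domain, if any."""
    ip = ip_address(addr)
    for name, net in DOMAINS.items():
        if ip in net:
            return name
    return "unknown"

def flag_cross_domain(flows):
    """Yield flows whose endpoints sit in different domains."""
    for src, dst in flows:
        sd, dd = domain_of(src), domain_of(dst)
        if sd != dd:
            yield src, dst, sd, dd

# Traffic from the sacrificial segment reaching the confidential one is
# exactly the lateral movement this column warns about.
for src, dst, sd, dd in flag_cross_domain([("10.9.3.7", "10.1.0.12")]):
    print(f"cross-domain flow: {src} ({sd}) -> {dst} ({dd})")
```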
The practice of data classification shouldn't be held up as any sort of magic that makes all the rest of a security program come together. It's barely a start along that long and winding road leading to a sustainable practice. And touting it as a discrete goal can lead to a false sense of ... well, you know.