
Data Quality and Hadoop

With the emergence of Big Data and new analytic data platforms that handle different kinds of data, such as Hadoop, do the same data stewardship practices that have been around since the mid-1990s still apply?
Written by Andrew Brust, Contributor
This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a principal analyst, covering Big Data, at Ovum.
By Tony Baer
Data warehousing and analytics have accumulated a reasonably robust set of best practices and methodologies since they emerged in the mid-1990s. Although not all enterprises are equally vigilant, the state of practices around data stewardship (e.g., data quality, information lifecycle management, privacy and security) is pretty mature. With the emergence of Big Data and new analytic data platforms that handle different kinds of data, such as Hadoop, the obvious question is whether these practices still apply.

Admittedly, not all Hadoop use cases have been analytic, but arguably the bulk of early implementations are. That reality is reinforced by how most major IT data platform household brands have positioned Hadoop: EMC Greenplum, HP Vertica, Teradata Aster and others paint a picture of Hadoop as an extension of your [SQL] enterprise data warehouse. That provokes the following question: if Hadoop is an extension of your data warehouse or analytic platform environment, should the same data stewardship practices apply? We’ll train our focus on quality.

Hadoop frees your analytic data store of the limits on data volume and structure that were part and parcel of maintaining a traditional data warehouse. Hadoop’s scalability frees your organization to analyze all of the data, not just a digestible sample of it. And not just structured data or text, but all sorts of data whose structure is entirely variable. With Hadoop, the whole world’s an analytic theatre.

Significantly, while the spotlight has been on volume and variety, it has been off quality. The question is, with different kinds and magnitudes of data, does data quality still matter? Can you afford to cleanse multiple terabytes of data? Is “bad data” still bad? The answers aren’t obvious.

Traditional data warehouses treated “bad” data as something to be purged, cleansed, or reconciled. While the maxim “garbage in, garbage out” has been with us since the dawn of computing, the issue of data quality hit the fan when data warehouses provided the opportunity to aggregate more diverse sources of data that were not necessarily consistent in completeness, accuracy, or structure. The fix was cleansing record by record, based on the proposition that analytics required strict apples-to-apples comparisons.

Yet the volume and variety of Hadoop data cast doubt on the practicality of traditional data hygiene practices. Remediating record by record will take forever, and in any case it is simply not practical, or worthwhile, to cleanse log files, which are highly variable (and low value) by nature. The variety of data, not only in structure but also in source, makes it more difficult to know the correct structure and form of any individual record. And the fact that individual machine readings are often cryptic, providing little value except when aggregated at huge scale, also militates against traditional practice. So Hadoop becomes a special case. And given that Hadoop supports a different approach to analytics, it stands to reason that the data should also be treated differently.

Exact Picture or Big Picture?

Quality in Hadoop becomes more of a broad spectrum of choice that depends on the nature of the application and the characteristics of the data, specifically the 4 V’s. Is your application mission-critical? That might argue for a more vigilant practice of data quality, but it depends on whether the application requires strict audit trails and carries regulatory compliance exposure.
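To make that spectrum concrete, here is a minimal, hypothetical sketch in Python of how the trade-off might be encoded as policy. The class, the policy labels, and the thresholds (such as the ten-million-records-per-day cutoff) are illustrative assumptions, not a prescription from any vendor or from the practices described above.

# Hypothetical sketch: choosing a point on the data-hygiene spectrum per class of
# data, rather than applying one warehouse-style regime to everything that lands
# in Hadoop. All names and thresholds here are illustrative, not a real API.

from dataclasses import dataclass
from enum import Enum


class QualityPolicy(Enum):
    CLEANSE_ON_INGEST = "cleanse each record before it lands"
    VALIDATE_ON_READ = "apply cleansing rules when the data is consumed"
    AGGREGATE_ONLY = "skip record-level cleansing; rely on scale to drown out noise"


@dataclass
class DataClass:
    mission_critical: bool   # does the application drive decisions that must be auditable?
    regulated: bool          # strict audit trails / regulatory compliance exposure?
    records_per_day: int     # volume
    low_latency: bool        # velocity, e.g. CEP / streaming workloads
    per_record_value: float  # value of one record: a customer identity vs. a single sensor ping


def choose_policy(d: DataClass) -> QualityPolicy:
    """Pick a cleansing strategy from the characteristics of the data and application."""
    if d.regulated and d.mission_critical:
        # Compliance exposure: every record matters; at very high volume,
        # push validation to the point of consumption instead of ingestion.
        if d.records_per_day < 10_000_000:
            return QualityPolicy.CLEANSE_ON_INGEST
        return QualityPolicy.VALIDATE_ON_READ
    if d.low_latency or d.per_record_value < 0.01:
        # High-velocity or low-value readings: record-by-record hygiene
        # adds more overhead than it is worth.
        return QualityPolicy.AGGREGATE_ONLY
    return QualityPolicy.VALIDATE_ON_READ


# Illustrative use: raw clickstream logs vs. counterparty-risk positions.
print(choose_policy(DataClass(False, False, 2_000_000_000, True, 0.001)).name)  # AGGREGATE_ONLY
print(choose_policy(DataClass(True, True, 500_000, False, 50.0)).name)          # CLEANSE_ON_INGEST

The point of the sketch is simply that the decision is driven by the data’s characteristics rather than applied uniformly.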
Where strict audit trails and compliance exposure are in play, you had better get the data right. However, web applications such as search engines or ad placement may also be mission-critical yet not bring the enterprise to its knees if the data is not 100% correct. So you’ve got to ask yourself the question: are you trying to get the big picture, or the exact one? In some cases, they may be different.

The nature of the data in turn determines the practicality of cleansing strategies. More volume argues against traditional record-by-record approaches, variety makes the job of cleansing more difficult, and high velocity makes it virtually impossible. For instance, high-throughput complex event processing (CEP) and data streaming applications are typically implemented to detect patterns that drive operational decisions; cleansing would add too much processing overhead, especially for high-velocity, low-latency apps. Then there’s the question of data value: there’s more value in a customer identity record than in an individual reading that is the output of a sensor.

A spectrum of data hygiene approaches

Enforcing data quality is not impossible in Hadoop. There are different approaches that, depending on the nature of the data and the application, may dictate different levels of cleansing, or none at all.

A “crowdsourcing” approach widens the net of data collection to a larger array of sources, with the notion that enough good data from enough sources will drown out the noise. In actuality, that has been the de facto approach among early adopters, and it’s a fairly passive one. But such approaches could be juiced up with trending analytics that dynamically track the sweet spot of good data to see if the norm is drifting.

Another idea is unleashing the power of data science, not only to connect the dots but also to correct them. We’re not suggesting that you turn your expensive (and rare) data scientists into data QA techs, but that you apply the same exploratory methodologies to dynamically track quality.

Other variants apply cleansing logic not at the point of data ingestion but at the point of consumption; that’s critical for highly regulated processes, such as assessing counterparty risk in capital markets. In one particular case, an investment bank used a rules-based semantic domain model, built on the OMG’s Common Warehouse Metamodel, as a means of validating the data consumed.

Bad Data may be good

Big Data in Hadoop may be different data, and it may be analyzed differently. The same logic applies to “bad data” that in conventional terms appears as an outlier, incomplete, or plain wrong. The operative question of why the data may be “bad” can yield as much value as analyzing data within the comfort zone. It’s the inverse of analyzing the drift over time of the sweet spot of good data. When there’s enough bad data, it becomes fair game for trending: checking whether different components or pieces of infrastructure are drifting off calibration, or whether the assumptions about what constitutes “normal” conditions are changing (rising sea levels or typical daily temperature swings, for instance). Similar ideas could apply to human-readable data, where perceived outliers reflect flawed assumptions about the meaning of the data, such as when conducting sentiment analysis. In Hadoop, bad data may be good.
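As a companion to that trending idea (watching whether the norm of good, or bad, data is drifting rather than scrubbing individual records), here is a minimal sketch in Python, assuming a simple numeric stream of machine readings; the function name, thresholds, and synthetic data are illustrative only.

# Hypothetical sketch: trend the readings instead of cleansing them one by one,
# and flag when the norm itself ("the sweet spot of good data") appears to drift,
# e.g. a sensor slipping out of calibration or baseline conditions shifting.
# Function names, thresholds, and the synthetic stream are illustrative only.

import random
from collections import deque
from statistics import mean, pstdev


def detect_drift(readings, baseline_size=1000, window_size=200, threshold=3.0):
    """Yield (index, window_mean) whenever a recent window of readings strays
    more than `threshold` standard deviations from the historical baseline."""
    baseline = []
    window = deque(maxlen=window_size)
    for i, value in enumerate(readings):
        if len(baseline) < baseline_size:
            baseline.append(value)              # build the initial picture of "normal"
            continue
        window.append(value)
        if len(window) == window_size:
            mu, sigma = mean(baseline), pstdev(baseline) or 1.0
            window_mu = mean(window)
            if abs(window_mu - mu) > threshold * sigma:
                yield i, window_mu              # the norm has moved, not just one bad record


# Illustrative use with synthetic sensor data: stable around 20.0, then slowly drifting upward.
random.seed(42)
stream = [random.gauss(20.0, 0.5) for _ in range(2000)] + \
         [random.gauss(20.0 + 0.01 * i, 0.5) for i in range(2000)]
for idx, shifted_mean in detect_drift(stream):
    print(f"possible drift at reading {idx}: recent mean {shifted_mean:.2f}")
    break  # report only the first detection

The point is the shift in posture: outliers are treated as a signal to be trended at scale, not records to be purged one by one.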