The Data Virtualization Gold Standard

Robert Eve


When Should We Use Data Virtualization?

Try data virtualization first, ETL as a last resort

People often ask me, “When is it appropriate to use data virtualization versus a more traditional form of data integration (e.g., consolidation using ETL)?”

My answer often surprises them because this is not an either-or situation: data virtualization should be a ubiquitous layer in your infrastructure that all data consumers use to access enterprise data.

Data Virtualization Focuses on the Data Consumer
Whether the data underneath the virtualization layer is pre-consolidated (using ETL), pre-materialized (as a cache), or exists only in the original transactional and operational systems is a secondary question; the answer depends on latency requirements, resource constraints, and committed SLAs.
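To make this concrete, here is a minimal sketch of that idea: a consumer-facing view that decides per request whether a pre-materialized cache satisfies the consumer's staleness SLA, or whether it must go to the live system-of-record. All class and parameter names here are illustrative assumptions, not a real product API.

```python
import time

class VirtualView:
    """Hypothetical consumer-facing view in a data virtualization layer."""

    def __init__(self, live_fetch, cache_ttl_seconds):
        self.live_fetch = live_fetch        # callable hitting the source system
        self.cache_ttl = cache_ttl_seconds  # how long a cached copy stays valid
        self._cache = None
        self._cached_at = 0.0

    def query(self, max_staleness_seconds):
        """Return data no older than the consumer's stated staleness SLA."""
        age = time.time() - self._cached_at
        if self._cache is not None and age <= min(self.cache_ttl,
                                                  max_staleness_seconds):
            return self._cache              # cached copy meets the SLA
        self._cache = self.live_fetch()     # otherwise query the source
        self._cached_at = time.time()
        return self._cache
```

The consumer calls `query()` the same way regardless of where the data actually comes from; only the staleness SLA changes the behavior underneath.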

The data consumer doesn’t really care how IT accomplishes data delivery to meet expectations, so long as the consumer has the right data in the right timeframe.

Data Virtualization Always, ETL When You Must
In other words, the answer to the question is that you should (almost) always use data virtualization, and sometimes you may also need to use ETL and other data integration strategies. Surprised? Don’t be.

The long-term strategic value proposition of data virtualization is to loosen the tight coupling between data consumers and data providers. Data virtualization does this by establishing an abstraction layer capable of providing the right data, in the expected format, at the appropriate time, over the desired protocol, to any authorized requestor.

How exactly this is accomplished is encapsulated in and below the data virtualization layer, giving IT professionals unheard-of architectural flexibility. This IT flexibility translates directly into business agility that few large enterprises have ever experienced.
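The decoupling described above can be sketched as a logical view that federates two physical sources at query time. The consumer asks for "customer orders" and never learns whether the rows came from a warehouse table, a cache, or a live operational system. The functions and field names below are hypothetical stand-ins, not a real virtualization API.

```python
def fetch_customers():
    # stands in for, e.g., a query against a CRM system
    return [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]

def fetch_orders():
    # stands in for, e.g., a query against an ERP system
    return [{"customer_id": 1, "total": 250.0},
            {"customer_id": 1, "total": 100.0},
            {"customer_id": 2, "total": 75.0}]

def customer_orders_view():
    """Federated join performed inside the virtualization layer.

    Consumers call this view; where the underlying rows live is
    encapsulated entirely below it.
    """
    customers = {c["id"]: c["name"] for c in fetch_customers()}
    return [{"customer": customers[o["customer_id"]], "total": o["total"]}
            for o in fetch_orders()]
```

Swapping `fetch_orders` to read from an ETL-consolidated warehouse instead of the live ERP system would not change the view's contract with its consumers; that is the loose coupling the layer provides.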

Data Virtualization Projects Often Lead to Data Virtualization Layers
Now to be fair to the person asking the original question, there are definitely near-term project-oriented value propositions for various data virtualization capabilities (e.g., data federation, accessing cloud-based data, etc.). And whether to apply data virtualization to a specific project often comes down to evaluating data virtualization functionality against more traditional data integration methodologies like ETL.

For these kinds of project-based questions there are numerous criteria that can help you decide whether to consolidate or, say, federate your data (see Data Integration Strategy Decision Tool).

Applying data virtualization on a single project is a good way to start because it allows you to become familiar with the technology while also realizing significant benefits. The project-based ROI for data virtualization is usually large enough to justify establishing the data virtualization footprint in your enterprise, and once that occurs you’ll be off and running towards an enterprise-wide data virtualization layer.

Pretty soon you’ll be asking a different question: “When must I pre-stage data rather than accessing it directly from the systems-of-record?”

It’s a whole new world.

More Stories By Robert Eve

Robert Eve is the EVP of Marketing at Composite Software, the data virtualization gold standard, and co-author of Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility. Bob's experience includes executive-level roles at leading enterprise software companies such as Mercury Interactive, PeopleSoft, and Oracle. Bob holds a Master of Science from the Massachusetts Institute of Technology and a Bachelor of Science from the University of California at Berkeley.