Three questions on business and Data Virtualization

IT managers bear not only the budget responsibility; they also have to ensure that the applications in their IT environment remain functional and flexible, while the IT department never loses control over them. The diversity of applications is generally difficult to tame. Add the pressure to analyse ever larger amounts of data at ever higher speed, and you have a problem that is harder than a small budget.

Getting back to your question: if you deploy a data virtualization solution, you gain more control over data-based IT projects and better application stability, in less time and for less money. So yes, there must also be some "hard savings".

At least with dataWerks, we don't need to change anything in the applications or in the underlying infrastructure. If a business data source has a sufficient number of free connectors and the network has enough free bandwidth, we're generally operational. However, we see that data sanity is very often an issue. For example, some business processes don't really care that their dates and timestamps are stored as text, while others are sensitive to that. Data virtualization requires a unified interpretation of data; otherwise the data can't be correlated intelligently and the solution doesn't deliver its maximum value.
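To make the timestamp example concrete, here is a minimal sketch in Python of what such a normalization step can look like. The source names, field names, and formats are hypothetical and are not dataWerks' actual connectors; the point is only that both sources end up with one interpretation of time.

```python
from datetime import datetime, timezone

# Hypothetical raw rows from two different sources: one stores timestamps
# as text in a local convention, the other as epoch seconds.
crm_row = {"customer_id": "C-1042", "last_contact": "03.07.2023 14:25"}
web_row = {"customer_id": "C-1042", "last_visit": 1688391900}

def parse_text_timestamp(value: str) -> datetime:
    """Interpret a 'DD.MM.YYYY HH:MM' text field as a UTC datetime."""
    return datetime.strptime(value, "%d.%m.%Y %H:%M").replace(tzinfo=timezone.utc)

def parse_epoch_timestamp(value: int) -> datetime:
    """Interpret epoch seconds as a UTC datetime."""
    return datetime.fromtimestamp(value, tz=timezone.utc)

# Unified interpretation: both events become timezone-aware datetimes,
# so they can be compared and correlated across sources.
last_contact = parse_text_timestamp(crm_row["last_contact"])
last_visit = parse_epoch_timestamp(web_row["last_visit"])

print("visited after last contact:", last_visit > last_contact)
```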

I'd say that in our projects quite some time is spent on reaching that level of data maturity. This involves a lot of learning, both on our side and on the customer's.

At first glance you may see only the relatively high fixed cost of developing data-normalizing transformers, but keep in mind that connecting a new data source of a known type comes at a low marginal cost. At larger scale, this approach pays off very well.
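A rough sketch of why the marginal cost stays low, under the assumption that the product keeps a registry of normalizing transformers keyed by source type; the type names and functions here are invented for illustration, not dataWerks' API.

```python
from typing import Callable, Dict

# Registry of normalizing transformers, one per *source type* (hypothetical).
# Writing a transformer is the fixed cost; it is paid once per type.
TRANSFORMERS: Dict[str, Callable[[dict], dict]] = {}

def transformer(source_type: str):
    """Register a normalizing function for a given source type."""
    def register(func: Callable[[dict], dict]) -> Callable[[dict], dict]:
        TRANSFORMERS[source_type] = func
        return func
    return register

@transformer("postgres")
def normalize_postgres(row: dict) -> dict:
    # ...map native database column names/types onto the canonical schema...
    return {k.lower(): v for k, v in row.items()}

@transformer("rest_api")
def normalize_rest(payload: dict) -> dict:
    # ...flatten the JSON payload onto the canonical schema...
    return {k.lower(): v for k, v in payload.items()}

def connect_source(source_type: str, raw_rows: list) -> list:
    """Connecting another source of a *known* type reuses its transformer,
    which is why the marginal cost per new data source stays low."""
    normalize = TRANSFORMERS[source_type]
    return [normalize(r) for r in raw_rows]

# A second or third Postgres-backed system needs no new transformer code:
print(connect_source("postgres", [{"Customer_ID": "C-1042"}]))
```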

How easy is the implementation of a data virtualization solution in a typical project?

It varies quite a bit! Typically, the ease of implementation is directly proportional to how complete your data virtualization product is. Accumulating enough coded logic to formalize (and normalize) the data is the biggest challenge of an implementation. At the end of the day, all data is just strings or longs, maybe bitmaps or metadata; it is all commodity. The real magic is knowing how you want to analyze the data, so that you can plan how to abstract it and treat it simply and universally.
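One way to read that last point, as a hedged sketch rather than dataWerks' actual design: decide up front which analytic role each field plays, and map raw values onto that small set of roles so downstream analysis can treat every source the same way. The role names and helper below are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any

class Role(Enum):
    DIMENSION = "dimension"   # what you group or filter by
    MEASURE = "measure"       # what you aggregate
    TIMESTAMP = "timestamp"   # what you order and window by

@dataclass
class Field:
    name: str
    role: Role
    value: Any

def abstract_row(raw: dict, plan: dict) -> list:
    """Map a raw row onto role-tagged fields according to an analysis plan.
    The plan (field name -> Role) is where you decide *how* you intend to
    analyze the data; the raw strings and longs themselves stay commodity."""
    return [Field(name, plan[name], value)
            for name, value in raw.items() if name in plan]

plan = {"region": Role.DIMENSION, "revenue": Role.MEASURE, "booked_at": Role.TIMESTAMP}
row = {"region": "EMEA", "revenue": 125000, "booked_at": "2023-07-03T14:25:00Z"}
print(abstract_row(row, plan))
```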

It's very common that customers aren't clear about how they want to interpret their data. Getting data readiness to an acceptable level complicates any project, no matter how small.
