Unlock your trapped data: Driving insights from edge-to-cloud


Let’s talk about data silos for a minute. Real-world silos, of course, are those towers on farms that are used to store grain for future use or sale. They are tall buildings that usually contain one type of raw material. The silo concept generally serves as a metaphor for large collections of raw data that are stored separately from other raw data.

Servers and devices often silo data. Different machines store data, but not all of it gets shared with other devices. Applications generate and store data, but some of it can only be shared if a well-written API (Application Programming Interface) is available. Over time, organizations find themselves with a lot of data, but most of it is isolated, stored in separate metaphorical silos, never able to be part of a larger whole.
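
To make the API point concrete, here is a minimal sketch of what sharing otherwise siloed data through a well-written API can look like. The framework (Flask), the route name, and the record shape are all illustrative assumptions, not details from any particular system.

```python
# Minimal sketch: exposing one application's siloed records through a
# read-only API. Flask, the route, and the data shape are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical records that would otherwise live only inside this application.
RETURNS = [
    {"sku": "LIGHT-042", "store": "Miami", "reason": "broken bracket"},
    {"sku": "LIGHT-042", "store": "Seattle", "reason": "broken bracket"},
]

@app.route("/api/returns")
def list_returns():
    # Other systems can now pull this data instead of it staying trapped here.
    return jsonify(RETURNS)

if __name__ == "__main__":
    app.run(port=8080)
```
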

How edge computing creates the perfect storm for data silos

When it comes to enterprise networking, especially edge-to-cloud, data silos naturally exist. Every device at the edge produces data, but much of that data may reside on the device, or at least on a cluster of devices at that edge location. The same is true of cloud operations. Data is created and stored in many different cloud providers and, while they sometimes exchange data, much of it remains isolated from the rest of the enterprise.

Also: How edge-to-cloud is driving the next phase of digital transformation

But insights and actionable strategies come when all the data across the enterprise is accessible to the appropriate users and systems. Let’s look at an example of what might happen at Home-by-Home, the hypothetical home goods retailer we discussed earlier.

Home-by-Home sells wall-mounted light fixtures that use plastic brackets to attach them to the wall. In general, the fixture is a great seller. But every year in March and April, the company receives a flood of returns due to broken brackets. The returns come from across the country, from Miami to Seattle. That’s our first data set: returns at the stores.

The brackets are manufactured in a factory run by a partner company. Normally, the factory operates at temperatures above 62 degrees Fahrenheit, but in January and February, the factory’s ambient temperature drops to an average of 57 degrees. That’s our second data set: factory temperature.

Neither data set is connected to the other. But as we explored in depth a while back, some plastic manufacturing processes begin to fail at 59 degrees or below. Without being able to correlate temperature data from the factory with return data from the stores, the company could not have known that a slightly cooler factory was producing substandard brackets, which then failed across the country.

But by capturing all the data and making the data sets available for analysis (and for AI-based correlation and big data processing), insights become possible. In this case, because Home-by-Home has made digital transformation part of its DNA, the company was able to make the connection between factory temperatures and returns, and now customers who buy those lighting fixtures experience far fewer failures.
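
To make that concrete, here is a minimal sketch of the kind of correlation an analyst (or an automated tool) might run once both data sets land in the same place. The numbers, column names, and pandas-based approach are illustrative assumptions, not any particular product’s method.

```python
# Minimal sketch: relating factory temperature to bracket returns two months
# later. All numbers and column names are hypothetical toy data.
import pandas as pd

# Data set 1: monthly bracket-related returns across all stores.
returns = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr", "May"],
    "bracket_returns": [40, 55, 410, 380, 40],
})

# Data set 2: average ambient temperature (F) at the partner factory.
factory = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr", "May"],
    "avg_temp_f": [57, 57, 63, 64, 65],
})

# A bracket molded in January is typically returned around March, so line
# up each month's returns with the factory temperature two months earlier.
factory["temp_two_months_ago"] = factory["avg_temp_f"].shift(2)

merged = returns.merge(factory, on="month")
corr = merged["bracket_returns"].corr(merged["temp_two_months_ago"])
print(f"Correlation (factory temp vs. returns two months later): {corr:.2f}")
# Prints a strong negative correlation: the cooler the factory ran, the
# more brackets failed -- exactly the connection siloed data would hide.
```
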

Your data is everywhere, but is it actionable?

This is just one example of the potential of harvesting data from edge to cloud. Here are some key ideas, all of which are interrelated.

Your data is everywhere: Almost every computer, server, Internet-of-Things device, phone, factory system, branch-office system, cash register, vehicle, software-as-a-service app, and network management system is constantly generating data. Some of it rolls over as new data is generated. Some of it piles up until storage devices run out of capacity. And some of it resides in cloud services, one pool for each of your login accounts.

Your data is segregated: Most of these systems do not talk to each other. In fact, data management often takes the form of figuring out what data can be deleted to make room for more collection. While some systems have APIs for data exchange, most of those APIs are underutilized (and some are overused). When complaining about certain local businesses, my father used to use the phrase, “The left hand doesn’t know what the right hand is doing.” When data is isolated, so is the organization.

Insight comes from correlating multiple inputs: While it is possible to analyze a single data set in detail and derive insights, you are more likely to see trends when you can relate data from one source to data from other sources. We saw above how factory floor temperatures have a distant, but measurable, connection to the volume of returns in stores across the country.

All that data needs to be accessible across your enterprise: Correlations and observations like these are only possible when analysts (both human and AI) can access multiple sources of data and learn what stories they tell.

Making data usable and turning it into intelligence

The challenge is to make all that data usable: collect it, then process it into actionable intelligence. To do that, you need to pay attention to four things.

The first is movement. There must be mechanisms to move data off edge devices, out of cloud services, off servers, and away from wherever else it is generated or collected. The second is storage. Terms such as “data lake” and “data warehouse” describe this concept of data aggregation, although the actual storage of the data may be considerably more dispersed.
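
As a hedged sketch of what a movement mechanism might look like, here is a minimal example of an edge device shipping one telemetry record into an S3-compatible object store serving as the data lake’s landing zone. The bucket name, key layout, record fields, and the choice of boto3 are all illustrative assumptions.

```python
# Minimal sketch: an edge device shipping a telemetry record to a central
# S3-compatible object store (a common data-lake landing zone). The bucket
# name, key layout, and record fields are hypothetical.
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")  # credentials/endpoint come from the environment

record = {
    "device_id": "factory-floor-07",
    "metric": "ambient_temp_f",
    "value": 57.2,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Partition keys by source and date so downstream tools can find the data.
key = f"raw/factory-floor-07/{datetime.now(timezone.utc):%Y/%m/%d}/reading.json"

s3.put_object(
    Bucket="example-data-lake",  # hypothetical bucket
    Key=key,
    Body=json.dumps(record).encode("utf-8"),
)
```
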

Also: Edge-to-cloud driven digital transformation comes to life in this scenario of a big-box retailer.

The third issue is security and governance, which apply to both of the first two: data storage and data movement. Data in motion and data at rest need to be protected from unauthorized access, while at the same time remaining available to the analysts and tools that can mine the data for opportunities. Governance can be a challenge as well, because data generated in one geographic location may run into regulatory or tax issues if it is moved to a new locale.
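
To give one narrow, hedged illustration of the “data at rest” half of that protection, here is a minimal sketch of symmetric encryption using the Python cryptography library’s Fernet recipe. Real deployments add key-management services, TLS for data in motion, and access controls; this shows only the basic encrypt/decrypt step.

```python
# Minimal sketch: protecting a record at rest with symmetric encryption,
# using the `cryptography` library's Fernet recipe. In production the key
# would live in a key-management service, never alongside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # fresh symmetric key, base64-encoded
cipher = Fernet(key)

record = b'{"device_id": "factory-floor-07", "avg_temp_f": 57.2}'

token = cipher.encrypt(record)       # safe to write to shared storage
assert cipher.decrypt(token) == record  # readable only with the key
```
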

And finally, there is a fourth factor to consider: analysis. Data must be stored in a form that is accessible for analysis, updated frequently enough to stay relevant, properly cataloged, and carefully curated.
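
As a small, hedged sketch of what “properly cataloged” can mean in practice, here is a minimal metadata record an analyst might consult before trusting a data set. The fields are illustrative assumptions; real data catalogs track far more, such as lineage, schemas, and quality scores.

```python
# Minimal sketch: a catalog entry describing a data set so analysts can
# judge whether it is current, trustworthy, and usable. All fields and
# values here are hypothetical.
from dataclasses import dataclass


@dataclass
class CatalogEntry:
    name: str              # human-readable data set name
    location: str          # where the data actually lives
    owner: str             # who curates it and answers questions
    update_frequency: str  # how often it is refreshed
    description: str       # what the data represents


factory_temps = CatalogEntry(
    name="factory_ambient_temperature",
    location="s3://example-data-lake/raw/factory-floor-07/",
    owner="manufacturing-ops@example.com",
    update_frequency="hourly",
    description="Ambient temperature readings from partner factory sensors.",
)
```
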

A gentle introduction to data modernization

Humans are curious creatures. What we create in real life, we often reproduce in our digital world. Many of us have cluttered homes and workplaces because we have never found the perfect storage location for each item. The same, sadly, is often true of how we manage data.

As we discussed earlier, we silo our data a lot. But even when we pull all that data into a central data lake, we often lack good ways to search, sort, and sift through it all. Data modernization is about updating how we store and retrieve data so we can use modern advances like big data, machine learning, AI, and in-memory databases.

The IT buzz-phrases “data modernization” and “digital transformation” go hand in hand. That’s because digital transformation can’t happen unless the methods of storing and retrieving data are a top (often the top) organizational IT priority. This is called a data-first strategy, and it can reap substantial rewards for your business.

See, here’s the thing. If your data is tied up and stuck, you can’t use it effectively. If you and your team are always hunting for the data you need, or never see it in the first place, innovation is stifled. But free up that data, and it unlocks new opportunities.

Not only that, poorly managed data can be a time sink for your professional IT staff. Instead of working to drive the organization forward through innovation, they’re spending time managing all these different systems, databases, and interfaces, and troubleshooting the different ways they can break.

Modernizing your data not only means you can innovate; it also means you can free up time to think instead of react. That gives you room to explore applications and features that can open up new horizons for your business.

Find hidden value and actionable insights in your data

The process of data modernization and adopting a data-first strategy can be challenging. Technologies like cloud services and AI can help. Cloud services can help by providing on-demand, scale-as-needed infrastructure that can collect more and more data. AI can help by providing tools to sift through all that data and organize it coherently, so your experts and business managers can take action.

But it’s still a big undertaking for most IT teams. Typically, IT doesn’t set out to put all that data in silos. It just happens organically, as more and more systems are put in place and more and more to-do items land on people’s lists.

That’s where management and infrastructure services like HPE GreenLake and its competitors can help. GreenLake offers a pay-per-use model, so you don’t have to guess at capacity needs ahead of time. With cross-application and cross-service dashboards and a wide range of professional support, HPE GreenLake can help you turn data that lives everywhere into a data-first strategy.
