Predictive Maintenance

Deploying the edge for real-time analytics

Author: Jason Anderson, Vice President of Business Line Management, Stratus Technologies

21 November 2018


Though the relative newness of edge computing means users must navigate evolving definitions of the technology, it can be deployed easily and deliver immediate benefits if you focus on answering the right questions.

By now, we all know that cloud computing is an excellent asset for storing data that doesn’t need to be immediately analysed or accessed. But what is the solution for data that does require real-time processing?

That’s where edge computing comes into play.

Considering that edge computing is still a relatively new technology, you might be asking yourself questions such as:

• What exactly is edge computing?
• What are the benefits of processing data at the edge?
• How is the edge different from the cloud?

These are questions that industrial and manufacturing companies are contemplating when thinking about how to incorporate the edge into their operations. Because edge computing is such a new concept, there isn’t one definition that can answer these questions for every user. That doesn’t mean, however, that they can’t be answered in a way that will help you make an effective decision about this new area of technology.

Dueling definitions

At present, there are several definitions that explain edge computing:
• Gartner says the “edge” is the physical location where things and people connect with the networked digital world.
• The OpenFog Consortium defines edge computing as the process of placing data and data-intensive applications at the edge (i.e., on premises) to reduce the volume and distance that data must be moved.
• The Linux Foundation defines edge technology as a tool to improve the performance, operating cost and reliability of applications and services. The Foundation goes on to explain that edge computing shortens the distance between devices and the cloud, thereby mitigating the latency and bandwidth constraints of today’s Internet and enabling the development of new applications.

Taking these three definitions into account, we can arrive at this general concept: edge computing enables data and analytics gathering at the source, and involves pushing computing applications, data and services away from centralised locations to the “edge” of the network.

Seems straightforward enough, but it gets a bit more complicated when delving into the approach and purpose of specific technology deployments. This is especially true if you’re thinking about deploying an edge device to help with your operation’s real-time analytics capabilities.

If that’s the case, here are three questions to keep in mind:

How much data do you have and where is it stored? 

With edge computing, companies benefit from real-time processing capabilities, decreased latency and reduced costs. When considering how to deploy edge computing, knowing the amount of data that your operations will be processing and storing at the edge will ultimately help you determine the best course of action.

Given the broad range of industries and processes that could benefit from edge computing, it’s impossible to predict how much data individual industrial and manufacturing companies will actually push to the edge in the long run. What we can be confident of is that edge computing needs will only increase. New research from Gartner estimates that, by 2022, 50 percent of data is going to be created and processed at the edge.

How connected is your facility?

Most edge definitions presume that high levels of connectivity are required for edge devices. However, many industries have been deploying systems that would now be considered “edge” using minimal connectivity to the outside world. For example, the oil and gas industry has been utilising edge computing to monitor conditions on remote rigs located hundreds of miles away from the nearest data centre. In these scenarios, the edge computing systems share only a subset of the most important data with core systems at headquarters or in regional data centres.
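To make that pattern concrete, here is a minimal sketch of the kind of edge-side aggregation such a rig-monitoring system might perform: raw sensor samples stay on the edge node, and only summary statistics and threshold alerts are forwarded upstream. The signal names, thresholds and the forward_to_headquarters stub are illustrative assumptions, not a reference to any specific vendor’s implementation.

```python
# Illustrative sketch: an edge node on a remote rig aggregates raw sensor
# readings locally and forwards only a compact summary to core systems.
# All signal names and thresholds here are hypothetical examples.
from __future__ import annotations

import statistics
from datetime import datetime, timezone

# Hypothetical alarm thresholds for a pump on a remote rig.
THRESHOLDS = {"vibration_mm_s": 7.1, "bearing_temp_c": 85.0}


def summarise(readings: dict[str, list[float]]) -> dict:
    """Collapse a window of raw readings into summary stats plus alerts."""
    summary = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "signals": {},
        "alerts": [],
    }
    for signal, values in readings.items():
        summary["signals"][signal] = {
            "mean": round(statistics.fmean(values), 2),
            "max": max(values),
            "samples": len(values),
        }
        limit = THRESHOLDS.get(signal)
        if limit is not None and max(values) > limit:
            summary["alerts"].append(f"{signal} exceeded {limit}")
    return summary


def forward_to_headquarters(payload: dict) -> None:
    """Stand-in for the intermittent uplink to a regional data centre."""
    print("Forwarding summary:", payload)


if __name__ == "__main__":
    # A one-minute window of raw samples that never leaves the edge node.
    window = {
        "vibration_mm_s": [3.2, 3.4, 7.8, 3.1],
        "bearing_temp_c": [71.0, 72.5, 73.1, 72.9],
    }
    forward_to_headquarters(summarise(window))
```

Only the compact summary ever crosses the uplink; the full-resolution readings remain at the edge, which is what keeps bandwidth needs low for these minimally connected sites.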

How secure are your operations?

A key difference in many edge environments is that there tend to be fewer people on hand to manage the hardware and software effectively. In the past, limited or no connectivity often meant that these systems or sites were largely ignored. However, as these remote sites and systems become more connected, a higher level of security is needed. In short, you will need some sort of edge security strategy for these environments, and it is a good idea to treat them as having unique requirements rather than simply viewing them as an extension of your existing security measures.

When thinking about how to deploy edge devices, consider your operation’s data storage, connectivity levels and security requirements to determine the right course of action. It’s also important to keep in mind the newness of the edge concept as a whole: as more use cases are developed for new operations, edge devices will become even more varied in how they are used.

