3 Steps to Optimizing System Reliability

Author: Jon Herlocker


A recent post on this blog introduced three steps you can take to optimize asset reliability in your facilities using physics-based modeling and digital twin technology. Today we’ll cover the first step in this optimization process: reducing the total sensor count in your environment.

When the internet of things (IoT) came along and the notion of 24/7 monitoring promised end-to-end detectability for system-wide issues, facilities everywhere began equipping their systems with sensors anywhere one would fit. The idea was that “more was more”: the more sensors you added, the more data you could collect about system performance, and the more you could do to monitor and maintain the system effectively.

Sounds good, right? Except for this: more sensors also mean more hassles, mostly tied to the cost and nuisance of adding sensors to your environment:

  • Buying, installing, and maintaining new sensors drives up costs.
  • Engaging IT to plan and carry out expanded sensor coverage typically takes months of everyone’s valuable time.
  • Teams take on new workload as the typical operator’s scope of duties expands.
  • In some cases, installing sensors in hard-to-reach places raises life-safety concerns.

So, while it’s important to gather as much relevant sensor data as you can to achieve world-class condition monitoring for your mechanical assets, it’s ideal if you can do this without having to install any new sensors. What’s the secret? Instead of placing new sensors into your existing environment, consider the ways you can calculate “virtual” sensor values by applying physical laws to the data you already collect.

Consider an example: the cooling tower in a building contains equipment that exchanges water at varying temperatures and uses a fan to produce cool air for the building. Let’s assume that as part of maintaining this system, the operator wants to keep a keen eye on the system’s overall power consumption.

The common way of tracking power consumption would be to attach a kilowatt power sensor that sends data back to the operator’s monitoring software. But let’s say it’s difficult to access some portions of the tower where sensors would be required, and maintaining those sensors would mean hiring outside contractors every time service or replacement is needed. What if there were a way to get by without using a sensor at all?

By mapping all system elements onto a digital twin, the operator has the data needed to determine expected power consumption in the tower by working backward from basic physics. The temperature differential of the water entering and exiting the tower provides a key input, along with the known physical properties of the system itself. This in turn lets the monitoring system calculate how hard the tower needs to work to deliver the current cooling level, and the consequent power utilization required to do so.
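As a rough illustration of this kind of virtual sensor, the sketch below estimates the tower’s heat rejection from the water temperature differential and flow rate, then derives an approximate fan power figure. All constants, the rule-of-thumb specific power, and the example readings are illustrative assumptions for this post, not values from any particular tower or vendor; a real digital twin would use the equipment’s actual performance curves.

```python
# Hypothetical "virtual power sensor" for a cooling tower, computed
# from measurements the facility likely already collects. All numbers
# here are illustrative assumptions, not vendor or site data.

WATER_SPECIFIC_HEAT = 4.186  # kJ/(kg*K), c_p of water
WATER_DENSITY = 997.0        # kg/m^3, near room temperature

def heat_rejected_kw(flow_m3_per_h, t_in_c, t_out_c):
    """Heat rejected by the tower: Q = m_dot * c_p * delta_T."""
    mass_flow_kg_s = flow_m3_per_h * WATER_DENSITY / 3600.0
    return mass_flow_kg_s * WATER_SPECIFIC_HEAT * (t_in_c - t_out_c)

def estimated_fan_power_kw(heat_kw, kw_per_ton=0.05):
    """Very rough fan power from an assumed specific power (kW per
    ton of heat rejection); real towers would use a fan curve."""
    tons = heat_kw / 3.517  # 1 ton of refrigeration ~ 3.517 kW
    return tons * kw_per_ton

# Example readings (assumed): 120 m^3/h of water cooled from 35.0 C
# to 29.5 C across the tower.
q = heat_rejected_kw(flow_m3_per_h=120.0, t_in_c=35.0, t_out_c=29.5)
p = estimated_fan_power_kw(q)
print(f"Heat rejected: {q:.0f} kW, estimated fan power: {p:.1f} kW")
```

The point is not the specific constants but the pattern: once the twin knows the flow rate and the two temperatures, the expected power draw falls out of physics rather than out of a new, hard-to-reach sensor.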

By forgoing the installation of new sensors and instead making better use of the sensors already in your environment, you can save on installation costs while reducing your maintenance burden and the risk of unwanted outcomes.

But sometimes you need sensors anyway, and maintaining them is an inescapable part of your operational duties. In the next post in this series, we’ll talk about how you can also use physics and a digital twin to detect sensor failures, making it easier to assure reliability through basic computations on the sensor data you collect.