With a rich background in analytics, intelligence, and security, Malik Haidar has spent his career on the front lines, helping multinational corporations navigate the complex intersection of technology and business strategy. Today, he shares his insights on the maturation of the Industrial Internet of Things, exploring the practical challenges and strategic imperatives driving the industry. We will delve into the dual motivations behind edge computing, the intricate process of deploying AI on resource-constrained devices, the critical convergence of IT and OT systems, and the strategic importance of open standards and robust connectivity in building flexible, future-proof industrial ecosystems.
Edge computing is increasingly driven by technical needs like low latency and by legal requirements such as data sovereignty. How do you see companies balancing these different motivations, and could you share an example of how this plays out in a real-world IIoT deployment?
It’s a fascinating and necessary balancing act. In my experience, the most successful deployments don’t see these as competing motivations but as two sides of the same coin. The technical need for speed and the legal need for control are converging. Take a modern manufacturing facility, for instance. You have a high-speed production line where quality inspection has to happen in real-time. We’re talking about reaction times measured in tens of milliseconds. Sending image data to the cloud for analysis and waiting for a response is simply not an option; the latency would be catastrophic for quality control. By placing AI-powered vision systems and processing right there on the line, you solve the technical problem. At the same time, you’ve solved a major data sovereignty issue because that sensitive, proprietary production data never has to leave the four walls of the plant, satisfying governance and security mandates. This integrated approach, where the solution for latency is also the solution for security, is where the real value of edge is unlocked.
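To make the latency argument concrete, here is a minimal, hypothetical sketch of a line-side inspection loop. `capture_frame` and `classify_locally` are stand-ins for a real camera interface and an on-device model, and the 50 ms budget is only illustrative of the "tens of milliseconds" window described above; the point is that both the image and the decision stay on the plant network.

```python
import random
import time

LATENCY_BUDGET_MS = 50  # illustrative: a "tens of milliseconds" reaction window

def capture_frame():
    # Stand-in for a line-side camera grab; real code would return an image array.
    return [random.random() for _ in range(16)]

def classify_locally(frame):
    # Stand-in for on-device inference with a compact, quantized vision model.
    return "pass" if sum(frame) / len(frame) < 0.6 else "reject"

def inspect_part():
    start = time.perf_counter()
    frame = capture_frame()            # image data stays inside the plant network
    verdict = classify_locally(frame)  # no cloud round trip in the decision path
    elapsed_ms = (time.perf_counter() - start) * 1000
    return verdict, elapsed_ms

if __name__ == "__main__":
    verdict, elapsed_ms = inspect_part()
    print(f"{verdict} in {elapsed_ms:.2f} ms (budget: {LATENCY_BUDGET_MS} ms)")
```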
Deploying AI on low-powered edge devices presents challenges related to model size and computing overhead. What are the key steps an enterprise should take to move a project from a prototype to a fully operational system, and what common pitfalls should they avoid?
Moving from a successful prototype to a full-scale operational system is often where projects stumble. The biggest pitfall is underestimating the engineering effort required to bridge that gap. A model that runs beautifully on a powerful workstation in a lab behaves very differently on a low-powered microcontroller on the factory floor. The first key step is to think about the entire lifecycle from the beginning—from dataset creation and model training all the way to on-device inference. A common mistake is to treat these as separate stages. Instead, you need a cohesive platform approach, such as those we’re seeing from companies like Edge Impulse, that allows you to manage this entire flow. This helps you right-size your AI models, optimizing them to run efficiently without overwhelming the device’s computing resources. The goal is to achieve a “plug-and-play” capability, minimizing the bespoke integration that can sink budgets and timelines, so the system works seamlessly across a variety of hardware.
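In practice, right-sizing usually means quantizing and compressing the model before it ever touches the target device. As a hedged illustration, here is one common way to do that with TensorFlow Lite's post-training quantization; the model path is hypothetical, and platforms like Edge Impulse wrap similar optimizations behind their own tooling, so this is a sketch of the underlying idea rather than any specific vendor's pipeline.

```python
import tensorflow as tf

# Path and model are illustrative; assumes a trained TensorFlow SavedModel exists here.
converter = tf.lite.TFLiteConverter.from_saved_model("models/inspection_model")

# Post-training quantization shrinks the model and cuts compute overhead;
# the accuracy impact still has to be validated against held-out factory data.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("inspection_model.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KiB")
```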
Bridging the gap between IT and OT systems is a major focus, with industrial PCs and modular gateways becoming more common. Can you walk us through the practical challenges of integrating these systems on a factory floor and provide a key metric for success?
The factory floor is an entirely different world from the corporate data center. Just a decade ago, the idea of putting robust computing power right next to heavy machinery was a recipe for disaster due to heat, vibration, and electrical noise. The practical challenge is creating a secure and reliable bridge. On one side, you have the OT world of industrial control systems, which prioritize uptime and safety above all else. On the other, you have the IT world, focused on data analysis, scalability, and security. Bringing them together with edge-ready industrial PCs and modular gateways, like what Rexroth offers with its ctrlX Automation ecosystem, is about creating a safe translation layer. You need to lift that critical sensor and process data from the OT side into actionable formats for IT analytics without compromising the integrity or speed of the control systems. A key metric for success here is “data-to-decision latency”—how quickly can you pull raw data from a machine, process it locally, and turn it into an actionable insight that either improves the process or prevents a failure, all without costly, high-speed connections to the cloud.
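As a rough illustration of how "data-to-decision latency" can be measured in practice, the sketch below times the path from a sampled reading to a local decision. The sensor name, threshold, and `read_machine_sample` stub are hypothetical; in a real deployment that function would sit behind an OPC UA client or MQTT subscription on the edge gateway.

```python
import statistics
import time

def read_machine_sample():
    # Stand-in for pulling a reading across the OT/IT bridge (e.g., an OPC UA
    # client or an MQTT subscription running on a modular edge gateway).
    return {"spindle_temp_c": 71.3, "captured_at": time.perf_counter()}

def decide(sample):
    # Stand-in local analytic: flag the machine if the temperature drifts high.
    return "alert" if sample["spindle_temp_c"] > 80.0 else "ok"

def measure_data_to_decision(n_samples=100):
    latencies_ms, alerts = [], 0
    for _ in range(n_samples):
        sample = read_machine_sample()
        if decide(sample) == "alert":  # decision is made locally, no cloud hop
            alerts += 1
        latencies_ms.append((time.perf_counter() - sample["captured_at"]) * 1000)
    return statistics.median(latencies_ms), max(latencies_ms), alerts

if __name__ == "__main__":
    median_ms, worst_ms, alerts = measure_data_to_decision()
    print(f"data-to-decision: median {median_ms:.3f} ms, worst {worst_ms:.3f} ms, alerts {alerts}")
```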
Many organizations fear vendor lock-in, which can limit scalability and strategic choices. How do open standards and interoperable tools directly address this concern? Could you share an anecdote where a company successfully used them to maintain flexibility in its technology rollout?
Vendor lock-in is a very real and justified fear. It’s a strategic straitjacket. When you commit to a single supplier’s proprietary ecosystem, you’re not just buying their hardware; you’re buying their roadmap, their pricing structure, and their limitations. Open standards and interoperable tools are the antidote. They ensure that different pieces of the puzzle—devices, gateways, platforms—can speak a common language. I worked with a logistics company that was rolling out a massive asset-tracking system across several warehouses. Initially, they were pushed toward a single-vendor solution. Instead, they insisted on a platform that used industry-standard protocols. Halfway through the project, a new, more cost-effective sensor technology became available from a different vendor. Because their core system was built on open standards, they were able to integrate these new devices seamlessly without having to rip and replace their existing infrastructure. This flexibility saved them a significant amount of money and allowed them to scale their operations much faster than if they’d been locked into their original supplier’s product range.
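The mechanics behind that kind of flexibility are often mundane: a thin, vendor-neutral translation layer over standard protocols and schemas. The sketch below is entirely hypothetical (the vendors, field names, and units are invented), but it shows the principle that adding a new sensor supplier should only mean adding an adapter, never reworking the downstream system.

```python
from typing import Callable, Dict

# Every adapter emits the same vendor-neutral record, so downstream analytics
# never needs to know which supplier built the sensor.
def adapt_vendor_a(raw: Dict) -> Dict:
    # Hypothetical vendor A: Celsius temperature, battery as a 0-1 fraction.
    return {"asset_id": raw["id"],
            "temperature_c": raw["temp"],
            "battery_pct": raw["batt"] * 100}

def adapt_vendor_b(raw: Dict) -> Dict:
    # Hypothetical vendor B, added mid-rollout: Fahrenheit, battery as 0-100.
    return {"asset_id": raw["serial"],
            "temperature_c": (raw["temp_f"] - 32) * 5 / 9,
            "battery_pct": raw["battery"]}

ADAPTERS: Dict[str, Callable[[Dict], Dict]] = {
    "vendor_a": adapt_vendor_a,
    "vendor_b": adapt_vendor_b,  # new supplier, zero changes downstream
}

def normalize(vendor: str, raw: Dict) -> Dict:
    return ADAPTERS[vendor](raw)

if __name__ == "__main__":
    print(normalize("vendor_b", {"serial": "pallet-042", "temp_f": 68.0, "battery": 87}))
```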
With connectivity options ranging from LoRaWAN to 6G+ and Wi-Fi 7, how should an organization select the right communication stack for a specific industrial edge project? What are the critical trade-offs between cost, speed, and reliability they must consider during this process?
The sheer number of connectivity options can be paralyzing, but the choice always has to come back to the specific use case. There’s no single “best” option; it’s always a series of trade-offs. The first thing you have to evaluate is the data requirement. Are you sending tiny packets of sensor data once an hour, or are you streaming high-definition video for real-time quality control? For the former, a low-power, long-range technology like LoRaWAN is perfect—it’s cost-effective and reliable for non-urgent data. For the latter, you’ll need the high bandwidth and low latency of private 5G, 6G+, or Wi-Fi 7. The critical trade-off is almost always between bandwidth, power consumption, and cost. High-speed options like 6G+ offer incredible performance but come with higher infrastructure costs and power demands. The key is to map the physical environment and the application’s needs to the technology’s strengths, using a layered approach with standardized solutions from partners like Telit Cinterion or 1GLOBAL to ensure you have a robust, reliable, and cost-effective communications layer.
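One way to keep that mapping honest is to write the triage down explicitly. The function below is a hedged, illustrative first pass with made-up thresholds; it is not a substitute for a site survey or cost model, but it captures how the bandwidth, latency, power, and cost trade-offs described above steer the choice of link.

```python
def suggest_link(payload_bytes: int, sends_per_hour: float,
                 max_latency_ms: float, battery_powered: bool) -> str:
    # Thresholds are illustrative only; a real selection also weighs site
    # geometry, interference, spectrum licensing, and total cost of ownership.
    if battery_powered and payload_bytes <= 256 and sends_per_hour <= 60 and max_latency_ms >= 1_000:
        return "LPWAN such as LoRaWAN: low power and cost for small, non-urgent packets"
    if max_latency_ms < 50 or payload_bytes > 100_000:
        return "Private 5G or Wi-Fi 7: high bandwidth and low latency, higher infrastructure cost"
    return "Standard cellular or Wi-Fi: middle ground on cost, speed, and power"

if __name__ == "__main__":
    print(suggest_link(payload_bytes=48, sends_per_hour=1, max_latency_ms=60_000, battery_powered=True))
    print(suggest_link(payload_bytes=2_000_000, sends_per_hour=3_600, max_latency_ms=20, battery_powered=False))
```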
What is your forecast for industrial edge computing?
My forecast is that industrial edge computing will become so integral to operations that we’ll eventually stop calling it “edge.” It will simply be the default architecture for modern industry. The convergence of powerful, robust local processing with sophisticated AI models and seamless IT/OT integration is lowering the barrier to entry for even small and mid-sized companies. We are moving away from massive, monolithic cloud-centric systems toward a more intelligent, distributed, and resilient model of computation. This will unlock unprecedented levels of automation, efficiency, and predictive capability right on the factory floor, fundamentally changing how physical goods are made and managed. The focus will shift from just connecting things to creating autonomous systems that can sense, reason, and act locally in real-time.

