From edge AI to physical AI in smart factories: A shift in how machines perceive and act



The concept of the “smart factory” has evolved significantly over the past decade. Early industrial AI deployments, often categorized as Industry 4.0, focused on centralized analytics. This typically involved collecting data from machines, transmitting it to the cloud, and generating insights for later action.

While useful for optimization and reporting, that model is no longer sufficient. What’s changing now is not just where AI runs, but how it operates—shifting from centralized analysis to systems that can perceive, decide, and act in real time within the physical environment.

Today’s factories demand intelligence that operates in real time, directly at the point of action. Whether detecting defects on a production line, coordinating robotic motion, or identifying safety hazards, AI is increasingly expected to function as an always-on, embedded capability within industrial systems.

This shift marks a broader transition in smart factories, from traditional edge AI toward more contextual awareness and autonomous operation: systems that not only analyze data, but perceive, decide, and act within the physical world. While the promise is substantial, realizing it introduces a new set of technical challenges that require purpose-built solutions.

Why edge AI is moving closer to the machine in smart factories

Several converging forces are pushing AI workloads out of centralized infrastructure and toward the factory floor, where real-time interaction with physical systems is required.

Latency is among the most critical. In applications such as robotics, inspection, and safety monitoring, even small delays can result in defects, downtime, or safety risks. Round-trip communication to the cloud is often incompatible with these requirements. This is further compounded by the fact that many industrial environments operate with constrained, segmented, or variable network connectivity, making consistent low-latency cloud access difficult to guarantee.

Data volume is another key driver. Modern industrial systems generate vast streams of multimodal data—high-resolution video, audio signatures, vibration patterns, and increasingly, tactile inputs. Transmitting all of this data offsite is not only expensive but also unnecessary. In most cases, only a small fraction of events—such as anomalies, defects, or threshold violations—require action, making local inference far more efficient.
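As a rough sketch of this idea (the sensor name and the 0.8 threshold are hypothetical, invented for illustration, not taken from any specific deployment), local inference can act as a filter so that only actionable events leave the device:

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    sensor_id: str
    value: float  # e.g., normalized vibration amplitude

def should_transmit(event: SensorEvent, threshold: float = 0.8) -> bool:
    """Run a cheap local check; only threshold violations leave the device."""
    return event.value > threshold

# A synthetic ramp of 10,000 readings: only the anomalous tail is sent offsite.
events = [SensorEvent("motor-7", v / 10_000) for v in range(10_000)]
to_send = [e for e in events if should_transmit(e)]
print(f"Transmitting {len(to_send)} of {len(events)} events")
```

Even this trivial filter drops the bulk of the traffic; in practice the local check would itself be a small model rather than a fixed threshold.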

Figure 1 The transition from centralized AI to edge AI represents a fundamental shift in industrial computing. Source: Synaptics

Security and data sovereignty reinforce this trend. Manufacturing processes and operational data are highly sensitive, and many organizations prefer to keep raw data within controlled environments.

The emergence of physical AI

On top of those factors, as AI moves closer to machines, its role is expanding. Instead of simply classifying or predicting, systems are beginning to interact with their environments in more dynamic ways.

This is the essence of physical AI in industrial systems, where machines can:

  • Interpret complex, multimodal sensory input in real time
  • Adapt to changing physical conditions
  • Execute actions with precise timing and coordination

Figure 2 Edge AI-enabled systems now interact with their environments in more dynamic ways. Source: Synaptics

Consider robotics as a leading example. Advances in tactile sensing now allow robotic systems to “feel” objects, adjusting grip force based on material properties. In one recent deployment developed with our partner Grinn, a robotic hand integrates distributed touch sensing with embedded machine learning, enabling nuanced manipulation of objects ranging from fragile materials to rigid components.

Such capabilities represent a shift from scripted automation to adaptive, context-aware behavior, bringing machines closer to human-like interaction with the physical world.
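A toy control loop can illustrate the principle. This is not the Grinn deployment's implementation; the force step, limits, and deformation threshold are invented for illustration:

```python
def adjust_grip(force: float, slip_detected: bool, deformation: float,
                step: float = 0.1, max_force: float = 5.0) -> float:
    """One control step: tighten on slip, ease off if the object deforms.

    Purely illustrative; real tactile controllers fuse many taxel
    readings and run at kilohertz rates on embedded hardware.
    """
    if slip_detected:
        return min(force + step, max_force)  # object slipping: grip harder
    if deformation > 0.05:
        return max(force - step, 0.0)        # object deforming: back off
    return force

# Fragile object: a slip event tightens the grip, then measured
# deformation signals "too tight" and the controller eases off.
force = 1.0
force = adjust_grip(force, slip_detected=True, deformation=0.0)
force = adjust_grip(force, slip_detected=False, deformation=0.08)
```

The point is the feedback structure, not the numbers: sensing, decision, and actuation close a loop on-device, with no round trip to a server.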

Key challenges in deploying edge and physical AI

Despite the momentum, implementing AI at the edge, and especially physical AI, presents several challenges.

  1. Balancing performance and power

Industrial AI systems must operate continuously, often in constrained thermal and power environments. Unlike data centers, where peak performance is the primary metric, factory deployments prioritize sustained performance per watt.

Always-on workloads, such as predictive maintenance or safety monitoring, require efficient architectures that can run continuously without excessive energy consumption.

  2. Managing workload diversity

Industrial AI is inherently multimodal. A single system may combine:

  • Vision for inspection
  • Audio for anomaly detection
  • Vibration analysis for predictive maintenance
  • Sensor fusion for robotics and control

These workloads have different computational characteristics, making it difficult to rely on a single type of processor. Increasingly, heterogeneous architectures that combine CPUs, GPUs, NPUs, and specialized sensors are required to efficiently handle diverse tasks.
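A minimal sketch of such routing, with a hypothetical dispatch table (real platforms expose this kind of mapping through their SDK schedulers, and the workload names here are invented):

```python
# Hypothetical routing table: which engine handles which modality.
DISPATCH = {
    "vision_inspection": "npu",   # dense CNN inference
    "audio_anomaly": "npu",       # small acoustic classifier
    "vibration_trend": "cpu",     # lightweight rolling statistics
    "sensor_fusion": "cpu+npu",   # mixed control logic and inference
}

def route(workload: str) -> str:
    """Pick a compute engine for a workload, defaulting to the CPU."""
    return DISPATCH.get(workload, "cpu")
```

The design choice the table encodes is the key point: each modality runs where it is cheapest, rather than forcing everything through one processor type.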

  3. Ensuring long-term reliability

Industrial systems often remain in operation for years or even decades. This creates unique requirements around:

  • Silicon longevity and availability
  • Stable software ecosystems
  • Predictable behavior across revisions

Frequent hardware changes or software incompatibilities can disrupt operations and increase lifecycle costs.

  4. Addressing model drift and lifecycle management

Unlike controlled lab environments, factories are dynamic. Lighting conditions change, materials vary, and equipment degrades over time. These factors can lead to model drift, where AI performance degrades after deployment.

Addressing this requires:

  • Continuous monitoring and validation
  • Local recalibration capabilities
  • Secure, manageable update mechanisms

AI in industrial environments must be treated not as a static feature, but as a lifecycle-managed subsystem.
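One minimal way to sketch the monitoring piece, using mean prediction confidence as a stand-in for model health (the baseline, tolerance, and window values are hypothetical):

```python
from collections import deque

class DriftMonitor:
    """Flag drift when mean prediction confidence drops below a floor.

    A deliberately simple proxy; production systems also track input
    statistics (e.g., population stability index) and label feedback.
    """
    def __init__(self, baseline: float = 0.90, tolerance: float = 0.10,
                 window: int = 100):
        self.baseline = baseline
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)

    def observe(self, confidence: float) -> bool:
        """Record one prediction confidence; True if drift is suspected."""
        self.scores.append(confidence)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough evidence yet
        mean = sum(self.scores) / len(self.scores)
        return mean < self.baseline - self.tolerance
```

When the flag fires, the lifecycle machinery takes over: local recalibration if possible, otherwise a secure model update pushed from the training environment.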

  5. Integrating compute and connectivity

As systems become more distributed, the interaction between compute and connectivity becomes critical. Many manufacturers still rely on separate vendors for processing and wireless communication, leading to integration challenges and fragmented support models.

In physical AI systems, high-bandwidth, low-latency data movement between sensors, processors, and actuators is essential for safe and reliable operation.

The role of Wi-Fi 7 and next-generation connectivity

Connectivity is often a critical enabler of physical AI in smart factories, where real-time coordination between distributed systems depends on low-latency, high-reliability communication. As industrial systems scale in complexity and device density, traditional wireless technologies struggle to meet performance requirements.

Advancements in Wi-Fi and Bluetooth are addressing these requirements, but wireless connectivity can no longer be viewed as a standalone, discrete capability. Without deterministic, low-latency links, many physical AI use cases, particularly those requiring coordination across multiple systems, are simply not feasible.

There is a growing need for, and clear benefit in, integrating processing and connectivity. Doing so reduces system complexity, improves reliability, strengthens security, and simplifies development for design teams.

Bringing together connectivity and processing changes how design decisions are made early in the product lifecycle. When core system functions work together, teams can simplify architecture choices from the outset and reduce the number of variables that typically slow progress.

Integrating connectivity and compute has benefits beyond the engineering and manufacturing phase. Over the lifetime of a product, integration helps reduce power consumption, lower device weight, and decrease overall system cost. At scale, even small reductions in size, mass, and power can translate into meaningful savings across production, shipping, and years of deployment.

Of course, wireless performance, range, and reliability are still critical in their own right. While existing Wi-Fi and Bluetooth standards have advanced the state of wireless connectivity, Wi-Fi 7 introduces capabilities that enable more scalable and deterministic edge AI, supporting higher device densities and more predictable low-latency communication in smart factory environments:

  • Multi-link operation (MLO) allows devices to transmit data simultaneously across multiple frequency bands. This provides redundancy and helps maintain consistent, low-latency communication even in environments with interference or congestion.
  • Wider channel bandwidth (up to 320 MHz) supports high-throughput applications such as machine vision, where large volumes of image data must be transmitted quickly and reliably.
  • Higher spectral efficiency (via 4K QAM) enables more devices to share the same wireless spectrum without degrading performance, an essential feature as industrial systems scale.

Toward a new system architecture

The convergence of edge AI, physical AI, and advanced connectivity is reshaping how industrial systems are designed, requiring more integrated and system-level approaches.

Some guiding principles to consider in developing such intelligent deployments are:

  1. Start with system constraints

Rather than beginning with AI models, successful deployments start with system-level requirements:

  • Latency and timing constraints
  • Power and thermal limits
  • Reliability and safety considerations

These factors should guide architecture decisions, including silicon selection and model design.
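A sketch of what constraint-first screening can look like in practice; the candidate names and budget numbers below are invented for illustration, not benchmarks of real parts:

```python
from typing import NamedTuple

class Candidate(NamedTuple):
    name: str
    latency_ms: float   # worst-case inference latency
    power_w: float      # sustained power draw
    rated_years: int    # vendor longevity commitment

def meets_constraints(c: Candidate, max_latency_ms: float = 10.0,
                      power_budget_w: float = 3.0,
                      min_years: int = 10) -> bool:
    """Screen a silicon/model candidate against system-level limits."""
    return (c.latency_ms <= max_latency_ms
            and c.power_w <= power_budget_w
            and c.rated_years >= min_years)

candidates = [
    Candidate("cloud-offload", latency_ms=120.0, power_w=1.0, rated_years=15),
    Candidate("edge-npu", latency_ms=6.0, power_w=2.5, rated_years=15),
]
viable = [c.name for c in candidates if meets_constraints(c)]
```

Note the ordering: the model and its accuracy are chosen only from among candidates that survive this screen, not the other way around.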

  2. Embrace distributed intelligence

Instead of centralizing all processing, intelligence should be distributed across the system:

  • Sensor-level processing for early data reduction
  • Edge inference for real-time decisions
  • Connection to cloud-based training and optimization for continuous improvement

This layered approach balances performance, efficiency, and scalability.
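The three layers above can be sketched as a pipeline; the stages and thresholds here are purely illustrative:

```python
def sensor_stage(raw: list[float]) -> list[float]:
    """Sensor level: discard obviously normal samples early."""
    return [x for x in raw if abs(x) > 0.5]

def edge_stage(filtered: list[float]) -> list[str]:
    """Edge level: make the real-time call on each surviving sample."""
    return ["alert" if x > 0.9 else "log" for x in filtered]

def cloud_stage(decisions: list[str]) -> dict[str, int]:
    """Cloud level: aggregate outcomes to guide retraining, not control."""
    return {d: decisions.count(d) for d in set(decisions)}

raw = [0.1, 0.6, 0.95, -0.2, 0.7]
summary = cloud_stage(edge_stage(sensor_stage(raw)))
```

Each layer shrinks the data it passes upward, and only the edge stage sits on the real-time path; the cloud sees summaries, not raw streams.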

  3. Design for multimodal integration

Physical AI systems rely on combining multiple sensing modalities. Architectures must support efficient data fusion and coordination across these inputs.

  4. Treat AI as a lifecycle capability

Deployment is only the beginning. Ongoing monitoring, updates, and optimization are essential to maintaining performance over time.

The path forward

The smart factory is no longer defined solely by automation, but by intelligence embedded throughout the system, enabling decision-making that operates in real time, adapts to its environment, and interacts with the physical world.

This transition from centralized AI to edge AI represents a fundamental shift in industrial computing. Performance and accuracy are still important, but what matters most is whether AI can operate reliably under real-world constraints: continuously, efficiently, securely, and in close coordination with physical processes.

Advances in heterogeneous computing, integrated connectivity, and open software ecosystems—as evidenced by AI-native platforms such as the Synaptics Astra Platform—are enabling this shift.

As these elements come together, the factory floor is becoming not just automated but perceptive and adaptive, composed of increasingly autonomous systems that do more than execute tasks: they understand context and respond accordingly.

Neeta Shenoy is VP of marketing at Synaptics.

Special Section: Smart Factory

The post From edge AI to physical AI in smart factories: A shift in how machines perceive and act appeared first on EDN.


