The Business & Technology Network
Helping Business Interpret and Use Technology
Why the Future of Automation Is Teaching Machines to See Like Humans

DATE POSTED: October 7, 2025

Watch more: Digital Shift: GridRaster, Dijam Panigrahi

The conversation around robotics often swings between two poles: a utopian vision of fully autonomous machines and a dystopian fear of human workers being displaced.

But as the global push toward more automated, efficient and safe industrial environments accelerates, a third option is emerging: one where machines and humans collaborate in tightly integrated workflows, guided by advances in spatial artificial intelligence (AI) and mixed reality.

“You’re going to see a lot of collaboration between robots and humans,” Dijam Panigrahi, co-founder of GridRaster, told PYMNTS, framing the next phase of industrial transformation not as a replacement of human labor but as an augmentation of it. “It’s not that one is replacing the other. Still 80 to 90 percent of the job has to be a collaborative effort. And this technology can play a huge role in upskilling the workforce.”

After all, as hardware matures, costs decline, and spatial intelligence platforms continue to improve, the line between the physical and digital in industrial settings will blur further.

“We call ourselves the spatial intelligence company for a reason,” Panigrahi said. “Understanding the 3D space changes how you can improve automation, safety and quality. That’s what unlocks the next stage of manufacturing.”

Mapping Space to Unlock Productivity

GridRaster’s thesis is that spatial intelligence, an AI-powered, real-time understanding of the physical environment, is the missing layer in many automation initiatives. By combining mixed reality interfaces with 3D mapping and computer vision, GridRaster enables both humans and machines to “see” and navigate their environment more effectively.

“By having the understanding of the 3D space, you can improve the automation efficiency of any process,” Panigrahi said. “You can enable an operator to do the task much more assuredly, much more safely, and at much higher quality.”

That capability is particularly important in settings where production is complex, variable, and often hazardous. Think: aircraft maintenance depots, metal forging operations, or additive manufacturing shops. These environments resist the static, rule-based automation of assembly lines. Instead, they require flexible systems that can adapt to the real-world variability of parts, layouts and workflows.

Panigrahi described a typical scenario: “You have an aircraft wing lying there, maybe a dome next to it. You want to guide a robot to go and do a specific inspection. The operator wears a headset, walks around, and our technology registers everything in that environment, identifies what is what, and creates the 3D surface of each object. That information is instantaneously passed on to the robot.”
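The scan-register-dispatch flow Panigrahi describes can be sketched in simplified form. Everything below is illustrative, assumed for the example — the class and function names are not GridRaster's API, and the bounding-box step stands in for real 3D surface reconstruction.

```python
from dataclasses import dataclass

# Hypothetical sketch of the flow described above: the headset scan yields
# labeled point clouds, geometry is registered per object, and the target
# object's surface data is handed off to the robot.

@dataclass
class ScannedObject:
    label: str    # e.g. "aircraft_wing", as identified by a vision model
    points: list  # 3D points captured as the operator walks the environment

def bounding_surface(obj: ScannedObject) -> dict:
    """Reduce a point cloud to an axis-aligned bounding box — a toy
    stand-in for full 3D surface reconstruction."""
    xs, ys, zs = zip(*obj.points)
    return {
        "label": obj.label,
        "min": (min(xs), min(ys), min(zs)),
        "max": (max(xs), max(ys), max(zs)),
    }

def dispatch_inspection(scene: list, target_label: str) -> dict:
    """Pass the registered geometry of the target object to the robot."""
    for obj in scene:
        if obj.label == target_label:
            return bounding_surface(obj)
    raise LookupError(f"{target_label} not found in scanned scene")

# A two-object scene like the one in the anecdote: a wing and a dome.
scene = [
    ScannedObject("aircraft_wing", [(0, 0, 0), (12, 2, 0.3), (6, 1, 0.2)]),
    ScannedObject("dome", [(15, 0, 0), (17, 2, 2), (16, 1, 1)]),
]
task = dispatch_inspection(scene, "aircraft_wing")
print(task["label"], task["min"], task["max"])
```

The point of the sketch is the handoff: once the operator's walkthrough has labeled and registered each object, the robot receives geometry it can act on without hours of manual setup.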

Tasks that once required hours of manual setup can now be completed in minutes. Operators can even simulate robotic movements virtually before executing them, gaining confidence in the process. The economics are particularly favorable in high-value industries like aerospace and defense.

“If I can get an aircraft up and running and save one hour, I pretty much save them a hundred thousand dollars or more,” Panigrahi said. “That justifies spending $3,500 on a headset.”

Cobots: Human Judgment, Robotic Precision

The focus on “cobots” (collaborative robots) reflects Panigrahi’s conviction that machines excel at repetition and endurance, while humans bring intuition and contextual judgment.

Robots thrive on consistency and can take on dangerous, fatiguing tasks. In heavy industries like casting and forging, where workers handle hot metal while wearing protective gear, cobots can do much of the hazardous work.

Beyond productivity and safety, spatial intelligence and mixed reality hold promise for workforce development. As veteran technicians retire and manufacturing faces a skills shortage, intuitive, immersive tools can help new workers learn complex tasks more quickly.

By embedding expertise in the tools themselves — allowing operators to visualize tasks, simulate procedures, and interact with cobots — companies can bridge generational knowledge gaps and accelerate training.

“You don’t have to be an expert,” Panigrahi said. “Just an operator who can put on a headset and walk around. The rest will be taken care of. You just tell what needs to be done.”

That simplicity lowers barriers for small and mid-size manufacturers, which often lack the capital and technical expertise to deploy advanced robotics. By shifting from capital expenditures to service-based models and reducing the need for specialized programming, spatial intelligence solutions could democratize access to automation.

Data as the New Industrial Substrate

Underpinning this human-robot collaboration is a sophisticated data architecture. Spatial AI depends on massive streams of visual and positional data, but Panigrahi noted that not all of it needs to be shipped to the cloud.

“The AI is going to be pretty much everywhere,” he said. “But people think all the data has to go to the cloud. Not necessarily. You can run models locally, even in completely air-gapped systems without Wi-Fi.”

That approach is essential in industries like aerospace and defense, where security restrictions are stringent. GridRaster’s platform supports multilevel security, ensuring that sensitive data can be confined to devices or local servers as needed. The ability to run inference at the edge, close to the machine and the human operator, also reduces latency, making real-time guidance and feedback practical.
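The edge-versus-cloud tradeoff described here can be illustrated with a toy routing policy. This is an assumption-laden sketch, not GridRaster's implementation: the function name and parameters are invented for the example, and an air-gapped deployment simply never has a cloud option.

```python
# Toy decision policy for where a vision frame gets processed.
# Security constraints (air-gapped sites, sensitive data) force the edge;
# otherwise latency decides, since real-time guidance needs fast feedback.

def choose_inference_site(air_gapped: bool, frame_sensitive: bool,
                          edge_latency_ms: float,
                          cloud_latency_ms: float) -> str:
    if air_gapped or frame_sensitive:
        return "edge"  # data is confined to the local device or server
    # No security constraint: pick the faster path for the operator.
    return "edge" if edge_latency_ms <= cloud_latency_ms else "cloud"

print(choose_inference_site(True, False, 20.0, 80.0))    # air-gapped site
print(choose_inference_site(False, False, 20.0, 80.0))   # latency favors edge
print(choose_inference_site(False, False, 200.0, 80.0))  # cloud wins on speed
```

The design choice the sketch captures is that security is a hard constraint while latency is a soft one: the cloud path is only ever considered when no restriction applies.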

The post Why the Future of Automation Is Teaching Machines to See Like Humans appeared first on PYMNTS.com.