As humanoid robots move closer to real-world deployment, the conversation is shifting from innovation to trust. Beyond performance, one question stands out: can these systems be trusted?
That was the focus of a recent seminar hosted by Lattice Semiconductor, bringing together experts from across the ecosystem, including Eric Sivertson, Petr Shamsheyeu, and Steve Clark.
Humanoid robots are cyber-physical systems. Safety has always been a priority—but security is now just as critical. A robot can be mechanically safe, yet still vulnerable.
Once connected, it becomes exposed to the same remote threats as any networked system.
As Eric Sivertson put it during the session: “You can have a safe humanoid, but if it’s connected and vulnerable, it can still be exploited.”
And unlike traditional devices, humanoids interact with the real world. A breach doesn’t just compromise data; it can impact physical environments.
The panel made one thing clear: security can’t be layered on later. It must be built in from the ground up, combining hardware and software defenses. Technologies like FPGAs enable real-time threat detection, while TPMs anchor trust at the hardware level. Together, they create systems that are not only functional but resilient.
In parallel, Petr Shamsheyeu highlighted the importance of hardware-level safety mechanisms, such as sensor fusion and real-time validation, to reduce risks like delayed responses or false positives in safety-critical environments.
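To make the idea concrete, here is a minimal sketch of the kind of cross-validation such a fusion layer might perform. The function name, tolerance value, and sensor readings are all hypothetical; the point is only the logic of refusing to act on a single outlier while still tolerating one faulty sensor.

```python
import statistics

def validate_reading(readings, tolerance=0.5):
    """Cross-check redundant sensor readings (hypothetical fusion step).

    Returns the median of the agreeing sensors if a majority agree
    within `tolerance`, or None to flag a fault instead of acting on
    a single outlier.
    """
    med = statistics.median(readings)
    agreeing = [r for r in readings if abs(r - med) <= tolerance]
    # Require a majority of sensors to agree before trusting the value.
    if len(agreeing) * 2 > len(readings):
        return statistics.median(agreeing)
    return None

# Three redundant distance sensors; one is faulty.
print(validate_reading([1.02, 0.98, 7.5]))  # median of the agreeing pair
print(validate_reading([1.0, 4.0, 9.0]))    # no consensus -> None
```

Returning `None` rather than a best guess is deliberate: in a safety-critical loop, a flagged fault that triggers a safe stop is preferable to a confidently wrong reading.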
As Steve Clark underlined, the quantum threat isn’t a future problem; it’s already underway.
Today’s public-key encryption (RSA, ECC) will be broken by future quantum computers. That creates a critical risk known as “harvest now, decrypt later” (HNDL).
Sensitive data captured today could be exposed tomorrow. And regulators are not waiting: migration timelines to post-quantum cryptography are already being set.
For systems designed to last 10–20 years, like humanoid robots, this is not optional; it’s a design constraint.
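One common transition strategy is hybrid key establishment: combine a classical shared secret with a post-quantum one, so the session key stays safe as long as either scheme holds. The sketch below shows the combiner idea using an HKDF-style extract/expand construction (RFC 5869); the two input secrets are random stand-ins, not a real ECDH or KEM exchange.

```python
import hashlib
import hmac
import secrets

def hybrid_derive_key(classical_secret: bytes, pqc_secret: bytes,
                      info: bytes = b"session-key", length: int = 32) -> bytes:
    """HKDF-style combiner over both secrets.

    The derived key remains secure as long as *either* input secret
    is unbroken -- the usual rationale for hybrid classical + PQC schemes.
    """
    # Extract: compress both secrets into one pseudorandom key.
    prk = hmac.new(b"hybrid-kdf-salt", classical_secret + pqc_secret,
                   hashlib.sha256).digest()
    # Expand: derive the requested number of output bytes.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-ins for an ECDH shared secret and a PQC KEM shared secret.
ecdh_secret = secrets.token_bytes(32)
kem_secret = secrets.token_bytes(32)
session_key = hybrid_derive_key(ecdh_secret, kem_secret)
print(len(session_key))  # 32
```

Against HNDL specifically, the benefit is that traffic recorded today cannot be decrypted later by breaking only the classical component.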
Beyond encryption, Steve highlighted a more fundamental layer: device identity. In a world of autonomous systems, trust starts with knowing which device you are talking to and whether it is genuine.
This is where PKI (Public Key Infrastructure) becomes critical. It gives every device a verifiable cryptographic identity, with certificates that can be issued, authenticated, and revoked across the device lifecycle.
Combined with post-quantum cryptography (PQC), it ensures that trust holds even against future threats.
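The core of PKI-based identity is a chain of trust: a device certificate is trusted only if it chains back, issuer by issuer, to a root the verifier already trusts. The toy below shows only that chain-walking logic; real PKI (X.509) also verifies a cryptographic signature at every hop, and all names here are hypothetical.

```python
# Toy certificate chain: each entry maps a subject to its issuer.
# Real X.509 validation also checks signatures, validity periods,
# and revocation; this sketch shows only the chain-of-trust walk.
TRUSTED_ROOTS = {"Example Root CA"}

ISSUED_BY = {
    "robot-0042": "Factory Intermediate CA",
    "Factory Intermediate CA": "Example Root CA",
}

def chains_to_trusted_root(subject: str, max_depth: int = 8) -> bool:
    """Walk issuer links until a trusted root is reached (or give up)."""
    for _ in range(max_depth):
        if subject in TRUSTED_ROOTS:
            return True
        issuer = ISSUED_BY.get(subject)
        if issuer is None:
            return False  # broken chain: unknown issuer
        subject = issuer
    return False  # depth limit guards against issuer cycles

print(chains_to_trusted_root("robot-0042"))    # True
print(chains_to_trusted_root("rogue-device"))  # False
```

The design point is that trust is never asserted by the device itself; it is derived from a root the verifier controls, which is why a rogue device with no issuer link is rejected.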
The Trusted Platform Module (TPM) plays a central role in enforcing this trust. It provides secure key storage, hardware-backed cryptographic operations, and attestation of the device’s state.
In practice, this means a device can prove both its identity and the integrity of its software before it is trusted. For humanoids operating in open environments, this hardware root of trust is essential.
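The mechanism behind TPM-based integrity attestation is the Platform Configuration Register (PCR) extend operation: a register that can only be updated as `new = SHA-256(old || measurement)`, never written directly, so its final value commits to the entire boot sequence in order. The sketch below mirrors that operation; the boot-stage names are hypothetical.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new_PCR = SHA-256(old_PCR || measurement).

    Because the register can only be extended, never set, the final
    value depends on every measurement and on their order.
    """
    return hashlib.sha256(pcr + measurement).digest()

# Measure a (hypothetical) boot chain: each stage is hashed, then
# extended into the register before it runs.
pcr = bytes(32)  # PCRs start at all zeros
for stage in [b"bootloader-v1.2", b"kernel-v5.10", b"robot-runtime-v3"]:
    pcr = pcr_extend(pcr, hashlib.sha256(stage).digest())

# A verifier recomputes the chain from the expected images; any
# tampered or reordered stage yields a different final PCR value.
print(pcr.hex())
```

This is what lets a remote party check the robot's software state before granting it access: the TPM signs the PCR values, and the verifier compares them against known-good measurements.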
What emerged from the discussion is a clear architecture for securing humanoids: a hardware root of trust, verifiable device identity, and cryptography ready for the post-quantum era. This combination delivers something critical: deterministic, verifiable, and resilient systems.
And while humanoids are the focus, the same principles apply to connected and autonomous systems of every kind.
Humanoid robots won’t be short-lived devices. They’ll operate for a decade or more. That changes everything.
Security must evolve with the device: keys rotated, certificates renewed, and algorithms upgraded across its entire service life.
As Steve pointed out, a robot that becomes insecure five years after deployment isn’t just obsolete; it could become a liability.
Looking ahead, another challenge is emerging: delegated autonomy. Humanoids won’t just execute commands; they’ll act on behalf of users, booking services, accessing systems, and making decisions.
That raises a new question: How do you secure agency?
Defining and enforcing what a robot is allowed to do, and preventing abuse of that authority, will be a key frontier in cybersecurity.
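One pattern for securing delegated agency is capability tokens: narrowly scoped, time-limited, cryptographically signed grants that the robot must present before each action. The sketch below is illustrative only; the key, action names, and token format are all hypothetical stand-ins for a real authorization protocol.

```python
import hashlib
import hmac
import json
import time

SECRET = b"device-provisioned-key"  # hypothetical per-robot signing key

def issue_capability(action: str, ttl_s: int, now: float) -> dict:
    """Grant a narrowly scoped, time-limited permission."""
    token = {"action": action, "expires": now + ttl_s}
    payload = json.dumps(token, sort_keys=True).encode()
    token["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return token

def may_perform(token: dict, action: str, now: float) -> bool:
    """Check signature, scope, and expiry before the robot acts."""
    payload = json.dumps({"action": token["action"],
                          "expires": token["expires"]},
                         sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(token.get("sig", ""), expected)
            and token["action"] == action
            and now < token["expires"])

now = time.time()
cap = issue_capability("book_cleaning_service", ttl_s=3600, now=now)
print(may_perform(cap, "book_cleaning_service", now))         # True
print(may_perform(cap, "transfer_funds", now))                # False: out of scope
print(may_perform(cap, "book_cleaning_service", now + 7200))  # False: expired
```

The design choice is that authority is explicit, bounded, and revocable by expiry: the robot cannot quietly escalate from "book a service" to "move funds", because every action is checked against a specific grant.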
The message from the seminar is clear: Security is not a feature. It’s a foundation.
As humanoid technologies evolve, trust will determine adoption.
That means designing for security from day one.
SEALSQ, alongside its partners, is helping organizations build that foundation—combining device identity, PKI, and post-quantum readiness to secure the next generation of connected systems.