In this Help Net Security interview, Rich Kellen, VP, CISO at IFF, explains why security teams should not treat OT labs like IT environments. He discusses how compromise can damage scientific integrity and create safety risks that backups cannot fix.

Kellen also outlines what “good enough” OT visibility looks like, why compensating controls can backfire, and how partnering with scientists improves security outcomes.

Where do security teams make the most dangerous false equivalencies between OT and IT in lab environments?

Across many enterprises, cybersecurity frameworks born in IT are extended, often without modification, into operational technology and laboratory environments. While well-intentioned, this extension is one of the most common sources of hidden risk for science-driven organizations.

Labs are not miniature data centers, and OT systems are not simply “special IT.” Treating them as such creates false equivalencies that quietly compromise scientific integrity, safety, and regulatory trust. The most dangerous of these is the assumption that recoverability in OT mirrors recoverability in IT. In IT, systems are considered disposable, states reversible, data recoverable, and users tolerant of delay.

None of those conditions hold in laboratories. In a lab, the system is the experiment, and its state is often nondeterministic and impossible to recreate. Restoring a system does not restore truth, and factors like temperature curves, reaction windows, and calibration drift make time alignment as critical as availability. A system brought back online may already have invalidated months of work.

Other risky assumptions include equating availability with uptime, because in labs, “available but wrong” is far more dangerous than offline. Patchability is also distinct: OT updates are limited by validation cycles, regulatory requirements, and recalibration processes, not IT maintenance windows. And user intent differs profoundly; scientists bypass controls not out of negligence, but to protect experimental integrity under mission pressure. Controls designed for IT resilience can inadvertently elevate scientific and safety risks in OT.

How should teams rethink “impact” when compromise means corrupted science or unsafe conditions?

When a compromise affects a laboratory, traditional IT impact models, such as minutes of downtime, data loss, or SLA breaches, fail to capture what truly matters. In science‑led organizations like IFF, impact must shift from service‑centric metrics to outcome‑centric consequences. Invalidated research, false positives or negatives, regulatory exposure due to corrupted data, loss of ownership or provenance, and physical safety risks all represent significant and often irreversible effects.

If an incident response plan assumes that “restore from backup” is sufficient, it is fundamentally incomplete for laboratories. Recovery without confidence in the integrity and traceability of scientific data is not recovery; it is risk amplification.

Recently, IFF received ISO/IEC 27001 certification. This framework and certification establish a formal, auditable Information Security Management System (ISMS) that is inherently risk‑based, ensuring that security controls in OT and laboratory environments are selected, prioritized, and maintained based on business impact rather than generic compliance. Within this framework, the risk register serves as the central mechanism for identifying OT‑ and lab‑specific assets, threats, vulnerabilities, and potential operational, safety, and regulatory impacts, enabling informed and defensible security decisions.
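The outcome-centric risk register described above can be sketched in code. This is a hypothetical illustration, not IFF's actual register: the asset names, scoring scales, and the rule that the worst consequence dimension drives the score are all assumptions chosen to show how lab-specific impacts (safety, regulatory, scientific) can outrank generic downtime metrics.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    asset: str                 # e.g., a lab workstation (names are illustrative)
    threat: str
    vulnerability: str
    operational_impact: int    # 1 (minor) .. 5 (e.g., invalidated research)
    safety_impact: int         # 1 .. 5
    regulatory_impact: int     # 1 .. 5
    likelihood: int            # 1 .. 5

    def score(self) -> int:
        # Outcome-centric scoring: the worst consequence dimension
        # drives priority, not an average of downtime minutes.
        return max(self.operational_impact,
                   self.safety_impact,
                   self.regulatory_impact) * self.likelihood

entries = [
    RiskEntry("HPLC-07 workstation", "ransomware", "unpatched OS", 5, 2, 4, 3),
    RiskEntry("badge printer", "ransomware", "unpatched OS", 1, 1, 1, 3),
]

# Prioritize by defensible, impact-based score rather than asset count:
# the lab workstation (score 15) far outranks the printer (score 3),
# even though both face the same threat and vulnerability.
ranked = sorted(entries, key=RiskEntry.score, reverse=True)
```

The point of the sketch is the `score` method: two assets with identical IT-level exposure diverge sharply once scientific and regulatory consequences enter the calculation.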

What does “good enough visibility” look like for OT in practice?

In OT environments, discovery and exhaustive asset inventories are rarely practical or even welcome. “Good enough visibility” means knowing which systems communicate, why they do so, and how changes may influence scientific outcomes or safety. Effective visibility enables teams to detect unexpected behavior quickly and answer essential questions such as, “Which experiments are at risk if this system is touched?”

When operators and scientists trust the visibility model, it becomes a reliable basis for decision‑making; theoretical visibility that exists only on paper does not support real‑world operations. The goal is a level of insight that supports data integrity, protects safety, and respects the realities of scientific workflows.
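One way to picture this visibility model is a small dependency map linking systems to the experiments that consume their data. The sketch below is purely illustrative (all system and experiment names are invented); it answers the question “which experiments are at risk if this system is touched?” conservatively, by treating every downstream consumer of a system's output as potentially affected.

```python
# system -> downstream systems that consume its output
flows = {
    "incubator-ctrl-2": ["lims-server"],
    "hplc-07": ["lims-server"],
}

# system -> experiments that directly depend on it
experiments = {
    "lims-server": {"stability-study-12", "assay-validation-3"},
    "hplc-07": {"assay-validation-3"},
}

def experiments_at_risk(system: str) -> set[str]:
    """Blast radius: experiments that could be affected if this
    system is touched, including via downstream data flows."""
    at_risk = set(experiments.get(system, set()))
    for downstream in flows.get(system, []):
        at_risk |= experiments_at_risk(downstream)
    return at_risk
```

Even this toy model supports the kind of question the interview highlights: touching `incubator-ctrl-2` puts everything downstream of the LIMS at risk, which is exactly the insight a pure asset inventory would miss.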

When does a compensating control become a liability instead of a safeguard?

Compensating controls are indispensable in constrained OT environments, but without ongoing management, they can quietly age into liabilities. Risks emerge when controls are forgotten, when manual steps rely on a single expert, or when network segmentation blocks essential diagnostics. Jump boxes can turn into single points of failure, and “temporary” firewall rules often outlive the equipment they were introduced to protect.

A compensating control becomes a liability when it cannot be validated without disrupting operations, when it impedes modernization by being treated as permanent, or when the likelihood of its failure becomes greater than the risk it was designed to mitigate. In OT settings, security debt rarely reveals itself until it fails, usually at moments when reliability matters most.
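The aging of compensating controls lends itself to a simple check. This is a minimal sketch under assumed field names (rule names, dates, and the two liability criteria are illustrative, not a specific product's schema): a control is flagged when it is past its review date or has never been validated in operation.

```python
from datetime import date

# Hypothetical inventory of compensating controls.
controls = [
    {"name": "fw-rule-temp-lab3", "installed": date(2021, 5, 1),
     "review_by": date(2021, 11, 1), "last_validated": None},
    {"name": "jump-box-ot-1", "installed": date(2023, 2, 1),
     "review_by": date(2026, 2, 1), "last_validated": date(2025, 6, 1)},
]

def is_liability(control: dict, today: date) -> bool:
    # A control drifts toward liability when it has outlived its review
    # date or has never been validated without disrupting operations.
    return today > control["review_by"] or control["last_validated"] is None

today = date(2025, 12, 1)
stale = [c["name"] for c in controls if is_liability(c, today)]
```

Here the “temporary” firewall rule from 2021 is flagged years past its review date, while the actively validated jump box is not; the value is in making security debt visible before it fails at the worst possible moment.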

How does treating scientists as “users” undermine security outcomes?

Stakeholder partnership is key to shaping positive security outcomes that support IFF’s future-driven innovation. In my experience, when scientists feel security is imposed rather than co-created, workarounds become inevitable, security can be perceived as an obstacle to discovery, and risk moves underground.

Treating scientists as stakeholders changes the dynamic entirely: edge‑case risks surface earlier, signals become easier to distinguish from noise, and controls align with the realities of scientific workflows. Trust replaces bypassing.

Successful lab security protects epistemic integrity, the fundamental question of whether a result is true. It respects the constraints of the scientific method and recognizes operators as co‑defenders who share responsibility for safeguarding data and safety. Security programs that overlook the scientist’s mission inevitably fail, quietly, expensively, and often too late for remediation.


