Pentesting Healthcare: Where Two Engineering Philosophies Collide

July 4, 2025

Every industry presents unique challenges for penetration testing, but healthcare stands apart as one of the most technically demanding environments we assess. Medical technology and cybersecurity operate on fundamentally incompatible design philosophies.

The Medical Tradecraft

Medical systems engineering is built around a single imperative: keep the patient alive. Every design decision — protocol selection, authentication model, network architecture — is optimized for speed of access, interoperability between systems, and uninterrupted availability. A clinician needs to pull imaging from any workstation on the floor in under two seconds. A patient monitor needs to transmit telemetry data continuously without a single dropped packet. An infusion pump needs to receive dosing updates from the pharmacy system the moment they’re entered. In this world, any barrier to data flow is a barrier to patient care, and barriers to patient care cost lives. The engineering philosophy is: open by default, available always, fast above all else.

The Security Tradecraft

Cybersecurity engineering operates from the opposite starting point: assume the network is hostile and every access request is potentially malicious until proven otherwise. The discipline is built around controlling access, verifying identity, encrypting data, and limiting the blast radius when — not if — something is compromised. Every connection should be authenticated. Every data flow should be encrypted. Every user should have the minimum access required to do their job and nothing more. Systems should be segmented so that a breach in one area cannot cascade into another. In this world, convenience is a vulnerability, and openness is an attack surface. The engineering philosophy is: closed by default, verified always, contained above all else.

Real Technical Conflicts

Legacy Protocols: A significant portion of medical infrastructure still communicates over SMBv1 and SSLv3. These aren’t oversights — they’re constraints imposed by embedded systems with fixed firmware that physically cannot support modern protocol stacks. The devices were designed for clinical reliability over 10-15 year lifecycles, and the protocols they ship with are the protocols they’ll run for the duration of that lifecycle. Air-gapping these systems sounds straightforward on paper, but it falls apart when real-time patient monitoring depends on continuous network connectivity. A cardiac telemetry system feeding data to a nursing station across the building needs that connection to function. The clinical requirement for real-time data flow and the security requirement for protocol-level isolation are in direct opposition, and resolving that tension requires compensating controls designed around each specific device’s communication patterns.

Authentication Paradox: Enterprise authentication systems are tuned for standard IT environments — session timeouts, password complexity, MFA challenges, Kerberos ticket lifetimes. These settings work well in an office. In a clinical environment, they create dangerous friction. LDAP timeout values that make sense for a desktop user cause clinical applications to drop authenticated sessions during patient procedures. A Kerberos ticket that expires while a surgeon is mid-operation forces a re-authentication that pulls attention away from the patient. Extending these timeouts globally weakens the security posture across the entire organization. The engineering challenge is building authentication architectures with context-aware policies — systems that understand the difference between a workstation in the billing department and a clinical terminal in the ICU, and apply appropriate controls to each without creating a management nightmare.
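The shape of a context-aware policy can be sketched as data: map each terminal zone to its own timeout and authentication rules, and fail closed for anything unrecognized. The zone names, timeout values, and fields below are illustrative assumptions, not a product configuration:

```python
# Hypothetical sketch of a context-aware session policy; zone names,
# timeouts, and rules are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class SessionPolicy:
    idle_timeout_min: int  # minutes of inactivity before the session locks
    require_mfa: bool
    grace_reauth: bool     # allow badge-tap re-auth instead of a full login

POLICIES = {
    "billing": SessionPolicy(idle_timeout_min=15, require_mfa=True, grace_reauth=False),
    "ward":    SessionPolicy(idle_timeout_min=60, require_mfa=True, grace_reauth=True),
    "icu":     SessionPolicy(idle_timeout_min=240, require_mfa=False, grace_reauth=True),
}

def policy_for(zone: str) -> SessionPolicy:
    # Fail closed: an unknown zone gets the strictest policy.
    return POLICIES.get(zone, POLICIES["billing"])

print(policy_for("icu").idle_timeout_min)   # 240
print(policy_for("loading-dock"))           # falls back to the billing policy
```

The design point is that the strict office defaults remain the fallback; clinical zones are explicit, reviewed exceptions rather than a global weakening of the posture.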

Testing Limitations: Standard penetration testing tooling was built for IT infrastructure that can absorb aggressive scanning, enumeration, and exploitation attempts without operational impact. Medical devices operate under fundamentally different assumptions. An Nmap SYN scan against a networked infusion pump can cause a firmware-level fault that requires a physical reboot — while the device is actively delivering medication to a patient. Vulnerability scanners generating high volumes of traffic on a VLAN shared with patient monitoring equipment can introduce latency that degrades real-time waveform data. The testing methodology for healthcare environments has to account for these realities at every phase. Scope definition, tool selection, scan intensity, and timing all need to be coordinated with clinical engineering and informed by the device criticality matrix built during passive reconnaissance. The goal is thorough coverage without introducing clinical risk — and achieving that requires a deeper understanding of the target environment than most standard assessments demand.

Compliance Confusion: Healthcare organizations operate under overlapping and sometimes contradictory regulatory frameworks. HIPAA requires encryption of protected health information in transit and at rest. Meanwhile, certain medical device manufacturers specify that enabling encryption on their devices voids the FDA 510(k) clearance — the regulatory approval that allows the device to be used on patients. Both requirements cite patient safety as their justification. The organization is left navigating between a federal mandate for encryption and a manufacturer declaration that encryption invalidates the device’s regulatory status. Resolving this requires detailed documentation: which devices fall into this conflict, what compensating controls are in place, and how those controls satisfy the intent of each regulatory requirement. Auditors from both sides — HIPAA and FDA — need to see that the organization has thought through the conflict deliberately rather than simply ignoring one requirement in favor of the other.
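That documentation is more useful when it is structured rather than prose in a binder. A minimal sketch of a per-device conflict record might look like the following; the field names and the sample entry are illustrative, not from any particular GRC tool:

```python
# Sketch of a per-device regulatory-conflict record; field names and the
# example entry are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ComplianceConflict:
    device: str
    hipaa_requirement: str
    manufacturer_constraint: str
    compensating_controls: list[str] = field(default_factory=list)

    def is_documented(self) -> bool:
        # A conflict counts as deliberately addressed only once at least
        # one compensating control is on record.
        return bool(self.compensating_controls)

record = ComplianceConflict(
    device="Infusion pump, ward 4",
    hipaa_requirement="Encrypt ePHI in transit (45 CFR 164.312(e))",
    manufacturer_constraint="Enabling device-side TLS voids 510(k) clearance",
    compensating_controls=["IPsec tunnel between device VLAN and pharmacy gateway"],
)
print(record.is_documented())  # True
```

A register of records like this gives both sets of auditors the same answer to the same question: here is the conflict, here is what mitigates it, and here is why that satisfies the intent of each requirement.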

Navigating Between Two Worlds

When we pentest medical facilities, we're navigating between two engineering philosophies: that of medical engineers, who've spent decades perfecting open, fast, life-saving protocols, and that of security engineers, who've spent decades perfecting carefully controlled, defensive architectures.

Both are right. Both have valid technical requirements. Neither can simply override the other.

The most successful healthcare security programs employ what we call “bilateral architects” — professionals who deeply understand both domains. They implement healthcare-specific security frameworks, compensating controls for legacy systems, and “break-glass” authentication models for emergencies.

The programs that work are built WITH clinical staff, not FOR them. Risk assessments measure patient safety impact alongside technical vulnerabilities. Security policies account for the reality that a nurse at 3 AM needs access to work, not a helpdesk ticket.

Best Practices That Bridge the Gap

These are the approaches we’ve seen consistently succeed across healthcare environments. Each one respects the technical requirements of both disciplines and treats the tension between them as a design constraint worth solving, not a problem to work around.

Passive-First Reconnaissance: Every healthcare engagement starts with passive network analysis — mirror ports, traffic captures, and protocol mapping — before running a single active scan. This builds a complete picture of device communication patterns and dependencies. Working alongside clinical engineering to build a device criticality matrix adds the context that purely technical discovery misses: which systems are life-sustaining, which tolerate brief interruptions, and which have known sensitivities to active probing. This groundwork makes every subsequent phase of testing safer and more precise.
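The output of that passive phase can be folded into a simple per-device inventory. The sketch below maps observed flows to protocols using common default ports (DICOM on 104/11112, HL7 over MLLP on 2575); those port mappings are conventions, not guarantees, and the addresses are illustrative:

```python
# Sketch: fold flows observed on a mirror port into a per-device protocol
# inventory. Port-to-protocol hints are common defaults, not universal.

from collections import defaultdict

PORT_HINTS = {104: "DICOM", 11112: "DICOM", 2575: "HL7/MLLP", 445: "SMB"}

def build_inventory(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples."""
    inventory = defaultdict(set)
    for src, dst, port in flows:
        proto = PORT_HINTS.get(port, f"tcp/{port}")
        inventory[src].add(("talks", proto, dst))   # client side of the flow
        inventory[dst].add(("serves", proto, src))  # server side of the flow
    return inventory

inv = build_inventory([
    ("10.20.3.7", "10.20.9.2", 104),   # imaging modality pushing to PACS
    ("10.20.3.7", "10.20.9.9", 2575),  # HL7 feed to the interface engine
])
print(sorted(inv["10.20.9.2"]))
```

Reviewing this inventory with clinical engineering, who can say which address is a ventilator and which is a vending machine, is what turns raw flows into a criticality matrix.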

Segmentation That Respects Clinical Workflow: Effective network segmentation in healthcare starts with understanding how clinical data actually flows. Workflow-aware VLANs and firewall rules — designed around real communication patterns like DICOM to PACS, HL7 to the EHR, and telemetry to monitoring stations — provide strong isolation while preserving the integrations clinicians depend on. The key is testing failover paths before enforcing new rules. When a cardiologist needs to pull up imaging during a catheterization, that pathway has to work every time. Segmentation designed with that understanding earns clinical trust and stays in place.
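Expressed as data, a workflow-aware ruleset is just an explicit allow-list of (source segment, destination segment, protocol) with default deny everywhere else. The VLAN names below mirror the flows above and are illustrative, as is the rule table itself:

```python
# Sketch of workflow-aware segmentation rules as data; VLAN names and the
# allow-list entries are illustrative assumptions.

ALLOWED_FLOWS = {
    ("vlan-modalities", "vlan-pacs"): {"DICOM"},
    ("vlan-clinical-apps", "vlan-ehr"): {"HL7"},
    ("vlan-telemetry", "vlan-monitoring"): {"telemetry"},
}

def is_allowed(src_vlan: str, dst_vlan: str, proto: str) -> bool:
    # Default deny: anything not in the workflow map is blocked.
    return proto in ALLOWED_FLOWS.get((src_vlan, dst_vlan), set())

print(is_allowed("vlan-modalities", "vlan-pacs", "DICOM"))  # True
print(is_allowed("vlan-modalities", "vlan-ehr", "DICOM"))   # False
```

Keeping the ruleset this legible matters: clinical engineering can read it, confirm each entry against a real workflow, and spot the missing failover path before enforcement, not after.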

Compensating Controls for Legacy Devices: Legacy medical devices running older operating systems and deprecated protocols are a reality in every hospital. The path forward is building layered protections around them: network-level encryption wrapping unencrypted protocols, host-based firewalls on devices that support them, dedicated monitoring for anomalous traffic from legacy segments, and application-layer gateways translating between legacy and modern protocols. Each compensating control should be documented with the specific risk it mitigates and the device it protects — this level of traceability satisfies both auditors and clinical engineering teams who need to understand what’s protecting their equipment.
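That traceability can be kept as a simple control register, one entry per device-risk-control triple, so anyone can answer "what protects this device, and from what?" The entries below are illustrative examples, not a real inventory:

```python
# Sketch of a control-to-risk traceability register; device names, risks,
# and controls are illustrative examples.

CONTROL_REGISTER = [
    {
        "device": "CT scanner, radiology",
        "risk": "Unencrypted DICOM transfers expose ePHI on the wire",
        "control": "IPsec tunnel between modality VLAN and PACS VLAN",
    },
    {
        "device": "Telemetry gateway, ward 2",
        "risk": "SMBv1 shares reachable from the general IT segment",
        "control": "Drop tcp/445 into the telemetry VLAN except from the backup server",
    },
]

def controls_for(device: str) -> list[str]:
    # Every control in the register names the device it protects,
    # so the lookup is a straight filter.
    return [r["control"] for r in CONTROL_REGISTER if r["device"] == device]

print(controls_for("CT scanner, radiology"))
```

A device with no entries in the register is either genuinely low-risk or, more likely, an undocumented gap, and the register makes that distinction visible.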

Break-Glass Authentication: Healthcare demands authentication systems that are both secure and survivable under emergency conditions. Well-designed break-glass models provide time-limited elevated access, triggered by defined clinical scenarios, with full audit logging and automatic post-event review. The focus is on making emergency access auditable and accountable, while keeping it fast enough to meet clinical urgency. A code blue team needs access in seconds. Designing for that reality — rather than treating it as an exception — produces authentication architectures that are both stronger and more trusted by the people using them.

Coordinated Vulnerability Disclosure with Manufacturers: Vulnerabilities discovered in medical devices require a different disclosure cadence than standard IT findings. Coordinating with the device manufacturer, the hospital’s clinical engineering team, and in some cases the FDA ensures that remediation accounts for patient safety, regulatory requirements, and operational continuity. Some fixes require new FDA submissions and take considerable time to implement. Documenting compensating controls and monitoring for exploitation during that window keeps the organization protected while the longer-term fix is developed.

Joint Security-Clinical Tabletop Exercises: The most revealing assessments happen when clinical staff participate alongside IT in incident response exercises. Simulating a ransomware event that encrypts the EHR tests whether downtime procedures actually work. Simulating a compromised infusion pump tests whether clinical engineering can isolate it without disrupting medication schedules for an entire floor. These exercises surface gaps at the human and procedural layer that no technical scan will find — and they build the cross-team relationships that make real incident response effective when it matters most.

The Bottom Line

System availability in healthcare isn’t measured in nines — it’s measured in lives. The organizations getting this right treat security and clinical operations as equal partners with a shared objective: keeping patients safe. That means security teams who understand medicine and clinical teams who understand risk. It means policies written in collaboration, tested in practice, and revised based on what actually happens at 3 AM when the systems go down.
