Episode 58 — Validate network design continuously by testing intended paths versus actual reachability
Reachability drift is how surprise exposure and lateral movement sneak into otherwise well-designed environments. In this episode, we focus on continuously validating network design by comparing intended paths to actual reachability, because network posture is not static and small changes compound over time. Teams add new services, modify routing, open ports for troubleshooting, create new peering links, and adjust security rules to meet delivery deadlines, and each change can alter who can reach what. The risk is that the environment slowly becomes more permissive than anyone realizes, and the first time that permissiveness becomes visible is during an incident. Validation is how you prevent that surprise, because it treats reachability as a measurable property rather than an assumption. When validation is routine, the organization can catch drift early, reduce blast radius, and keep segmentation meaningful even as systems evolve. Validation also improves incident response because responders can rely on known boundaries and known paths rather than guessing under pressure. The goal is not to constantly redesign the network, but to continually confirm that the network still behaves the way you think it does. If you want network security to remain real, you have to test it.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
Intended paths are the documented allowed connectivity model, meaning the set of network flows that should exist for the business to operate and for services to function. This includes which segments accept inbound traffic and from where, which tiers can talk to which tiers, what management paths exist, and which outbound destinations are permitted. Intended paths are usually expressed as architectural expectations, such as internet traffic reaches only the public tier, application services reach databases on specific ports, and sensitive systems are reachable only from approved internal services. Documentation does not need to be a complex diagram to be useful; it needs to be a clear statement of allowed connectivity and disallowed connectivity. Intended paths also include the rationale, such as why a service must be reachable from a specific partner network or why a workload requires outbound access to specific update repositories. The key is that intended paths are explicit, because implicit intent cannot be validated and will not survive team changes. When intended paths are well defined, they become a baseline against which drift can be detected. Without an intended model, validation turns into random probing rather than a disciplined check against expectations.
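To make that concrete, here is a minimal sketch of how an intended-path model might be captured as data rather than prose, so it can later be tested automatically. The segment names, ports, and rationales are illustrative placeholders, not a prescribed schema.

```python
# A minimal, assumed representation of intended connectivity as data.
# Segment names, ports, and rationales are placeholders for illustration only.
INTENDED_PATHS = [
    # (source segment, destination segment, protocol, port, rationale)
    ("internet",    "public-tier", "tcp", 443,  "customer HTTPS traffic"),
    ("public-tier", "app-tier",    "tcp", 8443, "front end calls application services"),
    ("app-tier",    "data-tier",   "tcp", 5432, "application reads and writes the database"),
]

DISALLOWED_PATHS = [
    # Paths that must never exist, stated explicitly so they can be tested.
    ("internet",    "app-tier",  "any", None, "no direct internet access to the app tier"),
    ("public-tier", "data-tier", "any", None, "the public tier must never reach the database"),
]
```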
Actual paths are what can really happen in the environment, and comparing them to intended paths requires evidence rather than assumptions. Logs, routes, and rule evaluation provide that evidence, because they show what traffic was attempted, what traffic succeeded, and what network and policy decisions allowed or blocked it. Route tables determine which destinations have a path, security groups and access controls determine which paths are permitted, and gateways determine whether traffic can reach external networks. Flow logs and gateway logs reveal observed traffic patterns, including unexpected east-west flows, unusual outbound destinations, and repeated denied attempts that suggest scanning or misconfiguration. Control-plane logs show when routing or policy changes occurred, which helps explain why reachability changed over time. Rule evaluation, whether through structured review or automated analysis, helps confirm whether the current policy set aligns with the intended model. The practical mindset is to treat reachability as a result of multiple linked decisions rather than one setting. When you compare actual paths to intended paths using evidence, you can identify which changes created drift and which controls failed to enforce the intended boundaries.
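As one possible starting point for gathering that evidence, a small script can pull routing and rule data for review. This sketch assumes boto3 is available with read-only credentials; the VPC ID is a placeholder and pagination is omitted for brevity.

```python
# An evidence-gathering sketch using boto3 (assumed available with read-only credentials).
# The VPC ID is a placeholder and pagination is omitted for brevity.
import boto3

ec2 = boto3.client("ec2")
vpc_filter = [{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}]  # placeholder VPC ID

# Routing evidence: which destinations have a path, and through what.
for table in ec2.describe_route_tables(Filters=vpc_filter)["RouteTables"]:
    for route in table["Routes"]:
        target = (route.get("GatewayId")
                  or route.get("NatGatewayId")
                  or route.get("VpcPeeringConnectionId"))
        print(table["RouteTableId"], route.get("DestinationCidrBlock"), target)

# Policy evidence: which sources the security groups currently permit inbound.
for group in ec2.describe_security_groups(Filters=vpc_filter)["SecurityGroups"]:
    for rule in group["IpPermissions"]:
        sources = [r["CidrIp"] for r in rule.get("IpRanges", [])]
        print(group["GroupId"], rule.get("FromPort"), rule.get("ToPort"), sources)
```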
Inbound exposure validation should be performed for each segment and critical service because inbound reachability is where most exposure risk begins. The validation question is simple: what can reach this segment or service from outside, and is that reachability intended? For public-facing services, validate that only the approved entry points are reachable, that only required ports and protocols are open, and that management interfaces are not exposed. For private segments, validate that no internet paths exist, that no unintended public addressing is present, and that inbound rules from other internal segments are as narrow as expected. Inbound validation should also consider partner and hybrid connectivity, because external exposure can occur through private links as well as through the public internet. This validation should be repeated over time because emergency changes, new deployments, and routing updates can change exposure without anyone explicitly intending to do so. The goal is to ensure that every internet-facing door is known, justified, and monitored, and that private segments remain private in practice, not just in naming. Inbound validation reduces the chance that a forgotten rule or misrouted subnet quietly creates a new attack surface.
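A simple automated version of that inbound check might look like the sketch below, which flags security group ingress rules open to the entire internet. The management port list is an assumption you would adapt to your environment.

```python
# A sketch that flags ingress rules open to the whole internet, with special
# attention to management ports. The port list is an assumption to adapt.
import boto3

MANAGEMENT_PORTS = {22, 3389}  # illustrative: SSH and RDP

ec2 = boto3.client("ec2")
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group["IpPermissions"]:
        open_to_world = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
        if not open_to_world:
            continue
        from_port, to_port = rule.get("FromPort"), rule.get("ToPort")  # None means all ports
        label = "MANAGEMENT PORT EXPOSED" if from_port in MANAGEMENT_PORTS else "internet-reachable"
        print(f"{group['GroupId']} ({group.get('GroupName')}): ports {from_port}-{to_port}, {label}")
```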
East-west access validation is the core of segmentation assurance because lateral movement depends on east-west reachability. Validation should confirm that tiers can communicate only as intended, such as public tier to application tier, application tier to data tier, and not public tier directly to data tier. It should also confirm that micro-segmentation boundaries hold for high-value systems, meaning that only approved workloads can reach sensitive systems on the required ports. East-west validation should examine both routing and permission layers because a path may exist in routing but be blocked by permissions, or vice versa, and understanding both is necessary for accurate conclusions. Flow evidence can reveal whether unexpected east-west communication is occurring, which may indicate either misconfiguration or compromise-driven scanning. It can also reveal whether denied traffic spikes are occurring, which can be a signal that boundaries are working but are under probing. Validating east-west access also includes validating management plane reachability, ensuring administrative interfaces are not reachable from normal workload networks. When east-west validation is routine, segmentation becomes a maintained control rather than a historical design artifact.
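The comparison of east-west rules against the intended tier model can also be scripted. In this sketch the tier-to-security-group mapping and the group IDs are hypothetical inputs you would maintain yourself, and only the permission layer is checked; routing still needs its own review.

```python
# A sketch comparing data-tier ingress rules against an intended tier model.
# The tier-to-security-group mapping and group IDs are hypothetical inputs.
import boto3

TIER_GROUPS = {
    "app-tier":  {"sg-0aaaaaaaaaaaaaaaa"},   # placeholder group IDs
    "data-tier": {"sg-0bbbbbbbbbbbbbbbb"},
}
ALLOWED_DATA_SOURCES = TIER_GROUPS["app-tier"]   # only the app tier may reach the data tier

ec2 = boto3.client("ec2")
groups = ec2.describe_security_groups(GroupIds=list(TIER_GROUPS["data-tier"]))["SecurityGroups"]
for group in groups:
    for rule in group["IpPermissions"]:
        # Security-group references are the east-west allow decisions checked here.
        for pair in rule.get("UserIdGroupPairs", []):
            if pair["GroupId"] not in ALLOWED_DATA_SOURCES:
                print(f"Drift: {group['GroupId']} accepts {pair['GroupId']} "
                      f"on ports {rule.get('FromPort')}-{rule.get('ToPort')}")
        # CIDR-based rules on a data tier deserve review even if they look internal.
        for cidr in rule.get("IpRanges", []):
            print(f"Review: {group['GroupId']} accepts CIDR {cidr['CidrIp']}")
```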
Egress validation is critical because outbound reachability determines exfiltration paths and command-and-control options for compromised systems. Validation should confirm which destinations workloads can reach, whether outbound traffic is forced through monitored egress gateways or proxies, and whether sensitive workloads have unnecessary internet access. It should also confirm that D N S resolution is constrained to approved resolvers, because unrestricted resolver use can bypass outbound controls and enable tunneling. Egress validation benefits from both policy review and observed evidence, because policies may be permissive while actual usage appears narrow, or policies may be narrow while hidden dependencies cause unexpected outbound traffic. Flow logs and proxy logs can reveal new destinations, unusual outbound volumes, and changes in outbound patterns that may signal compromise or drift. The goal is to ensure that outbound access remains intentional and that exfiltration paths are limited by design, not merely by hope. Egress validation also supports operational stability because it helps teams understand real dependencies and avoid breaking critical functions when tightening controls. When egress is validated routinely, the organization is less likely to discover during an incident that everything can talk to everywhere.
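One narrow, scriptable slice of egress validation is flagging security groups whose outbound rules are fully open, as in the sketch below. It assumes boto3 read access and deliberately ignores network ACLs, route tables, resolvers, and proxies, which still need their own review.

```python
# A sketch that flags security groups whose egress is fully open; network ACLs,
# route tables, resolvers, and proxy policies still need separate review.
import boto3

ec2 = boto3.client("ec2")
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissionsEgress", []):
        wide_open = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
        if wide_open and rule.get("IpProtocol") == "-1":
            print(f"{group['GroupId']} ({group.get('GroupName')}): "
                  f"all-protocol egress to 0.0.0.0/0, confirm this is intended")
```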
Describing a reachability test in plain language steps is a useful practice because it forces clarity about what you are validating and how you will interpret results. Start by stating the intent, such as the database should be reachable only from the application tier on the database port and should not be reachable from the public tier. Next, identify the sources and destinations involved, such as which segment represents the application tier and which represents the public tier, and specify the ports and protocols that should be allowed and denied. Then describe the evidence you will use, such as confirming route paths, checking security group and access control rules, and reviewing flow evidence for accepted and denied connections. Include how you will validate both positive and negative cases, meaning you confirm intended connectivity works and you confirm unintended connectivity fails. Also include what success looks like, such as no observed flows from disallowed sources and consistent denies when disallowed traffic is attempted. Finally, include what you will do if results differ from intent, such as capturing the rule or route that enabled unexpected access, assigning an owner, and planning a corrective change. Plain language steps make reachability validation repeatable and easy to communicate across teams, which is essential for operationalizing the practice.
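Expressed as code, the same positive and negative cases might look like this sketch, where the database host and port are placeholders and the checks are plain TCP connection attempts run from the relevant tiers.

```python
# A sketch of the positive/negative reachability test described above, using
# plain TCP connection attempts. The database host and port are placeholders.
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

DB_HOST, DB_PORT = "db.internal.example", 5432   # placeholder database endpoint

# Positive case: run this from an application-tier host; it should succeed.
assert can_connect(DB_HOST, DB_PORT), "Intended path broken: app tier cannot reach the database"

# Negative case: run the same check from a public-tier host; it should fail.
# assert not can_connect(DB_HOST, DB_PORT), "Drift: public tier can reach the database"
```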
Assuming policies work without periodic verification is a common pitfall because policies are only as effective as their current configuration and their ongoing enforcement. Policies can drift through manual changes, exceptions, and configuration sprawl, and cloud environments amplify that drift because changes are frequent and distributed across teams. Another pitfall is treating segmentation and routing design as a completed project, which leads to long periods where no one checks whether the environment still matches the original model. Over time, the environment may accumulate broad inbound rules, permissive east-west connectivity, and unconstrained egress, all while documentation remains unchanged. This creates a dangerous gap between what leadership believes is true and what is actually true, and attackers tend to exploit that gap because it represents unmonitored reachability. Verification is also necessary because some controls may appear to exist but may not be applied consistently across accounts and regions. Without periodic verification, the organization cannot be confident that its intended boundaries still hold. The practical result is that incidents become harder to contain because the real reachability graph is unknown. Verification turns network posture from an assumption into an observable property.
A quick win is scheduled checks for public exposure and open ports, because those checks catch some of the highest-impact drift with relatively low complexity. Public exposure checks confirm which services are internet reachable, whether they match the approved inventory of entry points, and whether any new public endpoints appeared unexpectedly. Open port checks validate that only required ports are open on public-facing systems and that management ports are not exposed broadly. These checks can be run routinely, such as weekly or monthly, and the results can be reviewed by owners with clear remediation expectations. The point is not to replace deeper reachability validation, but to establish a baseline habit of checking the most dangerous exposures regularly. Scheduled checks also reduce the risk that temporary emergency rules become permanent, because they are more likely to be discovered and corrected. Over time, the organization can expand scheduled validation to include east-west and egress paths, but starting with public exposure often delivers immediate risk reduction. Quick wins matter because they build momentum and demonstrate value without requiring perfect instrumentation on day one.
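A scheduled check of this kind can be as simple as the sketch below, which lists public IP addresses attached to network interfaces and probes a short, assumed list of high-risk ports. Real programs typically use a dedicated external scanner, but the habit is the same.

```python
# A sketch of a scheduled public-exposure check: list public IPs attached to
# network interfaces, then probe a short, assumed list of high-risk ports.
import socket
import boto3

HIGH_RISK_PORTS = [22, 3389, 3306, 5432]   # illustrative management and database ports

def port_open(ip: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to ip:port succeeds within the timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

ec2 = boto3.client("ec2")
interfaces = ec2.describe_network_interfaces()["NetworkInterfaces"]
public_ips = [ni.get("Association", {}).get("PublicIp") for ni in interfaces]

for ip in filter(None, public_ips):
    exposed = [p for p in HIGH_RISK_PORTS if port_open(ip, p)]
    if exposed:
        print(f"{ip}: high-risk ports reachable: {exposed}")
```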
A scenario that clarifies why validation is needed is a new peering connection creating unexpected lateral reach. Peering and transit relationships can expand reachability dramatically, especially when routes are propagated broadly or when security controls are not constrained to match the intended connectivity. In this scenario, a team creates peering to enable a legitimate service dependency, but the connection inadvertently allows broad access between networks, enabling one environment to reach sensitive systems in the other. The exposure may remain unnoticed because everything is private and no internet-facing endpoint changed, but the lateral movement surface has expanded significantly. Validation would detect this by comparing intended paths to observed reachability, such as discovering that subnets that should not communicate now have a route path and permissive allow rules. Flow evidence would reveal new cross-network traffic or denied attempts that indicate probing, and control-plane logs would show when the peering and routing changes occurred. Response would involve restricting routes, tightening allow rules, and documenting the intended scope of the peering relationship so future changes do not reintroduce broad reach. The scenario reinforces that private connectivity does not automatically mean safe connectivity and that reachability expansion should be measured, not assumed. Peering changes are therefore a prime candidate for post-change validation routines.
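A post-change validation for this scenario can start with a small script that lists peering connections and the route table entries that send traffic across them, so the new reach is reviewed rather than assumed. This is a sketch under the assumption of boto3 read access; allow rules and flow evidence would be checked separately.

```python
# A sketch of a post-change check after a peering connection appears: list active
# peerings, then find route table entries that send traffic across them.
import boto3

ec2 = boto3.client("ec2")

peerings = ec2.describe_vpc_peering_connections()["VpcPeeringConnections"]
peering_ids = {p["VpcPeeringConnectionId"] for p in peerings}

for table in ec2.describe_route_tables()["RouteTables"]:
    for route in table["Routes"]:
        if route.get("VpcPeeringConnectionId") in peering_ids:
            print(f"{table['RouteTableId']}: {route.get('DestinationCidrBlock')} routed over "
                  f"{route['VpcPeeringConnectionId']}, confirm this scope is intended")
```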
Findings must be tracked, owners assigned, and fixes verified with evidence, because validation without follow-through becomes a report that everyone agrees with and nobody acts on. Tracking findings means capturing what was observed, how it differs from the intended model, what risk it creates, and what control change is required to restore alignment. Assigning owners ensures the right team is responsible for remediation, such as network teams for routing changes, platform teams for shared security controls, or application teams for workload-level security group adjustments. Verification with evidence is essential because many network issues appear fixed but are not, especially when multiple controls interact and when changes are rolled out gradually across accounts and regions. Evidence can include updated rule evaluations, confirmed route table associations, and observed flow evidence showing that unintended traffic is now denied and intended traffic still works. Verification should also capture that the change did not create outages, which helps maintain trust in the validation program. Over time, tracking and ownership create a feedback loop where the most common drift sources are identified and addressed through better guardrails and change control. A mature program uses validation not only to find problems but to improve the system that produces those problems.
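For tracking, even a minimal structured record keeps findings from evaporating between validation cycles. The field names in this sketch are an assumption, not a prescribed schema; the point is that observation, intent, owner, and closure evidence travel together.

```python
# A sketch of a minimal finding record so drift is tracked to closure with evidence.
# Field names are an assumption, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class ReachabilityFinding:
    observed: str          # e.g. "public tier can reach data tier on 5432"
    intended: str          # e.g. "only app tier may reach data tier on 5432"
    risk: str              # why the gap matters
    owner: str             # team responsible for remediation
    required_change: str   # rule or route change needed to restore alignment
    closure_evidence: list = field(default_factory=list)  # rule diffs, flow samples
    status: str = "open"
```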
A memory anchor that fits continuous validation is a map you update after every road change. If you drive using an old map, you eventually take a wrong turn, end up somewhere unexpected, or miss a critical detour, and the risk increases as the road system changes. Your intended network design is the map, and actual reachability is the road system as it exists today. Every routing change, new peering link, and rule update is a road change that can alter where traffic can go, and if you do not update and validate the map, you cannot rely on it. The anchor also reinforces that mapping is not a one-time activity, because environments evolve constantly. Updating the map means validating reachability and documenting intent so future teams can understand what paths are allowed and why. It also suggests that unexpected reachability is like discovering a new road that was built without signage, which should trigger review and control adjustments. When teams internalize this anchor, they are more likely to treat validation as routine maintenance rather than as a special audit exercise. A current map is what allows safe navigation under pressure, and a current reachability model allows safe containment during incidents.
As a mini-review, intended paths are the documented allowed connectivity model, and actual reachability is what can really happen based on routes, rules, and connectivity constructs. Comparing intent to reality requires evidence, including flow logs, gateway logs, control-plane change logs, route evaluations, and permission rule analysis. Validation should focus on inbound exposure for each segment and critical service, ensuring internet reachability is deliberate and management interfaces are protected. It should validate east-west access between tiers and sensitive systems so segmentation boundaries remain real and lateral movement is constrained. It should validate egress destinations so exfiltration paths and command-and-control reachability remain limited and observable. Scheduled checks, starting with public exposure and open ports, provide quick wins and build routine discipline. Scenarios like new peering connections highlight how private connectivity can expand reachability unexpectedly, reinforcing the need for post-change validation. Findings must be tracked with owners and fixes verified with evidence so the program produces real posture improvement rather than reports. When these elements work together, reachability becomes a maintained, measurable property rather than a hopeful assumption.
To conclude, create a monthly reachability validation routine today, because cadence is what turns good intent into sustained control. A monthly routine can start small, focusing on confirming public exposure inventory, reviewing any broad ingress changes, validating key tier boundaries, and sampling egress destinations for sensitive workloads. It should include review of recent control-plane changes that affect connectivity, such as routing updates, peering changes, and security rule modifications, because those are the changes most likely to create drift. The routine should produce trackable findings with clear owners and required evidence for closure, ensuring the organization actually restores alignment rather than merely observing drift. Over time, the routine can expand to include more systematic validation and more environments, but a steady cadence is more valuable than an occasional deep dive. When monthly validation becomes normal, surprises become rare, and incident response becomes faster because the team trusts its reachability map. Create a monthly reachability validation routine today.