Episode 63 — Detect identity abuse by correlating logins, token use, and privilege changes
Identity abuse tends to show up early, often well before anyone sees a suspicious executable, a strange process tree, or a classic malware signature. In this episode, we start from the practical reality that many modern intrusions begin with stolen credentials, abused sessions, or manipulated trust, and the attacker tries hard to look like a normal user while they find the fastest path to value. That means your best chance to catch the intrusion is often in the identity layer itself, where authentication, tokens, and privilege boundaries leave a trail. The core skill is correlation, because the attacker’s story is rarely one perfect indicator but rather a sequence of small, connected events that make sense when viewed together. When you can connect sign-ins to token behavior and privilege changes, you move from chasing isolated alerts to seeing the shape of an attack in motion.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
The first correlation set starts with sign-ins, but not as simple pass or fail records. You want sign-ins correlated with device context, location context, and timing context so you can distinguish routine behavior from suspicious behavior that merely uses valid credentials. Device context includes whether the device is known or newly seen for the identity, whether the device posture matches expectations for privileged access, and whether the device is associated with other risky signals. Location context includes geographic region, network origin characteristics, and consistency with historical patterns for that user or role. Timing context includes unusual hours, unusual frequency, and sequences like rapid sign-ins across locations or services in a way that does not fit normal workflows. None of these factors alone proves compromise, but together they can convert a low-confidence login anomaly into a high-confidence incident lead.
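To make the idea concrete, here is a minimal sketch of combining those three context factors into one score. The field names, the history structure, and the equal weighting are all illustrative assumptions, not any particular identity provider's schema.

```python
# Hypothetical sketch: combine device, location, and timing context into a
# single risk score for one sign-in. Fields and weights are illustrative.

def score_sign_in(event, history):
    """Return a 0-3 risk score for one sign-in event.

    event   -- dict with 'device_id', 'country', 'hour' (0-23)
    history -- the identity's known devices, countries, and typical hours
    """
    score = 0
    if event["device_id"] not in history["known_devices"]:
        score += 1  # device context: never seen for this identity
    if event["country"] not in history["known_countries"]:
        score += 1  # location context: outside historical pattern
    if event["hour"] not in history["typical_hours"]:
        score += 1  # timing context: unusual hour for this user
    return score

history = {
    "known_devices": {"laptop-01"},
    "known_countries": {"US"},
    "typical_hours": set(range(8, 19)),  # 08:00-18:59 local time
}

routine = score_sign_in({"device_id": "laptop-01", "country": "US", "hour": 10}, history)
unusual = score_sign_in({"device_id": "unknown-77", "country": "RO", "hour": 3}, history)
```

The point of the sketch is the shape, not the weights: a routine sign-in scores zero, while a new device from a new country at 3 a.m. accumulates all three factors and becomes a high-confidence lead.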
Token behavior is where identity abuse becomes more subtle, because tokens are designed to make access smooth and persistent. Correlating token issuance with unexpected refresh behavior and unusually long sessions helps reveal when an attacker is trying to hold onto access even if the user changes their password or the environment rotates keys. An unexpected refresh pattern might look like refresh events occurring from a new network location, occurring at a tempo that does not match typical user activity, or persisting for long durations without the normal interactive sign-in patterns you would expect. Long sessions can be legitimate for certain services, but they can also represent an attacker using non-interactive access to remain present while they explore. The point is not to distrust tokens; it is to treat tokens as a trail that can reveal session hijacking, token replay, and persistence techniques that never require malware. When token issuance and refresh patterns line up with suspicious sign-in context, you have a stronger narrative than either data source can provide alone.
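A rough sketch of that refresh-pattern check might look like the following. The session ceiling, the event fields, and the notion of "known networks" are assumptions for illustration; real token telemetry varies by provider.

```python
# Hypothetical sketch: flag token refresh behavior that does not match normal
# interactive use -- refreshes from a new network, or sessions kept alive far
# longer than expected without a fresh interactive sign-in.

MAX_SESSION_HOURS = 12  # assumed policy ceiling for one session

def refresh_anomalies(refreshes, known_networks, last_interactive_ts):
    """refreshes: time-ordered dicts with 'ts' (epoch hours) and 'network'."""
    findings = []
    for r in refreshes:
        if r["network"] not in known_networks:
            findings.append(("new_network", r["ts"]))
        if r["ts"] - last_interactive_ts > MAX_SESSION_HOURS:
            findings.append(("long_lived_session", r["ts"]))
    return findings

findings = refresh_anomalies(
    refreshes=[{"ts": 100, "network": "corp-vpn"},
               {"ts": 120, "network": "residential-isp"}],
    known_networks={"corp-vpn"},
    last_interactive_ts=99,
)
```

Either finding alone is weak, but a refresh from a new network on a session that has outlived its last interactive sign-in is exactly the kind of pairing that suggests replay or hijack rather than a user at a keyboard.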
Privilege changes are the most leverage-heavy identity events, and they should be treated as gateways to subsequent behavior. Correlating privilege changes with the sensitive actions that follow is a powerful detection strategy because attackers typically change privileges for a reason, and they often act quickly after gaining expanded access. A privilege change might be the assignment of a higher role, the creation of a new role with broad permissions, or the modification of a policy that increases access scope. The subsequent sensitive actions might include reading sensitive data stores, modifying security settings, creating new credentials, or changing network boundaries. When your monitoring ties the privilege event to the actions that follow, the alert becomes less about a theoretical risk and more about an observed sequence that indicates intent. This also helps reduce false positives, because legitimate privilege changes are often followed by a predictable set of administrative tasks, while malicious changes often lead to unusual access patterns or defensive evasion.
Some identity signals should be treated as high-suspicion by default because they represent the attacker building new footholds that can survive remediation. New keys, new roles, and new trust relationships are in this category because they often create durable access paths. A new access key or credential created by a user who does not normally create credentials is a strong signal, especially when paired with unusual sign-in context or token behavior. A new role, particularly one with broad privileges, is suspicious when created outside normal change windows or by identities that do not typically author authorization changes. New trust relationships, such as cross-account trust or federated trust adjustments, are especially important because they can allow access to be re-established even if one account is cleaned up. These events can be legitimate, but legitimacy should be supported by expected ownership, expected timing, and expected subsequent actions rather than assumed.
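One way to encode "high-suspicion by default, legitimacy must be proven" is to alert on these events unless an explicit, scoped expectation covers them. The event names below echo common cloud audit actions but are illustrative; the expectations table is a hypothetical construct.

```python
# Hypothetical sketch: key creation, role creation, and trust changes alert by
# default; only an explicit expectation (right actor, right action, right
# change window) downgrades them.

HIGH_SUSPICION = {"CreateAccessKey", "CreateRole", "UpdateTrustPolicy"}

def triage(event, expectations):
    """Return 'suppress' only when an explicit expectation covers the event."""
    if event["action"] not in HIGH_SUSPICION:
        return "ignore"
    for exp in expectations:
        if (exp["actor"] == event["actor"]
                and exp["action"] == event["action"]
                and exp["window"][0] <= event["ts"] <= exp["window"][1]):
            return "suppress"
    return "alert"

expectations = [{"actor": "iam-admin", "action": "CreateRole", "window": (100, 110)}]

planned = triage({"action": "CreateRole", "actor": "iam-admin", "ts": 105}, expectations)
surprise = triage({"action": "CreateAccessKey", "actor": "dev-user", "ts": 200}, expectations)
```

Note the default direction: absence of an expectation means alert, which is the inverse of how most noisy rules are tuned, and it matches the episode's point that legitimacy should be demonstrated, not assumed.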
Failed logins are often dismissed as background noise, but they can be meaningful when you track patterns that precede a successful compromise. Correlating failed login patterns with a later successful login can reveal password spraying, targeted guessing, and user enumeration that finally lands on a working credential. The signal is rarely one failure; it is the shape of the failures over time, such as repeated attempts across many accounts from a common source, a spike in failures for a specific account, or failures that shift in method or endpoint until a success appears. You also want to pay attention to failures that occur immediately before token issuance, privilege elevation, or sensitive access, because that sequence can indicate an attacker testing and then acting quickly once they get in. When you connect the failures to the success and the downstream activity, you can often identify the compromise window more accurately, which improves both containment and investigation.
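The spray-then-success shape is simple enough to sketch directly. The threshold and event fields are assumptions; in practice you would also window the failures in time.

```python
# Hypothetical sketch: many failed logins across distinct accounts from one
# source, followed by a success from the same source, marks a compromise
# candidate. Threshold is illustrative.

FAILURE_THRESHOLD = 5  # distinct accounts failed from one source

def spray_then_success(events):
    """events: time-ordered dicts with 'source', 'account', 'ok' (bool)."""
    failed_accounts = {}  # source -> set of accounts that failed from it
    hits = []
    for e in events:
        accounts = failed_accounts.setdefault(e["source"], set())
        if not e["ok"]:
            accounts.add(e["account"])
        elif len(accounts) >= FAILURE_THRESHOLD:
            hits.append((e["source"], e["account"]))  # compromise candidate
    return hits

events = ([{"source": "203.0.113.9", "account": f"user{i}", "ok": False}
           for i in range(6)]
          + [{"source": "203.0.113.9", "account": "user3", "ok": True}])
hits = spray_then_success(events)
```

The output names both the compromise window (the success event) and the likely victim account, which is exactly the anchor you want for containment and investigation.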
A practical way to build correlation skill is to walk through a complete correlation story for a compromised account, focusing on what you can actually observe. The story often begins with an unusual sign-in, perhaps from a new region or a new device, at a time that does not match the user’s typical pattern. Shortly after, token issuance occurs for services the user rarely uses, and refresh activity continues in the background even when the user is not actively working. Next, a privilege change appears, such as a role assignment or policy modification, followed quickly by sensitive actions like reading high-value data or changing security settings. Along the way, there may be a trail of failed logins, indicating the attacker was testing access before succeeding. The value of this exercise is not the specific details of any one environment, but the habit of thinking in sequences: what happened first, what enabled what, and what the attacker did with the access once they had it.
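That habit of thinking in sequences can be rehearsed mechanically: pull every identity event for one account and lay it out in time order. A minimal sketch, with made-up event kinds:

```python
# Hypothetical sketch: stitch mixed identity events for one account into a
# time-ordered chain so the sequence (sign-in -> token -> privilege -> data
# access) is visible at a glance.

def build_chain(events, identity):
    """Return (timestamp, kind) pairs for one identity, in time order."""
    chain = sorted((e for e in events if e["identity"] == identity),
                   key=lambda e: e["ts"])
    return [(e["ts"], e["kind"]) for e in chain]

events = [
    {"identity": "alice", "ts": 4, "kind": "sensitive_read"},
    {"identity": "alice", "ts": 1, "kind": "sign_in_new_region"},
    {"identity": "bob",   "ts": 2, "kind": "sign_in"},
    {"identity": "alice", "ts": 3, "kind": "role_assigned"},
    {"identity": "alice", "ts": 2, "kind": "token_issued"},
]
chain = build_chain(events, "alice")
```

Even this trivial sort surfaces the story the paragraph describes: unusual sign-in, then token issuance, then privilege change, then sensitive access, each event enabling the next.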
A major pitfall is treating each alert as isolated, which is exactly what attackers count on. If you treat an anomalous sign-in as one ticket, a token anomaly as another, and a privilege change as a third, you risk missing the fact that they are all the same incident. Isolation encourages shallow triage because each alert seems explainable when viewed alone. The attacker’s advantage is that identity abuse is often composed of normal-looking primitives: log in, get a token, change a role, access data, and create persistence. Defenders win by connecting these primitives into a storyline that exposes intent. Correlation is also a force multiplier for response teams, because it reduces duplicative effort and helps prioritize the most dangerous sequences instead of the loudest individual signals.
A quick win that produces meaningful detection value is alerting on privilege change followed closely by data access. The reason this works is that privilege changes represent enablement, and data access represents impact, so the combination narrows the set of events that actually matter. You can tune the sequence to focus on the most sensitive data sources, the most privileged roles, or the most unusual access patterns. For example, a role assignment that grants broader access, followed by a mass download from a sensitive repository within a short window, is a very different situation than a role assignment followed by routine administrative checks. This approach also encourages better data classification and ownership, because you have to define what counts as sensitive data access in your environment. It is a pragmatic way to get correlation benefits without building an overly complex detection program on day one.
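The quick win described above can be sketched as a single windowed join. The window length, event shapes, and sensitive-source list are assumptions you would tune to your own environment.

```python
# Hypothetical sketch of the quick-win rule: alert when the same identity
# reads from a sensitive source within a short window after a privilege
# change. Window and source list are illustrative.

WINDOW_MINUTES = 60
SENSITIVE_SOURCES = {"hr-records", "customer-db"}

def correlate(priv_changes, downloads):
    """Both inputs: dicts with 'identity' and 'ts' (epoch minutes);
    downloads also carry 'source'."""
    alerts = []
    for p in priv_changes:
        for d in downloads:
            if (d["identity"] == p["identity"]
                    and d["source"] in SENSITIVE_SOURCES
                    and 0 <= d["ts"] - p["ts"] <= WINDOW_MINUTES):
                alerts.append((p["identity"], p["ts"], d["source"]))
    return alerts

alerts = correlate(
    priv_changes=[{"identity": "svc-deploy", "ts": 1000}],
    downloads=[{"identity": "svc-deploy", "ts": 1025, "source": "hr-records"},
               {"identity": "svc-deploy", "ts": 2000, "source": "hr-records"}],
)
```

Only the download inside the window fires; the same access hours later does not. That asymmetry is the whole value of the rule: enablement followed closely by impact, and nothing else.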
To make the threat concrete, consider a scenario where an attacker quietly creates a new administrative role. The attacker may begin with a compromised identity that has enough permissions to author roles or modify policies, even if it is not fully privileged. They then create a role that looks plausible, with a name that blends into existing naming patterns, and they assign it broad permissions or attach it to a trust relationship that allows access from another identity. Next, they either assign that role to their compromised account or to a newly created service identity, so the privilege appears decoupled from the initial compromise. After that, they perform sensitive actions such as accessing protected data, disabling certain logging controls, or creating additional credentials to diversify persistence. The detection opportunity is in the chain: new role creation, role assignment, and sensitive access shortly after, especially when these events do not match normal change cadence or ownership.
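The detection opportunity at the end of that scenario, the chain of role creation, role assignment, and sensitive access inside one window, can be sketched as follows. Event kinds and the window are illustrative assumptions.

```python
# Hypothetical sketch: flag when a role is created, assigned, and then used
# for sensitive access, in that order, inside one window.

CHAIN = ["role_created", "role_assigned", "sensitive_access"]
WINDOW = 120  # minutes from first step to last

def detect_role_chain(events, role):
    """events: dicts with 'kind', 'role', 'ts'. True if the full ordered
    chain for this role completes within WINDOW."""
    steps = {}
    for e in sorted(events, key=lambda e: e["ts"]):
        if e.get("role") == role and e["kind"] in CHAIN and e["kind"] not in steps:
            steps[e["kind"]] = e["ts"]  # first occurrence of each step
    if all(k in steps for k in CHAIN):
        ordered = [steps[k] for k in CHAIN]
        return ordered == sorted(ordered) and ordered[-1] - ordered[0] <= WINDOW
    return False

events = [
    {"kind": "role_created",     "role": "ops-helper", "ts": 10},
    {"kind": "role_assigned",    "role": "ops-helper", "ts": 25},
    {"kind": "sensitive_access", "role": "ops-helper", "ts": 60},
]
```

A plausible-looking role name does not help the attacker here, because the rule keys on the sequence and its timing rather than on what the role is called.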
Allow lists can be helpful, but they can also become a quiet way to hide real attacks if they are applied too broadly or without continuous review. The danger is that allow lists are often created to reduce noise, and over time they become exceptions that suppress signals from identities, networks, or behaviors that should still be monitored. If an attacker learns that certain pathways are considered trusted, they will try to route activity through those pathways, and allow lists can unintentionally help them. A careful approach is to keep allow lists narrow, time-bound where possible, and tied to explicit ownership so changes are reviewed. You also want to monitor the allow list itself as a sensitive object, because changes to allow lists can be a form of defensive evasion. The goal is to reduce false positives without creating blind spots that attackers can exploit.
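The "narrow, time-bound, owned" discipline can be enforced in the suppression logic itself. A minimal sketch, with hypothetical entry fields:

```python
# Hypothetical sketch: an allow-list entry suppresses an alert only while it
# is unexpired, has an accountable owner, and matches both identity and
# action. Entry fields are illustrative.

def is_allowed(event, allow_list, now):
    """True only when a narrow, owned, unexpired entry covers the event."""
    for entry in allow_list:
        if (entry["identity"] == event["identity"]
                and entry["action"] == event["action"]
                and entry["owner"]            # must name an accountable owner
                and now <= entry["expires"]):  # time-bound, not permanent
            return True
    return False

allow_list = [{"identity": "backup-svc", "action": "mass_read",
               "owner": "team-storage", "expires": 500}]

covered = is_allowed({"identity": "backup-svc", "action": "mass_read"}, allow_list, now=400)
expired = is_allowed({"identity": "backup-svc", "action": "mass_read"}, allow_list, now=600)
```

Because expiry is checked at evaluation time, a stale exception fails open into an alert rather than silently suppressing forever, and edits to the list itself remain a sensitive object worth monitoring.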
A memory anchor that fits this episode is the idea that identity abuse is a chain of events, not single dots on a chart. A single dot might be a suspicious login, a token refresh, or a privilege change, and any one of those can be explainable in isolation. The chain is what reveals intent: the order of events, the relationships between them, and the way one event enables the next. When responders adopt a chain mindset, they naturally ask better questions, such as what the attacker needed to do first, what they did immediately after gaining access, and what they changed to keep access. It also helps with prioritization, because the most urgent cases are usually the ones where the chain includes privilege expansion and sensitive actions. In practice, chain thinking turns identity monitoring from a set of alerts into an investigation framework.
Before closing, it helps to quickly stitch together the core inputs and sequences you want to keep in view as you build correlation. The inputs include sign-in events with device, location, and timing context, token issuance and refresh activity with session duration and service scope, privilege changes like role assignments and policy edits, and downstream sensitive actions like data access and administrative configuration changes. The key sequences often involve a suspicious sign-in followed by token persistence behavior, a privilege change followed by immediate sensitive access, and trails of failed logins that precede success. The pitfalls include isolating alerts instead of connecting them, and overusing allow lists such that real attacker activity gets suppressed. The most effective practice is building and rehearsing correlation stories so the team recognizes how identity abuse unfolds in your environment. When these pieces come together, detection gets faster, more confident, and less dependent on perfect single-event indicators.
To conclude, build one correlation rule that watches for a privilege change followed by a download from a sensitive data source within a defined time window. Keep the rule narrow enough to be meaningful, and ensure it includes ownership and a clear response path so it results in action rather than debate. Use the correlation to capture the story: who gained privilege, what privilege changed, what data was accessed, and what token or session context was involved. Then review the results over time to tune thresholds and reduce false positives without suppressing the core pattern. When you can reliably detect privilege expansion followed by data access, you have a practical foothold in catching identity abuse early, before it turns into broader compromise.