OAuth consent phishing is an identity attack where a user or administrator is tricked into granting permissions to a malicious OAuth application. The attacker does not need the user's password after the grant is approved. The application can access Microsoft 365 or other protected resources according to the permissions that were granted, which makes this a different problem from normal credential phishing, password spraying, or MFA fatigue.
The scope of this article is Microsoft Entra ID, Microsoft 365, and OAuth-based application consent. It does not treat every malicious sign-in as consent phishing. The focus is the consent grant itself: what the app receives, how defenders can see it, how to remove it, and how to prevent the same pattern from coming back.
What Is OAuth Consent Phishing?
OAuth consent phishing is the abuse of a legitimate consent model. Microsoft describes consent phishing as attacks that trick users into granting permissions to malicious cloud applications. The consent screen shows the permissions the application receives, but a realistic brand name, familiar logo, or believable business pretext can still cause a user or admin to approve access they did not intend to grant.
In the Microsoft identity platform, OAuth lets an application request access to a protected resource. RFC 6749 defines OAuth 2.0 as a framework that allows a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner or on its own behalf. Microsoft Entra ID implements that model for resources such as Microsoft Graph, Exchange Online, SharePoint Online, and other APIs that integrate with the Microsoft identity platform.
Scope Limits
The important defender point is that consent phishing targets authorization, not only authentication. In a password-phishing incident, the defender usually asks whether an attacker stole credentials, whether MFA was satisfied, and whether a suspicious session exists. In OAuth consent phishing, the defender must also ask whether an application received a durable permission grant that remains valid after the original interactive session.
This is close to the risk described in Azure App Registrations: Over-Privileged Tenant Apps, but the attacker path is different. Over-privileged legitimate apps are usually an internal governance problem. OAuth consent phishing is a malicious or deceptive app gaining access through a consent prompt.
How OAuth Consent Phishing Works
The attack chain starts with a normal OAuth authorization request. A user follows a link or opens an application that sends them to the Microsoft identity platform. The user authenticates if needed. If the requested permissions have not already been consented to, Microsoft Entra presents a consent prompt. That prompt includes the requested permissions and publisher information.
Consent State, Not Just Sign-In State
If the user or administrator approves the request, Microsoft Entra records a grant for the application. For delegated Microsoft Graph permissions, the result of consent is represented as an OAuth2PermissionGrant. For application permissions, the result is represented as an appRoleAssignment. Those two objects matter because they are what defenders must enumerate, review, and remove during cleanup.
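A minimal sketch of how those two grant objects can be told apart during triage. The field names follow the Microsoft Graph v1.0 `oauth2PermissionGrant` and `appRoleAssignment` resource types, but the sample records, IDs, and the helper function itself are illustrative, not real tenant data or a documented Microsoft tool:

```python
def summarize_grant(grant: dict) -> str:
    """Return a one-line triage summary for a consent-related grant object."""
    if "scope" in grant:
        # Delegated: oauth2PermissionGrant. consentType is "AllPrincipals"
        # (admin consent on behalf of everyone) or "Principal" (one user).
        who = "all users" if grant.get("consentType") == "AllPrincipals" else grant.get("principalId", "?")
        return f"DELEGATED scopes [{grant['scope'].strip()}] for {who}"
    # App-only: appRoleAssignment. appRoleId maps to an application permission
    # exposed by the resource service principal.
    return f"APP-ONLY role {grant.get('appRoleId')} for principal {grant.get('principalId')}"

# Illustrative records shaped like Graph responses (placeholder IDs).
delegated = {
    "clientId": "client-sp-guid",
    "consentType": "Principal",
    "principalId": "user-guid",
    "resourceId": "graph-sp-guid",
    "scope": "Mail.Read offline_access",
}
app_only = {
    "principalId": "client-sp-guid",
    "resourceId": "graph-sp-guid",
    "appRoleId": "11111111-1111-1111-1111-111111111111",  # placeholder app role GUID
}

print(summarize_grant(delegated))
print(summarize_grant(app_only))
```

During cleanup, both shapes must be enumerated: checking only one of them leaves the other durable grant in place.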
A malicious app does not need to ask for every permission at once. Microsoft documents dynamic consent as a case where an application asks for new permissions as needed at run time. That makes consent review more than a one-time onboarding check. A tenant can be clean during initial review and later receive a request for broader access if the app is coded to request additional scopes.
Offline Access Nuance
The offline_access scope also changes the defender model. Microsoft documents offline_access as giving an app access to resources on behalf of the user for an extended time. The consent page shows this as maintaining access to data the app has been given access to. Microsoft also notes that if any delegated permission is granted, offline_access is implicitly granted, and refresh tokens are long-lived. This does not mean the attacker has permanent access. It means incident response must include grant removal and token/session cleanup, not only password reset.
This is why OAuth consent phishing sits near, but not inside, other Entra identity attacks. Device Code Phishing: How OAuth Device Flow Compromises Entra ID Accounts abuses a different OAuth flow. Both attacks are OAuth-centered, but device code phishing is about authorizing a session through the device code flow, while consent phishing is about granting permissions to an application.
Delegated Permissions vs Application Permissions
The most important technical split is delegated permissions versus application permissions.
Delegated Access
Delegated permissions are used when an application acts on behalf of a signed-in user. Microsoft describes them as permissions that allow the application to act on a user's behalf, while the application cannot access anything the signed-in user could not access. If a user grants a delegated permission that allows reading mail, the app's access is tied to what that user is allowed to read. If a privileged administrator grants delegated permissions while signed in, the impact can be much higher because the user's own privileges are higher.
App-Only Access
Application permissions, also known as app roles or app-only permissions, are used when no signed-in user is present. Microsoft describes app-only access as the application acting as its own identity. Application permissions can allow broad tenant data access through Microsoft Graph or another resource API, and Microsoft states that generally only an administrator or the owner of an API service principal can consent to application permissions exposed by that API.
Permission Type Triage
The distinction drives investigation priority. A suspicious delegated grant by a low-privilege user may expose that user's mailbox, files, or accessible resources. A suspicious application permission can be tenant-wide, depending on the permission and the resource API. For example, Microsoft documentation uses Files.Read.All to illustrate the difference: delegated Files.Read.All allows the app to read files the user can personally access, while application Files.Read.All can read any file in the tenant through Microsoft Graph once granted.
Do not collapse both cases into a generic "OAuth app risk." The cleanup object, blast radius, approval path, and validation check are different.
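The split can be expressed as a simple triage helper. The permission names below are real Microsoft Graph permissions, but the risk tiers and the function are illustrative defaults for this article, not Microsoft guidance:

```python
# Illustrative set of data permissions treated as high impact for triage.
HIGH_IMPACT = {"Files.Read.All", "Mail.Read", "Mail.ReadWrite", "Directory.Read.All", "Sites.Read.All"}

def triage_priority(permission: str, permission_type: str, consenting_user_privileged: bool = False) -> str:
    """Rank a suspicious grant: app-only access to broad data outranks the rest."""
    if permission_type == "application" and permission in HIGH_IMPACT:
        return "critical"   # tenant-wide data access, no signed-in user required
    if permission_type == "delegated" and consenting_user_privileged:
        return "high"       # bounded by the user, but the user's reach is large
    if permission in HIGH_IMPACT:
        return "high"
    return "review"

print(triage_priority("Files.Read.All", "application"))  # tenant-wide file read
print(triage_priority("Files.Read.All", "delegated"))    # bounded by one user's access
```

The same permission string lands in different tiers depending on the grant type, which is exactly why the two cases should not be merged into one generic finding.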
Why MFA Does Not Stop Consent Abuse by Itself
MFA remains important. It reduces the chance that an attacker can sign in as a user, approve prompts as that user, or complete other interactive actions. The problem is narrower: MFA does not automatically make an already granted OAuth permission harmless.
Once a malicious app has a valid delegated grant, the application can request tokens according to that grant and the user/resource context. Once an app has application permissions, it can operate without a signed-in user for the data that permission covers. Conditional Access and MFA policies may still affect how tokens are issued in specific scenarios, but the existence of a consent grant must be investigated directly.
This is the same broader lesson covered in Azure Identity Security: Why MFA Alone Is Not Enough. MFA is one control. It is not a substitute for app consent governance, permission review, audit log monitoring, publisher evaluation, or revocation of suspicious grants. If your team also tracks push-based social engineering, MFA Fatigue: Detection and Prevention for Microsoft Entra ID covers a separate attack pattern that can occur before or alongside consent abuse.
The practical defender mistake is to close the incident after resetting a password or confirming MFA was enabled. In a consent phishing case, that is incomplete. The response must answer whether a service principal exists, which grants are attached to it, which users consented, whether admin consent was granted, and whether the app has been disabled, removed, or blocked from receiving new grants.
The Attack Chain
A realistic OAuth consent phishing chain has several stages. The exact lure can vary, but the control points remain consistent.
- The attacker prepares or controls an application that can request Microsoft identity platform permissions.
- A user or admin is sent to an authorization URL that presents a Microsoft-hosted sign-in and consent experience.
- The consent prompt lists permissions and publisher information. The user or admin approves the request.
- Microsoft Entra records the grant. Delegated permissions appear as `OAuth2PermissionGrant`; application permissions appear as `appRoleAssignment`.
- The app uses access tokens, and potentially refresh tokens when available, to call APIs such as Microsoft Graph according to the granted permissions.
- The attacker keeps access until defenders revoke grants, disable or remove the malicious service principal where appropriate, block re-consent, invalidate relevant sessions/tokens, or Microsoft disables an application that violates service terms.
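The authorization URL in the second stage of the chain can be sketched as follows. The endpoint shape and query parameters follow the Microsoft identity platform v2.0 authorization code flow; the client ID and redirect URI are placeholders. Defenders can use the same construction to recognize what a consent lure looks like in email or proxy telemetry:

```python
from urllib.parse import urlencode

def build_authorize_url(tenant: str, client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Build a Microsoft identity platform v2.0 authorization request URL."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "response_mode": "query",
        "scope": " ".join(scopes),  # these scopes drive what the consent prompt shows
        "state": "opaque-anti-csrf-value",
    }
    return f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?{urlencode(params)}"

url = build_authorize_url(
    "common",                                     # multi-tenant lures often target "common"
    "00000000-0000-0000-0000-000000000000",       # placeholder client_id
    "https://attacker.example/callback",          # placeholder redirect_uri
    ["Mail.Read", "offline_access"],              # offline_access yields refresh tokens
)
print(url)
```

Nothing in this URL is malformed or exploit-like; the page it leads to is a genuine Microsoft-hosted consent experience, which is what makes the lure effective.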
What This Is Not
This chain is not password spraying, where the attacker tries credentials against accounts at scale. It is not Kerberoasting, where the target is Kerberos service ticket material in Active Directory. It is not OAuth device code phishing, where the attacker persuades the user to complete a device-code authorization. Those distinctions matter because telemetry and remediation are different.
For defenders, the point of the chain is to identify durable state. A bad sign-in can be investigated through sign-in logs. A bad consent grant must be investigated through application objects, permission grants, audit logs, and app governance controls.
Detection
No single Microsoft Entra event proves OAuth consent phishing by itself. A legitimate administrator can consent to a legitimate app. A legitimate user can approve a low-risk delegated permission. Detection requires correlation across consent activity, app metadata, permissions, user context, and follow-on API behavior.
Audit Log Pivots
Start with Microsoft Entra audit logs. Microsoft documents these application permission activities in the Core Directory service and ApplicationManagement category:
| Activity | Why it matters |
|---|---|
| Consent to application | A user granted consent to an application. Review actor, target app, publisher, permissions, and timing. |
| Add delegated permission grant | A delegated permission grant was added. Review the scopes, client service principal, resource service principal, and consenting user/admin. |
| Add app role assignment to service principal | App-only access was granted. Treat broad Microsoft Graph or Exchange permissions as high priority. |
| Remove delegated permission grant / Remove app role assignment from service principal | Useful for validating cleanup and detecting repeated re-consent. |
| Add service principal | May show when a new enterprise application object is instantiated in the tenant. Correlate with consent events. |
| Set verified publisher / Unset verified publisher | Helps track publisher state changes, but it is not a complete trust decision. |
Correlation Questions
Use these events as pivots, not as final verdicts. A useful detection pipeline asks:
- Was the app newly added to the tenant?
- Is the publisher verified, and does the publisher align with the business purpose?
- Which permissions were requested, and are they low-risk or high-impact data permissions?
- Was the actor a normal user, an admin, a break-glass account, or an account outside its normal behavior?
- Was consent granted shortly after a suspicious sign-in, impossible travel signal, risky user signal, or phishing report?
- Did the application start calling Microsoft Graph, Exchange, SharePoint, or other APIs in a way that does not match an approved business workflow?
- Did multiple users authorize the same uncommon application?
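The questions above can be turned into a simple scoring pass over a consent event. The field names and weights here are illustrative assumptions for a detection pipeline, not a documented Microsoft schema; tune them against your own tenant baseline:

```python
def score_consent_event(event: dict) -> int:
    """Score a consent event using the correlation questions as signals."""
    score = 0
    if event.get("app_newly_added"):
        score += 2
    if not event.get("publisher_verified", False):
        score += 2
    if event.get("high_impact_permissions"):
        score += 3
    if event.get("actor_is_admin"):
        score += 2  # admin consent can be tenant-wide
    if event.get("recent_risk_signal"):
        score += 3  # risky sign-in, impossible travel, or phishing report nearby
    if event.get("authorizing_users", 1) > 1 and event.get("app_uncommon"):
        score += 2
    return score

event = {
    "app_newly_added": True,
    "publisher_verified": False,
    "high_impact_permissions": True,
    "actor_is_admin": False,
    "recent_risk_signal": True,
}
print("priority review" if score_consent_event(event) >= 7 else "routine review")
```

A score is a prioritization aid, not a verdict; the underlying service principal and grants still need manual inspection.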
Defender for Cloud Apps can add another layer where it is available. Microsoft documents OAuth app policies that can alert on apps matching criteria such as high permission level, many authorizing users, uncommon usage, misleading names, misleading publisher names, or potentially malicious OAuth app consent. Treat those alerts as prioritization signals and still inspect the underlying service principal and grants.
A mature detection also watches governance drift. If user consent settings are relaxed, app consent policies are changed, or admin consent workflow reviewers are removed, the tenant becomes easier to abuse. Those changes are not evidence of a malicious app by themselves, but they lower the control strength around future consent requests. Pair this monitoring with How to Audit Microsoft Entra ID Security so app consent is reviewed with Conditional Access, privileged roles, risky users, and tenant-wide identity settings.
For hybrid investigations, keep the boundary clear: this article focuses on Entra audit activity, but Active Directory Monitoring: Security Event IDs That Matter can help incident responders correlate on-premises identity activity when the same user or admin account is involved.
Remediation
Remediation has two tracks: remove the confirmed malicious access, then close the consent path that allowed it.
Remove Existing Grants
First, identify the application object and service principal in Enterprise Applications. Review the Admin consent and User consent tabs for granted permissions. Microsoft documents that admins can revoke organization-wide permissions in the Admin consent tab, while user consent grants may need Microsoft Graph API or PowerShell for removal. For complete cleanup, enumerate both delegated permissions and application permissions.
For delegated permissions, remove suspicious OAuth2PermissionGrant objects. For application permissions, remove suspicious appRoleAssignment entries. Microsoft's permission review documentation provides Graph and PowerShell paths for both cases. If users or groups were assigned to the application, review and remove those assignments as part of containment.
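The removal paths can be sketched as Microsoft Graph request URLs. The `oauth2PermissionGrants` and `servicePrincipals/{id}/appRoleAssignments` endpoints are documented Graph v1.0 resources; the IDs here are placeholders, and a real cleanup script would issue authenticated DELETE requests through an SDK or HTTP client:

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def delegated_grant_delete_path(grant_id: str) -> str:
    # DELETE target for an oauth2PermissionGrant (delegated consent record).
    return f"{GRAPH}/oauth2PermissionGrants/{grant_id}"

def app_role_assignment_delete_path(sp_id: str, assignment_id: str) -> str:
    # DELETE target for an appRoleAssignment on the client service principal
    # (app-only permission record).
    return f"{GRAPH}/servicePrincipals/{sp_id}/appRoleAssignments/{assignment_id}"

print(delegated_grant_delete_path("grant-object-id"))
print(app_role_assignment_delete_path("client-sp-id", "assignment-id"))
```

Running both removals, and then re-enumerating, is what turns "the app is gone from the portal page" into "the durable grants are gone."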
Contain the Service Principal
Second, disable or remove the service principal when the application is confirmed malicious or unauthorized. Be careful with applications that have both suspicious and legitimate usage; removing a service principal can break business workflows. If Microsoft has disabled an OAuth application for violating service terms, Microsoft documents that new token requests and refresh token requests are denied, but existing access tokens remain valid until expiration. That is why responders should not rely on one action alone.
Third, address tokens and sessions. For delegated abuse, revoke refresh tokens or invalidate user sessions where appropriate. For app-only abuse, focus on removing the app role assignments, removing credentials from the service principal if it is tenant-owned, disabling the service principal, and validating that no app-only grant remains. Password reset can be relevant if the user was also phished, but password reset alone does not remove an OAuth grant.
Harden Future Consent
Fourth, harden consent governance:
- Review user consent settings and avoid broad user consent for applications and permissions that should require admin review.
- Use app consent policies to restrict what users can consent to based on criteria such as verified publisher status and permission risk.
- Enable and configure admin consent workflow so users can request access to apps that require approval instead of bypassing review through informal channels.
- Limit who can grant tenant-wide admin consent. Microsoft recommends least-privilege roles and warns that Global Administrator is highly privileged.
- Review publisher verification as one signal, not as a complete security approval. Microsoft explicitly states that a verified publisher badge does not imply app quality criteria, certifications, compliance, or security best practices.
- Use Defender for Cloud Apps OAuth app policies where available to alert on risky, uncommon, high-permission, or potentially malicious apps.
Do not create a blanket rule that blocks every third-party app without an exception process. That usually drives users toward unmanaged workarounds. The stronger approach is a controlled consent path: allowed low-risk delegated permissions, admin review for sensitive delegated scopes and application permissions, and periodic review of granted apps.
Validation After Cleanup
Validation is where many consent-phishing responses fail. A ticket should not close because the app disappeared from a visible portal page. It should close because the durable authorization paths have been checked.
Closure Checklist
Use this validation sequence:
- Confirm the suspicious service principal is disabled or removed when the app is confirmed malicious.
- Confirm no suspicious delegated `OAuth2PermissionGrant` remains for the client service principal.
- Confirm no suspicious `appRoleAssignment` remains for app-only permissions.
- Confirm user and group assignments to the application were removed if they contributed to access.
- Confirm audit logs show the expected removal events, such as removed delegated permission grants or removed app role assignments.
- Confirm user consent settings, app consent policies, and admin consent workflow match the intended control model.
- Confirm relevant users have had sessions/tokens revoked where delegated access was involved.
- Confirm Defender for Cloud Apps or app governance policies, where deployed, no longer show the app as authorized or active.
- Confirm new attempts to grant the same permission path require the expected admin workflow or are blocked by policy.
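The sequence above can be expressed as a programmatic closure gate. The state dictionary is an assumed shape populated by the earlier enumeration and cleanup steps; the check names mirror the checklist, and the function is a sketch, not a shipped tool:

```python
def open_issues(state: dict) -> list[str]:
    """Return the checklist items that still block ticket closure."""
    issues = []
    if not state.get("service_principal_disabled_or_removed"):
        issues.append("malicious service principal still enabled")
    if state.get("remaining_delegated_grants"):
        issues.append("suspicious oauth2PermissionGrant objects remain")
    if state.get("remaining_app_role_assignments"):
        issues.append("suspicious appRoleAssignment entries remain")
    if not state.get("removal_events_in_audit_log"):
        issues.append("audit log does not show the expected removal events")
    if not state.get("sessions_revoked_for_delegated_access"):
        issues.append("user sessions/tokens not revoked for delegated access")
    return issues

state = {
    "service_principal_disabled_or_removed": True,
    "remaining_delegated_grants": [],
    "remaining_app_role_assignments": [],
    "removal_events_in_audit_log": True,
    "sessions_revoked_for_delegated_access": True,
}
print("close ticket" if not open_issues(state) else open_issues(state))
```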
For tenant-wide investigations, pair this with Azure Identity Protection: Blocking Leaked Credentials. Consent phishing is not the same as leaked credentials, but risky user and risky sign-in signals can help decide whether the user who granted consent was also compromised.
How EtcSec Detects Related Exposure
EtcSec should treat OAuth consent phishing as an exposure pattern across application inventory, identity governance, and monitoring readiness. The useful checks are not "does a malicious app exist" in isolation. The useful checks are whether the tenant would make malicious consent easy to obtain, hard to detect, or slow to revoke.
Exposure Signals
Relevant checks include:
- Enterprise applications with broad delegated permissions or app-only Microsoft Graph permissions.
- Applications authorized by many users without a clear owner or business justification.
- Applications without verified publisher status, especially when requesting sensitive permissions.
- User consent settings that allow users to approve more than low-risk delegated permissions.
- Missing or weak admin consent workflow.
- Lack of recurring review for admin consent and user consent grants.
- Application permissions that are inconsistent with the app's stated purpose.
- Privileged users who have consented to third-party applications from lower-trust devices or contexts.
- Monitoring gaps around `Consent to application`, `Add delegated permission grant`, and `Add app role assignment to service principal`.
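The monitoring-gap signal in particular lends itself to a mechanical check. The activity names are the Entra audit activities discussed in this article; the monitored set represents what a SIEM or alerting ruleset currently covers, and the helper is an illustrative sketch:

```python
# The consent-related Entra audit activities this article treats as required coverage.
REQUIRED_CONSENT_ACTIVITIES = {
    "Consent to application",
    "Add delegated permission grant",
    "Add app role assignment to service principal",
}

def consent_monitoring_gaps(monitored_activities: set[str]) -> set[str]:
    """Return the consent activities with no alerting coverage."""
    return REQUIRED_CONSENT_ACTIVITIES - monitored_activities

gaps = consent_monitoring_gaps({"Consent to application"})
print(sorted(gaps))
```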
This is adjacent to Azure Conditional Access: MFA Bypass With Stolen Passwords, but the remediation is different. Conditional Access is still part of identity defense, but OAuth consent hardening requires app governance, permission review, consent policies, and service principal cleanup.
Related Controls
OAuth consent phishing is best managed as a set of controls rather than a single toggle.
| Control | Defender outcome |
|---|---|
| User consent settings | Limits what end users can approve without administrator review. |
| App consent policies | Defines which apps and permission requests are eligible for consent. |
| Admin consent workflow | Gives users a managed path to request app approval instead of approving risky apps directly. |
| Enterprise application permission review | Finds delegated and app-only grants that no longer match business need. |
| Publisher verification review | Helps assess app authenticity, while recognizing that verified publisher status is not a full security certification. |
| Defender for Cloud Apps OAuth app policies | Adds alerting and governance for risky OAuth apps where licensed and enabled. |
| Audit log monitoring | Provides evidence for consent, permission grant, app role assignment, and cleanup events. |
| Incident response playbooks | Ensures responders revoke grants and disable service principals instead of only resetting passwords. |
These controls also reduce exposure from over-permissioned legitimate apps. That makes consent governance part of normal Entra security operations, not only an incident response task after a phishing report.
Primary References
The technical claims in this article are based on primary documentation and standards:
- Microsoft Learn: Protect against consent phishing
- Microsoft Learn: Permissions and consent overview
- Microsoft Learn: Scopes and permissions in the Microsoft identity platform
- Microsoft Learn: Configure the admin consent workflow
- Microsoft Learn: Manage app consent policies
- Microsoft Learn: Review permissions granted to enterprise applications
- Microsoft Learn: View activity logs of application permissions
- Microsoft Learn: Microsoft Entra audit log activity reference
- Microsoft Learn: Publisher verification
- Microsoft Defender for Cloud Apps: Create policies to control OAuth apps
- RFC 6749: The OAuth 2.0 Authorization Framework
- RFC 6819: OAuth 2.0 Threat Model and Security Considerations

