Blog
Trust at scale: How GenAI Descriptions for Entitlements earned customer confidence
When SailPoint launched GenAI Descriptions for Entitlements in early 2024, we knew we were entering uncertain territory. Generative AI was - and still is - a technology that promises much and often overpromises. The question we kept coming back to was: how do you earn trust with a technology that people associate with hallucinations and unpredictability?
The answer, it turns out, was to not try to do too much.
The problem we chose to solve
Over 60% of entitlements in SailPoint Identity Security Cloud had no descriptions. This created a cascading problem: reviewers in certification campaigns were being asked to approve or deny access they couldn't understand. The result? Rubber-stamping, slower decisions, and audit findings.
We weren't trying to build the Star Trek computer - at least not yet. We picked a narrow outcome: generate accurate, understandable descriptions for entitlements so that humans could make better decisions. That's it.
Trust through controlled outcomes
Today, customers have generated hundreds of thousands of suggested entitlement descriptions. The median approval rate across all customers is 98%. In January, the average unedited approval rate was 99%.
These numbers didn't happen by accident. They happened because we controlled for outcomes that fit the use case we defined:
1. Human-in-the-loop from day one. We never assumed the AI was right. Every description goes through a review and approval workflow. Admins can edit, approve, or reject. Subject matter experts can be assigned as reviewers.
2. Distributed workload, not concentrated burden. We heard early on that admins didn't want another task on their plate. So we built the review process to allow delegation - send descriptions to the people who actually know what the entitlements do, rather than bottlenecking everything through a single admin.
3. Narrow scope, deep investment. We started with entitlements, the smallest unit of access. We resisted the urge to expand to roles and applications before we had proven the core value.
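The review lifecycle described above can be sketched as a small state machine. This is purely illustrative - the class and field names below are hypothetical, not SailPoint's actual data model:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class SuggestedDescription:
    """Hypothetical model of one AI-suggested entitlement description."""
    entitlement_id: str
    text: str
    status: Status = Status.PENDING
    reviewer: Optional[str] = None

    def assign(self, reviewer: str) -> None:
        # Delegate the review to a subject matter expert
        # instead of bottlenecking on a single admin.
        self.reviewer = reviewer

    def approve(self, edited_text: Optional[str] = None) -> None:
        # Reviewers may edit before approving; unedited approvals
        # could be tracked separately, as in the metrics above.
        if edited_text is not None:
            self.text = edited_text
        self.status = Status.APPROVED

    def reject(self) -> None:
        self.status = Status.REJECTED
```

The key property is that no description reaches a final state without a human action: generation only ever produces `PENDING` suggestions.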
Custom context: The unlock
When we first launched, adoption was gradual. Customers were kicking the tires, generating one or two descriptions to see what happened. The heavy users were the early adopters willing to take a chance on it.
The inflection point came when we released custom context. This feature allows customers to add their own key-value pairs to guide the model, essentially letting them "tune" the descriptions for their specific environment.
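One plausible way such key-value pairs could steer generation is by folding them into the prompt sent to the model. The function below is a hypothetical sketch of that idea, not SailPoint's implementation:

```python
def build_prompt(entitlement_name: str, custom_context: dict) -> str:
    """Illustrative only: fold customer-supplied key-value pairs
    into a description-generation prompt."""
    context_lines = "\n".join(
        f"- {key}: {value}" for key, value in custom_context.items()
    )
    return (
        "Write a one-sentence, plain-language description of the "
        f"entitlement '{entitlement_name}'.\n"
        "Use this organization-specific context:\n"
        f"{context_lines}"
    )

# Example: the customer tells the model what their internal
# naming conventions mean (hypothetical values).
prompt = build_prompt(
    "FIN_AP_APPROVER",
    {"FIN": "Finance department", "AP": "accounts payable"},
)
```

The value to the customer is that cryptic internal abbreviations become meaningful to the model, so the generated description reflects their environment rather than a generic guess.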
Shortly after that release, we saw a meaningful uptick in weekly active organizations using the feature. Here's the interesting part: while not every heavy user of GenAI descriptions actually adds custom context, 18 of the top 20 users have at least visited the settings page. The presence of the control might be as important as using it. It signals to customers that if results aren't accurate enough, they can come back and adjust.
Several customers told us directly that they wanted the ability to add context before they would adopt the feature. Some have added extensive context; for others, simply knowing the control exists is peace of mind.
From feature to critical infrastructure
Something interesting happened that we didn't fully anticipate: GenAI descriptions became critical for quarterly audits.
When auditors ask how access decisions were made, reviewers can now point to complete, understandable information. The AI-generated descriptions give certifiers the context they need to make - and defend - their decisions. This isn't about replacing human judgment; it's about arming humans with the information they need to exercise judgment effectively.
One customer put it simply: "Once you have confidence in what is designed, you can trust the system."
That captures the journey. We didn't ask customers to trust AI blindly. We gave them controls, transparency, and the ability to shape outcomes. Over time, that confidence compounds.
What we learned
Building GenAI descriptions taught us several lessons about deploying AI in enterprise software.
Focus beats ambition. It's tempting to pursue the big vision - autonomous identity security, natural language governance, the "Star Trek computer." But trust is built incrementally. Start narrow, deliver value, then expand.
Controls create confidence. The ability to edit, review, and add custom context isn't overhead; it's the foundation of trust. Customers need to know they can intervene.
Listen, then iterate. Almost every major improvement - the review workflow, the ability to delegate to SMEs, custom context - came from customer conversations. The best roadmap is often hidden in customer feedback.
What's next
GenAI descriptions started with entitlements, but accurate descriptions are fundamental to everything that follows: intelligent automation, policy-driven decisioning, and reasoning about access at scale.
We're expanding to roles, applications, and metadata. We're making it possible to generate descriptions automatically as sources are onboarded. And we're using this descriptive layer to power our privilege discovery and classification efforts - helping AI models infer which entitlements carry elevated risk.
The foundation we built - narrow scope, human-in-the-loop, customer-tunable - will guide that expansion. Trust at scale isn't about removing humans from the loop. It's about giving them better tools to do their jobs. We're excited to see where this goes, and to keep earning the trust our customers have placed in us.
To learn more about GenAI descriptions for entitlements and SailPoint Identity Security Cloud, visit sailpoint.com.