Zapier Just Published an AI Fluency Rubric That Looks a Lot Like What We've Been Building

Zapier released Version 2 of their AI Fluency Rubric in March 2026. It defines what AI competency looks like across every department, broken into four levels: Unacceptable, Capable, Adoptive, and Transformative. It evaluates mindset, strategy, building, and accountability. Each department gets its own criteria.
It's good work. It's also deeply aligned with the frameworks we've been using at Gadoci Consulting for over a year.
The Same Structure, Arrived at Independently
Zapier's top three levels map closely to the L1/L2/L3 framework we use with every client; their baseline "Unacceptable" level sits below the scale. Their "Capable" level describes someone solving problems with existing AI tools, using repeatable processes, and improving individual productivity. That's our L1. "Adoptive" describes workflow-based solutions, integration across systems, and teaching others. That's L2. "Transformative" is orchestration, custom builds, and re-engineering how work happens. That's L3.
We use this framework to scope engagements, design training progressions, and give organizations a concrete picture of where their teams sit and where they need to go. Seeing Zapier land on the same structure reinforces something we've believed from the start: AI maturity isn't about tool knowledge. It's about how deeply AI is woven into real work, and that progression looks similar regardless of who maps it.
The People Who Make It Happen
One of the strongest ideas in Zapier's rubric is that fluency shows up differently by department and that managers are accountable for driving adoption across their teams. We see the same thing.
In our work, we call the people who drive this internally AI Operators: existing employees who are already experimenting. They're curious, high-agency systems thinkers who don't need to be technical. About 5 to 10 percent of people at a company raise their hand for this role. They identify AI opportunities in their department, evangelize adoption, share wins across teams, and over time start implementing solutions directly. Zapier's "Adoptive" and "Transformative" columns describe exactly this kind of person.
AI Operators produce Solution Briefs: standardized artifacts that capture what the current workflow looks like, where the friction is, who's involved, and what improvement looks like. That's how fluency turns into action. It's one thing to assess that someone operates at an "Adoptive" level. It's another to give them a structured way to channel that fluency into real outcomes for the business.
Why This Convergence Matters
The frameworks we use at Gadoci Consulting were formalized about a year ago, but they grew out of four years of hands-on AI operations work that started when ChatGPT launched in late 2022. The patterns we saw then are the same patterns Zapier is now codifying: fluency as a spectrum, workflow integration as the real measure of maturity, and the need for named roles to drive adoption internally.
When a company like Zapier publishes a detailed fluency rubric and it aligns this closely with frameworks we've been delivering to clients across industries, it signals that the market is coalescing around a shared understanding of what AI maturity actually looks like. The conversation is moving past "are you using AI?" and into "how deeply, how repeatably, and with what accountability?"
That's a meaningful shift. Most organizations we work with can tell you they want their people to "use AI more." Zapier's rubric, and frameworks like our L1/L2/L3 model, replace that vague aspiration with something specific enough to measure. That specificity is what makes real progress possible.
If your organization hasn't mapped out what AI fluency looks like for each of your teams, at each level of maturity, in terms concrete enough to assess and develop, Zapier just gave you a strong reference point. And if you want help building that picture for your own teams, grounded in your actual workflows and your people, that's the work we do every day.