Prioritizing design systems
A step-by-step approach to managing and prioritizing requests in your design system.

The team
Context
Since launching in 2018, Agoda’s Design System (ADS) has grown into a nearly 20-person cross-functional team supporting over 60 product teams and 1,600+ designers and engineers.
With strong leadership buy-in, adoption soared. Nearly 100% of consumer-facing teams now use ADS in their day-to-day work.
Challenge
As adoption increased, prioritization became increasingly difficult.
With rising stakeholder demands, we struggled to manage expectations, balance short-term asks with long-term system health, and ensure internal alignment.
We realized that our old ways of working did not scale.
I worked with my PM (Hanne) and Dev Lead (Lars) to uncover the root causes, leading to four clear focus areas:
1 - Prioritize better
2 - Track & coordinate
3 - Manage expectations
4 - Improve transparency
To tackle these, we introduced a new process, set up a prioritization framework, and streamlined how we communicate with stakeholders.
Process overview
Anyone at Agoda can submit a request. We review, prioritize, and groom it. Once groomed, it’s ready to be picked up by us or any contributing team.
New requests
Requests are managed on a separate Jira board. It tracks all requests by type, status, and priority—making it easy to filter by platform or progress and see what’s in the pipeline:

We support several types of requests:
Features – Improved or new components and patterns.
Visual assets – Icons, illustrations, flags, logos, etc.
Tokens – New design tokens or updates to existing ones.
Tooling – Docs, plugins, and tech tooling improvements.
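The board described above can be sketched as a small data model. This is an illustrative sketch only: the field names, statuses, and sample requests are assumptions, not Agoda's actual Jira schema.

```python
from dataclasses import dataclass

@dataclass
class Request:
    title: str
    type: str      # "feature" | "visual asset" | "token" | "tooling"
    status: str    # e.g. "new" | "prioritized" | "groomed" | "done"
    priority: int  # lower number = higher priority
    platform: str  # e.g. "web" | "ios" | "android"

# A few invented example requests on the board:
board = [
    Request("Add segmented control", "feature", "groomed", 1, "web"),
    Request("New payment icons", "visual asset", "new", 3, "ios"),
    Request("Spacing token update", "token", "prioritized", 2, "web"),
]

# Filtering by platform and sorting by priority, as the board view allows:
web_pipeline = sorted(
    (r for r in board if r.platform == "web"),
    key=lambda r: r.priority,
)
```

The point is simply that every request carries type, status, and priority, so any slice of the pipeline is one filter away.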
Anyone can create a new request by adding a ticket. The form adjusts per type to capture key details:

To save us time and avoid confusion we ask designers to provide design references in a Figma template for more complex requests.
The process may seem detailed, but it helps us understand what’s needed and ensure we’re solving the right problems.
Prioritization
To ensure transparent, impact-driven prioritization, we score each request across four weighted criteria:
Product area
Reusability
Alternative solutions
Effort
Each is rated from “high” to “won’t fix,” and the total determines the final priority—favoring scalable, long-term value improvements.
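The weighted scoring above can be sketched in a few lines. The weights and point values here are invented for illustration; the article does not publish the actual numbers.

```python
# Hypothetical mapping from rating to points; the real scale is not public.
RATING_POINTS = {"high": 3, "medium": 2, "low": 1, "won't fix": 0}

# Assumed criterion weights (must be tuned to the team's own priorities).
WEIGHTS = {
    "product_area": 0.3,
    "reusability": 0.3,
    "alternative_solutions": 0.2,
    "effort": 0.2,
}

def priority_score(ratings: dict[str, str]) -> float:
    """Combine per-criterion ratings into one weighted score."""
    return sum(
        WEIGHTS[criterion] * RATING_POINTS[rating]
        for criterion, rating in ratings.items()
    )

score = priority_score({
    "product_area": "high",
    "reusability": "high",
    "alternative_solutions": "low",
    "effort": "medium",
})  # 0.3*3 + 0.3*3 + 0.2*1 + 0.2*2 = 2.4
```

Requests are then ranked by this total, which naturally favors highly reusable work over one-off asks.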
We customized our framework to fit our needs. It shares similarities with established models like RICE, which Stuart Smith highlights in 3 ways we’ve energised our design system governance, which I recommend reading.
Support
Now that we had a framework, we needed to adapt our processes to support it.
@ads-on-call
To distribute the load fairly and give team members ample time for focused work, we took inspiration from DevOps and set up a rotating "on-call" squad made up of a designer, developer, and QA.
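A rotating squad like this boils down to simple round-robin scheduling. A minimal sketch, with invented names and an assumed weekly cadence:

```python
# Hypothetical rosters; each role rotates independently by week number.
designers = ["Ana", "Ben", "Cleo"]
developers = ["Dev", "Eli"]
qas = ["Fay", "Gus"]

def on_call(week: int) -> dict[str, str]:
    """Return the on-call squad for a given week (0-indexed)."""
    return {
        "designer": designers[week % len(designers)],
        "developer": developers[week % len(developers)],
        "qa": qas[week % len(qas)],
    }

squad = on_call(3)  # {'designer': 'Ana', 'developer': 'Eli', 'qa': 'Gus'}
```

Because each role rotates on its own cycle, the pairings shuffle over time, which spreads the load evenly without anyone scheduling by hand.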

Their task is to review requests, align with stakeholders, and answer questions.
Rituals
Our rituals have been adapted to ensure requests move forward:

Intake review: A new weekly ritual by our rotating on-call squad where we prioritize or send back requests.
Request grooming: A deep-dive where we break down the highest priority request into separate tickets in our regular board(s).
Announcement: At the end of each sprint we loop back to check which requests we have solved, then announce it in our public channels.

What worked
1. Shared ownership
A clearer structure empowered the whole team to lead rituals, take initiative, and grow — while freeing up more time for deep, focused work.
2. Constructive convos
Rather than debating every request, we now fine-tune the framework together. It defines the “what”, while real conversations shape the “how”.
3. Positive sentiments
Making our decisions more transparent built stronger relationships across the org. Sentiment surveys show clear improvement in trust and alignment.
4. Faster alignment
By mapping impact and effort, we aligned faster on the right priorities — helping teams understand the why behind every “yes”, “later”, or “no”.
What didn't
1. We can’t do it all
We get 5x more requests than we can handle. Prioritization helps, but we’re still exploring how to scale contributions without compromising system integrity.
2. Timeline tensions
As a team with many dependencies, we move slower than high-velocity teams. Aligning on timelines without becoming blockers remains challenging.
3. Requests ≠ Root Causes
Individual requests often don't paint the full picture of a problem, but getting to the root cause and fixing system-wide issues is still hard to justify.
4. Frequent releases
Frequent small updates can frustrate consuming teams, who must constantly update dependencies to stay current.
5. Scattered focus
Switching between siloed requests creates context-switching overhead, breaking flow and reducing the team’s productivity.
1 year later
The framework and processes have held up well, and we’ve continued to refine them through feedback and real-world use.
To stay focused, address root causes, and reduce the noise of constant updates, we now group work into larger, themed batches—like form fields or buttons. This gives consuming teams more predictability and helps us stay in flow.
Batching also unlocks better outcomes: more reusable solutions, leading to better cohesiveness. We prioritize these batches not just by request volume, but also by benchmarking insights and accessibility gaps.
Stay tuned for more insights on this!
Resources
This case study is a shorter version of the full talk, article, and community resources, which can be found here:
Mentions
I'm happy to see my article referenced across many well-respected community resources — as it turned out, we were not the only team faced with these challenges!