The Prioritization Stage
How submissions are scored, ranked, and filtered by the community
The Prioritise stage is step 3 in the Beyond MVG process. After identifying problems and recommendations in the Identify stage, community members score and rank submissions using a RICE-inspired scoring model to surface the highest-impact governance improvements.
Goal
Surface the most impactful governance improvements through community-driven scoring and ranking. Submissions compete on evidence and merit, with transparent scoring that anyone can verify.
- Score submissions with a transparent formula combining 10 weighted factors across 5 benefit and 5 cost dimensions
- Elevate important tags to connect related submissions across categories
Features
Additive Scoring (0-100)
Each submission scores 0-100 using a weighted average of 10 dimensions. The 5 benefit dimensions (Impact, Urgency, Community Demand, Roadmap Alignment, Time to Value) each contribute more when their value is higher. The 5 cost dimensions (Feasibility, Technical Risk, Cost, Legal Risk, Maintainability) each contribute more when their value is lower (low cost = good), with Feasibility as the exception: higher feasibility raises the score. All values at mid-range give ~50. MoSCoW (Must/Should/Could/Won't) is a separate categorical label only -- it does not affect the score.
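As a concrete illustration, the additive score could be computed like the minimal Python sketch below. The equal weighting, the 0-1 normalisation, and the per-dimension scales (mirroring the /10 and /5 examples used on this page) are assumptions for illustration; the platform's actual weights are not published here.

```python
# Minimal sketch of the additive 0-100 score, assuming equal weights and
# per-dimension scales matching the /10 and /5 examples on this page.
BENEFIT_SCALES = {"impact": 10, "urgency": 10, "community_demand": 10,
                  "roadmap_alignment": 5, "time_to_value": 5}
COST_SCALES = {"feasibility": 10, "technical_risk": 5, "cost": 10,
               "legal_risk": 5, "maintainability": 5}

def score(ratings):
    """ratings maps each dimension name to its community-average value."""
    parts = []
    for dim, scale in BENEFIT_SCALES.items():
        parts.append(ratings[dim] / scale)            # higher = better
    for dim, scale in COST_SCALES.items():
        v = ratings[dim] / scale
        # Costs are inverted (low cost = good); Feasibility is the
        # exception -- higher feasibility raises the score.
        parts.append(v if dim == "feasibility" else 1 - v)
    return 100 * sum(parts) / len(parts)
```

With every dimension at mid-scale this returns 50; the exact endpoints depend on how each scale's minimum is normalised, which is also an assumption here.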
MoSCoW Tier Labels
Every submission carries a categorical priority tier: Must, Should, Could, or Won't. Tiers are set by the submitter and refined by community vote. They are used for filtering and communication -- they do not affect the numeric score.
Tag Elevation System
Community members can mark tags as important, elevating them for visibility. Tags help connect related submissions across categories.
How Scoring Works
Benefit Dimensions
Higher value = higher score. Rating Impact 9/10 adds more than 3/10.
Cost Dimensions
Lower value = higher score. Technical Risk 1/5 scores better than 5/5. Feasibility is the exception: higher feasibility (easier to build) raises the score.
Each dimension's displayed value is the average of all community scores for that dimension. You can skip any dimension you don't have enough information to rate -- only dimensions you score are included in your average.
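The averaging-with-skips behaviour can be sketched in a few lines. Representing a skipped rating as `None` is an illustrative choice, not the platform's actual data model.

```python
from statistics import mean

def displayed_value(ratings):
    """Average the community ratings for one dimension.
    Each entry is one rater's value; None means that rater skipped,
    so they are excluded rather than counted as zero."""
    given = [r for r in ratings if r is not None]
    return mean(given) if given else None
```

For example, ratings of 8 and 6 with one skip display as 7, not as the 4.7 you'd get by counting the skip as zero.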
Score Reference (0-100)
- Score = 100: every benefit at maximum, every cost at minimum (ideal).
- Score ≈ 50: all dimensions near mid-range (expected starting point for a new submission).
- Score near 0: every benefit at minimum, every cost at maximum (worst case).
Benefit Dimensions (Higher = Better)
Impact
How significant is the effect on governance?
Consider how many stakeholders, DReps, or SPOs are affected. 10 = fundamentally changes governance for the entire ecosystem; 1 = affects a niche group.
Urgency
How time-sensitive is this?
Is there a hard deadline (e.g. a protocol upgrade)? Will the problem worsen if delayed? 10 = immediate action required; 1 = can wait indefinitely.
Community Demand
How many people want this addressed?
Based on upvotes, forum discussions, and how frequently the topic has been raised. Higher demand signals broader community alignment.
Roadmap Alignment
Does it fit current Cardano roadmap priorities?
Items aligned with active governance milestones or CIP processes score higher. 5 = directly on the roadmap; 1 = unrelated to current priorities.
Time to Value
How quickly can benefits be realised?
5 = benefits appear within weeks of implementation; 1 = years before any measurable outcome.
Cost Dimensions (Lower = Better Score)
Feasibility
Can this be built with existing tools and knowledge?
Higher feasibility = easier to build = higher score. 10 = proven approaches exist, straightforward to deliver. 1 = no clear path, approach is unproven.
Technical Risk
How complex or uncertain is the technical approach?
Consider dependencies, integration complexity, and unknowns. 5 = highly uncertain; 1 = straightforward and well-understood.
Cost Estimate
What financial or infrastructure resources are required?
Includes developer time, infrastructure, third-party services, and ongoing costs. 10 = very expensive; 1 = minimal resources needed.
Legal Risk
Are there regulatory or compliance concerns?
5 = significant legal hurdles (jurisdictional, regulatory); 1 = no legal considerations.
Maintainability
How much ongoing effort to keep it running?
5 = requires constant updates and dedicated maintenance; 1 = set-and-forget.
MoSCoW Tier System
MoSCoW is a standard prioritization method used alongside the numeric score. Each submission carries one of four tiers set by the submitter and refined through community voting. Tiers are categorical labels only -- they do not change the 0–100 score. Two submissions with identical dimension ratings always score the same regardless of their tier.
Must
A critical governance gap that must be addressed. Blocking or high-stakes -- left unresolved it causes significant harm or prevents progress.
Example: "Voting power is not transparent to delegators."
Should
An important improvement with clear value. Not critical to the current cycle but should be prioritized soon -- workarounds exist but are suboptimal.
Example: "DRep profiles need richer metadata for informed delegation."
Could
A nice-to-have enhancement. Useful and desirable, but lower urgency. Include if capacity allows without impacting higher-tier items.
Example: "On-chain discussion threads linked to governance actions."
Won't
Explicitly out of scope for this cycle. Not a rejection -- it documents a conscious decision to defer so the community can focus on higher-priority items.
Example: "Full protocol-level voting reform -- out of scope for now."
How tiers are used
- Filtering: the Prioritise dashboard lets you filter by tier to focus on Must or Should items first.
- Community signalling: community members can vote to upgrade or downgrade a tier if they disagree with the initial classification.
- No score impact: the numeric score is determined solely by the 10 scoring dimensions. A "Won't" item with strong dimension ratings will still rank high numerically.
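The points above can be sketched as a small filter-and-rank helper, assuming illustrative submission records with `tier` and `score` fields (hypothetical names): the tier only narrows the view, while ordering comes from the numeric score alone.

```python
def filter_and_rank(submissions, tier=None):
    """Optionally filter by MoSCoW tier, then rank by numeric score.
    The tier never alters the score -- it only narrows the view."""
    view = [s for s in submissions if tier is None or s["tier"] == tier]
    return sorted(view, key=lambda s: s["score"], reverse=True)
```

Note that with no tier filter, a high-scoring "Won't" item still ranks above lower-scoring "Must" items, exactly as described above.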
Ready to Start?
Choose where to go next: