Decision-Criteria Elicitation Before Solutioning

By Grais Research Team, Communication Science

A communication protocol for defining what will count as a good decision before you present options, so conversations produce clearer commitments and fewer reversals.

Teams often think they have a persuasion problem when they actually have a criteria problem.

One team asks for "the best option" and gets a strong recommendation in ten minutes. Two weeks later, finance objects to the cost, operations objects to the rollout risk, and the original sponsor says the recommendation solved the wrong problem. The advice was not weak. The criteria were missing.

In early conversations, people frequently request a recommendation before they have agreed on what a good outcome means. They ask for a solution, but the decision frame is incomplete: priorities are unstated, tradeoffs are hidden, and constraints are only partially visible.

When we answer too quickly, we create temporary momentum with fragile commitment. The conversation looks efficient in the moment, then slows down later through rework, second-guessing, and reversals.

A stronger sequence is to define decision criteria first and propose options second.

This article introduces a practical protocol for decision-criteria elicitation before solutioning. It draws on evidence from shared decision-making and motivational interviewing, together with guidance on plain-language communication, risk communication, and lifecycle governance for AI-assisted workflows [1] [2] [3] [4] [5] [6].

Quick Takeaways

  • Recommendation quality depends on criteria quality, not only answer quality.
  • A usable first exchange names decision purpose, hard constraints, and acceptable risk.
  • Question-prompt methods improve participation when stakes and uncertainty are high.
  • Plain-language compression reduces interpretation drift across stakeholders.
  • Protocol discipline prevents fast-but-fragile commitments.

The Core Mechanism: Why Criteria Come Before Advice

Most communication failures in complex decisions are not caused by missing options. They are caused by missing filters.

If two people evaluate the same option using different criteria, disagreement is guaranteed even when both are competent and aligned in good faith. One side optimizes speed, the other side optimizes reliability. One side protects short-term budget, the other side protects downstream risk. Without explicit criteria, each recommendation sounds reasonable while still failing the actual decision.

The shared decision-making literature captures this directly: higher-quality decisions emerge when the conversation makes the choice explicit, compares alternatives, and surfaces preferences before commitment [1]. In practice, this means advice should be downstream of criteria elicitation, not a substitute for it.

Motivational interviewing findings reinforce the same operational lesson. Outcomes improve when communication elicits a person’s own reasons and priorities rather than imposing an external script [2]. The process matters because autonomy and clarity interact: people follow through more consistently on decisions they can explain in their own terms.

So the sequence is straightforward:

  1. Define what success requires.
  2. Define what failure must avoid.
  3. Only then compare solution paths.

This article sits between First-Turn Intent Clarification Protocol and Multi-Stakeholder Decision Clarity Framework. Clarification gets the right problem on the table. Criteria elicitation makes the evaluation frame explicit enough that several people can compare options honestly.

What Research Suggests About Better Early Exchanges

Three evidence-backed behaviors are especially actionable:

  1. Structured prompting increases useful participation. Question prompt list interventions tend to raise question-asking and communication quality in high-stakes contexts [3]. In practice, that means criteria should be elicited on purpose, not left to emerge accidentally.
  2. Plain-language design improves comprehension under pressure. Public-health communication guidance emphasizes organizing around what people need to know and do, in language they can act on quickly [4]. That matters because decision criteria fail when different stakeholders attach different meanings to the same abstract term.
  3. Decision-enabling communication is an operational capability. WHO’s risk communication framing treats timely, understandable, context-fit information as essential to protective action [5]. NIST extends the same logic to AI-assisted workflows: systems should make risk, governance boundaries, and review triggers visible instead of hiding them under fluent output [6].

The synthesis: robust early conversations are not naturally emergent. They are designed interactions with clear information structure. A decision-quality conversation should make three things visible before solutioning starts: what success looks like, what cannot be violated, and which downside is unacceptable enough to change the recommendation.
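The three pre-solution elements can be made concrete as a small record that gates the move to options. This is a hypothetical sketch, not a prescribed tool; the class and field names (`DecisionFrame`, `success_signals`, and so on) are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch: the three things that must be visible before
# solutioning starts, captured as one explicit record.
@dataclass
class DecisionFrame:
    decision: str                      # the decision in one sentence
    success_signals: list[str]         # observable indicators of success
    hard_constraints: list[str]        # boundaries that cannot be violated
    unacceptable_downsides: list[str]  # outcomes that would change the recommendation

    def is_ready_for_options(self) -> bool:
        # Option generation begins only when all three elements are populated.
        return bool(self.success_signals
                    and self.hard_constraints
                    and self.unacceptable_downsides)

frame = DecisionFrame(
    decision="Choose a rollout approach for the new intake workflow",
    success_signals=["fewer stalled decisions", "fewer rollback requests"],
    hard_constraints=["launch before Q3", "no new headcount"],
    unacceptable_downsides=["violating the audit policy"],
)
print(frame.is_ready_for_options())  # True
```

The point of the record is not automation; it is that an empty field is visible, whereas an unstated criterion is not.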

The Decision-Criteria Elicitation Protocol

Use this protocol when a request is broad, ambiguous, or emotionally charged.

Step 1: Frame the decision object

Name the decision in one sentence before discussing methods.

Example:

"Before I recommend an approach, let’s define the decision this recommendation must support."

This shifts the exchange from answer-hunting to decision design.

Step 2: Surface non-negotiables

Ask for hard constraints first.

  • Time boundary: what deadline cannot move?
  • Resource boundary: what budget or capacity is fixed?
  • Policy boundary: what rules or obligations cannot be violated?

Non-negotiables determine feasibility. If they stay implicit, solution quality is mostly luck.

Step 3: Define success criteria as observable signals

Replace vague goals with measurable indicators.

Weak:

"We need this to work better."

Stronger:

"We need fewer stalled decisions, faster first commitment, and fewer rollback requests."

Observable criteria reduce interpretive conflict later.

Step 4: Elicit acceptable risk and tradeoff tolerance

Every decision trades one value against another. Make that explicit.

Ask:

  • Which matters more right now: speed, certainty, or reversibility?
  • What downside is unacceptable?
  • What downside is tolerable if upside is strong?

This step prevents false consensus where people appear aligned but evaluate outcomes differently.

Step 5: Generate bounded options

Only after the criteria are explicit should you generate two or three options and map each one against them.

For each option, state:

  • expected benefit,
  • major risk,
  • required effort,
  • best-fit conditions.

This matches the structure described in shared decision models: alternatives become comparable when evaluation criteria are shared [1].
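One minimal way to make that comparison mechanical is to express tradeoff tolerance (Step 4) as weights and score each bounded option against them. The option names, profiles, and weights below are illustrative assumptions, not a recommended scoring model.

```python
# Hypothetical sketch: comparing bounded options against shared criteria.
# Each option carries a rough 1-3 profile for the criteria named in Step 5.
options = {
    "pilot narrow workflow": {"benefit": 2, "risk": 1, "effort": 1},
    "full platform rollout": {"benefit": 3, "risk": 3, "effort": 3},
}

# Tradeoff tolerance from Step 4 as weights: risk and effort count against
# an option, benefit counts for it. Magnitudes encode what matters more now.
weights = {"benefit": 2, "risk": -3, "effort": -1}

def score(profile: dict) -> int:
    return sum(weights[criterion] * value for criterion, value in profile.items())

ranked = sorted(options, key=lambda name: score(options[name]), reverse=True)
print(ranked[0])  # "pilot narrow workflow"
```

The numbers are crude on purpose: the value is that disagreement shows up as a dispute about a specific weight, not as two incompatible recommendations.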

Step 6: Force restatement before close

Ask the counterpart to restate the selected option and rationale in their own words.

If restatement is fuzzy, criteria are still unstable. Do not move to execution yet.

If the team sounds aligned but keeps paraphrasing the criteria differently, route through Restatement Checkpoint Before Action before locking the recommendation.

Step 7: Set a review trigger

End with one progress signal and one pivot trigger.

  • Progress signal: what indicates the decision is working?
  • Pivot trigger: what evidence requires changing course?
  • Review point: when do we check?

This makes the decision adaptive without making it unstable.
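The review trigger is easy to agree on and easy to forget, so it helps to record it alongside the decision. A minimal sketch, assuming illustrative field names and a four-week review window:

```python
from datetime import date, timedelta

# Hypothetical sketch: the Step 7 outputs recorded as data so the check
# actually happens. Field names and the window are illustrative assumptions.
review_plan = {
    "progress_signal": "rollback requests drop below 2 per sprint",
    "pivot_trigger": "two consecutive sprints with rising escalations",
    "review_point": date.today() + timedelta(weeks=4),
}

def needs_review(today: date) -> bool:
    # The decision is revisited at the review point, not only on failure.
    return today >= review_plan["review_point"]

print(needs_review(date.today()))  # False until the review point arrives
```

Pairing a pivot trigger with a dated review point is what keeps the decision adaptive rather than unstable: course changes require named evidence, not mood.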

Common Edge Cases and How to Handle Them

Edge Case A: "Just tell me what to do"

When someone asks for direct advice immediately, treat it as urgency plus uncertainty, not resistance.

Use a compact criteria check:

  1. one desired outcome,
  2. one hard constraint,
  3. one unacceptable downside.

Then provide options. This preserves speed while avoiding blind recommendations.

Edge Case B: Multi-stakeholder conversations

Different stakeholders often optimize different objectives. One wants predictability, another wants innovation, another wants low short-term cost.

Use a criteria board with three columns:

  • shared criteria,
  • stakeholder-specific criteria,
  • conflict criteria.

Then evaluate options against all three columns. This keeps disagreement in the open where it can be negotiated.
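The three columns can be derived mechanically from what each stakeholder names. In this sketch, "conflict" is simplified to criteria held by some stakeholders but not all, which is where negotiation is usually needed; the stakeholder names and criteria are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical sketch: building the three-column criteria board from
# what each stakeholder says they are optimizing for.
stakeholder_criteria = {
    "ops":     {"predictability", "low rollout risk"},
    "product": {"innovation", "predictability"},
    "finance": {"predictability", "low rollout risk", "low short-term cost"},
}

all_criteria = set().union(*stakeholder_criteria.values())
shared = set.intersection(*stakeholder_criteria.values())  # held by everyone

counts = defaultdict(int)
for criteria in stakeholder_criteria.values():
    for c in criteria:
        counts[c] += 1

specific = {c for c in all_criteria if counts[c] == 1}  # held by exactly one
conflict = all_criteria - shared - specific             # held by some, not all

board = {"shared": shared, "specific": specific, "conflict": conflict}
print(board["conflict"])  # the criteria that must be negotiated in the open
```

Running this on the example puts "predictability" in the shared column and "low rollout risk" in the conflict column, which is exactly the disagreement worth surfacing before options are compared.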

Edge Case C: High emotion, low articulation

When stress is high, language precision drops.

Start with acknowledgment, then simplify the prompt structure:

"Let’s make this manageable. What outcome matters most today, and what can we not afford to get wrong?"

Plain-language sequencing matters here because cognitive load is already elevated [4].

Edge Case D: Late-arriving vetoes or authority ambiguity

Sometimes the criteria appear settled until a new stakeholder introduces a veto late in the process.

Treat that as a criteria-governance problem, not simple interpersonal friction. Ask:

  • "Who can block this even if they are not in today’s discussion?"
  • "Which criterion is advisory, and which one is approval-gating?"

If the answer is unclear, do not present a final recommendation as though the decision surface is complete.

Failure Modes And Limits

Watch for these recurrent breakdowns:

  • Criteria inflation: listing too many criteria makes comparison impossible.
  • Criteria drift: criteria shift mid-conversation without explicit acknowledgment.
  • Surrogate certainty: strong tone used to hide weak decision framing.
  • Premature optimization: detailed tactics before feasibility checks.

Protocol quality is not "more questions." It is asking the minimum set of high-yield questions in the right order.

There is also a limit to this framework. It helps when the real problem is unclear decision architecture. It does not solve obvious no-fit situations, absent authority, or cases where one blocker already dominates the conversation. If the conversation is actually about whether the opportunity should continue at all, No-Fit Check Before Persuasion is often the cleaner move. If the issue is conflicting owners rather than missing criteria, Multi-Stakeholder Decision Clarity Framework is the better route.

Implementation Example

A team receives the request: "We need to improve conversation outcomes this quarter."

Fast but weak response:

"Use this script and enforce it across all channels."

Protocol-based response:

"Before selecting a script, which outcome has priority this quarter: faster commitments, fewer escalations, or higher completion reliability? What constraint is fixed? Which failure is unacceptable?"

The criteria discussion reveals the real objective is fewer reversals after initial agreement, not simply faster first replies. That changes the intervention from generic scripting to decision-checkpoint design, including explicit tradeoff statements and restatement checks.

The visible difference is not rhetorical style. It is decision architecture.

A second example shows up in internal tooling decisions. A leader asks whether the team should automate a workflow with AI this quarter. If the group jumps straight to vendor comparison, it will likely over-index on feature breadth. A criteria-first exchange surfaces the actual filters: regulated steps still need review, reversibility matters more than raw automation rate, and the first success signal is lower handoff failure rather than full task replacement. Those criteria often change the recommendation from "pick the most capable platform" to "pilot the narrowest workflow that can be reviewed safely."


References

  1. Makoul G, Clayman ML. An integrative model of shared decision making in medical encounters.
  2. Rubak S, Sandbaek A, Lauritzen T, Christensen B. Motivational interviewing: a systematic review and meta-analysis.
  3. Wang SJ, Hu WY, Chang YC. Question prompt list intervention for patients with advanced cancer: a systematic review and meta-analysis.
  4. Centers for Disease Control and Prevention. Plain Language Materials & Resources.
  5. World Health Organization. Risk communication and community engagement.
  6. National Institute of Standards and Technology. AI Risk Management Framework.
