AI Technology

Understanding the 4-Round Consensus Process

A deep dive into how agents analyze, challenge, refine, and finalize bid prices through structured debate.


James Rodriguez

Head of Product

Feb 20, 2026 · 6 min read

The heart of Celestix AI is the Full Cycle — a 4-round process where 32 AI agents collaborate and compete to arrive at the most accurate bid price. Understanding this process helps you interpret the results and build confidence in your bids.

Round 1: Reconnaissance (~22 minutes)

In the first round, every agent works independently. They don't see each other's analysis. The Cost Estimator pulls RSMeans data and applies local labor rates. The Geospatial agent adjusts for regional cost factors. The Temporal agent accounts for seasonal pricing fluctuations. The Risk Quantifier identifies and prices potential risks.

Each agent produces an independent estimate along with a confidence score and detailed methodology notes. At the end of Round 1, you typically see a wide spread — often 15-20% between the highest and lowest estimates. This spread is actually valuable information: it tells you where the uncertainty lies.
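To make the spread concrete, here is a minimal sketch of how you might measure it. The dollar figures are hypothetical, not from an actual Celestix run:

```python
from statistics import mean

def round1_spread(estimates: list[float]) -> float:
    """Spread between highest and lowest estimate, as a fraction of the mean."""
    return (max(estimates) - min(estimates)) / mean(estimates)

# Four independent Round 1 estimates (illustrative figures only)
estimates = [412_000, 455_000, 468_000, 490_000]
print(f"Round 1 spread: {round1_spread(estimates):.1%}")  # prints "Round 1 spread: 17.1%"
```

A spread like this sits right in the typical 15-20% band: wide enough to signal meaningful disagreement the later rounds need to resolve.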

Round 2: Cross-Review (~10 minutes)

Now agents can see each other's work. This is where the magic happens. The Cost Estimator sees that the Risk Quantifier added a $12,000 asbestos contingency and disagrees — the building was renovated in 2019 and asbestos was already remediated. A formal dispute is filed.

Meanwhile, the Temporal agent notices that the Schedule Analyst assumed a summer start date, but the solicitation specifies October. This changes the weather premium calculation. Another dispute filed.

Disputes aren't arguments — they're structured evidence presentations. Each dispute includes the challenging agent's reasoning, supporting data points, and a recommended adjustment.
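A dispute with those three parts could be modeled as a simple record. The fields and values below are illustrative, not Celestix's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Dispute:
    challenger: str                # agent filing the dispute
    target: str                    # agent whose estimate is challenged
    reasoning: str                 # the challenging agent's argument
    evidence: list[str]            # supporting data points
    recommended_adjustment: float  # suggested change to the estimate, in dollars

# The asbestos dispute from the example above, as structured data
asbestos_dispute = Dispute(
    challenger="Cost Estimator",
    target="Risk Quantifier",
    reasoning="Building was renovated in 2019; asbestos already remediated",
    evidence=["2019 renovation permit", "remediation completion report"],
    recommended_adjustment=-12_000,  # remove the asbestos contingency
)
```

Structuring disputes this way is what lets Round 3 process them mechanically: every challenge arrives with its reasoning and a concrete proposed adjustment attached.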

Round 3: Refinement (~5 minutes)

Agents that received disputes must now respond. They can accept the challenge (adjusting their estimate), reject it with counter-evidence, or partially accept. The system checks convergence: are all estimates within 5% of each other?

If convergence is achieved, we move to Round 4. If not, disputed agents go through another refinement cycle. In practice, 87% of contracts achieve convergence by the end of Round 3.
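The convergence test can be sketched in a few lines. One assumption here: "within 5% of each other" is interpreted as every estimate falling within 5% of the group mean, which may differ from how Celestix actually measures it:

```python
def has_converged(estimates: list[float], tolerance: float = 0.05) -> bool:
    """True if every estimate is within `tolerance` of the group mean."""
    m = sum(estimates) / len(estimates)
    return all(abs(e - m) / m <= tolerance for e in estimates)

# Tight cluster after refinement: converged, proceed to Round 4
print(has_converged([448_000, 455_000, 462_000]))  # prints "True"

# Still a wide spread: another refinement cycle needed
print(has_converged([412_000, 455_000, 490_000]))  # prints "False"
```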

Round 4: Finalization (~3 minutes)

The GOD Agent takes center stage. It reviews all remaining disputes, examines the convergence metrics, and makes final arbitration decisions. Then every agent votes: AGREE or DISAGREE with the final recommended price.

A healthy consensus looks like 28+ agents agreeing out of 32. If fewer than 24 agree, the system flags the contract for human review — this usually indicates genuinely ambiguous requirements or insufficient market data.
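The two thresholds above suggest a simple classification. The article doesn't name the band between 24 and 27 agreeing agents, so "acceptable consensus" below is a placeholder label, not official terminology:

```python
def consensus_status(agree_votes: int, total_agents: int = 32) -> str:
    """Classify the final vote using the thresholds described in the post."""
    if agree_votes >= 28:
        return "healthy consensus"
    if agree_votes >= 24:
        return "acceptable consensus"  # assumed label for the middle band
    return "flag for human review"

print(consensus_status(30))  # prints "healthy consensus"
print(consensus_status(20))  # prints "flag for human review"
```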

The final output includes: the recommended bid price, a confidence score (green/yellow/red), a complete price breakdown, the dispute log showing how the price was refined, and competitive positioning analysis.
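The five components of the final output map naturally onto a typed record. This is a sketch of the shape, not Celestix's actual API:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class FinalOutput:
    recommended_bid: float
    confidence: Literal["green", "yellow", "red"]
    price_breakdown: dict[str, float]      # line items summing to the bid
    dispute_log: list[str]                 # how the price was refined
    competitive_positioning: str

# Illustrative example
result = FinalOutput(
    recommended_bid=451_000,
    confidence="green",
    price_breakdown={"labor": 210_000, "materials": 185_000, "contingency": 56_000},
    dispute_log=["Asbestos contingency removed (-$12,000)",
                 "Weather premium adjusted for October start"],
    competitive_positioning="Estimated 2nd-lowest of expected bidders",
)
```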

Why This Matters

The 4-round process isn't just theater. Each round measurably reduces error. Our data shows a steady drop in mean absolute percentage error (MAPE): 14.2% after Round 1, 8.1% after Round 2, 5.3% after Round 3, and 4.7% after Round 4. The debate process cuts error by two-thirds compared to single-pass estimation.
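For readers who want to verify the arithmetic, here is the standard MAPE formula and the round-over-round reduction computed from the figures above:

```python
def mape(actuals: list[float], predictions: list[float]) -> float:
    """Mean absolute percentage error across a set of contracts."""
    return sum(abs(a - p) / a for a, p in zip(actuals, predictions)) / len(actuals)

# Round-by-round MAPE figures from the post
round_mape = {1: 0.142, 2: 0.081, 3: 0.053, 4: 0.047}

reduction = 1 - round_mape[4] / round_mape[1]
print(f"Error reduction, Round 1 to Round 4: {reduction:.0%}")  # prints "67%"
```

A drop from 14.2% to 4.7% is a 67% reduction, which is where the "two-thirds" figure comes from.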