Scoring Methodology

How AIM calculates, curates, and audits RAO dimension scores for technology recommendations.

AIM uses RAO (Ranking and Optimization) to score technology candidates across six dimensions. This page documents what each dimension means, how scores are determined, and how the scoring process can be audited.


1. The Six RAO Dimensions

Every technology candidate is scored on these six dimensions. Scores range from 0 to 100.

Security

Measures the technology's security posture, including vulnerability exposure, lifecycle status, and data protection capabilities.

High Score (70-100)

Strong security controls, actively maintained, regular patches, good track record

Low Score (0-30)

Known vulnerabilities, EOL/unsupported, limited security features

Compliance

Measures regulatory readiness for frameworks like HIPAA, CJIS, PCI-DSS, FedRAMP, GDPR, and SOC 2.

High Score (70-100)

Pre-certified, audit-ready, comprehensive compliance documentation

Low Score (0-30)

No compliance certifications, limited audit support, regulatory gaps

Cost

Measures total cost of ownership including licensing, infrastructure, operational expenses, and — when Sustainment Planning is enabled — annual sustainment costs and staffing gap estimates.

High Score (70-100)

Cost-effective, predictable pricing, good value for capabilities

Low Score (0-30)

High licensing costs, expensive infrastructure, unpredictable pricing

Maturity

Measures market adoption, stability, ecosystem support, and track record in production environments.

High Score (70-100)

Widely adopted, stable releases, strong ecosystem, proven at scale

Low Score (0-30)

Early-stage, limited adoption, frequent breaking changes, small community

Vendor Lock-In Risk

⚠️ INVERTED: Lower is better

Measures how difficult it would be to migrate away from this technology. Unlike other dimensions, a LOW score is desirable.

Low Score (0-30) ✓

Open standards, portable data, easy migration, no proprietary lock-in

High Score (70-100) ⚠️

Proprietary formats, difficult exit, vendor-specific APIs, high switching costs

Example: A score of 93/100 means this technology has very high vendor lock-in risk. Consider alternatives with lower lock-in if flexibility is important.

Operational Complexity

⚠️ INVERTED: Lower is better

Measures how difficult the technology is to operate, maintain, and support. Unlike other dimensions, a LOW score is desirable.

Low Score (0-30) ✓

Easy to operate, minimal specialized skills, good automation, low overhead

High Score (70-100) ⚠️

Requires specialized skills, complex configuration, high maintenance burden


2. How Scores Are Calculated

The RAO Scoring Formula

The final RAO score is a proprietary weighted composite of the six dimension scores, calibrated for your assessment context. Two of the six dimensions, Vendor Lock-In Risk and Operational Complexity, are treated as inverse contributors: lower values indicate better outcomes, so they are inverted before being added to the composite.

Proprietary Methodology: The specific weighting algorithm, dimension inversion mechanics, and calibration constants are proprietary to The Freedom Project and protected as trade secrets. The scoring is fully deterministic and auditable — what drives your scores is always explained in your reports — but the internal formula is not published.

Weights are dynamically adjusted based on your assessment's risk tolerance, budget sensitivity, regulatory requirements, staffing capacity, and inferred modernization focus archetype. When Sustainment Planning or AI Planning is enabled, those signals also influence the weights of the Cost and Compliance dimensions. Dimension weights are bounded to prevent extreme skew toward any single factor.
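The exact formula is proprietary, but its general shape (a weighted composite in which the two inverted dimensions are reflected first) can be illustrated. The sketch below is a hypothetical stand-in, assuming a simple 100 minus score reflection and weights normalized to sum to 1; it is not AIM's actual algorithm.

```typescript
// Illustrative only: a generic weighted composite with inverted dimensions.
// The real weights, inversion mechanics, and calibration are proprietary;
// all names below are hypothetical.

type DimensionScores = {
  security: number;
  compliance: number;
  cost: number;
  maturity: number;
  vendorLockInRisk: number;      // inverted: lower is better
  operationalComplexity: number; // inverted: lower is better
};

const INVERTED = new Set(["vendorLockInRisk", "operationalComplexity"]);

function compositeScore(
  scores: DimensionScores,
  weights: Record<keyof DimensionScores, number>, // assumed to sum to 1
): number {
  let total = 0;
  for (const [dim, raw] of Object.entries(scores) as [keyof DimensionScores, number][]) {
    // Reflect inverted dimensions so "better" always contributes positively.
    const effective = INVERTED.has(dim) ? 100 - raw : raw;
    total += weights[dim] * effective;
  }
  return Math.round(total);
}
```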

Tier Assignment

Based on the final RAO score, candidates are assigned to tiers:

  • Must Have – RAO score ≥ 80
  • Strong Candidate – 60–79
  • Consider – 40–59
  • Avoid – < 40
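A minimal sketch of the tier mapping, using the thresholds from the list above (the function name is hypothetical):

```typescript
type Tier = "Must Have" | "Strong Candidate" | "Consider" | "Avoid";

// Hypothetical helper implementing the documented tier thresholds.
function assignTier(raoScore: number): Tier {
  if (raoScore >= 80) return "Must Have";
  if (raoScore >= 60) return "Strong Candidate";
  if (raoScore >= 40) return "Consider";
  return "Avoid";
}
```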

3. Risk Profile Evaluation

In addition to RAO dimension scores, each technology recommendation includes a Risk Profile — a breakdown of five risk dimensions that help you understand implementation and operational risks at a glance.

Risk Level Color Coding

Risk levels are displayed as color-coded pills for quick visual scanning:

Low Risk

Minimal concerns. Safe to proceed with standard practices.

Medium Risk

Some considerations. Plan for mitigation strategies.

High Risk

Significant risk. Requires careful review and mitigation.

The Five Risk Dimensions

Security

Security risk associated with adopting this technology.

Low: Minimal security concerns. Well-established controls.
Medium: Standard security practices apply.
High: Requires additional security review.

Tech Debt

Technical debt risk from implementation and maintenance.

Low: Modern architecture. Easy to maintain.
Medium: Standard maintenance requirements.
High: May require future refactoring.

Cost Impact

Budget and cost predictability risk.

Low: Predictable costs. Clear pricing model.
Medium: Some cost variability.
High: Variable or high TCO risk.

GRC (Governance, Risk & Compliance)

Governance, Risk, and Compliance implementation burden.

Low: Pre-certified and audit-ready.
Medium: Standard audit preparation.
High: Significant compliance effort.

Change Complexity

Implementation and organizational change complexity.

Low: Straightforward implementation.
Medium: Moderate change management.
High: Complex rollout; significant training and change management required.

How Risk Profile is Determined

Risk profile ratings are derived from the underlying RAO dimension scores and assessment context; a sketch of one such mapping follows the list:

  • Security Risk maps inversely to the Security RAO score
  • Tech Debt Risk considers the Maturity score and technology lifecycle
  • Cost Impact Risk factors in the Cost score and pricing predictability
  • GRC Risk maps inversely to the Compliance RAO score
  • Change Complexity Risk maps directly to the Operational Complexity RAO score (which is already inverted, so higher means more complex)
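As an illustration, here is a minimal sketch of the inverse Security mapping. The band edges mirror the 70/30 score bands documented in Section 1; AIM's actual cut points may differ.

```typescript
type RiskLevel = "Low" | "Medium" | "High";

// Hypothetical inverse mapping: a high Security RAO score implies low
// security risk. The 70/30 band edges are assumptions borrowed from the
// High/Low score bands above, not AIM's published cut points.
function securityRisk(securityScore: number): RiskLevel {
  if (securityScore >= 70) return "Low";
  if (securityScore >= 30) return "Medium";
  return "High";
}
```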

4. Score Curation & Audit Defensibility

Designed for Auditability & Evaluation Scrutiny

AIM's scoring methodology is designed to withstand rigorous audit review. Every score is traceable, reproducible, and based on documented criteria — not vendor influence or algorithmic opacity.

Data Sources

AIM uses a combination of automated data collection and curated assessments:

Automated Sources

  • Cloud Pricing APIs – Pricing from AWS, Azure, GCP, and OCI price list APIs (automated monthly refresh)
  • Compliance Flags – FedRAMP authorization status, GovCloud availability, and HIPAA BAA support tracked per product

Curated Assessments

  • Dimension Scores – Security, compliance, cost, maturity, vendor lock-in, and operational complexity scores curated using documented rubrics
  • Lifecycle Classification – Products categorized as evergreen (SaaS/Cloud), versioned (on-prem software), hardware, or community (open source)

Reference Materials

Curated scores reference the following when available:

  • Vendor security documentation
  • Published compliance certifications
  • Official pricing pages
  • Vendor lifecycle/support policies
  • FedRAMP Marketplace listings
  • Product release histories

Scoring Cadence: Dimension scores are reviewed quarterly and updated when significant changes occur (certification changes, EOL announcements, major security events). Each score includes a source citation and last-reviewed timestamp for audit purposes.

Curation Process

1. Initial Assessment – Technology is evaluated against each dimension using documented criteria

2. Source Verification – Scores are validated against multiple independent sources

3. Catalog Entry – Scores are recorded in the versioned technology catalog with metadata

4. Periodic Review – Scores are reviewed quarterly and updated sooner when significant new information becomes available

Update Cadence

  • Dimension Scores – Reviewed quarterly; updated when significant product changes occur
  • Compliance Flags – Updated when certifications change or new attestations are published
  • Cloud Pricing – Automated monthly refresh via AWS, Azure, GCP, and OCI price APIs

Audit Trail

Every recommendation and report includes audit metadata:

  • Input Hash – Cryptographic hash of assessment inputs
  • Output Hash – Cryptographic hash of scored outputs
  • Methodology Version – Version of scoring algorithm used
  • Catalog Version – Version of technology catalog at time of scoring
  • Timestamp – When the recommendation was generated
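A minimal sketch of how such metadata could be assembled, assuming SHA-256 over a JSON serialization (the field names, hash choice, and serialization are assumptions; the source does not specify them):

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of the audit metadata attached to a report.
interface AuditMetadata {
  inputHash: string;
  outputHash: string;
  methodologyVersion: string;
  catalogVersion: string;
  generatedAt: string;
}

// Hash a value via JSON: identical inputs yield identical digests, and any
// change to the inputs changes the hash. Assumes the serializer produces a
// deterministic key order.
const sha256 = (value: unknown): string =>
  createHash("sha256").update(JSON.stringify(value)).digest("hex");

function buildAuditMetadata(inputs: unknown, outputs: unknown): AuditMetadata {
  return {
    inputHash: sha256(inputs),
    outputHash: sha256(outputs),
    methodologyVersion: "x.y.z",   // placeholder version strings
    catalogVersion: "catalog-rev",
    generatedAt: new Date().toISOString(),
  };
}
```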

5. What AIM Does NOT Do

  • No vendor payments – AIM does not accept sponsorships, referral fees, or placement payments from vendors
  • No AI score manipulation – AI is used for narrative assistance only; it cannot change scores
  • No black-box algorithms – All scoring logic is deterministic and documented
  • No arbitrary weighting – Weights are derived from your assessment constraints, not hidden preferences

6. Intelligence Layers Beyond RAO

In addition to the core RAO scoring engine, AIM runs six deterministic intelligence layers that enrich every assessment with additional context. These layers do not replace RAO scores — they supplement them with purpose-built analysis. The Modernization Focus Classification layer also feeds back directly into RAO scoring, adjusting weights and category priorities based on the identified archetype.

AI Readiness Score & Archetype Classification

When AI is in scope for a project, AIM calculates a separate AI Readiness Score (0–100) based on three signals from the assessment: data access governance posture, operational decision authority appetite, and data structure maturity. This score is tier-classified as:

  • Not Ready – Tier 0: prerequisites must be addressed
  • Foundational – Tier 1: basic readiness established
  • Operational – Tier 2: ready for operational AI
  • Advanced – Tier 3: ready for agentic AI

From the readiness tier, AIM deterministically classifies a primary and optional secondary AI implementation archetype (e.g., Document Intelligence, Workflow Automation AI, Predictive Analytics, Agentic Orchestration) and recommends a deployment model (on-prem, private cloud, hybrid, or managed). AI tool recommendations in the catalog are filtered and ranked against this classification.

Compliance filtering: AI tool recommendations are additionally filtered by the assessment's regulatory scopes (CJIS, FedRAMP, HIPAA, PCI-DSS) to ensure only compliant tools surface for regulated environments.
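A minimal sketch of that filtering step, with hypothetical field names (the catalog schema is not published):

```typescript
// Hypothetical catalog entry; field names are assumptions.
interface AiTool {
  name: string;
  complianceFlags: string[]; // e.g., ["FedRAMP", "HIPAA"]
}

// Keep only tools carrying every regulatory scope the assessment requires.
function filterByCompliance(tools: AiTool[], requiredScopes: string[]): AiTool[] {
  return tools.filter((tool) =>
    requiredScopes.every((scope) => tool.complianceFlags.includes(scope)),
  );
}
```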

Modernization Focus Classification

AIM automatically infers the likely modernization focus of every project from existing assessment signals — no extra user input required. Signals such as environment type, scope areas, regulatory requirements, integration complexity appetite, and AI readiness are mapped through a weighted scoring matrix to produce:

  • A primary focus (e.g., Cloud Migration, Cybersecurity Modernization, Legacy System Replacement)
  • An optional secondary focus (set only when the second score is meaningfully close to the first)
  • A confidence level (high / medium / low) based on score separation
  • Concise, human-readable reasoning explaining which signals drove the classification

The nine supported focus areas are: Cloud Migration, Hybrid Infrastructure Modernization, Infrastructure Refresh, Application Modernization, Legacy System Replacement, Data Platform Modernization, Cybersecurity Modernization, Enterprise Integration Modernization, and ERP Modernization.
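A minimal sketch of the selection step, assuming per-focus scores have already been produced by the weighted matrix (the 0.9 closeness ratio and the confidence gaps below are illustrative placeholders, not AIM's calibrated thresholds):

```typescript
type Focus = string; // one of the nine focus areas listed above

interface FocusClassification {
  primary: Focus;
  secondary?: Focus;
  confidence: "high" | "medium" | "low";
}

// Hypothetical sketch: rank focus scores, set a secondary focus only when
// the runner-up is meaningfully close, and derive confidence from score
// separation. Assumes at least two focus areas were scored.
function classifyFocus(scores: Record<Focus, number>): FocusClassification {
  const ranked = Object.entries(scores).sort(([, a], [, b]) => b - a);
  const [primary, top] = ranked[0];
  const [runnerUp, second] = ranked[1];
  const gap = top - second;
  return {
    primary,
    secondary: second >= top * 0.9 ? runnerUp : undefined,
    confidence: gap > 20 ? "high" : gap > 10 ? "medium" : "low",
  };
}
```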

Downstream use: The modernization focus directly influences RAO scoring weights and category priorities in the recommendation engine — archetypes like Cybersecurity Modernization increase the security and compliance weight dimensions, while archetypes like Cloud Migration boost scores for cloud, identity, and observability categories. It also feeds the staffing estimator, FTE ranges, labor cost modeling, RFP generator, white paper, modernization report, and historical procurement benchmarking.

FTE Estimation Engine

AIM deterministically estimates the full-time equivalent (FTE) staffing range and project duration for every assessment using a base team model per modernization archetype, adjusted by complexity signals from the assessment.

  • Base team — defined per archetype (e.g., Cloud Migration requires a different headcount profile than ERP Modernization)
  • Complexity adjustments — applied across multiple dimensions of project complexity derived from your assessment signals
  • Role mix breakdown — headcount estimate distributed across key delivery roles aligned to the project's modernization archetype and scope
Output: Low–high FTE range + duration range in months. Feeds the Labor Cost Intelligence Layer and appears in the white paper (Section 15), modernization report, and assessment export PDF.
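A minimal sketch of the base-plus-adjustments shape described above (the shape and multipliers are placeholders, not AIM's calibrated model):

```typescript
interface FteEstimate {
  fteLow: number;
  fteHigh: number;
  durationMonthsLow: number;
  durationMonthsHigh: number;
}

// Hypothetical sketch: start from an archetype's base team and scale by
// complexity multipliers derived from assessment signals.
function estimateFte(
  base: FteEstimate,
  complexityMultipliers: number[], // e.g., [1.1, 1.25] for two elevated signals
): FteEstimate {
  const factor = complexityMultipliers.reduce((acc, m) => acc * m, 1);
  return {
    fteLow: Math.round(base.fteLow * factor),
    fteHigh: Math.round(base.fteHigh * factor),
    durationMonthsLow: Math.round(base.durationMonthsLow * factor),
    durationMonthsHigh: Math.round(base.durationMonthsHigh * factor),
  };
}
```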

Labor Cost Intelligence Layer

Consumes the FTE estimate and calculates delivery cost ranges under three labor models, allowing organizations to compare approaches side-by-side:

  • Market Hiring – Direct-hire labor costs using BLS Occupational Employment Statistics median wages, adjusted for geographic region and seniority level.
  • Federal Staffing – GS pay scale costs from OPM pay tables with locality adjustments and applicable overhead factors for benefits and indirect costs.
  • Contractor Delivery – GSA CALC / IT Schedule 70 awarded labor rate benchmarks, annualized using standard government contract hours.
Organization-aware: AIM detects the org sector (federal, healthcare, state/local, private) from regulatory scopes and recommends a primary labor model accordingly. All three models are always shown for comparison. Appears in the white paper, assessment export PDF, and the project overview dashboard.
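A minimal sketch of the side-by-side comparison, assuming an annual fully loaded cost per FTE has already been derived for each model from its respective rate source (names and shape are assumptions):

```typescript
// Hypothetical annual fully loaded cost per FTE under each labor model,
// derived in practice from BLS wages, OPM pay tables, and GSA CALC rates.
interface LaborRates {
  marketHiring: number;
  federalStaffing: number;
  contractorDelivery: number;
}

// Multiply the FTE range by each model's rate to get low/high delivery
// cost ranges for all three models at once.
function laborCostRanges(fteLow: number, fteHigh: number, rates: LaborRates) {
  const range = (rate: number) => ({ low: fteLow * rate, high: fteHigh * rate });
  return {
    marketHiring: range(rates.marketHiring),
    federalStaffing: range(rates.federalStaffing),
    contractorDelivery: range(rates.contractorDelivery),
  };
}
```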

Historical Procurement Benchmarking

AIM compares every project against a normalized database of real public-sector IT contract award data sourced from USASpending.gov. Each assessment is fingerprinted using key dimensional signals from the assessment and matched against historical contracts using deterministic similarity scoring.

  • Returns P10/median/P90 cost and duration ranges from comparable contracts
  • Reports match confidence (high/medium/low/insufficient) based on sample size and data quality
  • Lists the top scope signals that drove comparability
  • Database is refreshed annually via an automated Vercel Cron Job
Disclaimer: Benchmarks are provided as planning context only and do not constitute cost guarantees. Minimum of 5 comparable contracts required before results are surfaced. Appears in white paper Section 15, assessment export PDF, and the project overview dashboard.
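A minimal sketch of the range extraction, including the minimum-sample guard noted in the disclaimer (the similarity matching itself is omitted; this only shows the percentile step):

```typescript
// Hypothetical sketch: given costs from matched comparable contracts,
// return P10/median/P90, or null when fewer than 5 matches exist.
function benchmark(costs: number[]): { p10: number; median: number; p90: number } | null {
  if (costs.length < 5) return null;
  const sorted = [...costs].sort((a, b) => a - b);
  const pick = (p: number) =>
    sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
  return { p10: pick(0.1), median: pick(0.5), p90: pick(0.9) };
}
```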

Sustainment Cost Analysis (opt-in)

When Sustainment Planning is enabled in the assessment, AIM extends its cost analysis beyond implementation to cover the full long-term operational picture. This layer calculates:

  • Annual sustainment cost per recommended product — derived from each product's role requirements and the organization's stated annual IT operations budget
  • Staffing gap analysis — comparing the organization's declared IT capabilities against the roles each recommended product requires to operate, with shortfalls estimated using Bureau of Labor Statistics (BLS) salary data adjusted for the project's state
  • 3-year TCO projections — combining implementation cost, annual sustainment costs, and estimated staffing gap costs
Effect on RAO: Sustainment costs feed directly into the Cost dimension score and are factored into TCO estimates in the white paper, compliance gap report, and modernization report. When disabled, AIM analyzes implementation costs only.
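A minimal sketch of the 3-year TCO arithmetic described above (parameter names are assumptions):

```typescript
// Implementation cost plus three years of sustainment and estimated
// staffing-gap costs; a hypothetical sketch, not AIM's exact model.
function threeYearTco(
  implementationCost: number,
  annualSustainmentCost: number,
  annualStaffingGapCost: number,
): number {
  return implementationCost + 3 * (annualSustainmentCost + annualStaffingGapCost);
}
```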
