Scoring Methodology
How AIM calculates, curates, and audits RAO dimension scores for technology recommendations.
AIM uses RAO (Ranking and Optimization) to score technology candidates across six dimensions. This page documents what each dimension means, how scores are determined, and how the scoring process can be audited.
1. The Six RAO Dimensions
Every technology candidate is scored on these six dimensions. Scores range from 0 to 100.
Security
Measures the technology's security posture, including vulnerability exposure, lifecycle status, and data protection capabilities.
- High scores: Strong security controls, actively maintained, regular patches, good track record
- Low scores: Known vulnerabilities, EOL/unsupported, limited security features
Compliance
Measures regulatory readiness for frameworks like HIPAA, CJIS, PCI-DSS, FedRAMP, GDPR, and SOC 2.
- High scores: Pre-certified, audit-ready, comprehensive compliance documentation
- Low scores: No compliance certifications, limited audit support, regulatory gaps
Cost
Measures total cost of ownership including licensing, infrastructure, operational expenses, and — when Sustainment Planning is enabled — annual sustainment costs and staffing gap estimates.
- High scores: Cost-effective, predictable pricing, good value for capabilities
- Low scores: High licensing costs, expensive infrastructure, unpredictable pricing
Maturity
Measures market adoption, stability, ecosystem support, and track record in production environments.
- High scores: Widely adopted, stable releases, strong ecosystem, proven at scale
- Low scores: Early-stage, limited adoption, frequent breaking changes, small community
Vendor Lock-In Risk
⚠️ INVERTED: Lower is better. Measures how difficult it would be to migrate away from this technology. Unlike the other dimensions, a LOW score is desirable.
- Low scores: Open standards, portable data, easy migration, no proprietary lock-in
- High scores: Proprietary formats, difficult exit, vendor-specific APIs, high switching costs
Example: A score of 93/100 means this technology has very high vendor lock-in risk. Consider alternatives with lower lock-in if flexibility is important.
Operational Complexity
⚠️ INVERTED: Lower is better. Measures how difficult the technology is to operate, maintain, and support. Unlike the other dimensions, a LOW score is desirable.
- Low scores: Easy to operate, minimal specialized skills, good automation, low overhead
- High scores: Requires specialized skills, complex configuration, high maintenance burden
2. How Scores Are Calculated
The RAO Scoring Formula
The final RAO score is a proprietary weighted composite of the six dimension scores, calibrated for your assessment context. Two of the six dimensions — Vendor Lock-In Risk and Operational Complexity — are treated as inverse contributors: lower values indicate better outcomes, so they are inverted before being added to the composite.
Proprietary Methodology: The specific weighting algorithm, dimension inversion mechanics, and calibration constants are proprietary to The Freedom Project and protected as trade secrets. The scoring is fully deterministic and auditable — what drives your scores is always explained in your reports — but the internal formula is not published.
Weights are dynamically adjusted based on your assessment's risk tolerance, budget sensitivity, regulatory requirements, staffing capacity, and inferred modernization focus archetype. When Sustainment Planning or AI Planning is enabled, those signals additionally influence the Cost and Compliance dimension weights. Dimension weights are bounded to prevent extreme skew toward any single factor.
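To make the shape of the composite concrete, here is a minimal sketch of a weighted composite with inverted dimensions. The equal weights, the simple `100 - score` inversion, and the example scores are illustrative assumptions only; they do not reflect AIM's actual algorithm or calibration constants, which are proprietary.

```python
# Illustrative sketch only: the real RAO weighting algorithm, inversion
# mechanics, and calibration constants are proprietary. Assumed here:
# equal weights and a simple (100 - score) inversion for the two
# "lower is better" dimensions.

INVERTED = {"vendor_lock_in", "operational_complexity"}

def rao_composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted composite of six 0-100 dimension scores."""
    total = sum(weights.values())
    acc = 0.0
    for dim, score in scores.items():
        # Invert the "lower is better" dimensions before weighting
        effective = 100 - score if dim in INVERTED else score
        acc += weights[dim] * effective
    return acc / total

scores = {
    "security": 85, "compliance": 90, "cost": 70,
    "maturity": 80, "vendor_lock_in": 20, "operational_complexity": 30,
}
weights = {dim: 1.0 for dim in scores}  # placeholder equal weights
print(round(rao_composite(scores, weights), 1))  # -> 79.2
```

Note how the low (good) lock-in and complexity scores raise the composite rather than lowering it, which is the effect the inversion exists to produce.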
Tier Assignment
Based on the final RAO score, candidates are assigned to tiers.
3. Risk Profile Evaluation
In addition to RAO dimension scores, each technology recommendation includes a Risk Profile — a breakdown of five risk dimensions that help you understand implementation and operational risks at a glance.
Risk Level Color Coding
Risk levels are displayed as color-coded pills for quick visual scanning:
- Low – Minimal concerns. Safe to proceed with standard practices.
- Medium – Some considerations. Plan for mitigation strategies.
- High – Significant risk. Requires careful review and mitigation.
The Five Risk Dimensions
Security
Security risk associated with adopting this technology.
Tech Debt
Technical debt risk from implementation and maintenance.
Cost Impact
Budget and cost predictability risk.
GRC (Governance, Risk & Compliance)
Governance, Risk, and Compliance implementation burden.
Change Complexity
Implementation and organizational change complexity.
How Risk Profile is Determined
Risk profile ratings are derived from the underlying RAO dimension scores and assessment context:
- Security Risk maps inversely to the Security RAO score
- Tech Debt Risk considers the Maturity score and technology lifecycle
- Cost Impact Risk factors in the Cost score and pricing predictability
- GRC Risk maps inversely to the Compliance RAO score
- Change Complexity maps to the Operational Complexity score
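The inverse mappings above can be sketched as follows. The 30/60 thresholds and the way scores convert to risk are assumptions for illustration; AIM's actual cutoffs are not published.

```python
# Hedged sketch: thresholds (30/60) are illustrative, not AIM's real cutoffs.
def risk_level(rao_score: float, inverted: bool = False) -> str:
    """Map a 0-100 RAO dimension score to a low/medium/high risk pill.

    For normal dimensions a HIGH score means LOW risk, so the score is
    flipped first; already-inverted dimensions (e.g. Operational
    Complexity) feed in directly.
    """
    risk = rao_score if inverted else 100 - rao_score
    if risk < 30:
        return "low"
    if risk < 60:
        return "medium"
    return "high"

print(risk_level(85))                  # Security score 85 -> "low" risk
print(risk_level(93, inverted=True))   # lock-in score 93 -> "high" risk
```

This mirrors the 93/100 vendor lock-in example earlier on this page: a high score on an inverted dimension surfaces as high risk.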
4. Score Curation & Audit Defensibility
Designed for Auditability & Evaluation Scrutiny
AIM's scoring methodology is designed to withstand rigorous audit review. Every score is traceable, reproducible, and based on documented criteria — not vendor influence or algorithmic opacity.
Data Sources
AIM uses a combination of automated data collection and curated assessments:
Automated Sources
- Cloud Pricing APIs – Pricing from AWS, Azure, GCP, and OCI price list APIs (automated monthly refresh)
- Compliance Flags – FedRAMP authorization status, GovCloud availability, and HIPAA BAA support tracked per product
Curated Assessments
- Dimension Scores – Security, compliance, cost, maturity, vendor lock-in, and operational complexity scores curated using documented rubrics
- Lifecycle Classification – Products categorized as evergreen (SaaS/Cloud), versioned (on-prem software), hardware, or community (open source)
Reference Materials
Curated scores reference the following when available:
- Vendor security documentation
- Published compliance certifications
- Official pricing pages
- Vendor lifecycle/support policies
- FedRAMP Marketplace listings
- Product release histories
Scoring Cadence: Dimension scores are reviewed quarterly and updated when significant changes occur (certification changes, EOL announcements, major security events). Each score includes a source citation and last-reviewed timestamp for audit purposes.
Curation Process
Initial Assessment
Technology is evaluated against each dimension using documented criteria
Source Verification
Scores are validated against multiple independent sources
Catalog Entry
Scores are recorded in the versioned technology catalog with metadata
Periodic Review
Scores are reviewed quarterly and updated as needed when new information becomes available
Audit Trail
Every recommendation and report includes audit metadata:
- Input Hash – Cryptographic hash of assessment inputs
- Output Hash – Cryptographic hash of scored outputs
- Methodology Version – Version of scoring algorithm used
- Catalog Version – Version of technology catalog at time of scoring
- Timestamp – When the recommendation was generated
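A minimal sketch of how deterministic audit hashes like these can be produced, assuming assessment inputs serialize to JSON. The field names and the `"x.y"` version labels are hypothetical placeholders, not AIM's actual schema.

```python
import hashlib
import json

def audit_hash(payload: dict) -> str:
    """SHA-256 over a canonical JSON serialization.

    Sorted keys and fixed separators make the hash deterministic:
    identical inputs always produce identical hashes, regardless of
    dict insertion order.
    """
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

inputs = {"assessment_id": "demo", "risk_tolerance": "low"}  # hypothetical
record = {
    "input_hash": audit_hash(inputs),
    "methodology_version": "x.y",  # placeholder version labels
    "catalog_version": "x.y",
}
print(record["input_hash"][:12])
```

Canonical serialization is the detail that matters for reproducibility: without sorted keys, two logically identical inputs could hash differently.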
5. What AIM Does NOT Do
- No vendor payments – AIM does not accept sponsorships, referral fees, or placement payments from vendors
- No AI score manipulation – AI is used for narrative assistance only; it cannot change scores
- No black-box algorithms – All scoring logic is deterministic and documented
- No arbitrary weighting – Weights are derived from your assessment constraints, not hidden preferences
6. Intelligence Layers Beyond RAO
In addition to the core RAO scoring engine, AIM runs six deterministic intelligence layers that enrich every assessment with additional context. These layers do not replace RAO scores — they supplement them with purpose-built analysis. The Modernization Focus Classification layer also feeds back directly into RAO scoring, adjusting weights and category priorities based on the identified archetype.
AI Readiness Score & Archetype Classification
When AI is in scope for a project, AIM calculates a separate AI Readiness Score (0–100) based on three signals from the assessment: data access governance posture, operational decision authority appetite, and data structure maturity. This score is then classified into a readiness tier.
From the readiness tier, AIM deterministically classifies a primary and optional secondary AI implementation archetype (e.g., Document Intelligence, Workflow Automation AI, Predictive Analytics, Agentic Orchestration) and recommends a deployment model (on-prem, private cloud, hybrid, or managed). AI tool recommendations in the catalog are filtered and ranked against this classification.
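As a rough illustration of how three signals could combine into a 0–100 readiness score: the equal weighting and the 0–100 signal scale below are assumptions, not AIM's published formula.

```python
# Illustrative only: AIM's actual readiness formula is not published.
# Assumed: each signal is pre-scored 0-100 and the three are averaged.
SIGNALS = ("data_access_governance", "decision_authority_appetite",
           "data_structure_maturity")

def ai_readiness_score(signal_scores: dict[str, float]) -> float:
    """Combine the three assessment signals into a 0-100 readiness score."""
    return sum(signal_scores[s] for s in SIGNALS) / len(SIGNALS)

print(ai_readiness_score({
    "data_access_governance": 90,
    "decision_authority_appetite": 60,
    "data_structure_maturity": 75,
}))  # -> 75.0
```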
Modernization Focus Classification
AIM automatically infers the likely modernization focus of every project from existing assessment signals — no extra user input required. Signals such as environment type, scope areas, regulatory requirements, integration complexity appetite, and AI readiness are mapped through a weighted scoring matrix to produce:
- A primary focus (e.g., Cloud Migration, Cybersecurity Modernization, Legacy System Replacement)
- An optional secondary focus (set only when the second score is meaningfully close to the first)
- A confidence level (high / medium / low) based on score separation
- A concise, human-readable explanation of which signals drove the classification
The nine supported focus areas are: Cloud Migration, Hybrid Infrastructure Modernization, Infrastructure Refresh, Application Modernization, Legacy System Replacement, Data Platform Modernization, Cybersecurity Modernization, Enterprise Integration Modernization, and ERP Modernization.
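The weighted-matrix logic above can be sketched like this. The signal names, weights, and "meaningfully close" / confidence thresholds are assumptions for the example; AIM's actual matrix is internal.

```python
# Hedged sketch: signal weights and gap thresholds are invented placeholders.
def classify_focus(signal_hits: dict[str, list[str]]) -> dict:
    """signal_hits maps each focus area to the assessment signals that matched it."""
    weights = {"environment_type": 3, "scope_area": 2, "regulatory": 1}
    scores = {
        focus: sum(weights.get(sig, 1) for sig in sigs)
        for focus, sigs in signal_hits.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    primary = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else None
    gap = scores[primary] - scores.get(runner_up, 0)
    return {
        "primary": primary,
        # secondary set only when the runner-up is meaningfully close
        "secondary": runner_up if runner_up and gap <= 1 else None,
        # confidence derived from score separation
        "confidence": "high" if gap >= 3 else "medium" if gap >= 2 else "low",
    }

result = classify_focus({
    "Cloud Migration": ["environment_type", "scope_area"],
    "Cybersecurity Modernization": ["regulatory"],
})
print(result["primary"], result["confidence"])  # -> Cloud Migration high
```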
FTE Estimation Engine
AIM deterministically estimates the full-time equivalent (FTE) staffing range and project duration for every assessment using a base team model per modernization archetype, adjusted by complexity signals from the assessment.
- Base team – defined per archetype (e.g., Cloud Migration requires a different headcount profile than ERP Modernization)
- Complexity adjustments – applied across multiple dimensions of project complexity derived from your assessment signals
- Role mix breakdown – the headcount estimate distributed across key delivery roles aligned to the project's modernization archetype and scope
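The base-team-plus-adjustments pattern can be sketched as follows. The team sizes and multiplier values are invented placeholders; AIM's actual per-archetype tables are not published.

```python
# Placeholder base team ranges per archetype (min FTE, max FTE); illustrative only.
BASE_TEAM = {
    "Cloud Migration": (4, 7),
    "ERP Modernization": (8, 14),
}

def estimate_fte(archetype: str, complexity_factors: list[float]) -> tuple[int, int]:
    """Scale the archetype's base team range by multiplicative complexity factors."""
    lo, hi = BASE_TEAM[archetype]
    multiplier = 1.0
    for factor in complexity_factors:  # e.g. 1.2 for high integration complexity
        multiplier *= factor
    return round(lo * multiplier), round(hi * multiplier)

print(estimate_fte("Cloud Migration", [1.2, 1.1]))  # -> (5, 9)
```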
Labor Cost Intelligence Layer
This layer consumes the FTE estimate and calculates delivery cost ranges under three labor models, allowing organizations to compare approaches side by side.
Historical Procurement Benchmarking
AIM compares every project against a normalized database of real public-sector IT contract award data sourced from USASpending.gov. Each assessment is fingerprinted using key dimensional signals and matched against historical contracts using deterministic similarity scoring.
- Returns P10/median/P90 cost and duration ranges from comparable contracts
- Reports a match confidence (high/medium/low/insufficient) based on sample size and data quality
- Lists the top scope signals that drove comparability
- The database is refreshed annually via an automated Vercel Cron Job
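The percentile-reporting step above can be sketched with the standard library. The sample-size thresholds for the confidence labels are assumptions; the actual cutoffs depend on AIM's data-quality rules.

```python
import statistics

def benchmark(costs: list[float]) -> dict:
    """P10/median/P90 over matched contract costs, with a sample-size confidence.

    Assumed thresholds: fewer than 5 matches is "insufficient",
    20 or more is "high", anything between is "medium".
    """
    if len(costs) < 5:
        return {"confidence": "insufficient"}
    # quantiles(n=10) returns 9 cut points; index 0 is P10, index 8 is P90
    qs = statistics.quantiles(sorted(costs), n=10, method="inclusive")
    return {
        "p10": qs[0],
        "median": statistics.median(costs),
        "p90": qs[8],
        "confidence": "high" if len(costs) >= 20 else "medium",
    }

summary = benchmark([float(x) for x in range(1, 12)])
print(summary["median"], summary["confidence"])
```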
Sustainment Cost Analysis (opt-in)
When Sustainment Planning is enabled in the assessment, AIM extends its cost analysis beyond implementation to cover the full long-term operational picture. This layer calculates:
- Annual sustainment cost per recommended product – derived from each product's role requirements and the organization's stated annual IT operations budget
- Staffing gap analysis – comparing the organization's declared IT capabilities against the roles each recommended product requires to operate, with shortfalls estimated using Bureau of Labor Statistics (BLS) salary data adjusted for the project's state
- 3-year TCO projections – combining implementation cost, annual sustainment costs, and estimated staffing gap costs
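The 3-year TCO combination reduces to a simple worked formula; all dollar figures below are hypothetical.

```python
def three_year_tco(implementation: float, annual_sustainment: float,
                   annual_staffing_gap: float) -> float:
    """Implementation cost plus three years of sustainment and staffing gap costs."""
    return implementation + 3 * (annual_sustainment + annual_staffing_gap)

# Hypothetical figures: $500K implementation, $120K/yr sustainment,
# $80K/yr staffing gap -> $1.1M three-year TCO
print(three_year_tco(500_000, 120_000, 80_000))  # -> 1100000
```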