
Scoring Methodology

How AIM calculates, curates, and audits RAO dimension scores for technology recommendations.

AIM uses RAO (Retrieval-Augmented Optimization) to score technology candidates across six dimensions. This page documents what each dimension means, how scores are determined, and how the scoring process can be audited.


1. The Six RAO Dimensions

Every technology candidate is scored on these six dimensions. Scores range from 0 to 100.

Security

Measures the technology's security posture, including vulnerability exposure, lifecycle status, and data protection capabilities.

  • High Score (70-100) – Strong security controls, actively maintained, regular patches, good track record
  • Low Score (0-30) – Known vulnerabilities, EOL/unsupported, limited security features

Compliance

Measures regulatory readiness for frameworks like HIPAA, CJIS, PCI-DSS, FedRAMP, GDPR, and SOC 2.

  • High Score (70-100) – Pre-certified, audit-ready, comprehensive compliance documentation
  • Low Score (0-30) – No compliance certifications, limited audit support, regulatory gaps

Cost

Measures total cost of ownership including licensing, infrastructure, and operational expenses.

  • High Score (70-100) – Cost-effective, predictable pricing, good value for capabilities
  • Low Score (0-30) – High licensing costs, expensive infrastructure, unpredictable pricing

Maturity

Measures market adoption, stability, ecosystem support, and track record in production environments.

  • High Score (70-100) – Widely adopted, stable releases, strong ecosystem, proven at scale
  • Low Score (0-30) – Early-stage, limited adoption, frequent breaking changes, small community

Vendor Lock-In Risk

⚠️ INVERTED: Lower is better

Measures how difficult it would be to migrate away from this technology. Unlike other dimensions, a LOW score is desirable.

  • Low Score (0-30) ✓ – Open standards, portable data, easy migration, no proprietary lock-in
  • High Score (70-100) ⚠️ – Proprietary formats, difficult exit, vendor-specific APIs, high switching costs

Example: A score of 93/100 means this technology has very high vendor lock-in risk. Consider alternatives with lower lock-in if flexibility is important.

Operational Complexity

⚠️ INVERTED: Lower is better

Measures how difficult the technology is to operate, maintain, and support. Unlike other dimensions, a LOW score is desirable.

  • Low Score (0-30) ✓ – Easy to operate, minimal specialized skills, good automation, low overhead
  • High Score (70-100) ⚠️ – Requires specialized skills, complex configuration, high maintenance burden


2. How Scores Are Calculated

The RAO Scoring Formula

The final RAO score is a weighted average of the six dimension scores, adjusted for your assessment context:

RAO Score =
  (Security × w_security) +
  (Compliance × w_compliance) +
  (Cost × w_cost) +
  (Maturity × w_maturity) +
  ((100 - VendorLockInRisk) × w_lockin) +           // Inverted
  ((100 - OperationalComplexity) × w_complexity)    // Inverted

Weights are dynamically adjusted based on your assessment's risk tolerance, budget sensitivity, regulatory requirements, and timeline pressure.
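
For illustration, the sketch below (Python) computes such a weighted average. The dimension names mirror this page; the example weights, scores, and the rao_score function itself are hypothetical, since actual weights are derived from your assessment context.

# A minimal sketch of the weighted average above; the weights shown are
# hypothetical examples; in AIM they are derived from the assessment context.
INVERTED = {"vendor_lock_in_risk", "operational_complexity"}

def rao_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of the six dimension scores on the 0-100 scale."""
    total = 0.0
    for dimension, weight in weights.items():
        value = scores[dimension]
        if dimension in INVERTED:
            value = 100 - value  # lower raw score is better, so invert first
        total += value * weight
    return total / sum(weights.values())  # normalize in case weights do not sum to 1

# Example: weights skewed toward security and compliance for a regulated workload
weights = {"security": 0.25, "compliance": 0.25, "cost": 0.15, "maturity": 0.15,
           "vendor_lock_in_risk": 0.10, "operational_complexity": 0.10}
scores = {"security": 85, "compliance": 90, "cost": 60, "maturity": 80,
          "vendor_lock_in_risk": 30, "operational_complexity": 40}
print(round(rao_score(scores, weights), 1))  # ~77.8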

Tier Assignment

Based on the final RAO score, candidates are assigned to tiers:

  • Must Have – ≥ 80
  • Strong Candidate – 60 – 79
  • Consider – 40 – 59
  • Avoid – < 40
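
A minimal sketch of this tier mapping, using the thresholds from the table above (the assign_tier function name is illustrative):

def assign_tier(rao_score: float) -> str:
    """Map a final RAO score (0-100) to a recommendation tier."""
    if rao_score >= 80:
        return "Must Have"
    if rao_score >= 60:
        return "Strong Candidate"
    if rao_score >= 40:
        return "Consider"
    return "Avoid"

assert assign_tier(77.8) == "Strong Candidate"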

3. Risk Profile Evaluation

In addition to RAO dimension scores, each technology recommendation includes a Risk Profile — a breakdown of five risk dimensions that help you understand implementation and operational risks at a glance.

Risk Level Color Coding

Risk levels are displayed as color-coded pills for quick visual scanning:

  • Low Risk – Minimal concerns. Safe to proceed with standard practices.
  • Medium Risk – Some considerations. Plan for mitigation strategies.
  • High Risk – Significant risk. Requires careful review and mitigation.

The Five Risk Dimensions

Security

Security risk associated with adopting this technology.

Low: Minimal security concerns. Well-established controls.
Medium: Standard security practices apply.
High: Requires additional security review.

Tech Debt

Technical debt risk from implementation and maintenance.

Low: Modern architecture. Easy to maintain.
Medium: Standard maintenance requirements.
High: May require future refactoring.

Cost Impact

Budget and cost predictability risk.

Low: Predictable costs. Clear pricing model.
Medium: Some cost variability.
High: Variable or high TCO risk.

GRC (Governance, Risk & Compliance)

Governance, Risk, and Compliance implementation burden.

Low: Pre-certified and audit-ready.
Medium: Standard audit preparation.
High: Significant compliance effort.

Change Complexity

Implementation and organizational change complexity.

Low: Straightforward implementation.
Medium: Moderate change management.
High: Complex training required.

How Risk Profile is Determined

Risk profile ratings are derived from the underlying RAO dimension scores and assessment context:

  • Security Risk maps inversely to the Security RAO score
  • Tech Debt Risk considers Maturity score and technology lifecycle
  • Cost Impact Risk factors in Cost score and pricing predictability
  • GRC Risk maps inversely to the Compliance RAO score
  • Change Complexity maps to Operational Complexity score
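
As an illustration of these mappings, the sketch below bands example dimension scores into risk levels. The 30/70 cut-offs, the example scores, and the risk_level function are assumptions for illustration, not AIM's documented thresholds.

def risk_level(score: float, inverse: bool = False) -> str:
    """Band a 0-100 dimension score into Low / Medium / High risk.
    The 30/70 cut-offs are assumed for this sketch, not AIM's documented values."""
    value = 100 - score if inverse else score  # inverse: a high score means low risk
    if value >= 70:
        return "High"
    if value >= 30:
        return "Medium"
    return "Low"

# Example RAO dimension scores for a hypothetical candidate
rao = {"security": 85, "compliance": 90, "cost": 60,
       "maturity": 80, "operational_complexity": 40}

risk_profile = {
    "Security": risk_level(rao["security"], inverse=True),            # strong security, lower risk
    "Tech Debt": risk_level(rao["maturity"], inverse=True),           # mature, lower debt risk
    "Cost Impact": risk_level(rao["cost"], inverse=True),             # cost-effective, lower cost risk
    "GRC": risk_level(rao["compliance"], inverse=True),               # certified, lower GRC risk
    "Change Complexity": risk_level(rao["operational_complexity"]),   # complex to run, higher change risk
}
print(risk_profile)
# {'Security': 'Low', 'Tech Debt': 'Low', 'Cost Impact': 'Medium', 'GRC': 'Low', 'Change Complexity': 'Medium'}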

4. Score Curation & Audit Defensibility

Designed for Auditability & Evaluation Scrutiny

AIM's scoring methodology is designed to withstand rigorous audit review. Every score is traceable, reproducible, and based on documented criteria — not vendor influence or algorithmic opacity.

Data Sources

AIM uses a combination of automated data collection and curated assessments:

Automated Sources

  • Cloud Pricing APIs – Pricing from AWS, Azure, GCP, and OCI price list APIs (automated monthly refresh)
  • Compliance Flags – FedRAMP authorization status, GovCloud availability, and HIPAA BAA support tracked per product

Curated Assessments

  • Dimension Scores – Security, compliance, cost, maturity, vendor lock-in, and operational complexity scores curated using documented rubrics
  • Lifecycle Classification – Products categorized as evergreen (SaaS/Cloud), versioned (on-prem software), hardware, or community (open source)

Reference Materials

Curated scores reference the following when available:

  • Vendor security documentation
  • Published compliance certifications
  • Official pricing pages
  • Vendor lifecycle/support policies
  • FedRAMP Marketplace listings
  • Product release histories
Scoring Cadence: Dimension scores are reviewed quarterly and updated when significant changes occur (certification changes, EOL announcements, major security events). Each score includes a source citation and last-reviewed timestamp for audit purposes.
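
For illustration, a curated catalog entry carrying its scores, sources, and last-reviewed timestamp might be shaped like the example below; the field names and values are hypothetical, not AIM's actual schema.

# Hypothetical shape of a curated catalog entry; field names are illustrative.
catalog_entry = {
    "product": "Example Managed Database",
    "lifecycle": "evergreen",  # evergreen | versioned | hardware | community
    "dimension_scores": {
        "security": 85, "compliance": 90, "cost": 60, "maturity": 80,
        "vendor_lock_in_risk": 30, "operational_complexity": 40,
    },
    "compliance_flags": {"fedramp": True, "govcloud": True, "hipaa_baa": True},
    "sources": [
        "Vendor security documentation",
        "Published compliance certifications",
        "FedRAMP Marketplace listing",
    ],
    "last_reviewed": "2025-01-15",  # quarterly review cadence
}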

Curation Process

1. Initial Assessment – Technology is evaluated against each dimension using documented criteria
2. Source Verification – Scores are validated against multiple independent sources
3. Catalog Entry – Scores are recorded in the versioned technology catalog with metadata
4. Periodic Review – Scores are reviewed quarterly and updated as needed when new information becomes available

Update Cadence

  • Dimension Scores – Reviewed quarterly; updated when significant product changes occur
  • Compliance Flags – Updated when certifications change or new attestations are published
  • Cloud Pricing – Automated daily refresh via AWS, Azure, GCP, and OCI price APIs

Audit Trail

Every recommendation and report includes audit metadata:

  • Input Hash – Cryptographic hash of assessment inputs
  • Output Hash – Cryptographic hash of scored outputs
  • Methodology Version – Version of scoring algorithm used
  • Catalog Version – Version of technology catalog at time of scoring
  • Timestamp – When the recommendation was generated
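
A minimal sketch of how this audit metadata could be assembled, assuming SHA-256 hashes over canonical JSON; AIM's actual hash algorithm, versions, and field names may differ.

import hashlib
import json
from datetime import datetime, timezone

def stable_hash(payload: dict) -> str:
    """SHA-256 of a JSON-serializable payload with normalized key order."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

assessment_inputs = {"risk_tolerance": "low", "budget_sensitivity": "high"}
scored_outputs = {"candidate": "Example Managed Database",
                  "rao_score": 77.8, "tier": "Strong Candidate"}

audit_metadata = {
    "input_hash": stable_hash(assessment_inputs),
    "output_hash": stable_hash(scored_outputs),
    "methodology_version": "2.1",   # illustrative value
    "catalog_version": "2025-01",   # illustrative value
    "timestamp": datetime.now(timezone.utc).isoformat(),
}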

5. What AIM Does NOT Do

  • No vendor payments – AIM does not accept sponsorships, referral fees, or placement payments from vendors
  • No AI score manipulation – AI is used for narrative assistance only; it cannot change scores
  • No black-box algorithms – All scoring logic is deterministic and documented
  • No arbitrary weighting – Weights are derived from your assessment constraints, not hidden preferences
