Scoring Methodology
How AIM calculates, curates, and audits RAO dimension scores for technology recommendations.
AIM uses RAO (Retrieval-Augmented Optimization) to score technology candidates across six dimensions. This page documents what each dimension means, how scores are determined, and how the scoring process can be audited.
1. The Six RAO Dimensions
Every technology candidate is scored on these six dimensions. Scores range from 0 to 100.
Security
Measures the technology's security posture, including vulnerability exposure, lifecycle status, and data protection capabilities.
- High score: Strong security controls, actively maintained, regular patches, good track record
- Low score: Known vulnerabilities, EOL/unsupported, limited security features
Compliance
Measures regulatory readiness for frameworks like HIPAA, CJIS, PCI-DSS, FedRAMP, GDPR, and SOC 2.
- High score: Pre-certified, audit-ready, comprehensive compliance documentation
- Low score: No compliance certifications, limited audit support, regulatory gaps
Cost
Measures total cost of ownership including licensing, infrastructure, and operational expenses.
- High score: Cost-effective, predictable pricing, good value for capabilities
- Low score: High licensing costs, expensive infrastructure, unpredictable pricing
Maturity
Measures market adoption, stability, ecosystem support, and track record in production environments.
- High score: Widely adopted, stable releases, strong ecosystem, proven at scale
- Low score: Early-stage, limited adoption, frequent breaking changes, small community
Vendor Lock-In Risk
⚠️ INVERTED: Lower is better. Measures how difficult it would be to migrate away from this technology. Unlike other dimensions, a LOW score is desirable.
- Low score: Open standards, portable data, easy migration, no proprietary lock-in
- High score: Proprietary formats, difficult exit, vendor-specific APIs, high switching costs
Example: A score of 93/100 means this technology has very high vendor lock-in risk. Consider alternatives with lower lock-in if flexibility is important.
Operational Complexity
⚠️ INVERTED: Lower is better. Measures how difficult the technology is to operate, maintain, and support. Unlike other dimensions, a LOW score is desirable.
- Low score: Easy to operate, minimal specialized skills, good automation, low overhead
- High score: Requires specialized skills, complex configuration, high maintenance burden
2. How Scores Are Calculated
The RAO Scoring Formula
The final RAO score is a weighted average of the six dimension scores, adjusted for your assessment context:
RAO Score = (Security × weight)
          + (Compliance × weight)
          + (Cost × weight)
          + (Maturity × weight)
          + ((100 − VendorLockInRisk) × weight)        // inverted
          + ((100 − OperationalComplexity) × weight)   // inverted
Weights are dynamically adjusted based on your assessment's risk tolerance, budget sensitivity, regulatory requirements, and timeline pressure.
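As a minimal sketch of this calculation, assuming hypothetical weight values and dictionary keys (the weights AIM actually derives from your assessment context are not reproduced here):

```python
# Minimal sketch of the RAO weighted average. Dimension keys follow the
# documentation above; the weight values are hypothetical placeholders
# for the context-derived weights AIM computes per assessment.

INVERTED = {"vendor_lock_in_risk", "operational_complexity"}  # lower is better

def rao_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average on a 0-100 scale; inverted dimensions are flipped."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    total = 0.0
    for dim, weight in weights.items():
        value = scores[dim]
        if dim in INVERTED:
            value = 100 - value  # flip so that a low raw score is rewarded
        total += value * weight
    return total

# Hypothetical context-adjusted weights, e.g. for a compliance-heavy assessment:
weights = {"security": 0.25, "compliance": 0.30, "cost": 0.10,
           "maturity": 0.15, "vendor_lock_in_risk": 0.10,
           "operational_complexity": 0.10}
scores = {"security": 88, "compliance": 92, "cost": 70, "maturity": 85,
          "vendor_lock_in_risk": 93,       # high lock-in risk (bad)
          "operational_complexity": 40}    # low complexity (good)
print(round(rao_score(scores, weights), 2))  # 76.05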
Tier Assignment
Based on the final RAO score, each candidate is assigned to a recommendation tier.
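As a sketch of how tier assignment might work (the tier names and cutoffs below are illustrative assumptions, not AIM's actual thresholds):

```python
def assign_tier(rao: float) -> str:
    """Illustrative tiering only; the names and cutoffs are assumptions."""
    if rao >= 80:
        return "Recommended"
    if rao >= 60:
        return "Viable"
    return "Not Recommended"
```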
3. Risk Profile Evaluation
In addition to RAO dimension scores, each technology recommendation includes a Risk Profile — a breakdown of five risk dimensions that help you understand implementation and operational risks at a glance.
Risk Level Color Coding
Risk levels are displayed as color-coded pills for quick visual scanning:
- Low – Minimal concerns. Safe to proceed with standard practices.
- Medium – Some considerations. Plan for mitigation strategies.
- High – Significant risk. Requires careful review and mitigation.
The Five Risk Dimensions
Security
Security risk associated with adopting this technology.
Tech Debt
Technical debt risk from implementation and maintenance.
Cost Impact
Budget and cost predictability risk.
GRC (Governance, Risk & Compliance)
Governance, Risk, and Compliance implementation burden.
Change Complexity
Implementation and organizational change complexity.
How Risk Profile is Determined
Risk profile ratings are derived from the underlying RAO dimension scores and assessment context (see the sketch after this list):
- Security Risk maps inversely to the Security RAO score
- Tech Debt Risk considers the Maturity score and technology lifecycle
- Cost Impact Risk factors in the Cost score and pricing predictability
- GRC Risk maps inversely to the Compliance RAO score
- Change Complexity maps directly to the Operational Complexity score
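A sketch of that derivation, assuming hypothetical Low/Medium/High band boundaries (the lifecycle and pricing-predictability adjustments are omitted for brevity):

```python
def to_level(risk_points: float) -> str:
    """Map a 0-100 risk quantity to a pill; the band edges are assumptions."""
    if risk_points < 34:
        return "Low"
    if risk_points < 67:
        return "Medium"
    return "High"

def risk_profile(scores: dict[str, float]) -> dict[str, str]:
    return {
        # Inverse mappings: a high RAO score means low risk.
        "Security":          to_level(100 - scores["security"]),
        "Tech Debt":         to_level(100 - scores["maturity"]),
        "Cost Impact":       to_level(100 - scores["cost"]),
        "GRC":               to_level(100 - scores["compliance"]),
        # Direct mapping: Operational Complexity is already "higher = riskier".
        "Change Complexity": to_level(scores["operational_complexity"]),
    }
```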
4. Score Curation & Audit Defensibility
Designed for Auditability & Evaluation Scrutiny
AIM's scoring methodology is designed to withstand rigorous audit review. Every score is traceable, reproducible, and based on documented criteria — not vendor influence or algorithmic opacity.
Data Sources
AIM uses a combination of automated data collection and curated assessments:
Automated Sources
- Cloud Pricing APIs – Pricing from AWS, Azure, GCP, and OCI price list APIs (automated monthly refresh)
- Compliance Flags – FedRAMP authorization status, GovCloud availability, and HIPAA BAA support tracked per product
Curated Assessments
- Dimension Scores – Security, compliance, cost, maturity, vendor lock-in, and operational complexity scores curated using documented rubrics
- Lifecycle Classification – Products categorized as evergreen (SaaS/Cloud), versioned (on-prem software), hardware, or community (open source)
Reference Materials
Curated scores reference the following when available:
- Vendor security documentation
- Published compliance certifications
- Official pricing pages
- Vendor lifecycle/support policies
- FedRAMP Marketplace listings
- Product release histories
Scoring Cadence: Dimension scores are reviewed quarterly and updated when significant changes occur (certification changes, EOL announcements, major security events). Each score includes a source citation and last-reviewed timestamp for audit purposes.
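For illustration, a curated catalog entry could carry that provenance alongside the score itself; the field names below are assumptions, not AIM's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DimensionScore:
    """Illustrative shape of one curated dimension score."""
    dimension: str        # e.g. "compliance"
    score: int            # 0-100 per the rubric for that dimension
    source_citation: str  # reference material backing the score
    last_reviewed: date   # supports the quarterly review cadence

entry = DimensionScore(
    dimension="compliance",
    score=92,
    source_citation="FedRAMP Marketplace listing",
    last_reviewed=date(2025, 1, 15),
)
```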
Curation Process
1. Initial Assessment – Technology is evaluated against each dimension using documented criteria
2. Source Verification – Scores are validated against multiple independent sources
3. Catalog Entry – Scores are recorded in the versioned technology catalog with metadata
4. Periodic Review – Scores are updated per the scoring cadence above, or sooner when new information becomes available
Audit Trail
Every recommendation and report includes audit metadata (a sketch of how it might be assembled follows this list):
- Input Hash – Cryptographic hash of assessment inputs
- Output Hash – Cryptographic hash of scored outputs
- Methodology Version – Version of scoring algorithm used
- Catalog Version – Version of technology catalog at time of scoring
- Timestamp – When the recommendation was generated
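A sketch of assembling that metadata; the hash algorithm and field names are assumptions, since the documentation specifies only that the hashes are cryptographic:

```python
import hashlib
import json
from datetime import datetime, timezone

def canonical_hash(payload: dict) -> str:
    """SHA-256 over a canonical JSON encoding, so identical inputs
    always reproduce the same hash."""
    blob = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def audit_metadata(inputs: dict, outputs: dict,
                   methodology_version: str, catalog_version: str) -> dict:
    return {
        "input_hash": canonical_hash(inputs),
        "output_hash": canonical_hash(outputs),
        "methodology_version": methodology_version,
        "catalog_version": catalog_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

Hashing a canonical serialization is what makes the trail reproducible: anyone re-running the same inputs against the same catalog and methodology versions can verify that the recorded hashes match.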
5. What AIM Does NOT Do
- No vendor payments – AIM does not accept sponsorships, referral fees, or placement payments from vendors
- No AI score manipulation – AI is used for narrative assistance only; it cannot change scores
- No black-box algorithms – All scoring logic is deterministic and documented
- No arbitrary weighting – Weights are derived from your assessment constraints, not hidden preferences