AIM
Architectural Insight for Modernization

Decision Transparency
How AIM ensures reproducible, explainable, and auditable recommendations
AIM is designed to withstand skeptical oversight. Every recommendation can be explained, every score can be traced, and every decision can be reproduced. This page documents how AIM produces recommendations that remain defensible under even a hostile audit.
1. How AIM Ranks Options
AIM uses RAO (Retrieval-Augmented Optimization) to score and rank modernization candidates. Scores are computed deterministically using:
- Security – Lifecycle status, exposure, data sensitivity
- Compliance – Regulatory fit (HIPAA, CJIS, PCI, etc.)
- Cost – Total cost of ownership and budget alignment
- Complexity – Operational complexity and skill requirements (inverted: lower is better)
- Lock-in Risk – Vendor dependency and exit difficulty (inverted: lower is better)
- Maturity – Market adoption and stability
📖 Detailed Methodology: See Scoring Methodology for a complete explanation of each dimension, what high/low scores mean, and how scores are curated for audit defensibility.
Candidates are assigned tiers based on score thresholds:
- Recommended – Score ≥ 80
- Acceptable – Score 65–79
- Avoid – Score < 65
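The deterministic tiering described above can be sketched as follows. The dimension weights and field names here are illustrative assumptions, not AIM's actual methodology (which is versioned and documented separately); only the tier thresholds come from this page.

```python
# Illustrative weights only -- the real weights live in the versioned
# scoring methodology.
WEIGHTS = {
    "security": 0.20, "compliance": 0.20, "cost": 0.15,
    "complexity": 0.15, "lockin_risk": 0.15, "maturity": 0.15,
}

def composite_score(dims: dict) -> float:
    """Combine per-dimension scores (each 0-100) into one 0-100 score.
    Complexity and lock-in risk are inverted: lower raw values are better."""
    adjusted = dict(dims)
    for inverted in ("complexity", "lockin_risk"):
        adjusted[inverted] = 100 - adjusted[inverted]
    return sum(WEIGHTS[k] * adjusted[k] for k in WEIGHTS)

def assign_tier(score: float) -> str:
    """Map a composite score to the tier thresholds defined above."""
    if score >= 80:
        return "Recommended"
    if score >= 65:
        return "Acceptable"
    return "Avoid"
```

Because the mapping is pure rule-based arithmetic, the same inputs always yield the same tier, which is what makes the ranking auditable.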
Learn more: Constraints & RAO Scoring →
2. What AIM Is NOT
Not an ITSM or CMDB
AIM is a project-phase architecture planning tool, not a continuous IT service management or configuration management database. It produces point-in-time modernization plans for 3–24 month projects.
Not an Autonomous Decision Maker
AIM provides decision support, not decisions. All outputs are designed to be reviewed, edited, and approved by human architects and stakeholders. Humans remain accountable for all implementation decisions.
Not a Replacement for Expertise
AIM accelerates planning by structuring data and surfacing options. It does not replace subject matter expertise, procurement processes, or vendor negotiations.
3. Vendor Neutrality Statement
Platform Independence
- No paid placements – AIM does not accept vendor sponsorships, referral fees, or kickbacks
- Consistent methodology – All candidates are scored using the same RAO dimensions
- Open competition – Any qualified vendor can bid on work derived from AIM assessments
- Catalog-backed – Product recommendations come from a curated, version-controlled catalog
When AIM names specific products, these represent a solution baseline for planning and vendor comparison — not vendor preference.
Learn more: How AIM Works: Platform Independence →
4. Reproducibility & Audit Logs
Every recommendation and report generation is logged with:
| Field | Description |
|---|---|
| input_hash | SHA-256 hash of normalized assessment inputs |
| output_hash | SHA-256 hash of ranked outputs and scores |
| methodology_version | Version of scoring methodology used |
| rao_version | Version of RAO algorithm used |
| created_by | User who triggered the generation |
| created_at | Timestamp of generation |
Reproducibility Guarantee
Re-running an assessment with the same inputs, catalog version, and methodology version will produce matching hashes. This enables verification that no unauthorized changes have occurred.
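The hash-based guarantee can be sketched as below. The canonicalization step (sorted keys, fixed separators) is an assumption about how inputs might be normalized; the `input_hash` field name comes from the table above.

```python
import hashlib
import json

def stable_hash(payload: dict) -> str:
    """SHA-256 over a canonical JSON serialization, so logically
    identical inputs always produce the same hash."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Same normalized inputs yield the same input_hash regardless of key order,
# so a re-run can be byte-compared against the logged value.
run_a = stable_hash({"catalog_version": "2024.2", "methodology_version": "1.3"})
run_b = stable_hash({"methodology_version": "1.3", "catalog_version": "2024.2"})
```

An auditor verifying a report would recompute the hash from the archived inputs and compare it to the logged `input_hash`; any mismatch indicates the inputs changed.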
5. Data Freshness & Updates
AIM uses multiple data sources with defined update cadences:
| Source | Update Cadence | Notes |
|---|---|---|
| Cloud Provider APIs | Daily | AWS, Azure, GCP pricing |
| Software/SaaS Pricing | Monthly | Vendor public pricing pages |
| Hardware MSRP | Monthly | Dell, HP, Cisco, Palo Alto, etc. |
| BLS Labor Rates | Annual | Bureau of Labor Statistics OES data |
| Technology Catalog | As needed | Product lifecycle, scoring, metadata |
Learn more: Pricing Confidence Methodology →
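A staleness check against these cadences can be sketched as follows. The source keys and day thresholds are illustrative mappings of the table above; "as needed" sources have no fixed cadence and are omitted.

```python
from datetime import date

# Expected update cadences (in days), paraphrasing the table above.
CADENCE_DAYS = {
    "cloud_provider_apis": 1,    # daily
    "saas_pricing": 31,          # monthly
    "hardware_msrp": 31,         # monthly
    "bls_labor_rates": 366,      # annual
}

def is_stale(source: str, last_updated: date, today: date) -> bool:
    """Flag a data source whose age exceeds its expected update cadence."""
    return (today - last_updated).days > CADENCE_DAYS[source]
```

A report generator would run this check per source and surface any `True` results as the stale-data flags described in the next section.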
6. Uncertainty & Missing Prices
AIM explicitly flags uncertainty rather than hiding it:
- “Requires Quote” – Products with enterprise custom pricing or population-based licensing
- Confidence Levels – HIGH/MED/LOW ratings on pricing data based on source type
- Stale Data Flags – Warnings when data sources exceed expected update cadence
- Missing Input Warnings – Explicit flags when assessment inputs are incomplete
Honest Uncertainty
AIM prefers to say “This information was not provided” rather than invent data. All estimates are planning-level and should be validated with vendor quotes before procurement.
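The flag-don't-invent behavior can be sketched like this. The field names (`pricing_model`, `list_price`, `source`) and the confidence rule are illustrative assumptions, not AIM's actual schema; the flag values mirror the list above.

```python
def price_estimate(product: dict) -> dict:
    """Return a planning-level price estimate, flagging uncertainty
    rather than inventing a number (field names are hypothetical)."""
    if product.get("pricing_model") == "enterprise_custom":
        # Population-based or negotiated pricing: no estimate is emitted.
        return {"status": "requires_quote", "estimate": None}
    if "list_price" not in product:
        # Incomplete inputs are surfaced explicitly, never filled in.
        return {"status": "missing_input",
                "warning": "This information was not provided",
                "estimate": None}
    confidence = "HIGH" if product.get("source") == "vendor_api" else "MED"
    return {"status": "ok", "estimate": product["list_price"],
            "confidence": confidence}
```

The key design choice is that every output carries a status, so downstream reports can render warnings instead of silently mixing reliable and unreliable figures.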
7. AI Role Statement
How AI Is Used
- Narrative assistance – AI improves readability of report text
- Context normalization – AI helps structure free-text inputs into normalized constraints
- Pattern detection – AI assists in identifying architecture patterns from system descriptions
Deterministic Scoring
AI does not change scores. All RAO scores, tier assignments, and rankings are computed using deterministic, rule-based logic that can be reproduced and audited.
AI narrative assistance is clearly disclosed in reports. The underlying scoring methodology is fully transparent and documented.
8. For Auditors
If you are auditing an AIM-generated report or recommendation:
Check the Explainability Appendix
Reports include an appendix with methodology version, data freshness, scorecards, and reproducibility hashes.
Verify Hashes
Re-running the same assessment should produce matching input and output hashes.
Review Decision Runs
Decision runs are immutable audit records; once created, they cannot be altered or deleted.
Request Data Exports
Structured data exports are available upon request via privileged access.
Learn More About AIM
Explore how AIM works and understand the methodology behind recommendations.