AIM

Architectural Insight for Modernization


Decision Transparency

How AIM ensures reproducible, explainable, and auditable recommendations

AIM is designed to withstand skeptical oversight. Every recommendation can be explained, every score can be traced, and every decision can be reproduced. This page documents how AIM remains defensible under even a hostile audit.


1. How AIM Ranks Options

AIM uses RAO (Retrieval-Augmented Optimization) to score and rank modernization candidates. Scores are computed deterministically using:

  • Security – Lifecycle status, exposure, data sensitivity
  • Compliance – Regulatory fit (HIPAA, CJIS, PCI, etc.)
  • Cost – Total cost of ownership and budget alignment
  • Complexity – Operational complexity and skill requirements (inverted: lower is better)
  • Lock-in Risk – Vendor dependency and exit difficulty (inverted: lower is better)
  • Maturity – Market adoption and stability

📖 Detailed Methodology: See Scoring Methodology for a complete explanation of each dimension, what high/low scores mean, and how scores are curated for audit defensibility.

Candidates are assigned tiers based on score thresholds:

  • Recommended – Score ≥ 80
  • Acceptable – Score 65–79
  • Avoid – Score < 65
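The scoring and tiering logic above can be sketched as follows. The dimension weights and candidate values here are illustrative assumptions, not AIM's actual configuration; only the tier thresholds (≥ 80, 65–79, < 65) come from this page.

```python
# Illustrative sketch of deterministic RAO-style scoring and tiering.
# Weights and example values are hypothetical, not AIM's real config.

WEIGHTS = {
    "security": 0.25,
    "compliance": 0.20,
    "cost": 0.15,
    "complexity": 0.15,    # inverted dimension: lower raw complexity scores higher
    "lock_in_risk": 0.10,  # inverted dimension: lower lock-in scores higher
    "maturity": 0.15,
}

def rao_score(dimensions: dict[str, float]) -> float:
    """Weighted sum of 0-100 dimension scores (inversions already applied)."""
    return sum(WEIGHTS[name] * dimensions[name] for name in WEIGHTS)

def tier(score: float) -> str:
    """Map a composite score to the tiers documented above."""
    if score >= 80:
        return "Recommended"
    if score >= 65:
        return "Acceptable"
    return "Avoid"

candidate = {
    "security": 90, "compliance": 85, "cost": 70,
    "complexity": 75, "lock_in_risk": 80, "maturity": 88,
}
print(rao_score(candidate), tier(rao_score(candidate)))
```

Because the computation is a fixed weighted sum with fixed thresholds, the same inputs always yield the same score and tier, which is what makes the ranking reproducible.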

2. What AIM Is NOT

Not an ITSM or CMDB

AIM is a project-phase architecture planning tool, not a continuous IT service management or configuration management database. It produces point-in-time modernization plans for 3–24 month projects.

Not an Autonomous Decision Maker

AIM provides decision support, not decisions. All outputs are designed to be reviewed, edited, and approved by human architects and stakeholders. Humans remain accountable for all implementation decisions.

Not a Replacement for Expertise

AIM accelerates planning by structuring data and surfacing options. It does not replace subject matter expertise, procurement processes, or vendor negotiations.


3. Vendor Neutrality Statement

Platform Independence

  • No paid placements – AIM does not accept vendor sponsorships, referral fees, or kickbacks
  • Consistent methodology – All candidates are scored using the same RAO dimensions
  • Open competition – Any qualified vendor can bid on work derived from AIM assessments
  • Catalog-backed – Product recommendations come from a curated, version-controlled catalog

When AIM names specific products, these represent a solution baseline for planning and vendor comparison — not vendor preference.


4. Reproducibility & Audit Logs

Every recommendation and report generation is logged with:

| Field | Description |
|---|---|
| `input_hash` | SHA-256 hash of normalized assessment inputs |
| `output_hash` | SHA-256 hash of ranked outputs and scores |
| `methodology_version` | Version of scoring methodology used |
| `rao_version` | Version of RAO algorithm used |
| `created_by` | User who triggered the generation |
| `created_at` | Timestamp of generation |

Reproducibility Guarantee

Re-running an assessment with the same inputs, catalog version, and methodology version will produce matching hashes. This enables verification that no unauthorized changes have occurred.
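A minimal sketch of how such input/output hashes can be computed, assuming inputs are normalized to canonical JSON (sorted keys, fixed separators) before hashing. The field names match the audit-log table; the normalization scheme and payload shapes are assumptions for illustration.

```python
# Sketch of reproducibility hashing over normalized payloads.
import hashlib
import json

def content_hash(payload: dict) -> str:
    """SHA-256 over a canonical JSON serialization of the payload."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical assessment inputs and ranked outputs.
inputs = {"systems": ["crm", "erp"], "constraints": {"budget": 250_000}}
outputs = {"ranked": [{"candidate": "option-a", "score": 82.45}]}

record = {
    "input_hash": content_hash(inputs),
    "output_hash": content_hash(outputs),
}

# Re-running with identical inputs yields the identical hash, so any
# drift signals changed inputs, catalog, or methodology.
assert content_hash(inputs) == record["input_hash"]
```

Canonical serialization matters: without sorted keys and fixed separators, two semantically identical inputs could hash differently and break the reproducibility check.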


5. Data Freshness & Updates

AIM uses multiple data sources with defined update cadences:

| Source | Update Cadence | Notes |
|---|---|---|
| Cloud Provider APIs | Daily | AWS, Azure, GCP pricing |
| Software/SaaS Pricing | Monthly | Vendor public pricing pages |
| Hardware MSRP | Monthly | Dell, HP, Cisco, Palo Alto, etc. |
| BLS Labor Rates | Annual | Bureau of Labor Statistics OES data |
| Technology Catalog | As needed | Product lifecycle, scoring, metadata |

6. Uncertainty & Missing Prices

AIM explicitly flags uncertainty rather than hiding it:

  • “Requires Quote” – Products with enterprise custom pricing or population-based licensing
  • Confidence Levels – HIGH/MED/LOW ratings on pricing data based on source type
  • Stale Data Flags – Warnings when data sources exceed expected update cadence
  • Missing Input Warnings – Explicit flags when assessment inputs are incomplete
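A stale-data flag like the one above can be sketched against the update cadences from section 5. The cadence windows in days and the source keys are illustrative assumptions.

```python
# Minimal sketch of stale-data flagging based on expected update cadences.
from datetime import date

# Hypothetical cadence windows, in days, mirroring section 5.
CADENCE_DAYS = {
    "cloud_provider_apis": 1,     # daily
    "software_saas_pricing": 31,  # monthly
    "hardware_msrp": 31,          # monthly
    "bls_labor_rates": 366,       # annual
}

def stale_sources(last_updated: dict[str, date], today: date) -> list[str]:
    """Return sources whose age exceeds their expected update cadence."""
    return [
        source for source, updated in last_updated.items()
        if (today - updated).days > CADENCE_DAYS[source]
    ]

today = date(2025, 6, 1)
last = {
    "cloud_provider_apis": date(2025, 5, 31),   # 1 day old: fresh
    "software_saas_pricing": date(2025, 3, 1),  # 92 days old: stale
}
print(stale_sources(last, today))  # ['software_saas_pricing']
```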

Honest Uncertainty

AIM prefers to say “This information was not provided” rather than invent data. All estimates are planning-level and should be validated with vendor quotes before procurement.


7. AI Role Statement

How AI Is Used

  • Narrative assistance – AI improves readability of report text
  • Context normalization – AI helps structure free-text inputs into normalized constraints
  • Pattern detection – AI assists in identifying architecture patterns from system descriptions

Deterministic Scoring

AI does not change scores. All RAO scores, tier assignments, and rankings are computed using deterministic, rule-based logic that can be reproduced and audited.

AI narrative assistance is clearly disclosed in reports. The underlying scoring methodology is fully transparent and documented.


8. For Auditors

If you are auditing an AIM-generated report or recommendation:

1. Check the Explainability Appendix – Reports include an appendix with methodology version, data freshness, scorecards, and reproducibility hashes.

2. Verify Hashes – Re-running the same assessment should produce matching input and output hashes.

3. Review Decision Runs – Decision runs are immutable audit records that cannot be modified after creation.

4. Request Data Exports – Structured data exports are available upon request via privileged access.
