Our AI Approach
How The Freedom Project builds AI-powered tools with intention, independence, and integrity.
The Freedom Project Ethos
The Freedom Project was founded on a simple principle: technology decisions should be made with clarity, not confusion. In a landscape dominated by vendor narratives, sales-driven roadmaps, and opaque AI systems, we believe organizations deserve tools that work for them—not for someone else's bottom line.
This philosophy extends to how we build our own products. When we created AIM, we made deliberate choices about how AI would be integrated—choices grounded in our values, not just technical convenience.
More Than an AI Wrapper
Many tools today are simply interfaces layered on top of large language models—what the industry calls “AI wrappers.” They take your input, pass it to an AI, and return whatever comes back. The intelligence is borrowed, not built.
AIM is different. At its core, AIM uses a proprietary methodology we call Retrieval-Augmented Optimization (RAO)—a structured approach that retrieves relevant patterns and reference architectures, scores candidate solutions across multiple dimensions, and optimizes recommendations for your specific constraints.
The AI models we use are tools within this framework, not the framework itself. The intelligence is in the methodology—the structured analysis, the scoring dimensions, the constraint normalization, the architectural patterns. The AI helps us execute at scale; it doesn't define what we do.
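The retrieve–score–optimize flow described above can be sketched in code. This is a minimal illustration under stated assumptions, not AIM's actual implementation: the pattern library, the scoring dimensions (`cost`, `reliability`, `portability`), and the weighting scheme are all hypothetical stand-ins for the proprietary RAO methodology.

```python
from dataclasses import dataclass

# Illustrative scoring dimensions -- assumptions, not AIM's real ones.
@dataclass
class Pattern:
    name: str
    scores: dict  # dimension -> score in [0, 1]

def retrieve(library, required_dims):
    """Step 1: pull reference patterns that cover every required dimension."""
    return [p for p in library if required_dims <= p.scores.keys()]

def score(pattern, weights):
    """Step 2: weighted score across dimensions; weights encode the
    organization's normalized constraints."""
    return sum(weights[d] * pattern.scores.get(d, 0.0) for d in weights)

def optimize(library, weights):
    """Step 3: rank retrieved candidates and return the best fit."""
    candidates = retrieve(library, set(weights))
    return max(candidates, key=lambda p: score(p, weights), default=None)

library = [
    Pattern("managed-service", {"cost": 0.4, "reliability": 0.9, "portability": 0.3}),
    Pattern("self-hosted",     {"cost": 0.8, "reliability": 0.6, "portability": 0.9}),
]
weights = {"cost": 0.5, "portability": 0.5}  # this org prioritizes independence
best = optimize(library, weights)
```

The point of the structure is that the ranking logic is inspectable and deterministic; an AI model can help populate the library or draft candidate patterns, but the scoring and selection live in the methodology.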
Provider Independence
Just as we refuse to be tied to technology vendors in our recommendations, we refuse to be locked into any single AI provider. Our architecture is designed for flexibility—we can switch providers, models, or approaches as the field evolves.
This isn't theoretical. We actively evaluate providers based on quality, cost, and alignment with our values. When better options emerge, we can adopt them without rebuilding our entire system.
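In architectural terms, provider independence usually means a thin abstraction layer between the product and any given model API. A rough sketch of that idea, with hypothetical provider names and a deliberately minimal interface (AIM's real interface is not public):

```python
from abc import ABC, abstractmethod

class Provider(ABC):
    """Minimal provider interface -- a hypothetical illustration."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ProviderA(Provider):
    def complete(self, prompt: str) -> str:
        return f"A:{prompt}"  # stand-in for a real API call

class ProviderB(Provider):
    def complete(self, prompt: str) -> str:
        return f"B:{prompt}"  # stand-in for a different vendor's API

REGISTRY = {"a": ProviderA, "b": ProviderB}

def get_provider(name: str) -> Provider:
    # Swapping providers becomes a configuration change, not a rebuild.
    return REGISTRY[name]()
```

Because every provider satisfies the same interface, evaluating or adopting a new one means adding an adapter class, not rewriting the system around a vendor's SDK.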
What This Means for You
- Your data isn't locked into one provider's ecosystem
- We can improve quality by adopting better models as they emerge
- No single provider failure can take AIM offline
- We maintain negotiating power and avoid vendor dependency
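The resilience claim in the list above typically rests on a failover pattern: if the preferred provider is unavailable, requests fall through to the next one. A self-contained sketch, with stand-in callables in place of real provider clients:

```python
def complete_with_failover(providers, prompt):
    """Try each (name, call) pair in preference order; raise only if all fail."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append((name, exc))  # record and fall through to the next
    raise RuntimeError(f"all providers failed: {errors}")

def flaky(prompt):
    raise TimeoutError("provider unavailable")  # simulated outage

def backup(prompt):
    return prompt.upper()  # simulated healthy provider

used, result = complete_with_failover([("primary", flaky), ("backup", backup)], "ok")
```

The failure of one vendor degrades preference order, not availability.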
Ethical Considerations in Our AI Choices
When selecting AI providers, we consider more than just performance benchmarks. We evaluate:
Safety & Alignment
Does the provider invest in AI safety research? Are they transparent about their alignment practices? We prioritize providers who take responsible AI development seriously.
Data Privacy
How is your data handled? We select providers with clear policies against training on customer data and strong privacy commitments.
Transparency
Is the provider open about their models, limitations, and practices? We avoid black-box systems where we can't understand or explain the behavior.
Independence
Is the provider focused on AI, or is AI a side product of a larger agenda? We prefer providers whose primary mission aligns with building helpful, safe AI systems.
These considerations led us to choose providers who share our commitment to responsible AI development. We believe that how AI is built matters as much as what it can do.
What We Don't Do
Transparency means being clear about boundaries. Here's what AIM does not do with AI:
- We don't train models on your data. Your assessments, architectures, and business information are never used to train AI models—ours or anyone else's.
- We don't make black-box decisions. Every recommendation AIM provides is grounded in structured analysis. We can explain why a recommendation was made, not just what it is.
- We don't let AI make final decisions. AI assists analysis and generates insights, but human judgment remains central. AIM is a tool for decision-makers, not a replacement.
- We don't hide behind AI limitations. When AI has limitations or uncertainties, we surface them. You'll know what's confident analysis and what requires human validation.
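Two of these commitments, explainable recommendations and surfaced uncertainty, can be made concrete in a data shape. The structure below is a hypothetical sketch (the field names and the 0.8 confidence threshold are illustrative assumptions, not AIM's actual values):

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    choice: str
    rationale: list          # the "why", not just the "what"
    confidence: float        # surfaced, never hidden
    needs_human_review: bool # low confidence routes to a person

REVIEW_THRESHOLD = 0.8  # illustrative cutoff, not a real product setting

def finalize(choice, rationale, confidence):
    # AI assists the analysis; anything below threshold is explicitly
    # flagged for human validation rather than presented as settled.
    return Recommendation(choice, rationale, confidence,
                          confidence < REVIEW_THRESHOLD)
```

Carrying the rationale and confidence alongside every recommendation is what makes "we can explain why" a property of the system rather than a promise.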
Our Commitment
As AI continues to evolve, so will our approach. But certain principles will remain constant:
- We will remain provider-independent, never locked to a single vendor
- We will choose providers based on values, not just performance
- We will be transparent about how AI is used in our products
- We will keep human judgment at the center of decision-making
- We will continue building methodology-first, AI-assisted tools
This is what it means to build AI responsibly. Not as a marketing claim, but as a practice embedded in every decision we make.
Questions About Our Approach?
We're happy to discuss our AI practices in more detail. Reach out if you have questions about how AIM uses AI or how we select our providers.
View our FAQ →