There's a conversation happening in every enterprise boardroom right now. It sounds something like this.
"We need to invest in AI."
Great. Agreed. But invest in AI... on top of what, exactly?
That's the question nobody's asking loudly enough. And it's the question that keeps tripping organizations up. Not because AI is mysterious. But because the foundation underneath it determines whether AI actually works.
We built the MACH & AI Readiness Assessment because we got tired of watching that conversation happen in two separate rooms. Composable architecture in one meeting. AI strategy in another. As if they're unrelated. They're not. They're the same conversation. And the data proves it.
The Number That Started Everything
Earlier this year, the MACH Alliance published their 2026 Enterprise Technology Report. 600 enterprise IT decision-makers. Organizations with 5,000 or more employees. Real data, not vibes.
One number jumped off the page.
98% of organizations that have fully implemented composable technology feel confident their infrastructure can support AI at scale.
For organizations still in the planning stages of composable? That number drops to 33%.
Read that again. The gap between "fully composable" and "just getting started" isn't a gentle slope. It's a cliff. And it has almost nothing to do with AI expertise. It has everything to do with architecture.
The Problem We Kept Seeing
At Fidget Labs, we work with organizations at every stage of the composable and AI journey. Some are ditching their monolith for the first time. Some have a mature MACH stack and want to layer in AI capabilities. Some are doing AI on top of legacy platforms and wondering why things keep breaking.
Here's the pattern we kept seeing. Organizations that score high on composable maturity but low on AI adoption are sitting on a goldmine they don't realize they have. Their architecture is already designed for the modularity, API connectivity, and data flow that AI demands. They just haven't turned the key.
Meanwhile, organizations investing heavily in AI without a composable foundation are running into walls. Integration complexity. Legacy bottlenecks. Projects that stall. The MACH Alliance data backs this up. Only 29% of organizations at early composable stages report zero AI project failures. For fully composable organizations? 51%.
That's not a coincidence. That's architecture eating strategy for breakfast.
So We Built a Thing
The MACH & AI Readiness Assessment is a free, interactive quiz that scores your organization on two separate axes.
Composable Maturity. How far along the MACH journey you actually are. Not how far you think you are. We ask about your architecture state, platform replaceability, legacy system impact, and deployment frequency. These are the questions that separate "we have a headless CMS" from "our systems are genuinely composable."
AI Readiness. Whether your organization and architecture can support AI at scale. We cover AI adoption level, use cases you've deployed, infrastructure confidence, project success rate, and how you measure ROI. Because having ChatGPT in your Slack channel is not the same as strategic AI implementation.
Then there are five cross-cutting dimensions that feed both axes. Data governance. Skills and talent. Leadership mindset. Integration quality. Governance and standards. These are the organizational factors that either accelerate or block progress on both fronts.
18 questions. About 10 minutes. No signup required.
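To make the two-axis idea concrete, here is a minimal sketch of how a deterministic weighted scoring engine like the one described above could work. The question IDs, weights, and 0–4 option scale below are illustrative assumptions, not the actual rubric.

```typescript
// Hypothetical sketch — question IDs, weights, and the 0-4 option scale
// are assumptions for illustration, not the real assessment rubric.
type Axis = "composable" | "ai";

interface Question {
  id: string;
  axis: Axis | "both"; // cross-cutting dimensions feed both axes
  weight: number;      // relative importance of the question
}

const QUESTIONS: Question[] = [
  { id: "architecture_state", axis: "composable", weight: 3 },
  { id: "platform_replaceability", axis: "composable", weight: 2 },
  { id: "ai_adoption_level", axis: "ai", weight: 3 },
  { id: "infrastructure_confidence", axis: "ai", weight: 2 },
  { id: "data_governance", axis: "both", weight: 1 },
];

// answers: question id -> chosen option score in 0..4
function scoreAxes(answers: Record<string, number>): { composable: number; ai: number } {
  const totals = { composable: 0, ai: 0 };
  const max = { composable: 0, ai: 0 };
  for (const q of QUESTIONS) {
    const raw = answers[q.id] ?? 0;
    const axes = q.axis === "both" ? (["composable", "ai"] as const) : ([q.axis] as const);
    for (const axis of axes) {
      totals[axis] += raw * q.weight;
      max[axis] += 4 * q.weight; // 4 = highest option score
    }
  }
  // Normalize each axis to 0-100 so the two scores are comparable.
  return {
    composable: Math.round((totals.composable / max.composable) * 100),
    ai: Math.round((totals.ai / max.ai) * 100),
  };
}
```

The useful property of this shape: "both"-axis questions model the cross-cutting dimensions, raising or lowering both scores at once, which matches how factors like governance accelerate or block progress on both fronts.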
What You Actually Get
When you finish, you land on a results page that doesn't make you wait. Your scores animate in immediately. You get placed on a quadrant chart showing where you sit relative to four profiles.
Foundation Phase (low composable, low AI). You're early on both fronts. That's not a bad thing. You get to build both capabilities in parallel and avoid the mistakes of organizations that rushed into AI on fragile architecture.
Ready to Launch (high composable, low AI). You've built the foundation. Now activate it. The MACH Alliance data says 94% of fully composable organizations report that their architecture significantly increases AI deployment speed. You have the engine. Turn the key.
Disconnected Innovation (low composable, high AI). This one's spicy. You're investing in AI without the architectural foundation to scale it. This is the riskiest quadrant. It's where AI projects stall, integration costs balloon, and teams burn out fighting their own tech stack.
Composable Intelligence (high composable, high AI). You're at the frontier. The report shows 99% of organizations in this position are seeing measurable AI outcomes, averaging four distinct business results. Your focus shifts to optimization and governance.
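Mapping two scores onto those four profiles is a simple two-threshold classification. A sketch, assuming normalized 0–100 scores and a midpoint cutoff (the 50-point threshold is our illustrative assumption, not a published cutoff):

```typescript
// Illustrative sketch — the 50-point threshold is an assumption,
// not the assessment's actual cutoff.
type Quadrant =
  | "Foundation Phase"
  | "Ready to Launch"
  | "Disconnected Innovation"
  | "Composable Intelligence";

function quadrantFor(composable: number, ai: number, threshold = 50): Quadrant {
  const highComposable = composable >= threshold;
  const highAi = ai >= threshold;
  if (highComposable && highAi) return "Composable Intelligence"; // high on both
  if (highComposable) return "Ready to Launch";          // high composable, low AI
  if (highAi) return "Disconnected Innovation";          // low composable, high AI
  return "Foundation Phase";                             // early on both fronts
}
```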
Below the quadrant, you get benchmarked against the MACH Alliance dataset. Not generic percentages. Specific comparisons based on your composable stage, AI level, and industry.
Then the AI kicks in. We use Claude to generate a personalized narrative that references your actual answers. Two people in the same quadrant with different answer profiles get different analyses. A CTO at a 200-person company gets different framing than a Digital Transformation Lead at a 10,000-person enterprise. That's the whole point. Generic insights help nobody.
Why We Made It Free. And Ungated.
Let's talk about this directly, because you're probably wondering.
Yes, Fidget Labs is a consultancy. Yes, we would love to work with you. But the assessment is genuinely useful on its own. You get your scores, your quadrant, your benchmarks, and a personalized AI analysis without giving us a single piece of information about yourself.
If you want the deep-dive PDF report with expanded benchmarks, a phased action plan, and sector-specific insights, that's where we ask for an email. Fair trade. We send you a comprehensive report. We get a signal that you might be interested in a conversation. That's it. One email. No spam. No "just checking in" sequences.
We built it this way because we believe the best lead generation is actually helping people. If the free results give you what you need, great. If they spark a conversation about next steps, even better. Either way, you walk away with something useful. That felt like the right way to do it.
The Data That Powers It
Everything in the assessment traces back to the MACH Alliance Enterprise Technology Report 2026. Here are some of the findings that shaped how we built the questions and scored the results.
92% of organizations have implemented or are actively adopting composable technology. This is no longer a niche strategy. It's mainstream.
78% use AI in-house or consider themselves ahead of competitors. But 88% still face barriers implementing it. Adoption is high. Successful implementation is a different story.
37% cite integration complexity as their top AI barrier. 32% cite skills shortages. 26% cite legacy technology. These aren't AI problems. They're architecture and organizational problems. That's why the cross-cutting dimensions matter so much in the assessment.
89% believe standards for AI in composable environments are missing. Not "could use improvement." Missing. The governance question in our assessment consistently scores lowest across all quiz takers. That tracks.
87% of fully composable organizations say leadership has the right mindset for composable and AI. For organizations still in the planning stage? 33%. Leadership mindset isn't just a soft metric. It's a predictor of success.
What We're Learning from Early Results
Without sharing anything identifying, we can point to some patterns emerging from the assessment data.
The most common quadrant is "Ready to Launch": organizations with solid composable foundations that haven't activated AI on top of them. This mirrors the MACH Alliance finding that composable adoption is widespread but AI implementation is still catching up.
The governance dimension scores lowest across almost everyone. This makes sense given 89% of the surveyed enterprises agree that governance standards are missing. It's hard to score well on something the entire industry is still figuring out.
There's a consistent gap between how people describe their composable journey in the first question and what their architecture actually looks like based on subsequent answers. Lots of folks say "widely implemented" but then describe systems that are mostly bundled with a few standalone tools. Honest assessment is hard. The quiz is designed to surface that gap gently.
How We Actually Built It
For the technically curious. The assessment runs on Next.js 16 with App Router, deployed on Vercel at readiness.fidgetlabs.io. The scoring engine is entirely deterministic. No AI involved in the math. Your composable score and AI score are calculated from your answers using weighted point values that map to the MACH Alliance report's framework.
The AI layer comes in at two points. After you complete the quiz, we stream a personalized analysis using Claude via the Vercel AI Gateway. This is role-aware. A C-Suite executive gets a scannable bullet-format briefing. An Enterprise Architect gets three paragraphs of technical depth. Same data, different delivery.
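Role-aware delivery can be as simple as swapping the format instructions in the prompt while keeping the data constant. A hypothetical sketch — the role names and instruction strings below are illustrative, not the real prompt templates:

```typescript
// Hypothetical sketch of role-aware prompting. Role names and format
// instructions are illustrative assumptions; the real templates aren't public.
const FORMAT_BY_ROLE: Record<string, string> = {
  "C-Suite":
    "Write a scannable briefing: 5-7 bullets, business outcomes first, minimal jargon.",
  "Enterprise Architect":
    "Write three paragraphs of technical depth: integration patterns, data flow, governance.",
  "Digital Transformation Lead":
    "Write a narrative connecting architecture maturity to program-level milestones.",
};

function buildAnalysisPrompt(
  role: string,
  scores: { composable: number; ai: number }
): string {
  const format = FORMAT_BY_ROLE[role] ?? "Write a concise, plain-language summary.";
  // Same underlying data for every role; only the delivery instructions change.
  return [
    `The respondent is a ${role}.`,
    `Composable maturity score: ${scores.composable}/100. AI readiness score: ${scores.ai}/100.`,
    format,
  ].join("\n");
}
```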
If you opt in for the PDF report, a second Claude call generates an expanded analysis with sector-specific insights and a five-point prioritized action plan. That gets rendered into a branded PDF and emailed to you through Resend.
The benchmark data is injected into the AI prompts deterministically. Meaning we don't ask Claude to guess what percentage of enterprises are at a given composable stage. We look it up from the report data and feed Claude the exact number. This prevents hallucinated statistics. Every number you see traces back to the 600-respondent dataset.
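A sketch of what that deterministic injection could look like. The lookup keys are illustrative assumptions; the figures are the ones quoted earlier in this post:

```typescript
// Sketch of deterministic benchmark injection. Lookup keys are
// illustrative; the figures are the statistics quoted in this post.
const BENCHMARKS: Record<string, string> = {
  "fully-composable:ai-confidence":
    "98% of fully composable organizations feel confident their infrastructure can support AI at scale.",
  "planning:ai-confidence":
    "33% of organizations still planning composable feel confident their infrastructure can support AI at scale.",
  "fully-composable:zero-failures":
    "51% of fully composable organizations report zero AI project failures.",
  "early:zero-failures":
    "29% of organizations at early composable stages report zero AI project failures.",
};

// Look up the exact figure and hand it to the model verbatim,
// so the prompt never asks the model to estimate a statistic.
function benchmarkLine(stage: string, metric: string): string {
  const line = BENCHMARKS[`${stage}:${metric}`];
  if (!line) {
    throw new Error(`No benchmark for ${stage}/${metric}; refusing to let the model guess.`);
  }
  return `Cite this statistic verbatim: ${line}`;
}
```

Failing loudly on a missing key, rather than falling back to the model's own recall, is what keeps every displayed number traceable to the dataset.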
Take It. Tell Us What You Think.
The assessment is live at readiness.fidgetlabs.io. 10 minutes. 18 questions. Two scores. A personalized analysis. Benchmarked against 600 enterprise decision-makers.
If you're in the middle of a composable migration, it gives you a snapshot of where you are. If you're pitching AI investment to leadership, it gives you data to anchor the conversation. If you're just curious where your organization stacks up, it scratches that itch.
We'd genuinely love to hear what you think. What landed. What didn't. What surprised you about your score. The feedback shapes how we improve the tool.
And if your results spark a question about next steps. Well. You know where to find us.