From Manual Model Switching to Multi-Model AI Orchestration

SANDBOX

Experimental Field Notes from the End-User Frontier

Sandbox lesson #4: working with multiple, separate, individual AI subscriptions is inefficient, time-consuming, expensive, and very likely error-inducing. There are better ways.

In the sandbox experiments we’ve conducted to date, we’ve worked with multiple, separate, individual subscriptions to ChatGPT, Claude, Gemini, Mistral, Llama, and a few others, manually switching between open tabs and windows to access each platform. The switching was necessary: capability advancements produce continuous leapfrogging and accelerated obsolescence - an unstable base to work from that makes model-switching capability an essential element of our testing protocols. The problem was that it was also inefficient, time-consuming, expensive, and very likely error-inducing.

Is there a better way? The question has taken us on a round-the-world tour of technology options. We surveyed the field and soon found ourselves in the realm of AI test benches and other developer-side tools. For us - as technology end-users - it opened up a whole new world of possibilities. In our scientific judgment, this was just seriously cool stuff. Becoming amateur developers has its appeal, especially now that AI has introduced surface-level democratization of coding skill. For us, it has to be balanced against a key point: as a boutique consultancy, our promise to clients is cost-efficiency and thick expertise. This was a rabbit hole we couldn’t afford to indulge; the work is best left to appropriately competent implementers.

Horizontal orchestration. Our sandbox discussions with developers and data scientists (you know who you are) at Cognit DX, Control Risks, and a big tech company that shall remain nameless, helped steer us in the right direction. We followed breadcrumbs until we discovered the world of AI “horizontals” and AI “orchestration”. Industry reports define this in a number of different ways. That probably says something about how quickly the technology has been evolving, from simple workflow automation to sophisticated multi-agent coordination. It also has implications for different market segments, from individual researchers to enterprise deployments, and for how each approaches platform selection and adoption. That’s a bigger piece of analysis we’ll cover separately.

Labeling the practice. What exactly is AI orchestration? A quick skim of industry reports published over the last few years provides a sense of flavour. There are some standout concepts: Deloitte on “orchestrating workforce ecosystems” and “orchestrating complex transformations”, Gartner on “service orchestration and automation frameworks” (“SOAP”) and “application orchestration frameworks”, or McKinsey on “orchestration and modularization” and “orchestration engines”. The big tech companies are on this too, of course - see Amazon’s “multi-agent collaboration” and “custom orchestrator” feature. Another intriguing frame is Eric Broda’s “agentic-AI mesh” or “data mesh”.

Defining the practice. Earlier this year, San Francisco-based VentureBeat explained that “2025 will be the year of AI orchestration.” It’s now a rhetorical commonplace, and the cash numbers being thrown at it mean the words aren’t mere puffery. Market.us defines it as “a technological framework that automates and integrates various AI models, tools, and workflows within an organization.” For SNS Insider, AI orchestration deals in “management and synchronization of diverse AI technologies and processes in order to produce desired results”, by “combining several AI models, algorithms, and tools into a single framework to tackle difficult challenges and improve AI workflows.” Research and Markets describes it this way:

As organizations deploy multiple AI models and manage complex data pipelines, workflow intricacies multiply and demand unified governance. AI orchestration has emerged as the keystone that aligns model training, deployment, monitoring, and scaling with robust cloud infrastructure, data governance policies, and security protocols.

Model switching for pros. AI orchestration or coordination is a great concept that should resonate with AI users at any scale. Following our own requirements for viable low-cost options, we found several that will work well for independent professionals, entrepreneurs, and other serious individual conductors of their own personal AI orchestras. The big companies like OpenAI and Anthropic offer access to multiple models within their own stables, but that’s not the same type of model selectability that we’re interested in. Perplexity, which provides unified interface access to ChatGPT, Claude, Gemini, Grok, and its own Sonar models, is probably the best known. Poe (at poe.com) offers access to dozens of models. We recently discovered another one, GMTech, that is superb. They all have very intuitive interfaces, and personal subscription pricing - with built-in access to multiple models - is commensurate with the price of a subscription to just one of those direct-play models.
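For the technically curious, the core idea behind these unified-interface platforms can be sketched in a few lines. This is a minimal, hypothetical illustration only - the model names are placeholders and the stub clients simply echo the prompt rather than calling any real vendor API - but it shows the two moves that matter to an end-user: routing one prompt to a chosen backend (manual model switching) and fanning the same prompt out to every backend at once (horizontal orchestration).

```python
# A minimal sketch of "horizontal" model switching: one prompt, many
# interchangeable backends behind a single interface. Model names and
# clients are hypothetical stand-ins, not real vendor SDKs.
from dataclasses import dataclass
from typing import Callable, Dict


def make_stub(model_name: str) -> Callable[[str], str]:
    """Build a fake model client that just echoes the prompt, so the
    sketch runs without network access or API keys."""
    def respond(prompt: str) -> str:
        return f"[{model_name}] {prompt}"
    return respond


@dataclass
class Orchestrator:
    backends: Dict[str, Callable[[str], str]]

    def ask(self, model: str, prompt: str) -> str:
        """Route a prompt to one named backend (manual model switching)."""
        return self.backends[model](prompt)

    def fan_out(self, prompt: str) -> Dict[str, str]:
        """Send the same prompt to every backend and collect the answers."""
        return {name: client(prompt) for name, client in self.backends.items()}


orchestra = Orchestrator(
    backends={name: make_stub(name) for name in ("gpt", "claude", "gemini")}
)

print(orchestra.ask("claude", "Summarise this report."))
for name, answer in orchestra.fan_out("Compare these sources.").items():
    print(name, "->", answer)
```

In a real deployment, each stub would wrap a vendor client behind the same one-prompt-in, one-answer-out signature; that shared interface is what lets a platform like Perplexity or Poe swap models without the user changing workflow.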

Barriers to entry. There are other communities and platforms (like Hugging Face, which provides access to hundreds? thousands? of models), which are also very good but assume a user-base with more advanced technical skills. The promise of AI-enabled knowledge democratization, and the coherence and efficiency achieved through orchestration suites or frameworks, are lofty and worthwhile benchmarks. How real are they at present? Our view is that affordability, and therefore access, is still a significant barrier to entry for serious AI-enabled research done by individuals. There is clearly a weight of investment in developer-side and enterprise-level orchestration. That makes good sense, as functional orchestration will logically benefit more complex systems. How does that compare to the field of AI orchestration technologies priced for individual end-users in domains other than tech?

Case study: Feedly pricing. Our sense is this: the range of options available at the independent professional level is much less developed, especially in relation to AI tools for research and intelligence purposes. A good example of this is Feedly, a news aggregation tool that’s been around for a long time. It’s always been an excellent and highly affordable option for anyone who has to track high volumes of online content. The current pricing model ranges from a starter “Pro” subscription at US$6.99/month to a “Pro+” subscription at US$12.99/month. There’s very little AI capability built into the Pro tiers. The Enterprise tier, on the other hand, has robust AI capabilities, but the pricing means only the very well-heeled can afford it: individual subscriptions are somewhere north of US$1600/month to US$2400/month. The yawning gap between Pro and Enterprise feels to us like a missed business opportunity for developers and vendors. Moreover, it could force some individual end-users to re-evaluate the otherwise daunting cost of an alternative in-house build.

Get in touch to find out more about how we can help your in-house research team

CKS is a trading name of Craighead Kellas, a private limited company registered in the United Kingdom (company registration #15995326) and in the United Arab Emirates.

©Craighead Kellas 2025. All Rights Reserved. CraigheadKellas.com.
