MIMOSA PROTEST: What Happens When You Take LLMs Out of the Lab

THE MIMOSA SANDBOX

Experimental Field Notes from the End-User Frontier

Ethan Mollick, the author of Co-Intelligence and Co-Director of the Generative AI Lab at the Wharton School of the University of Pennsylvania, offered this comment last week: “Many of the most important ‘prompt engineering’ skills are just management skills: clearly understanding the task to be done and what information is needed to do it; explaining the task to the AI; giving useful feedback to improve outputs; and generalizing lessons learned into a process.” Reading his post, I thought it lent itself well to dispelling end-user angst about the future of education and work. My comment, in response: “Sure, but also just thinking skills. If you're any kind of trained #thinker, #researcher, #investigator, #designer, #analyst, [or] #planner… you've already got what you need to figure out how to effectively prompt your #LLM.” Some thoughts:

  • HUMANITIES OR BUST. I followed up with some unsolicited advice to tech end-users: “Don’t internalize flawed LLM responses as an indication of poor skills on your part. They might actually be poor, or they might be awesome, or they might be somewhere in between or in need of some remedial instruction. Either way, you’re not the two-year-old infant flailing about. That’s your LLM.” Alex Strick, a former colleague and fellow traveller (of sorts), approved: “Yes! People with a grounding in humanities / related disciplines are perfectly primed to get the most out of LLMs + their associated tools / wrappers.” The “humanities” cover a lot of territory, and academic research methods vary wildly in their substance and real-world utility. Meanwhile, the “Arts, Humanities, and Social Sciences” (“AHSS”) remain an oft-neglected yet increasingly important dimension of STEM-centric R&D. The point demands much closer attention to how people are trained to think, and to which of those cognitive skillsets work best in an AI-first world.

  • NOWHERE TO HIDE. Enter Microsoft, which over the weekend culled 6,000 employees, including some directly involved in AI development. Humanities graduates are traditionally the ones wondering what their post-degree careers will look like; computer scientists and coders, meanwhile, have been side-eyeing AI development for a while, wondering when the axe will fall. That alone would seem to upend the idea that tech skills are a natural insulator against AI-induced workforce disruption. Overall, it’s a mixed picture. I used Perplexity Sonar, ChatGPT (GPT-4o), and Gemini 2.5 Flash to pull up lists of 2024 and 2025 news items on the issue and prompted each to distill its list’s key takeaways. All three covered largely the same sources and offered broadly the same observations, but with a noticeable difference in tone and emphasis. GPT-4o landed somewhere between neutral and optimistic; for Gemini, “workforce anxiety and job displacement” topped the list.
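The tone-and-emphasis comparison above can be made a little more systematic. A minimal sketch, assuming stand-in response strings and an illustrative keyword list (neither is drawn from the actual model outputs), might score each model’s distilled takeaways for anxiety-flavoured versus optimism-flavoured vocabulary:

```python
# Sketch: surface differences in tone/emphasis across LLM summaries by
# counting anxiety- vs optimism-flavoured terms. The keyword sets and the
# response strings below are illustrative placeholders, not real outputs.
import re
from collections import Counter

ANXIOUS = {"anxiety", "displacement", "layoffs", "risk", "disruption"}
OPTIMISTIC = {"opportunity", "augmentation", "productivity", "upskilling", "growth"}

def emphasis_profile(text: str) -> dict:
    """Count keyword hits from each tone set in a response."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return {
        "anxious": sum(words[w] for w in ANXIOUS),
        "optimistic": sum(words[w] for w in OPTIMISTIC),
    }

responses = {  # hypothetical stand-ins for the models' distilled takeaways
    "gpt-4o": "AI brings productivity gains and upskilling opportunity despite some risk.",
    "gemini-2.5-flash": "Workforce anxiety and job displacement dominate; layoffs signal disruption.",
}

for model, text in responses.items():
    print(model, emphasis_profile(text))
```

A real version would swap the placeholder strings for the models’ actual summaries and a keyword list tuned to the coverage at hand; the point is only that tonal divergence between models can be measured, not just eyeballed.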

  • TARRED AND WEATHERED. My own preference is solidly biased in favor of what my experience - as an AHSS-trained academic, as a field investigator, and as an institutionally embedded analyst - has shown me repeatedly: that “technology-assisted research” (TAR) isn’t new, and that learning new interfaces, operations, and functionalities is a fundamental feature of knowledge work. Being able to pick up new tools and build the experience to apply them effectively matters. While the anthropomorphic packaging of AI can understandably inspire existential shivers, it shouldn’t be given the space to obscure the benefits. This latest thing should be sensibly and effectively integrated into what we do. That means research, training, and tools to get us past scary new unknowns. The open questions surrounding these issues have to be confronted head-on, with serious, user-centric approaches, not assumptions and snake oil.

  • THE MIMOSA EXPERIMENT. Where does that leave us? The hardworking staff at CKS take the assumptions and open questions seriously enough that for the last three days we’ve been running an intensive in-house experiment, MIMOSA PROTEST. Its basic premise is that technology end-users - the ones who receive the technology developed in design labs - aren’t “lay users” or “non-technical” users. They’re technical specialists in their own fields of work, and they need better on-ramps for developing, taking ownership of, and putting these bits of technical wizardry to the test - without being mystified or ensorcelled by their STEM-side origins and flavouring. They need to field-test, fine-tune, and build trust in these tools on their own terms - applying them to their own requirements, ways of working, and value calculations. The initial results of MIMOSA PROTEST are interesting, pointing to a user-enabling research architecture for benchmarking, adopting, and applying AI tools. We’ll be sharing some of the readouts over the next week. Watch this space.


Get in touch to find out more about how we can help your in-house research team

CKS is a trading name of Craighead Kellas, a private limited company registered in the United Kingdom (company registration #15995326) and in the United Arab Emirates. ©Craighead Kellas 2025. All Rights Reserved. CraigheadKellas.com.
