Policy on Responsible AI
EXECUTIVE SUMMARY
At Craighead Kellas (CKS), our position on the use of artificial intelligence is grounded in the production of evidence and insight. We privilege methodological transparency, critical scrutiny, and a deep commitment to distinguishing evidence-based insight from unsupported assertion. We view artificial intelligence as a valuable but still experimental addition to technology-assisted research. It has the potential to be an important enabler of the evidence-based approach that defines our work. With this in mind, we energetically use and test AI tools, under controlled circumstances and within the context of our work.
We are nonetheless clear-eyed about the constraints and opportunities associated with artificial intelligence. We integrate AI technologies in our work to augment processing capability, analytical reach, and organizational efficiency. We do not defer or default to AI technologies. They reflect the data, algorithms, and design choices of their technical developers. We remain mindful of the inheritances and back-end controls that may continue to influence or route their performance. We do not substitute AI-generated logic or outputs for human reasoning, professional end-user skills and competencies, human skepticism and common sense, or context-aware interpretation.
At CKS, responsibility for determining AI value and application begins and ends with us.
*****
I. Governance & Policy
To ensure alignment with emerging standards, evolving best practices, and new regulatory requirements, CKS reviews and updates this policy regularly. We are committed to the responsible, transparent, and effective deployment of AI: always in the interests of our clients and firmly anchored to our philosophy, principles, and policies.
II. Quality Management
CKS adheres to rigorous academic, professional, and industry standards. We strive to meet relevant privacy, security, and other requirements, and to conform to ISO quality management standards. We view proactive quality assurance and responsive quality controls as essential components of our research practice. The commitments listed below reflect and supplement that position.
Responsibilities: Responsibility for AI management and use sits with CKS and its people, and their application of qualified intellectual discipline and methodological best practice.
Consistency: We apply the same critical scrutiny to algorithmic results as we do to information sourced through any other technology, platform, or repository.
Provenance: Our AI-assisted work is closely monitored, meticulously logged, and carefully traced to safeguard against abuses or degradation of provenance indicators.
Limitations: We use standalone AI outputs for demonstration purposes only, and never as expedient shortcuts under work, time, or other pressures.
Assurances: We apply best practice quality assurance protocols, including mandatory review and comprehensive documentation of AI processes and products, regular monitoring of tools for accuracy and capability variances, and independent review and audit.
Competencies: We engage in continuous professional development, ethics training, collaboration with AI experts, review of emerging technologies, and publication of our views and findings. These practices ensure we remain at the forefront of responsible AI use.
III. Integration Philosophy
Artificial intelligence is an essential addition to technology-assisted research capabilities. We manage and use AI technologies as tools that facilitate elements of our research design and workflow. We categorically reject the anthropomorphization of AI tools. These systems are neither “colleagues” nor “research assistants”. Nor do we “collaborate” with them, any more than we would collaborate with a search engine or cite a spellcheck utility as co-author.
Direction of Flow: Crucially, we recognize that neither human users nor AI technologies are intrinsically objective or subjective. Objectivity in research is the goal and result of properly designed, transparent, and reproducible methodologies, robust critical examination, and explicit acknowledgment of bias. Any suggestion that AI “does the objective part” while human intervention introduces subjectivity inverts the agency that defines the process and misrepresents the strengths and weaknesses of these tools.
Specific Applications: We have identified specific areas where AI technologies enhance our research capabilities without compromising our standards. In research enhancement, AI tools accelerate document analysis and digitization, identify patterns in complex datasets, expedite literature reviews, provide multi-language research support, and improve data synthesis and visualization. These applications allow our researchers to focus on higher-value analytical tasks while maintaining comprehensive coverage of relevant information.
Research Integrity: We do not use AI for final analysis and interpretation, strategic recommendations without expert validation, client-facing deliverables without human review, or evidence evaluation and credibility assessment. These core research functions require judgment, experience, and contextual understanding that define professional research expertise. Our quality control processes explicitly prevent AI from bypassing established verification mechanisms, replacing peer review, substituting for domain expertise, or making automated decisions about research methodologies. These guardrails ensure that AI enhancement never becomes AI dependence.
IV. Experimentation & Innovation
AI technologies are in their infancy and fundamentally experimental. Our approach to AI flows from this point. We follow a dual track in which we use AI tools in our work while simultaneously investigating and testing the limits of what AI can do. This experimental methodology ensures that our AI integration remains both cutting-edge and empirically grounded. We apply the same rigorous testing and validation standards to our AI tools that we apply to our research processes and outputs, creating an iterative improvement cycle that strengthens both our capabilities and our methodological foundation.
Thought leadership: We research AI effectiveness, develop and test novel AI-assisted research methodologies, and create proprietary frameworks and benchmarks for testing client-specific challenges. This approach positions us to understand AI capabilities and limitations, innovate technology and methods stacks, and contribute to the broader discourse on AI. The investment generates intellectual property and positions us as thought leaders.
Prototyping Efforts: Our prototyping efforts focus on experimental AI tools for specialized research applications, development of AI-assisted provenance tracing systems, and creation of proprietary frameworks tailored to specific client challenges. We test emerging technologies through controlled experiments, comparing AI-assisted versus traditional research methods and developing and publishing metrics that evaluate AI's contribution to research quality.
Continuous Monitoring: Our AI position evolves through structured evaluation and refinement. Quarterly assessments examine AI tool effectiveness, annual policy reviews incorporate lessons learned and technological advances, and continuous client feedback shapes our approach. We monitor industry benchmarks while pursuing innovation opportunities through pilot programs, research partnerships, and custom tool development.
V. Risk Management & Mitigation
We actively manage the risks associated with AI deployment, including AI hallucination and misinformation, data privacy and confidentiality concerns, algorithmic bias in research outputs, the potential for over-reliance on AI tools, and cybersecurity threats and vulnerabilities. Our risk management framework addresses these challenges through systematic monitoring, verification, and escalation procedures.
Threats, Vulnerabilities, & Risks: We guard against AI-system failures, compromised research integrity, erosion of client trust, compliance challenges, and competitive disadvantage. We do this through comprehensive 360-degree reviews of our AI resources, ways of working, and engagements. Regular threat, vulnerability, and risk assessments ensure our AI policy and activities can be adjusted in step with technological innovation and regulatory change.
Confidentiality, Data Protection, & Privacy: CKS adheres to the highest standards of client confidentiality and data stewardship. AI tools are only used in full compliance with all relevant privacy laws, codes of practice, client obligations, and contractual requirements. Sensitive or confidential material will never be processed with AI systems that transmit data to uncontrolled third-party environments. We maintain comprehensive safeguards, both technical and procedural, to ensure client data security at every project stage.
Transparency for Clients & Stakeholders: Our research designs explicitly detail the tools we use, including AI technologies. We maintain complete transparency with clients about AI tool usage, and the use of AI technologies requires informed client consent. When we use AI to support a specific deliverable, its role and limitations are disclosed and can be explained in detail upon request. We invite questions and engagement on our methods and strive to maintain the trust that comes from open, clear communication about our work.
*****
DATE OF PUBLICATION: AUGUST 4, 2025.
FEEDBACK ON THIS POLICY CAN BE EMAILED TO OFFICE@CRAIGHEADKELLAS.COM