Last reviewed: 13 April 2026
How Social Media Resilience Ltd uses artificial intelligence responsibly in our work, content and services.
Social Media Resilience Ltd uses artificial intelligence as a professional tool to support the quality and efficiency of our work. We are not an AI company, and we do not offer AI-powered products to end users.
Our programmes teach critical thinking, AI literacy and the ability to recognise manipulation and algorithmic influence. It would be inconsistent with that mission to use AI carelessly or without transparency. We hold our own AI use to the same standards of scrutiny we encourage in others.
Important: AI is not used in any client-facing or participant-facing interactions. Our sessions, workshops, assessments and impact reports are created and delivered by human specialists. AI supports our internal work only.
We use AI to assist with:
- drafting and editing blog posts, articles and educational resources (all reviewed and edited by a human before publication);
- researching and summarising academic literature, statistics and policy documents;
- generating first drafts of marketing copy and case study summaries (all reviewed before use);
- transcribing and summarising internal meeting notes.
We also use AI to:
- analyse anonymised, aggregated assessment data to identify patterns;
- generate first drafts of impact report templates, which are verified and finalised by our team;
- support the development of session frameworks and facilitation guides.
Nothing generated by AI is published, delivered, or shared with clients without being reviewed, edited, and approved by a member of our team. AI outputs are treated as drafts, not finished work.
We do not use AI in the following contexts:
- Participant interactions: no AI during workshop delivery, live sessions, parent evenings or any direct interaction with participants.
- Safeguarding decisions: all safeguarding judgements are handled exclusively by trained human facilitators.
- Individual assessment data: AI is not used to analyse or make decisions about individual participant data.
- Personalised client advice: all recommendations to organisations are produced by human specialists.
- Automated decision-making: we do not use AI to make automated decisions that affect people.
- Hiring and HR decisions.
Our AI use is governed by eight principles, informed by the EU AI Act and UK Government AI guidance.
1. Transparency: We are open about where and how AI is used in our work.
2. Human oversight: A human must review and approve all AI-assisted outputs before use.
3. Accuracy: We verify AI-generated content against authoritative sources before publication.
4. Fairness: We actively limit AI use to contexts where bias risk is low and review outputs before publication.
5. Privacy: We do not input personal data, client information, or identifiable participant data into AI tools.
6. Accountability: Our Founder and Director is responsible for AI use within SMR.
7. Intellectual property: We take care not to use AI in ways that reproduce or infringe third-party IP.
8. Environmental responsibility: We use AI tools proportionately and purposefully, not as a default.
Content substantially written with AI assistance will be labelled. Content that a human wrote, with AI assisting only with research or editing, will not be labelled unless the AI contribution was significant. Downloadable resources drafted using AI will note this in the document footer. All social content is reviewed and approved by a team member before posting.
AI language models can produce plausible-sounding but inaccurate content. Our verification process:
- All statistics and factual claims are checked against the original source.
- Legal, compliance and safeguarding content is reviewed by a qualified person.
- Programme content referencing academic research is checked against the cited source.
- Any AI output that cannot be independently verified is removed or flagged for expert review.
Our programmes address online misogyny, digital abuse, and the experiences of neurodiverse people. AI bias would be particularly harmful in these contexts. We do not use AI to generate content about individual or group experiences of harm without expert human review. We intentionally limit AI use in high-sensitivity areas such as content about domestic abuse, coercive control, and neurodiversity. We do not use AI to represent the experiences of communities we serve without those communities being involved in review.
Our data protection commitments:
- No personal data about programme participants, clients or employees is entered into any AI tool.
- No identifiable information about schools, organisations or contacts is entered into consumer AI tools.
- Assessment data is anonymised and aggregated before it is entered into any AI system.
- We use AI tools from providers that comply with UK GDPR and, where applicable, hold ISO 27001 or SOC 2 Type II certification.
- We do not allow our institutional content or client data to be used to train third-party AI models.
Responsibility for AI use within Social Media Resilience Ltd sits with our Founder and Director. This includes deciding which AI tools may be used, ensuring team members understand and follow this policy, reviewing and updating this statement as practices or technology change, and responding to any concerns raised about our AI use.
We use the following categories of AI tools:
- large language models such as Claude and ChatGPT for content drafting and research;
- AI writing assistants such as Grammarly for proofreading;
- AI transcription tools such as Otter.ai for meeting notes;
- AI search and research tools such as Perplexity.
We do not use in-house or custom-trained AI models. All AI tools are third-party products from established providers.
We commit to:
- a six-monthly review of this statement and our AI tool usage;
- an immediate update if we introduce AI use in a new context or receive a concern from a client;
- an annual external perspective on our AI practices;
- ongoing monitoring of UK AI regulation and ICO AI guidance.
This statement was last reviewed on 13 April 2026.
If you have questions about our AI use, want to raise a concern, or would like to request our full AI practices document, email info@socialmediaresilience.org with the subject line "AI practices enquiry", or phone 0121 798 3069.