What are the risks for young people?
What is AI?
AI refers to computer systems that perform tasks that typically require human intelligence. AI can recognise images and sounds; understand language; produce speech and written documents; learn from data, e.g. by recognising patterns; and make decisions.
Commonly used AI systems include:
- Satnav
- Language processing and editing software
- Email spam filters
- Face recognition
- Music/film streaming service recommendations
AI is now very much part of our everyday lives, and young people will interact with it in many ways. The best-known app is probably ChatGPT, but there are many others: Meta's apps (Facebook, Instagram, WhatsApp) all have Meta AI in their message/chat function, 'X' has 'Grok' and Microsoft has 'Copilot'. These can be used to answer questions or to generate images. They are interactive and are designed to be very engaging.
Using AI to create text and images
AI can be used to create images and written information, including answers to questions and full essays.
Concerns with AI-generated text
For children and young people, there may be a temptation to use AI to do homework tasks. Aside from the concern that they are not developing their own reading comprehension and writing skills by doing so, they could find that their work is rejected, or that they are disciplined for cheating. Because AI use is now so widespread, schools and universities routinely check submitted work for it.
AI-generated text online cannot be guaranteed to be accurate, and a considerable amount of it is misinformation or 'fake news'. Search engines such as Google currently present AI-generated summaries at the top of the results page, and users can be led to assume that this is the 'best' or 'most relevant' information.
Concerningly, the 2025 Ofcom 'Children and Parents: Media Use and Attitudes Report' found that more than half of teenagers would trust an AI-generated article at least as much as one written by a human.
Concerns with AI-generated images
AI image generators can be used to create a wide range of pictures in many styles. This can be very useful for illustrating schoolwork and is often harmless, particularly if the pictures are of someone or something that does not exist in real life (i.e. entirely AI-generated). It can also be highly entertaining, which makes it appealing to young people.
There are some ways, however, in which AI images can be used to cause harm:
- AI can be used to modify existing images, e.g. 'undressing' people, swapping faces or altering ages. This can be done to bully or intimidate, or to create sexual images, and might be illegal.
- AI can be asked to generate new images or videos of someone from a photograph prompt. These images may be sexual in nature, or embarrassing, and might be illegal.
- Sex offenders may use AI technology to generate fake personas, conversation that mimics young people’s dialogue or even clone a particular voice. This may facilitate grooming.
Altered images of this kind are often referred to as 'deepfakes' and can be very convincing.
AI chatbots
There are currently several concerns relating to AI chatbots when they are used by children and young people, or by vulnerable adults.
Two well-known apps are ChatGPT and Character.ai, although many more are available on app stores, and many social media apps have chat features built in or in development.
Reinforcing harmful beliefs
AI chatbots are designed to please the user. They are programmed to flatter and agree rather than to challenge or argue an opposing view. Discussions with a chatbot can therefore become an 'echo chamber' in which a single view is presented exclusively and unchallenged (unless the user specifically asks for another point of view).
In this way, chatbots can reinforce attitudes and beliefs that are prejudiced and/or discriminatory and may lead to harmful behaviour towards the self or others.
Inaccurate advice and mental health concerns
Users may ask a chatbot for advice on a particular issue. The chatbot will present itself as a helpful listening ear and a trusted source of advice.
While some users report that AI chatbots (particularly ones designed to offer therapy) have helped them through mental health crises, psychiatrists are raising concerns about the potential for harm. Conversations with chatbots are neither private nor regulated, and a chatbot cannot replicate a therapeutic relationship. There have been cases where chatbots have given dangerous advice or have failed to signpost users to emergency crisis support to prevent suicide.
There are further concerns that some users 'anthropomorphise' their chatbot (come to see it as a real person) or go so far as to 'deify' it (view it as 'godlike'). It becomes their only, or most trusted, confidant. A new term, 'AI psychosis' (not a clinical diagnosis), has emerged to describe users who develop paranoia or delusions, experience a worsening of existing symptoms, or lose contact with reality through their use of AI chatbots.
Sexualised content
AI companion apps allow users to create a customised chatbot, including girlfriend/boyfriend chatbots. These relationship chatbots are concerning for children and young people because they present a fantasised, idealised partner and have the potential to make real-life relationships harder to form and maintain. In addition, young people may be uploading nude images of themselves to these apps and requesting nudes in return (which some apps will provide). The young person is exposed to potentially harmful images, and the security of their own images is compromised.
What to do
- If your child is aware of, interested in, or using AI, please discuss the relevant limitations and risks with them. Please do not talk with them about risks that are not currently relevant for them, or which might pique their interest and lead them to engage in risky behaviour.
- This requires a safeguarding judgement call, and we would advise you to discuss this with your SSW, Fostering Advisor and the child’s local authority as part of the risk assessment process.
- Block AI apps such as ChatGPT on the child’s device if appropriate, following risk assessment discussions.
- If a young person notifies you of a safeguarding concern, please let ISP know immediately as you would for all safeguarding concerns.
Additional information about CSA (child sexual abuse) risks
Viewing Generative AI and children’s safety in the round | NSPCC Learning
Britain’s leading the way protecting children from online predators – GOV.UK