“AI offers new efficiencies and opportunities, yet its deeper integration into daily life requires careful consideration to ensure that AI tools are safe, especially for adolescents,” the American Psychological Association report, titled Artificial Intelligence and Adolescent Well-being: An APA Health Advisory, stated.

“We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI,” the report said.

“It is critical that we do not repeat the same harmful mistakes made with social media.”

The report was written by an expert advisory panel and follows two earlier APA reports, one on adolescent social media use and one on healthy video content recommendations.

The AI report notes that adolescence – which it defines as ages 10 to 25 – is a long developmental period and that age is “not a foolproof marker for maturity or psychological competence”.

It is also a time of critical brain development, which argues for special safeguards aimed at younger users.

“Like social media, AI is neither inherently good nor bad,” said APA Chief of Psychology Mitch Prinstein, PhD, who spearheaded the report’s development.

“But we have already seen instances where adolescents developed unhealthy and even dangerous ‘relationships’ with chatbots, for example.

“Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place now.”

The report makes a number of recommendations to help ensure that adolescents can use AI safely.

These include:

  • Ensuring there are healthy boundaries with simulated human relationships. Adolescents are less likely than adults to question the accuracy and intent of information offered by a bot rather than a human.
  • Creating age-appropriate defaults in privacy settings, interaction limits and content. This will involve transparency, human oversight and support, and rigorous testing, according to the report.
  • Encouraging uses of AI that can promote healthy development. AI can assist in brainstorming, creating, summarizing and synthesizing information – all of which can make it easier for students to understand and retain key concepts, the report notes. But it is critical for students to be aware of AI’s limitations.
  • Limiting access to and engagement with harmful and inaccurate content. AI developers should build in protections to prevent adolescents’ exposure to harmful content.
  • Protecting adolescents’ data privacy and likenesses. This includes limiting the use of adolescents’ data for targeted advertising and the sale of their data to third parties.

The report also calls for comprehensive AI literacy education in the United States, integrated into core curricula and supported by national and state guidelines.

“Many of these changes can be made immediately, by parents, educators and adolescents themselves,” Prinstein said.

“Others will require more substantial changes by developers, policymakers and other technology professionals.”

Here in Australia, Education Services Australia (ESA), a non-profit set up by Australia’s education ministers, has just developed a new national training program with Microsoft to help teachers build skills and confidence in generative AI.

While there have been AI training initiatives in different states and territories, and in individual schools, until now there has been no national training program aligned with the Australian standards for teachers’ professional development and the Federal Government’s Framework for Generative AI in Schools.

In step with the American report’s findings, ESA CEO Andrew Smith said generative AI is a developing technology that presents both opportunities and risks for school education.

“These modules offer an accessible avenue for Australia’s teachers to build their confidence and knowledge in using it safely and ethically, which will support our schools and education systems in achieving better outcomes as the technology evolves.”