Why AI in Pediatric Medical Imaging Struggles with Kids

Arjun Mehta

May 6, 2026 · 6 min read

[Image: A child undergoing a medical scan with advanced AI technology in a futuristic pediatric hospital setting.]

A recent study analyzing FDA approvals from 1995 to 2023 revealed that 81.6% of AI-enabled medical devices fail to specify age ranges for their intended user population, according to Nature. This widespread lack of age-specific information means many AI tools used in medical imaging diagnostics are approved without explicit guidance for their use in children. Such a systemic oversight by regulatory bodies creates significant concerns for patient safety and diagnostic accuracy in the youngest populations. The implications extend to potential misdiagnosis or ineffective treatment for a vulnerable group with unique physiological characteristics.

AI medical devices are being rapidly approved and deployed, but the vast majority lack clear validation or even specified age ranges for pediatric use. This creates a tension between the rapid pace of technological innovation and the critical need for rigorous safety standards, particularly for vulnerable patient groups. The absence of specific pediatric validation raises questions about the true efficacy of these tools when applied outside their primary development context.

Companies are prioritizing market entry and adult applications, which, based on current evidence, appears likely to leave children vulnerable to unvalidated and potentially unsafe AI diagnostic tools. This strategy risks overlooking the distinct medical needs of pediatric patients, whose physiological differences necessitate specialized testing and validation protocols for AI-driven diagnostics. The current trajectory suggests a gap in responsible AI development for healthcare.

The Pediatric Blind Spot in AI Device Approvals

In Europe, only 2% of the 213 AI medical devices approved were clearly labeled for pediatric use, according to PubMed. The 2% figure highlights a critical oversight in the initial stages of device approval. An additional 11% of these devices were intended for all ages, including children, but still lacked specific pediatric labeling. The intended patient population was not clearly demonstrated for 41% of all AI medical devices analyzed in the database. The statistics indicate that the vast majority of AI medical devices entering the market are not explicitly designed or labeled for children, creating a significant gap in targeted pediatric care. This broad approval without clear age specification risks deploying unproven AI tools on young patients, potentially leading to diagnostic inaccuracies.

Even when devices are broadly intended for "all ages," the absence of explicit pediatric labeling or validation raises concerns. Regulatory authorization for pediatric use does not equate to actual pediatric validation, creating a false sense of security regarding device suitability for children and potentially encouraging clinicians to use tools without adequate evidence of safety or effectiveness for their youngest patients. The lack of specific guidance places a heavy burden of interpretation on healthcare providers.

Unclear Intent and Research Gaps

Further examination of devices with initially unclear populations showed that only 7% included patients under 18 in their intended target, while 47% remained ambiguous, according to PubMed. This persistent ambiguity about who AI tools are designed for complicates their responsible deployment in clinical settings. The lack of clear age specifications makes it difficult for healthcare providers to assess the suitability and safety of these devices for pediatric patients. This lack of clarity is compounded by a severe deficit in foundational research: pediatrics accounts for as little as 4% of AI-related studies, according to Nature. The limited dedicated pediatric AI research restricts the foundational knowledge necessary for safe and effective applications in this vulnerable group.

The disparity in research focus means that the unique physiological differences of children, such as varying organ sizes, growth plates, and disease presentations, are not adequately considered in AI model development. This oversight can lead to models that perform poorly or generate erroneous results when applied to pediatric cases. The scarcity of pediatric-focused studies hinders the development of AI tools specifically tailored to the needs of young patients, perpetuating a cycle of adult-centric innovation.

The Data Deficit: Why AI Struggles with Kids

The underlying datasets used to train AI models critically lack pediatric imaging, both in quantity and metadata. Less than 2% of images in analyzed datasets, specifically 6,026 out of 512,608, involved children, according to ScienceBlog. This overwhelming skew towards adult data means AI models are inherently ill-equipped to accurately interpret pediatric physiology. Children's bodies are not simply smaller versions of adults; they have distinct anatomical structures, different disease patterns, and varying responses to medical conditions. Training AI predominantly on adult data ignores these crucial differences, rendering the models less reliable for pediatric use.
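The "less than 2%" figure can be checked directly from the study's raw counts. A quick back-of-the-envelope sketch, using the image counts reported above (the rounding is ours, not the study's):

```python
# Pediatric share of the analyzed imaging datasets, per the cited study's counts.
pediatric_images = 6_026
total_images = 512_608

share = pediatric_images / total_images
print(f"Pediatric share of training images: {share:.2%}")  # roughly 1.2%
```

In other words, the true share is closer to 1.2% than 2%, which makes the headline understatement, if anything, conservative.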

The stark data imbalance, with CT and MRI datasets showing adult-to-pediatric ratios of 317:1 and 298:1 respectively, means AI diagnostics are inherently biased against children. This bias risks misdiagnosis or delayed treatment for a population where accurate and timely care is paramount. Regulatory bodies are effectively allowing a vast array of unproven AI tools into pediatric care, trading innovation speed for unquantified patient risk. This systemic issue requires a concerted effort to collect and integrate more diverse pediatric data into AI training pipelines.

The Peril of Unvalidated AI for Children

The direct safety implications of using AI devices on children without proper pediatric validation are substantial. Among 149 AI devices authorized for pediatric use, only 18.8% are known to have been validated on children, according to Nature. This critical lack of pediatric validation for tools intended for children means they may operate on assumptions derived from adult data, potentially leading to inaccurate diagnoses or inappropriate treatments. Conversely, 14.8% of these authorized devices were validated only on adults, highlighting a critical disconnect: actual safety and efficacy testing for pediatric populations is often neglected, even when devices are designed with children in mind.

That devices can be authorized for pediatric use yet validated only on adults suggests a systemic loophole: regulatory approval for children does not guarantee actual pediatric testing. This leaves clinicians to use tools with unknown efficacy and safety profiles on their youngest patients. Such a practice exposes pediatric patients to unquantified health risks, underscoring the urgent need for more stringent regulatory requirements and comprehensive pediatric validation for all AI medical devices. Without this, the promise of AI in healthcare remains incomplete and potentially harmful for children.
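Translating those percentages into approximate device counts makes the gap concrete. A small sketch, using the 149 pediatric-authorized devices reported above (the rounded counts are our own arithmetic):

```python
# Approximate device counts behind the validation percentages cited above.
authorized = 149  # AI devices authorized for pediatric use (per the article)

validated_on_children = round(authorized * 0.188)  # ~28 devices
validated_adults_only = round(authorized * 0.148)  # ~22 devices
validation_unknown = authorized - validated_on_children - validated_adults_only

print(f"Validated on children:  {validated_on_children}")
print(f"Validated adults-only:  {validated_adults_only}")
print(f"Validation unclear:     {validation_unknown}")
```

Roughly 99 of the 149 devices, then, have no clearly reported pediatric validation status at all, a larger group than the validated and adult-only categories combined.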

Modalities and Data Imbalance

How does the data imbalance affect different medical imaging types?

The disparity in training data varies significantly across imaging modalities, directly impacting AI performance. For instance, the ratio of adult to pediatric datasets is 6:1 for ultrasound, but dramatically increases to 128:1 for X-rays, according to ScienceBlog. This means AI tools developed for X-rays face a much larger data gap when applied to children compared to those developed for ultrasound, potentially leading to varied diagnostic accuracy across different imaging types.

Are AI tools for advanced imaging like MRI and CT reliable for children?

AI tools for advanced imaging face significant reliability challenges in pediatric applications due to severe data imbalances. The adult-to-pediatric dataset ratio for MRI is 298:1, and for CT scans, it is 317:1, according to ScienceBlog. These dramatic disparities indicate that AI models trained on such skewed data are likely to be less accurate and potentially unsafe when interpreting complex pediatric anatomy, necessitating modality-specific interventions and dedicated pediatric datasets.
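The ratios above can be restated as the pediatric share of each modality's combined training data: for an adult-to-pediatric ratio of R:1, children contribute 1/(R+1) of the images. A brief sketch using the ratios cited in this section (the derived percentages are our own calculation):

```python
# Implied pediatric share of training data per modality, from the cited
# adult-to-pediatric ratios (R:1 means children are 1/(R+1) of the data).
ratios = {"ultrasound": 6, "X-ray": 128, "MRI": 298, "CT": 317}

for modality, r in ratios.items():
    pediatric_share = 1 / (r + 1)
    print(f"{modality:>10}: {pediatric_share:.2%} pediatric")
```

The spread is striking: ultrasound models see roughly one pediatric image in seven, while CT models see about one in three hundred, which is why modality-specific remediation matters.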

Charting a Safer Path Forward for Pediatric AI

Addressing the systemic issues surrounding AI in pediatric medical imaging requires proactive measures from both industry and regulatory bodies. The ACR Informatics Commission's Pediatric AI Working Group has launched the Image IntelliGently™ campaign to promote safe, high-quality AI for pediatric care, according to the ACR Informatics Commission. Such industry-led initiatives are crucial steps towards establishing standards and promoting the responsible development and deployment of AI in pediatric medical imaging. These campaigns aim to raise awareness and encourage the generation of more pediatric-specific data.

However, these efforts require broad adoption and robust regulatory support to ensure that AI tools in pediatric care meet rigorous safety and efficacy benchmarks for all young patients. The dramatic disparity in pediatric data across modalities, particularly for advanced imaging like MRI and CT, indicates that AI tools in these areas are likely to be even less reliable for children. Without a unified approach, the rapid deployment of AI-enabled medical devices will continue to expose pediatric patients to unquantified health risks. By 2027, clearer regulatory guidelines and mandatory pediatric validation for AI diagnostic tools must be in place to protect the health of children worldwide.