AI & Body Composition Assessment (Part 1 of 3)
Mar 23, 2026
Part 1: AI Meets Imaging - CNNs and CT/MRI Body Composition Analysis
Medical imaging technologies such as dual‑energy X‑ray absorptiometry (DXA), computed tomography (CT), and magnetic resonance imaging (MRI) have long been considered gold standards for body composition assessment. These tools provide detailed insight into fat, muscle, and bone distribution. However, traditional analysis methods require substantial manual input and technical expertise. Convolutional neural networks (CNNs), a form of deep learning, are now transforming how these images are interpreted.
CNNs analyze spatial patterns within images to automatically identify tissues such as skeletal muscle, visceral adipose tissue, and subcutaneous fat. By training on thousands of labeled medical images, these systems learn to segment tissues and quantify body composition with accuracy comparable to expert analysts while dramatically reducing analysis time.
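The "spatial patterns" a CNN learns are built from convolution operations: small kernels slide over the image and respond to local structure such as tissue boundaries. A minimal sketch of that core operation (the kernel values here are hand-picked for illustration; a real network learns thousands of them from labeled scans):

```python
# Illustrative only: the spatial-filtering operation at the core of a CNN.
# A 3x3 kernel slides over a tiny "image" (2D list of intensities) and
# produces a feature map. Deep-learning "convolution" is implemented as
# cross-correlation, as here; real segmentation CNNs stack many learned kernels.

def conv2d(image, kernel):
    """Valid 2D cross-correlation (no padding) of a 2D list by a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds strongly at a left-to-right intensity jump,
# such as the boundary between two tissue types.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]

feature_map = conv2d(image, edge_kernel)  # strong response across the edge
```

Stacking many such filters, interleaved with nonlinearities and pooling, is what lets the network progress from raw intensities to tissue-level labels.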
CNN Applications in DXA
In DXA analysis, CNN models can automatically detect anatomical landmarks and define regions of interest such as the arms, legs, trunk, and android and gynoid compartments. This improves the consistency of regional body composition estimates and reduces errors caused by positioning differences or artifacts. Automated quality control systems can also flag scans affected by motion or implants.
CNN Applications in CT and MRI
CT and MRI provide highly detailed measurements of body composition. CNNs are particularly effective at automatically identifying the third lumbar vertebral level (L3) and segmenting tissues within that slice. These models can quantify skeletal muscle cross‑sectional area, visceral adipose tissue, subcutaneous adipose tissue, and intermuscular fat in seconds.
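Before deep learning, CT tissue classification often relied on fixed Hounsfield-unit (HU) windows; body-composition studies commonly attribute roughly -29 to +150 HU to skeletal muscle and roughly -190 to -30 HU to adipose tissue. A toy rule-based labeller along those lines (CNNs replace these fixed thresholds with learned spatial features, but the ranges remain useful for sanity-checking model output):

```python
# Crude rule-based baseline for CT tissue labelling using Hounsfield-unit
# ranges commonly cited in body-composition work (muscle ~ -29..150 HU,
# adipose ~ -190..-30 HU). Values outside both windows (bone, air, etc.)
# fall into "other".

def label_pixel(hu):
    if -29 <= hu <= 150:
        return "muscle"
    if -190 <= hu <= -30:
        return "fat"
    return "other"

# A made-up 2x3 patch of HU values standing in for one region of a CT slice.
slice_hu = [[-100, -50, 40],
            [ 300,  10, -250]]

labels = [[label_pixel(hu) for hu in row] for row in slice_hu]
```

The weakness of pure thresholding is that VAT, SAT, and intermuscular fat share the same HU range and can only be separated by spatial context, which is exactly what CNNs provide.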
Clinical Example: Opportunistic Sarcopenia Screening
A promising clinical use of CNNs in body composition assessment is opportunistic sarcopenia screening using CT scans obtained for other clinical purposes. Many patients undergo abdominal CT imaging for reasons such as cancer staging, trauma evaluation, or abdominal pain. These scans contain valuable information about muscle and fat distribution that traditionally goes unused. CNN-based analysis pipelines allow clinicians to extract this information automatically.
Clinical Scenario
A 68-year-old patient with newly diagnosed colorectal cancer undergoes an abdominal CT scan as part of routine staging prior to surgery. The primary purpose of the scan is to determine tumor extent and detect potential metastases. However, the CT images also include cross-sectional views of the patient’s abdominal musculature and adipose tissue. Traditionally, assessing muscle mass from these images would require a trained analyst to manually identify the appropriate vertebral level and trace muscle boundaries—an analysis that could take 10–20 minutes per patient. Because of this time requirement, body composition metrics were rarely incorporated into routine clinical workflows. With a CNN-based analysis system integrated into the imaging pipeline, this process becomes fully automated.
Step 1: Automatic Identification of the L3 Vertebral Level
After the CT scan is completed and uploaded to the hospital's imaging system, a CNN model first analyzes the full CT volume to identify the third lumbar vertebra (L3). This level is widely used in body composition research because skeletal muscle area measured at L3 correlates strongly with whole-body muscle mass. The CNN scans through the CT slices, locates the vertebral anatomy associated with L3, and selects the corresponding slice automatically, eliminating the need for manual slice selection.
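One simple way this step can be wired into a pipeline is to have the model score every axial slice for "is this L3?" and then take the highest-scoring slice. A sketch under that assumption (the scores below are made-up stand-ins for model output, not a real model):

```python
# Hypothetical slice-selection step: assume a trained classifier returns,
# for every axial slice in the CT volume, a probability that it lies at the
# L3 vertebral level; the pipeline then picks the highest-scoring slice.

def select_l3_slice(l3_probabilities):
    """Return the index of the slice the model scores most likely to be L3."""
    return max(range(len(l3_probabilities)), key=l3_probabilities.__getitem__)

# Fabricated per-slice scores for a 7-slice toy volume.
slice_scores = [0.01, 0.05, 0.12, 0.86, 0.91, 0.40, 0.03]
l3_index = select_l3_slice(slice_scores)  # -> 4
```

Published systems vary in how they localize L3 (some regress the slice position directly, others detect vertebrae first), but the output is the same: one slice index handed to the segmentation stage.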
Step 2: Automated Tissue Segmentation
Once the correct slice is identified, a second CNN segments the different tissues within the image. The model labels each pixel as belonging to one of several tissue categories, including:
- skeletal muscle
- visceral adipose tissue (VAT)
- subcutaneous adipose tissue (SAT)
- intermuscular fat
The segmentation process typically takes only a few seconds and produces a color-coded map of the different tissue compartments.
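The segmentation output is essentially a per-pixel label map. Converting it to tissue areas is then just counting pixels per label and multiplying by the physical pixel size. A sketch assuming illustrative label codes and a 2 mm x 2 mm pixel spacing (real values come from the scan's DICOM header):

```python
# Turn a per-pixel label map (the CNN's segmentation output) into tissue
# areas. Label codes and pixel spacing are assumed for illustration.

PIXEL_AREA_CM2 = 0.2 * 0.2  # 2 mm x 2 mm pixel = 0.04 cm^2

TISSUES = {0: "background", 1: "muscle", 2: "VAT", 3: "SAT", 4: "IMAT"}

def tissue_areas(label_map):
    """Count pixels per tissue label and convert counts to cm^2."""
    counts = {}
    for row in label_map:
        for code in row:
            counts[code] = counts.get(code, 0) + 1
    return {TISSUES[c]: n * PIXEL_AREA_CM2 for c, n in counts.items() if c != 0}

# A tiny 3x4 toy mask standing in for a full 512x512 segmented L3 slice.
mask = [[0, 1, 1, 2],
        [3, 1, 1, 2],
        [3, 3, 4, 0]]

areas = tissue_areas(mask)  # e.g. "muscle": 4 pixels -> ~0.16 cm^2
```

On a real 512x512 slice the same counting logic yields the clinically reported areas; the color-coded map is just this label map rendered with one color per tissue code.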
Step 3: Calculation of Body Composition Metrics
From the segmented image, the system calculates several clinically relevant metrics, including:
- skeletal muscle cross-sectional area (cm²)
- skeletal muscle index (SMI) — muscle area normalized for height (cm²/m²)
- visceral adipose tissue area
- subcutaneous adipose tissue area
The skeletal muscle index is then compared with established clinical thresholds used to define sarcopenia.
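The SMI calculation itself is simple arithmetic. The sketch below uses one commonly cited pair of sex-specific cut-offs (52.4 cm²/m² for men, 38.5 cm²/m² for women, from Prado and colleagues' 2008 oncology cohort); clinics may use different, population-specific thresholds, so treat these as illustrative:

```python
# Illustrative SMI calculation and threshold comparison. The cut-offs are
# one commonly cited set (Prado et al., 2008); they are not universal.

def skeletal_muscle_index(muscle_area_cm2, height_m):
    """SMI = L3 skeletal muscle cross-sectional area normalized for height."""
    return muscle_area_cm2 / (height_m ** 2)

SARCOPENIA_CUTOFFS = {"male": 52.4, "female": 38.5}  # cm^2/m^2

def is_sarcopenic(muscle_area_cm2, height_m, sex):
    return skeletal_muscle_index(muscle_area_cm2, height_m) < SARCOPENIA_CUTOFFS[sex]

# Example: 135 cm^2 of muscle at L3, height 1.70 m.
smi = skeletal_muscle_index(135.0, 1.70)   # ~46.7 cm^2/m^2
flag = is_sarcopenic(135.0, 1.70, "male")  # 46.7 < 52.4 -> True
```

The same muscle area would not trigger the flag for a female patient, which is why the comparison must be sex-specific.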
Step 4: Clinical Decision Support
In this example, the CNN-derived analysis indicates whether the patient’s skeletal muscle index falls below the sarcopenia threshold for their sex and age group. The system automatically generates a flag in the radiology report or electronic health record indicating possible sarcopenia.
The alert may trigger a multidisciplinary response involving:
- referral to a clinical dietitian for nutritional assessment
- evaluation by physical therapy or exercise specialists
- review of protein and energy intake
- consideration of prehabilitation strategies before surgery
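The decision-support step can be as simple as attaching a structured note, with the referral actions above, whenever the SMI falls below the cut-off. A hypothetical sketch (the field names are illustrative, not a real EHR or DICOM schema):

```python
# Hypothetical decision-support step: when SMI falls below the threshold,
# assemble a structured alert for the radiology report / EHR. Field names
# and the referral wording are illustrative.

REFERRALS = [
    "clinical dietitian - nutritional assessment",
    "physical therapy / exercise specialist evaluation",
    "review of protein and energy intake",
    "consider prehabilitation before surgery",
]

def build_alert(patient_id, smi, cutoff):
    """Return an alert dict; 'flag' is None when SMI is above the cut-off."""
    if smi >= cutoff:
        return {"patient": patient_id, "flag": None, "actions": []}
    return {
        "patient": patient_id,
        "flag": "possible sarcopenia (SMI {:.1f} < {:.1f} cm^2/m^2)".format(smi, cutoff),
        "actions": REFERRALS,
    }

alert = build_alert("PT-001", 46.7, 52.4)  # triggers the referral list
```

Keeping the alert structured (rather than free text) makes it easy for downstream systems to route each referral to the right team automatically.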
Why This Matters
Sarcopenia has been associated with a wide range of adverse clinical outcomes in oncology, including:
- increased chemotherapy toxicity
- higher postoperative complication rates
- longer hospital stays
- reduced survival
Early identification allows clinicians to implement nutrition and exercise interventions that may improve treatment tolerance and recovery.
Advantages of CNN-Based Opportunistic Screening
CNN-based analysis provides several important benefits:
- Automation and speed: Analysis that once took 10–20 minutes can now be completed in seconds.
- Consistency and reproducibility: CNN models reduce inter-observer variability in tissue segmentation.
- Use of existing imaging data: No additional imaging or radiation exposure is required.
- Scalability: Large numbers of scans can be analyzed automatically across hospital systems or research databases.
Broader Implications
Opportunistic body composition analysis represents a major shift in how imaging data are used in clinical care. Instead of focusing only on the primary diagnostic question (such as tumor detection), CT scans can also provide valuable insights into metabolic health, nutritional status, and physical frailty. As CNN models become more widely integrated into radiology workflows, body composition metrics such as muscle mass and visceral fat may become routine components of imaging reports—helping clinicians identify patients at risk and intervene earlier.
More Reading -- Key References
Weston, A. D., Korfiatis, P., Kline, T. L., et al. (2019). Automated abdominal segmentation of CT scans for body composition analysis using deep learning. Radiology, 290(3), 669–679.
Paris, M. T., Tandon, P., Heyland, D. K., et al. (2020). Automated body composition analysis of clinically acquired computed tomography scans using neural networks. Clinical Nutrition, 39(10), 3049–3055.
Elhakim, T. (2023). Role of machine learning–based CT body composition analysis in clinical care. Diagnostics, 13(5), 968.
Delrieu, L., Touillaud, M., Pardon, L., et al. (2024). Deep learning–based automated selection and segmentation of the L3 slice for CT body composition analysis in cancer patients. European Journal of Radiology, 165, 110947.
Mustapoevich, D., Shpanskaya, K., et al. (2023). Artificial intelligence applications in sarcopenia detection and body composition analysis: Current evidence and future directions. Healthcare, 11(18), 2483.