Abstract

A cross-domain transfer learning approach is introduced to address the challenges of diagnosing individuals with Autism Spectrum Disorder (ASD) using small-scale fMRI datasets. Vision Transformer (ViT) and TinyViT models pre-trained on ImageNet were employed to transfer knowledge from the natural image domain to the brain imaging domain. The models were fine-tuned on the ABIDE and CMI-HBN datasets using a teacher-student framework with a knowledge distillation loss. Experimental results demonstrated that our method outperformed previous studies, ViT models, and CNN-based models. Our approach achieved competitive performance (F1 score of 78.72%) with a much smaller parameter size. This study highlights the effectiveness of cross-domain transfer learning in medical applications, particularly for scenarios with small datasets. It suggests that pre-trained models can be leveraged to improve diagnostic accuracy for neurodevelopmental disorders such as ASD. The findings indicate that the features learned from natural images can be adapted to fMRI data using the proposed method, potentially providing a reliable and efficient approach to diagnosing autism.
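
The sketch below illustrates the kind of teacher-student fine-tuning with a knowledge distillation loss that the abstract describes. It is a minimal, hypothetical example only: the model names, temperature, loss weighting, and optimizer settings are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of teacher-student fine-tuning with knowledge distillation,
# assuming ImageNet-pretrained ViT (teacher) and TinyViT (student) models from timm.
# Hyperparameters (temperature T, weight alpha, learning rate) are illustrative.
import torch
import torch.nn.functional as F
import timm

teacher = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
student = timm.create_model("tiny_vit_21m_224", pretrained=True, num_classes=2)
teacher.eval()  # teacher provides soft targets only; it is not updated

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
T, alpha = 4.0, 0.5  # distillation temperature and loss weight (assumed values)

def distillation_step(images, labels):
    """One fine-tuning step: cross-entropy on labels plus KL distillation from the teacher."""
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)

    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    loss = alpha * ce + (1 - alpha) * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```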

Publication Date

2024-12-20

Keywords

Cross-Domain Transfer Learning, Vision Transformers, Autism Diagnosis
