https://doi.org/10.1140/epjp/s13360-025-06736-9
Regular Article
HHM-ZUNet: hybrid hierarchical model based on Z-Net and modified U-Net++ with variable multi-attention for brain tumor detection and classification
1 Department of Electronics and Communication Engineering, Amity University Rajasthan, Jaipur NH-11C, India
2 Department of Electronics and Communication Engineering, Dr B R Ambedkar National Institute of Technology, Jalandhar, India
Received: 2 May 2025
Accepted: 1 July 2025
Published online: 26 August 2025
Brain tumors are abnormal growths of tissue in the brain that can vary in size, type, and location. Accurate detection and classification of brain tumors from MRI images remain difficult due to inter-image differences in size and orientation as well as the complexity of tissue differentiation. Existing methods often fail to effectively capture the spatial relationships, dependencies, and salient features that are critical for diagnosis. To mitigate these problems, we propose a comprehensive framework: a hybrid hierarchical model based on Z-Net and modified U-Net++ with variable multi-attention (HHM-ZUNet). This framework integrates advanced techniques for robust tumor detection and classification. The HHM-ZUNet pipeline begins with preprocessing steps, including data normalization and skull stripping, followed by spatial augmentation to enhance dataset uniformity and variability. It employs a novel feature extraction module termed hierarchical Z-Net with adaptive attention block (AAB) and graph convolutional networks (GCNs) (HZNet-AGN). This module consists of a hierarchical Z-Net enhanced by two internal components: the AAB, which dynamically focuses on salient tumor regions through channel and location attention, and GCNs, which model spatial relationships across brain regions. Together, these elements enable HZNet-AGN to extract rich, multi-level features that capture both local textures and global spatial context, significantly improving tumor segmentation and classification accuracy. The variable multi-attention (VMA) module then refines tumor segmentation by incorporating graph, hierarchical, spatial, and channel-wise attention mechanisms. This module is integrated within the U-Net++ backbone, and the combination is referred to as the VMA-UNet++ architecture.
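The AAB described above combines channel attention with location (spatial) attention. As a rough NumPy sketch of that general idea only (this is not the authors' implementation; the function names and the sigmoid gating are illustrative assumptions), such a block could look like:

```python
import numpy as np

def channel_attention(x):
    """Gate each channel by a sigmoid of its global-average-pooled response.

    x has shape (C, H, W); illustrative, parameter-free sketch.
    """
    pooled = x.mean(axis=(1, 2))                # global average pool -> (C,)
    weights = 1.0 / (1.0 + np.exp(-pooled))     # sigmoid gate per channel
    return x * weights[:, None, None]

def location_attention(x):
    """Gate each spatial location by a sigmoid of its cross-channel mean."""
    pooled = x.mean(axis=0)                     # (H, W)
    weights = 1.0 / (1.0 + np.exp(-pooled))     # sigmoid gate per location
    return x * weights[None, :, :]

def adaptive_attention_block(x):
    """Channel attention followed by location attention, as in the AAB description."""
    return location_attention(channel_attention(x))
```

In a real network these gates would be learned (e.g. small fully connected or convolutional layers before the sigmoid); the sketch only shows how the two attention maps rescale the feature tensor in sequence.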
Finally, the transformer pooling-based classification (TPC) module integrates transformer architectures to aggregate multi-scale, multimodal information, improving the ability of the model to extract contextual information and long-range dependencies from MRI data. Quantitative evaluation on the BraTS 2020 and 2021 datasets reveals that HHM-ZUNet achieves superior performance, with a Dice score of 98.43%, a Hausdorff distance of 1.96, sensitivity of 98.4%, specificity of 98.5%, and accuracy of 99.5%, outperforming existing methods. The proposed methodology thus significantly improves the accuracy and reliability of brain tumor diagnosis.
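The metrics reported above are standard segmentation measures. A minimal NumPy sketch of how Dice score, sensitivity, and specificity are computed on binary masks (hypothetical helper names; this is generic metric code, not tied to the paper's evaluation scripts):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def sensitivity_specificity(pred, target):
    """Sensitivity (recall on tumor voxels) and specificity (recall on background)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return tp / (tp + fn), tn / (tn + fp)
```

For example, a prediction covering the true tumor voxel plus one false positive yields a Dice score of 2/3, perfect sensitivity, and reduced specificity. The Hausdorff distance, by contrast, measures the worst-case boundary disagreement between the predicted and reference contours.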
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
© The Author(s), under exclusive licence to Società Italiana di Fisica and Springer-Verlag GmbH Germany, part of Springer Nature 2025
