https://doi.org/10.1140/epjp/s13360-025-06587-4
Regular Article
ISAM-QV7Net: an improved Segment Anything Model-2 with quantum-inspired superposition memory and Vision-EfficientNetB7 for advanced liver tumor prognosis
1 Department of ECE, Chennai Institute of Technology, Kundrathur, Chennai, Tamil Nadu, India
2 Department of ECE, Ponjesly College of Engineering, Nagercoil, Tamil Nadu, India
3 Computer Science and Engineering, KCG College of Technology, Karapakkam, Chennai, Tamil Nadu, India
4 Department of Electronic and Communication Engineering, PSN College of Engineering and Technology, Tirunelveli, Tamil Nadu, India
Received: 29 April 2025
Accepted: 24 June 2025
Published online: 19 July 2025
Liver cancer is a major cause of cancer-related deaths, making accurate prognosis prediction vital for improving patient outcomes. However, inconsistent tumor boundary delineation due to irregular shapes and overlapping regions in histopathology images makes segmentation challenging. Variations in tumor morphology, such as cell density, necrosis, and invasive margins, further hinder accurate classification of primary and secondary tumors. Moreover, oncologists face challenges in tracking tumor evolution over time, which requires longitudinal, multi-timepoint histopathology slides. To address these challenges, we propose an improved Segment Anything Model-2 with Quantum-Inspired Superposition Memory and Vision-EfficientNetB7 (ISAM-QV7Net) for liver tumor segmentation, classification, and evolution tracking. The Adaptive Cross-Scale Guided Segment Anything Model-2 Network (ACS-SAM2Net) uses adaptive cross-scale feature aggregation to precisely isolate tumor boundaries across multiple scales. This mechanism captures nonlinear spatial dependencies that arise from irregular tumor evolution and variable morphology. The graph memory attention module enhances segmentation accuracy by refining boundary delineation, while the quantum-inspired superposition memory bank retains complex contextual information, improving segmentation of irregular and overlapping tumor regions. For classification, the Partition-Aware Vision-EfficientNetB7, equipped with self-calibrated convolution (SC-Conv), partition pooling, and Vision Eagle Attention, focuses on the most discriminative tumor regions. The SC-Conv blocks dynamically adapt convolutional filters based on local and global contexts, improving the model's capacity to suppress noise and capture fine-grained tumor features. The Sparse Residual Evolution Tracking Network (SRET-Net) captures temporal patterns using its Sparse Residual Evolution Encoder, which models tumor progression features over time. The Sparse Attention-Guided Tracking Module focuses on critical regions, while the Dynamic Temporal Attention DeepHiT (DTA-DeepHiT) module estimates survival probabilities, aiding clinicians in making timely treatment decisions. The model achieves 99.8% accuracy, 99.70% precision, 99.50% sensitivity, 99.10% DSC, 98.00% IoU, a 79.20 C-Index, and a 0.155 integrated Brier score (IBS), supporting precise and reliable liver cancer prognosis prediction, enhancing clinical decision-making, and improving patient care.
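The abstract notes that the SC-Conv blocks in the classification branch adapt convolutional filters using both local and global context. As a rough illustration only, the PyTorch-style sketch below shows one common formulation of self-calibrated convolution (a channel split combined with a pooled calibration branch); the channel split, pooling ratio, and kernel sizes are assumptions for illustration and do not reproduce the authors' implementation.

```python
# Illustrative sketch of a self-calibrated convolution (SC-Conv) block.
# The channel split, pooling ratio, and kernel sizes are assumptions,
# not the configuration used in ISAM-QV7Net.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SCConv(nn.Module):
    def __init__(self, channels: int, pooling_ratio: int = 4):
        super().__init__()
        half = channels // 2
        # Plain branch: a standard 3x3 convolution on one half of the channels.
        self.k1 = nn.Conv2d(half, half, kernel_size=3, padding=1, bias=False)
        # Calibration branch: a convolution on a downsampled view, giving
        # cheap access to wider ("global") context.
        self.pool = nn.AvgPool2d(kernel_size=pooling_ratio, stride=pooling_ratio)
        self.k2 = nn.Conv2d(half, half, kernel_size=3, padding=1, bias=False)
        # Local transform whose response is gated by the calibration map.
        self.k3 = nn.Conv2d(half, half, kernel_size=3, padding=1, bias=False)
        self.k4 = nn.Conv2d(half, half, kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.chunk(x, 2, dim=1)
        # Pooled context, upsampled back to the input resolution.
        context = F.interpolate(
            self.k2(self.pool(x1)), size=x1.shape[-2:],
            mode="bilinear", align_corners=False,
        )
        # Calibration gate mixes the identity path with the pooled context.
        gate = torch.sigmoid(x1 + context)
        y1 = self.k4(self.k3(x1) * gate)
        y2 = self.k1(x2)
        return torch.cat([y1, y2], dim=1)


if __name__ == "__main__":
    block = SCConv(channels=64)
    features = torch.randn(1, 64, 128, 128)  # e.g. a histopathology feature map
    print(block(features).shape)             # torch.Size([1, 64, 128, 128])
```

The gating step is what makes the filter response input-dependent: positions where the pooled context agrees with the local feature are emphasized, which is one way to suppress background noise while preserving fine-grained tumor detail.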
© The Author(s), under exclusive licence to Società Italiana di Fisica and Springer-Verlag GmbH Germany, part of Springer Nature 2025
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.