TY - GEN
T1 - Blood Clot Image Segmentation Using Segment Anything Model
AU - Yadav, Nupur
AU - Srivastava, Shilpee
AU - Sriwastav, Nikhil
AU - Torgal, Sneha
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Recently, the Segment Anything Model (SAM) has attracted considerable interest, leading scholars to investigate its zero-shot generalisation capabilities and limitations. SAM was trained on a large dataset with an unprecedented number of images and annotations, serving as the first promptable foundation model for segmentation tasks [6]. The large training dataset and the model's promptable design give it strong zero-shot generalisation capabilities. Although SAM has shown competitive performance on multiple datasets, its potential for zero-shot generalisation on blood clot imaging datasets remains unexplored. A foundation model that can predict high-quality masks from just a few point prompts could revolutionise blood clot image analysis, since obtaining annotations for blood clot images demands considerable effort from expert practitioners. To evaluate SAM's suitability as a base model for blood clot image segmentation tasks, we compiled more than two public blood clot image datasets, comprising 9999 normal images and an equivalent number of images containing blood clots. We also investigated which prompts yield the best zero-shot performance across a variety of modalities. Interestingly, our investigation revealed a clear trend: changes in box size had a major effect on prediction accuracy. Subsequent extensive trials showed significant differences in predicted mask quality between datasets. Notably, providing SAM with the right cues, such as bounding boxes, noticeably improved its performance.
AB - Recently, the Segment Anything Model (SAM) has attracted considerable interest, leading scholars to investigate its zero-shot generalisation capabilities and limitations. SAM was trained on a large dataset with an unprecedented number of images and annotations, serving as the first promptable foundation model for segmentation tasks [6]. The large training dataset and the model's promptable design give it strong zero-shot generalisation capabilities. Although SAM has shown competitive performance on multiple datasets, its potential for zero-shot generalisation on blood clot imaging datasets remains unexplored. A foundation model that can predict high-quality masks from just a few point prompts could revolutionise blood clot image analysis, since obtaining annotations for blood clot images demands considerable effort from expert practitioners. To evaluate SAM's suitability as a base model for blood clot image segmentation tasks, we compiled more than two public blood clot image datasets, comprising 9999 normal images and an equivalent number of images containing blood clots. We also investigated which prompts yield the best zero-shot performance across a variety of modalities. Interestingly, our investigation revealed a clear trend: changes in box size had a major effect on prediction accuracy. Subsequent extensive trials showed significant differences in predicted mask quality between datasets. Notably, providing SAM with the right cues, such as bounding boxes, noticeably improved its performance.
KW - auto-prompt
KW - Blood Clot
KW - box-prompt
KW - Segment Anything Model (SAM)
UR - http://www.scopus.com/inward/record.url?scp=85199318308&partnerID=8YFLogxK
U2 - 10.1109/ICRTCST61793.2024.10578367
DO - 10.1109/ICRTCST61793.2024.10578367
M3 - Conference contribution
AN - SCOPUS:85199318308
T3 - 5th International Conference on Recent Trends in Computer Science and Technology, ICRTCST 2024 - Proceedings
SP - 476
EP - 481
BT - 5th International Conference on Recent Trends in Computer Science and Technology, ICRTCST 2024 - Proceedings
A2 - Mahato, Gopal Chandra
A2 - S., Sangeeta
A2 - Dash, Smita
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 5th IEEE International Conference on Recent Trends in Computer Science and Technology, ICRTCST 2024
Y2 - 15 April 2024 through 16 April 2024
ER -