Abstract

Supervised deep learning methods have produced state-of-the-art results when large labeled datasets are available. However, such datasets are difficult to obtain in medical image analysis because of the shortage of medical experts, the cost of annotation, and privacy constraints in the healthcare domain. Self-supervised learning is a branch of machine learning that exploits unlabeled data: the network is trained on a so-called pretext task that pushes its weights toward a useful latent representation of the data. The features learned while solving the pretext task are then transferred to a downstream task for which only limited annotations are available. In this work, we propose PatchLoc, a novel pretext task that uses the location of a given patch within an image as the source of supervision. We validate the effectiveness of PatchLoc on a downstream segmentation task using three different medical datasets. PatchLoc yields substantial improvements over a U-Net trained from scratch and over other pretext-task-based approaches in the low-data regime.
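To make the idea concrete, the sketch below shows one way such a patch-localization pretext task could be set up in PyTorch. It assumes the localization target is the index of an N x N grid cell from which the patch was cropped; this framing, and all names used here (PatchLocDataset, make_pretext_model), are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of a patch-localization pretext task (assumption: the target
# is the grid-cell index of a randomly cropped patch; the paper may instead
# use, e.g., coordinate regression). Names are hypothetical, not from the paper.
import random
import torch
import torch.nn as nn


class PatchLocDataset(torch.utils.data.Dataset):
    """Yields (patch, cell_index) pairs from unlabeled images.

    Assumes each image is a (C, H, W) tensor and that patch_size fits inside
    one grid cell, i.e. patch_size <= H // grid and patch_size <= W // grid.
    """

    def __init__(self, images, grid=3, patch_size=32):
        self.images = images
        self.grid = grid
        self.patch_size = patch_size

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img = self.images[idx]
        _, h, w = img.shape
        cell_h, cell_w = h // self.grid, w // self.grid
        # Pick a random grid cell; its flat index is the free supervision signal.
        row, col = random.randrange(self.grid), random.randrange(self.grid)
        top = row * cell_h + random.randint(0, cell_h - self.patch_size)
        left = col * cell_w + random.randint(0, cell_w - self.patch_size)
        patch = img[:, top:top + self.patch_size, left:left + self.patch_size]
        return patch, row * self.grid + col


def make_pretext_model(in_ch=1, grid=3):
    """Small CNN encoder plus a localization head. After pretraining, the
    encoder weights would be transferred to initialize the downstream
    segmentation network (e.g. a U-Net encoder)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, grid * grid),  # predict the patch's grid-cell index
    )


if __name__ == "__main__":
    # Toy run on random "unlabeled" images, just to show the training signal.
    images = [torch.rand(1, 192, 192) for _ in range(8)]
    loader = torch.utils.data.DataLoader(PatchLocDataset(images), batch_size=4)
    model = make_pretext_model()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for patches, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(patches), targets)
        loss.backward()
        optimizer.step()
```

Framing localization as a grid-cell classification keeps the pretext head trivially small; the value of the exercise lies entirely in the encoder representation that is later reused for segmentation.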

Original language: English
Pages (from-to): 66845-66857
Number of pages: 13
Journal: IEEE Access
Volume: 12
DOIs
Publication status: Published - 2024

Keywords

  • Medical imaging
  • limited annotations
  • pretext tasks
  • self-supervised learning
