Call for papers

Colorectal cancer is the third most common cause of cancer death worldwide, with almost two million new cases in 2020. Early detection is of paramount importance: the 5-year survival rate is close to 100% at stage I but falls below 10% at stage IV. Traditional colonoscopy remains the gold-standard procedure because of its dual capability to optically inspect the colonic mucosa and to remove polyps, which may eventually become cancerous. In recent decades, new technologies have supported clinical decisions, with AI and machine learning (ML) algorithms playing an important role in this process.

This workshop focuses on novel scientific contributions to vision systems and imaging algorithms, as well as autonomous systems for endorobots in gastrointestinal (GI) endoscopy. This includes lesion and lumen detection, 3D reconstruction of the GI tract, and hand-eye coordination. Applications include, but are not limited to, wireless capsule endoscopy, standard optical colonoscopy, and endorobots for endoscopy. Contributions should demonstrate potential clinical benefits, describing the current stage of development, the path forward, and the challenges to overcome before translation into clinical practice.

Localising features of interest such as the lumen, lesions, or polyps to be removed requires their detection in the images of the video stream. Deep learning-based approaches to this task have become very successful in recent years, with annotated datasets publicly available to train them. Challenges remain, especially concerning robustness to illumination changes caused by the reflective mucosal surface. Potential solutions to these issues will be discussed in this workshop.
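As an illustration of the illumination problem above, specular highlights on the wet mucosa often appear as near-white, low-saturation blobs, and a common preprocessing step is to mask them out before detection. The sketch below shows the idea in NumPy; the thresholds are illustrative placeholders, not tuned values from any published system.

```python
import numpy as np

def specular_mask(frame, brightness_thresh=230, saturation_thresh=30):
    """Flag likely specular highlights in an RGB endoscopy frame.

    A pixel is flagged when it is both very bright and nearly
    achromatic, a crude but common signature of specular reflection.
    Thresholds here are illustrative, not tuned.
    """
    frame = frame.astype(np.float32)
    brightness = frame.max(axis=-1)                        # HSV-style value
    saturation = frame.max(axis=-1) - frame.min(axis=-1)   # crude chroma
    return (brightness >= brightness_thresh) & (saturation <= saturation_thresh)

# Synthetic 4x4 frame: reddish "mucosa" with one near-white highlight pixel
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[...] = (180, 40, 40)       # mucosa-like red
frame[1, 2] = (255, 255, 250)    # specular highlight
mask = specular_mask(frame)      # True only at the highlight
```

In practice such a mask would feed into inpainting or be used to down-weight highlight pixels during detector training.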

Another important aspect of endoscopy is the level of autonomy of a medical device supporting the clinical workforce during the procedure. This autonomous behaviour can benefit from 3D information such as the localisation of the target (lumen or lesion) and the position of an endorobot with respect to the wall of the GI tract. Traditionally, this information can be obtained from a monocular video system using methods such as SLAM, shape-from-template, and non-rigid structure-from-motion. More recently, methods based on deep learning monocular depth estimation have also emerged. This workshop will discuss the progress made with these approaches and the limitations that remain before an endorobot can navigate autonomously and perform autonomous surgical tasks.
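To make the link between monocular depth estimation and 3D localisation concrete, the sketch below back-projects a dense depth map into a camera-frame point cloud with a standard pinhole model. The intrinsics and the constant depth map are placeholders for illustration; a real system would use the endoscope's calibration and a learned depth map.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a dense depth map (H, W) to camera-frame 3D points (H, W, 3)
    with a pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.

    fx, fy, cx, cy are pinhole intrinsics (placeholders here; a real
    system would use the endoscope's calibration).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Toy example: constant 50 mm depth, principal point at the image centre
depth = np.full((4, 4), 50.0)
pts = backproject(depth, fx=100.0, fy=100.0, cx=1.5, cy=1.5)
```

Point clouds obtained this way are the raw input to the localisation and navigation methods mentioned above, whether depth comes from classical multi-view geometry or from a learned monocular estimator.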

The workshop will feature six selected papers, ten poster presentations, and two invited speakers covering both clinical and technical aspects. Topics include, but are not limited to:

  • Image processing

    • Lesion detection and classification

    • Polyp segmentation and detection

    • Haustral folds detection

    • Real-time lumen detection

  • 3D vision for endoscopy

    • 3D reconstruction of the colon

    • CT (Virtual) colonoscopy

    • 3D camera systems

    • Navigation systems

  • Imaging technology

    • Capsule endoscopy

    • Imaging in surgical robots

    • Imaging for soft robotics

    • Imaging for self-assembling robots

  • Translational aspects

    • Clinical trials and lessons learnt

    • Validation of endoscopic systems

    • Explainability and acceptability of technology by patients and clinicians

Registration closed

Manuscripts will undergo double-blind peer review. Please prepare your workshop papers according to the MICCAI submission guidelines (LNCS template, up to 8 pages including text, figures, and tables, plus up to 2 pages of references). Supplementary materials can be PDF documents or MP4 videos (10 MB maximum file size). Camera-ready guidelines can be found here.


Accepted papers will be published in the Springer Nature Lecture Notes in Computer Science (LNCS) proceedings of the MICCAI 2022 workshops (Singapore, 18 September 2022).