OSLSM

OSLSM (One-Shot Learning for Semantic Segmentation) [1] was the first work to propose a two-branch approach to one-shot semantic segmentation. The conditioning branch trains a network that produces parameters $\theta$ from the support image and its mask, and the segmentation branch outputs the final mask for the query image conditioned on $\theta$. Below are some details from reading and implementing it.
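As a rough illustration of this two-branch idea, here is a minimal PyTorch sketch (the authors' release is in Caffe; every module, dimension, and name below is an illustrative assumption, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class TwoBranchOneShotSeg(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Conditioning branch: embeds the (support image, support mask) pair
        # and regresses the parameters theta = (w, b) of a pixel-wise linear
        # classifier. A tiny stand-in replaces the paper's VGG backbone.
        self.cond_backbone = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.param_head = nn.Linear(64, feat_dim + 1)  # w (feat_dim) + b (1)
        # Segmentation branch: extracts dense features from the query image.
        self.seg_backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
        )

    def forward(self, support_img, support_mask, query_img):
        # Conditioning branch: the (B,1,H,W) mask joins as a 4th channel.
        cond_in = torch.cat([support_img, support_mask], dim=1)
        z = self.cond_backbone(cond_in).flatten(1)      # (B, 64)
        theta = self.param_head(z)                      # (B, feat_dim + 1)
        w, b = theta[:, :-1], theta[:, -1]              # classifier params
        # Segmentation branch: dense features of the query image.
        feats = self.seg_backbone(query_img)            # (B, C, H, W)
        # Pixel-wise logistic regression with the predicted parameters.
        logits = torch.einsum('bchw,bc->bhw', feats, w) + b[:, None, None]
        return torch.sigmoid(logits)                    # foreground probability
```

The key point the sketch tries to capture: at test time, no gradient steps are taken for a new class; the conditioning branch produces $\theta$ in a single forward pass.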



Paper & Code & Note


Paper: One-Shot Learning for Semantic Segmentation (BMVC 2017)
Code: Caffe
Note: Mendeley

Paper


Abstract

OSLSM_Abstract.png

  1. They extend low-shot learning methods to support semantic segmentation.
  2. They train a network that produces parameters for an FCN.
  3. They use this FCN to perform dense pixel-level prediction on a test image for the new semantic class.
  4. Their method outperforms the state-of-the-art baseline on the PASCAL VOC 2012 dataset.

Problem Description

A simple approach to one-shot semantic image segmentation is to fine-tune a pre-trained segmentation network on the single labeled support image (a minimal sketch of this baseline follows the list below).

  • This approach is prone to over-fitting, because millions of parameters are updated from a single labeled example.
  • Fine-tuning may also require many iterations of SGD to learn the segmentation network's parameters, making it slow at inference time.
  • Besides, thousands of dense features are computed from a single image, and existing one-shot methods do not scale well to this many features.
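For concreteness, here is a minimal PyTorch sketch of that naive fine-tuning baseline (the `fcn_resnet50` backbone, step count, and learning rate are illustrative assumptions, not the paper's setup):

```python
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import fcn_resnet50

def finetune_one_shot(support_img, support_mask, num_steps=200, lr=1e-3):
    """Fine-tune a pretrained FCN on a single labeled support image.

    support_img: (1, 3, H, W) float tensor; support_mask: (1, H, W) long tensor.
    Updating millions of parameters from one example is exactly why this
    baseline over-fits and costs many SGD iterations at test time.
    """
    model = fcn_resnet50(weights='DEFAULT')
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(num_steps):  # many iterations, all for one image
        optimizer.zero_grad()
        logits = model(support_img)['out']
        loss = F.cross_entropy(logits, support_mask)
        loss.backward()
        optimizer.step()
    return model
```

OSLSM's two-branch design sidesteps this entirely: the new class is handled by one forward pass through the conditioning branch.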

Problem Solution

OSLSM_Overview.png
OSLSM_PS.png

Conceptual Understanding

OSLSM_Architecture.png

Core Concept

OSLSM_weight-hashing.png
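The figure above concerns weight hashing (in the style of HashedNets), which the paper uses so that the conditioning branch's low-dimensional output can be expanded into the full set of classifier parameters without a huge fully-connected layer. A minimal NumPy sketch, with the hash construction and dimensions as illustrative assumptions:

```python
import numpy as np

def hash_expand(v, out_dim, seed=0):
    """Expand a small vector v (length K) into out_dim parameters.

    Each output slot j is tied to a source slot h(j) of v and given a
    random sign xi(j): theta[j] = xi(j) * v[h(j)]. The mapping is fixed
    (seeded), so gradients can flow back into v during training.
    """
    rng = np.random.default_rng(seed)
    h = rng.integers(0, len(v), size=out_dim)    # index hash h(j)
    xi = rng.choice([-1.0, 1.0], size=out_dim)   # sign hash xi(j)
    return xi * v[h]

# e.g. expand a 1000-d conditioning output into 4097 classifier
# parameters (w plus bias); dimensions here are illustrative.
theta = hash_expand(np.random.randn(1000), out_dim=4097)
```

Because the hash mapping is fixed and dense, many output parameters share each source value, which regularizes the parameter prediction compared with learning a full projection matrix.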

Experiments

OSLSM_MIoU.png
OSLSM_QR.png

Code


[Updating]

Note


  • It takes inspiration from few-shot learning and is the first to propose a two-branch approach to one-shot semantic segmentation.

References


[1] Shaban A, Bansal S, Liu Z, et al. One-shot learning for semantic segmentation[J]. arXiv preprint arXiv:1709.03410, 2017.
[2] OSLSM. https://github.com/lzzcd001/OSLSM.


