UNet Training

New Feature!

Train your model right from the GUI!

With PlantSeg you can train bespoke segmentation models! This is especially useful for getting good results on a large dataset on which the built-in models do not perform well.
First proofread a few images from the dataset, then train a new model on this high-quality data and run it on the whole dataset.

You can also fine-tune existing models on your data.

Dataset

To train from a dataset stored on disk, create the directories `train` and `val`. Your training images must be stored as HDF5 (`.h5`) files in these directories. Each file must contain the input image under the `raw` key and the segmentation under the `label` key.
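A minimal sketch of writing one training sample in this layout with `h5py` (the file name and array contents below are placeholders; replace them with your own image and proofread segmentation):

```python
import os

import h5py
import numpy as np

# Placeholder data: a 3D input volume and its label volume, same spatial shape.
raw = np.random.rand(32, 128, 128).astype("float32")   # input image (z, y, x)
label = np.zeros((32, 128, 128), dtype="uint16")       # segmentation labels

# One .h5 file per sample inside the train/ (or val/) directory.
os.makedirs("train", exist_ok=True)
with h5py.File("train/sample_0001.h5", "w") as f:
    f.create_dataset("raw", data=raw, compression="gzip")
    f.create_dataset("label", data=label, compression="gzip")
```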

Train from GUI

To train from images loaded in the GUI, you need a layer containing the input image and one containing the segmentation. Make sure the quality of the segmentation is as good as possible by using the proofreading tool.

Widgets


  • Train from: Whether to train from a dataset on disk or from loaded layers in Napari.
  • Dataset: Dataset directory. It must contain a `train` and a `val` directory, each containing h5 files. Input/Output keys must be `raw` and `label` respectively.
  • Pretrained model: Optionally select an existing model to retrain. Hover over the name to show the model description. Leave empty to create a new model.
  • Model name: The name of your new model. It must not be the name of an existing model.
  • Description: Model description will be saved alongside the model.
  • In and Out Channels: Number of input and output channels.
  • Feature dimensions: Number of feature maps at each level of the encoder. If a single number f_maps is given, the number of feature maps at encoder level k follows the geometric progression f_maps · 2^k, k = 0, 1, 2, 3. Cannot be modified for pretrained models.
  • Patch size: Size (in voxels) of the patches used during training.
  • Data Resolution: Voxel size of the training data in µm. Initialized automatically from the chosen data when possible.
  • Max iterations: Maximum number of iterations after which the training will be stopped. Stops earlier if the accuracy converges.
  • Dimensionality: Whether to train a 3D or a 2D UNet.
  • Microscopy modality: Modality of the model (e.g. confocal, light-sheet ...).
  • Prediction type: Type of prediction (e.g. cell boundaries prediction or nuclei...).
  • Sparse: If checked, the final layer uses a Softmax; otherwise it uses a Sigmoid.
  • Device: Hardware device to train on (e.g. `cpu` or `cuda`).
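To illustrate the Feature dimensions option above, here is a small sketch of how a single f_maps value expands into per-level feature map counts. The helper name is hypothetical; it assumes the standard UNet convention of doubling the number of feature maps at each encoder level:

```python
def features_per_level(f_maps: int, num_levels: int = 4) -> list[int]:
    """Expand a single f_maps value into the geometric progression
    f_maps * 2**k for each encoder level k (doubling per level)."""
    return [f_maps * 2 ** k for k in range(num_levels)]

# e.g. an initial width of 32 gives [32, 64, 128, 256]
print(features_per_level(32))
```

Passing an explicit list in the GUI instead of a single number lets you set each level's width individually.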