Detail of G1CN_TM10.5_G



Project
Title
BDML file for semantic segmentation data of eight anatomical regions (isocortex, olfactory area, hippocampal formation, cerebral nuclei, interbrain, midbrain, hindbrain, and cerebellum) for images of samples obtained by slicing a P7 mouse brain (G1CN_TM10.5_G dataset) that was neurogenic-tagged at E10.5 and visualized with a membrane-localized GFP reporter (G)
Description
Semantic segmentation data of eight anatomical regions (isocortex, olfactory area, hippocampal formation, cerebral nuclei, interbrain, midbrain, hindbrain, and cerebellum) for images of samples obtained by slicing a P7 mouse brain (G1CN_TM10.5_G dataset) that was neurogenic-tagged at E10.5 and visualized with a membrane-localized GFP reporter (G)
Release, Updated
2023-12-24
License
CC BY
Kind
Quantitative data based on Experiment
File Formats
BDML/BD5
Data size
715.6 MB
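
Because BD5 is an HDF5 container, a downloaded copy of the dataset can be inspected with standard HDF5 tooling. The Python sketch below is illustrative only: the local filename is a placeholder, and the code simply walks the HDF5 hierarchy rather than assuming any particular internal group layout of the BD5 file.

# Minimal sketch: list the groups and datasets inside a BD5 (HDF5) file.
# The filename is a hypothetical local copy of the downloaded dataset.
import h5py

def walk_bd5(path):
    """Print every group and dataset in the HDF5 hierarchy of a BD5 file."""
    with h5py.File(path, "r") as f:
        def show(name, obj):
            kind = "dataset" if isinstance(obj, h5py.Dataset) else "group"
            shape = getattr(obj, "shape", "")
            print(f"{kind:7s} /{name} {shape}")
        f.visititems(show)

if __name__ == "__main__":
    walk_bd5("G1CN_TM10.5_G.h5")  # hypothetical local filename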

Organism
Mus musculus (NCBI:txid10090)
Strain(s)
C57BL/6 Neurog1^{CreER} (G1C); Tau^{mGFP-nLacZ}
Cell Line
-

Datatype
-
Molecular Function (MF)
Biological Process (BP)
nervous system development (GO:0007399)
Cellular Component (CC)
plasma membrane (GO:0005886), neuron projection (GO:0043005)
Biological Imaging Method
X scale
1.0 micrometer/pixel
Y scale
1.0 micrometer/pixel
Z scale
20 micrometer/slice
T scale
-

Image Acquisition
Experiment type
-
Microscope type
-
Acquisition mode
-
Contrast method
-
Microscope model
-
Detector model
-
Objective model
-
Filter set
-

Summary of Methods
Under article review.
Related paper(s)

Shimojo, Yuki, Suehara, Kazuki, Hirata, Tatsumi, Tohsato, Yukako (2024) Segmentation of Mouse Brain Slices with Unsupervised Domain Adaptation Considering Cross-sectional Locations, IPSJ Transactions on Bioinformatics, Volume 17, 33-39

Published in 2024

(Abstract) Images of mouse brain slices, obtained under slightly different experimental conditions, are available in 84 datasets in the NeuroGT database (https://ssbd.riken.jp/neurogt/). Our goal was to obtain semantic segmentation results for eight brain anatomical regions. However, of the 84 datasets, only one had true labels that could be used to train a convolutional neural network (CNN), and it was incomplete (131 out of 162 images). A segmentation model trained with the labeled images was less accurate on other images obtained under different experimental conditions because of differences in image properties. We therefore tried Unsupervised Domain Adaptation (UDA), wherein the parameters of the CNN trained on the labeled images (source) were transferred to the unlabeled images (target). We used the positional information of the sample slices associated with each image to propose a novel loss function that approximated the class occurrence probabilities of segmentation results obtained from source and target images of brain samples at similar sliced locations, and we introduced it into the UDA. The proposed UDA method achieved an mIoU of 78.34%, which was 8% more accurate than previous UDA methods such as Contrastive Learning and Self-Training (CLST) and Maximum Classifier Discrepancy (MCD). We demonstrated experimentally that the proposed method is useful for segmenting biomedical images with a small amount of incomplete training data.
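
The central idea of the abstract, pulling the class occurrence probabilities of source and target segmentation outputs together for images from similar slice locations, can be illustrated with a short sketch. This is not the paper's exact formulation: the pairing of images by normalized slice position, the Gaussian position weighting, the softmax-averaged class frequencies, and the L1 divergence are all assumptions made here for illustration.

# Illustrative sketch (not the published implementation): a loss that encourages
# the per-image class occurrence distributions of source and target predictions
# to agree when the two images come from similar slice locations.
import torch

def class_occurrence(logits):
    """logits: (N, C, H, W) -> (N, C) average softmax probability per class."""
    probs = torch.softmax(logits, dim=1)
    return probs.mean(dim=(2, 3))

def occurrence_matching_loss(src_logits, tgt_logits, src_pos, tgt_pos, sigma=0.1):
    """
    src_pos, tgt_pos: (N,) normalized slice positions along the cutting axis.
    Source/target pairs at similar positions receive larger weights
    (Gaussian weight on the position difference; this weighting is an assumption).
    """
    p_src = class_occurrence(src_logits)                              # (Ns, C)
    p_tgt = class_occurrence(tgt_logits)                              # (Nt, C)
    w = torch.exp(-((src_pos[:, None] - tgt_pos[None, :]) ** 2) / (2 * sigma ** 2))
    diff = (p_src[:, None, :] - p_tgt[None, :, :]).abs().sum(dim=2)   # (Ns, Nt) L1
    return (w * diff).sum() / w.sum().clamp_min(1e-8)

In training, a term like this would be added to the usual supervised loss on the labeled source images; the Gaussian weighting is only one possible way to encode "similar sliced locations".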

Contact
Yukako Tohsato, Ritsumeikan University, Faculty of Information Science and Engineering, Laboratory of Computational Biology
Contributors
Yuki Shimojo, Kazuki Suehara, Tatsumi Hirata, Yukako Tohsato

Local ID
G1CN_TM10.5_G
BDML ID
1d04ff1d-4be9-415c-94e9-d1c8c8b22504
BDML/BD5
Database Link
Source