For medical image segmentation, deep learning approaches using convolutional neural networks (CNNs) are currently superseding classical methods. For good accuracy, large annotated training data sets are required. As expert annotations are costly to acquire, crowdsourcing, i.e. obtaining several annotations from a large group of non-experts, has been proposed. Medical applications, however, require a high accuracy of the segmented regions. It is generally agreed that a larger training set yields better CNN performance; however, it is unclear to which quality standards the annotations need to comply for sufficient accuracy. In the case of crowdsourcing, this translates to the question of how many annotations per image need to be obtained. In this work, we investigate the effect of the annotation quality used for model training on the predicted results of a CNN. Several annotation sets with different quality levels were generated by applying the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm to crowdsourced segmentations. CNN models were trained using these annotations and the results were compared to a ground truth. We found that increasing annotation quality improves CNN performance in a logarithmic way. Furthermore, we evaluated whether a higher number of annotations can compensate for lower annotation quality by comparing CNN predictions from models trained on differently sized training data sets. We found that once a minimum quality of at least 3 annotations per image can be acquired, it is more efficient to distribute crowdsourced annotations over as many images as possible. These results can serve as a guideline for the image assignment mechanism of future crowdsourcing applications.
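As an illustration of the annotation-fusion step described above, the sketch below merges several crowdsourced segmentations of a single image into one consensus annotation using SimpleITK's STAPLE filter. It is a minimal sketch, not the paper's pipeline: the file names, foreground label value, and probability threshold are illustrative assumptions.

```python
# Minimal sketch: fuse crowdsourced segmentations with STAPLE (SimpleITK).
# Paths, the foreground value, and the 0.5 threshold are assumptions for illustration.
import SimpleITK as sitk

# Crowdsourced binary segmentations of one image (hypothetical file names).
annotation_paths = ["worker_01.nii.gz", "worker_02.nii.gz", "worker_03.nii.gz"]
segmentations = [sitk.ReadImage(p, sitk.sitkUInt8) for p in annotation_paths]

FOREGROUND = 1.0  # label value of the segmented structure in each annotation

# STAPLE estimates a consensus probability map from the individual annotations.
probability_map = sitk.STAPLE(segmentations, FOREGROUND)

# Threshold the probability map to obtain a fused binary annotation for CNN training.
fused_annotation = probability_map > 0.5

sitk.WriteImage(fused_annotation, "fused_annotation.nii.gz")
```

Repeating such a fusion with different numbers of annotations per image would yield training sets of varying quality, which could then be compared via the resulting CNN performance as described in the abstract.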