For my thesis, I am attempting to detect faults/inconsistencies in 3D prints.
Generating data takes a long time (because the print process itself takes a long time), so my dataset is limited. I've got two classes with about 100-150 images each, for a total of roughly 250-300 images. I then augmented those images 8x with rotations and flips. I first tried training EfficientDet D0, but the results were pretty disappointing: perhaps only a quarter of the faults were being detected.
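For context, the 8x augmentation is the four 90° rotations of each image plus the same four rotations of its mirror. My script for it looks roughly like this (directory names and file pattern are placeholders):

```python
# Generate the 8 dihedral variants of each image:
# 4 rotations (0/90/180/270 degrees) x 2 (original + mirrored).
from pathlib import Path
from PIL import Image, ImageOps

SRC = Path("dataset/raw")        # placeholder input directory
DST = Path("dataset/augmented")  # placeholder output directory
DST.mkdir(parents=True, exist_ok=True)

for img_path in SRC.glob("*.png"):
    img = Image.open(img_path)
    for flipped, base in ((0, img), (1, ImageOps.mirror(img))):
        for k in range(4):  # rotate by 90 * k degrees
            aug = base.rotate(90 * k, expand=True)
            aug.save(DST / f"{img_path.stem}_f{flipped}_r{k}.png")
```

One caveat I'm aware of: since this is detection, the bounding box annotations need the same transforms applied; the snippet above only touches the pixels.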
Someone on this subreddit told me I should use an architecture like "SSD ResNet50 V1 FPN 640×640 (RetinaNet50)" because it apparently works better for small datasets, though I don't know why. So I went and tried it, training with the same parameters as before and for roughly the same time. The results were even worse.
Right now I would like to train a model from scratch and compare the results against the transfer learning results, but I have no clue how to get started. I've been googling around but haven't found a clear explanation yet. Could somebody please point me in the right direction?
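To be concrete about what I mean by "from scratch": if I understand correctly, in the TF Object Detection API it would amount to leaving fine_tune_checkpoint out of pipeline.config so the weights start from random initialization, but I'm not sure that's the right approach. The closest I've managed is a small classifier trained from scratch in Keras as a baseline (not a detector; the directory layout, image size, and hyperparameters are placeholders):

```python
# Minimal from-scratch baseline: a small CNN classifier with randomly
# initialized weights (no pretrained backbone). Two classes, as above.
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train",  # placeholder: one subfolder per class
    image_size=IMG_SIZE,
    batch_size=16,
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # two fault classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=30)
```

Is that a reasonable starting point, or is there a standard way to train one of the model zoo detectors from random initialization?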
Also, why should SSD ResNet50 V1 FPN 640×640 (RetinaNet50) work better with smaller datasets, and why shouldn't I just use ResNet152? From what I've gathered, it should work better than ResNet50 in my case because it goes deeper, no?