Undergraduate Pattern Recognition Course Project: MTCNN for Eye (Face) Detection
This work reproduces MTCNN, a Joint Face Detection and Alignment method using Multi-task Cascaded Convolutional Networks.
1. Download the WIDER Face training data, unzip it to replace WIDER_train, and put it into the prepare_data folder.
2. Download the facial landmark training data, unzip it, and put it into the prepare_data folder.
3. Run prepare_data/gen_12net_data.py to generate training data (Face Detection Part) for PNet.
4. Run gen_landmark_aug_12.py to generate training data (Face Landmark Detection Part) for PNet.
5. Run gen_imglist_pnet.py to merge the two parts of training data.
6. Run gen_PNet_tfrecords.py to generate the tfrecord for PNet.
7. After training PNet, run gen_hard_example to generate training data (Face Detection Part) for RNet (the IoU-based labeling it performs is sketched after this list).
8. Run gen_landmark_aug_24.py to generate training data (Face Landmark Detection Part) for RNet.
9. Run gen_imglist_rnet.py to merge the two parts of training data.
10. Run gen_RNet_tfrecords.py to generate tfrecords for RNet (you should run this script four times to generate the neg, pos, part and landmark tfrecords respectively).
11. After training RNet, run gen_hard_example to generate training data (Face Detection Part) for ONet.
12. Run gen_landmark_aug_48.py to generate training data (Face Landmark Detection Part) for ONet.
13. Run gen_imglist_onet.py to merge the two parts of training data.
14. Run gen_ONet_tfrecords.py to generate tfrecords for ONet (you should run this script four times to generate the neg, pos, part and landmark tfrecords respectively).
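gen_hard_example feeds the training images through the previously trained stage and labels its detections by IoU against the ground-truth boxes. Below is a minimal sketch of that labeling step, assuming the IoU thresholds from the MTCNN paper; the helper names are illustrative, not the actual functions in this repo.

```python
import numpy as np

def iou(box, gt_boxes):
    """IoU between one candidate box and all ground-truth boxes, each [x1, y1, x2, y2]."""
    box_area = (box[2] - box[0] + 1) * (box[3] - box[1] + 1)
    gt_area = (gt_boxes[:, 2] - gt_boxes[:, 0] + 1) * (gt_boxes[:, 3] - gt_boxes[:, 1] + 1)
    xx1 = np.maximum(box[0], gt_boxes[:, 0])
    yy1 = np.maximum(box[1], gt_boxes[:, 1])
    xx2 = np.minimum(box[2], gt_boxes[:, 2])
    yy2 = np.minimum(box[3], gt_boxes[:, 3])
    inter = np.maximum(0, xx2 - xx1 + 1) * np.maximum(0, yy2 - yy1 + 1)
    return inter / (box_area + gt_area - inter)

def assign_label(det_box, gt_boxes, neg_thresh=0.3, part_thresh=0.4, pos_thresh=0.65):
    """Label a detection from the previous stage as neg / part / pos by its best IoU
    with the ground truth (thresholds follow the MTCNN paper)."""
    best = iou(det_box, gt_boxes).max()
    if best < neg_thresh:
        return 0       # negative sample
    if best >= pos_thresh:
        return 1       # positive sample
    if best >= part_thresh:
        return -1      # part-face sample
    return None        # IoU in the ambiguous gap: sample is discarded
```

The labeled crops are then used as the face-detection training data for the next stage.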
Since MTCNN is a multi-task network, we should pay attention to the format of the training data. The format is:
[path to image] [cls_label] [bbox_label] [landmark_label]
For a pos sample: cls_label=1, bbox_label is calculated, landmark_label=[0,0,0,0,0,0,0,0,0,0].
For a part sample: cls_label=-1, bbox_label is calculated, landmark_label=[0,0,0,0,0,0,0,0,0,0].
For a landmark sample: cls_label=-2, bbox_label=[0,0,0,0], landmark_label is calculated.
For a neg sample: cls_label=0, bbox_label=[0,0,0,0], landmark_label=[0,0,0,0,0,0,0,0,0,0].
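To make the format concrete, here is a minimal parsing sketch. It assumes every line is fully padded exactly as described above (path, cls_label, 4 bbox values, 10 landmark values); parse_annotation_line is a hypothetical helper for illustration, not a function from this repo.

```python
import numpy as np

def parse_annotation_line(line):
    """Split one fully padded line of the merged training list into its four fields."""
    parts = line.strip().split()
    path = parts[0]
    cls_label = int(float(parts[1]))
    bbox = np.array(parts[2:6], dtype=np.float32)       # regression target, zeros if unused
    landmark = np.array(parts[6:16], dtype=np.float32)  # 5 (x, y) points, zeros if unused
    return path, cls_label, bbox, landmark

# A positive sample would then look like (the offsets are made-up numbers):
# "train/pos/0.jpg 1 0.04 -0.07 0.12 0.05 0 0 0 0 0 0 0 0 0 0"
```

The cls_label decides which loss terms a sample contributes to, which is why unused fields are simply padded with zeros.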
Since there is less training data for landmarks, I use transformation, random rotation and random flipping for data augmentation (the landmark detection result is still not that good).
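As one concrete piece of that augmentation, a horizontal flip must also mirror the landmark x coordinates and swap the left/right points. A minimal sketch, assuming landmarks normalized to [0, 1] and the usual 5-point order (left eye, right eye, nose, left mouth corner, right mouth corner); the helper name is illustrative, not from this repo.

```python
import numpy as np

def flip_horizontally(image, landmarks):
    """Mirror an (H, W, C) image and its 5 facial landmarks left-right.
    landmarks: array of shape (5, 2) with (x, y) normalized to [0, 1]."""
    flipped_image = image[:, ::-1]        # flip the width axis
    flipped = landmarks.copy()
    flipped[:, 0] = 1.0 - flipped[:, 0]   # mirror x coordinates
    # after mirroring, left/right landmarks change roles, so swap them
    flipped[[0, 1]] = flipped[[1, 0]]     # eyes
    flipped[[3, 4]] = flipped[[4, 3]]     # mouth corners
    return flipped_image, flipped
```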
Results on FDDB
MIT LICENSE