Face Recognition with the CelebA dataset

The dataset is a small subset of the CelebA dataset containing facial images of 20 identities, each with 100/30/30 train/validation/test images. We have extracted deep features (using a pretrained VGGface model) to be used as input to all networks.

Preparation

To run this example, please fetch the examples directory from https://github.com/viebboy/PyGOP/tree/master/examples

The examples directory includes the following files:

  • prepare_miniCelebA.py: this script loads the raw images and generates the deep features. However, we have already extracted the features and provide them as miniCelebA_deepfeatures.tar.gz (see below).
  • data_utility.py: this script includes the data loading functionality.
  • train_miniCelebA.py: the training script used for all algorithms.

To run this example, it suffices to download miniCelebA_deepfeatures.tar.gz and extract it into the same folder as train_miniCelebA.py and data_utility.py.

However, readers who want to perform the data extraction step themselves can download the raw data miniCelebA.tar.gz and extract it in the same examples folder. After that, running prepare_miniCelebA.py will generate the deep features in a data directory inside the same folder. Note that prepare_miniCelebA.py requires a package called keras_vggface, which depends on an older version of keras; it is advisable to create a new environment for this data preparation script to avoid breaking your current keras setup.
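For orientation, a minimal sketch of what the extraction amounts to with keras_vggface is given below. The actual procedure lives in prepare_miniCelebA.py; the input size, backbone choice, and pooling here are assumptions, not a transcript of that script:

    # Sketch of deep feature extraction with a pretrained VGGFace backbone.
    # Assumptions: 224x224 RGB inputs, vgg16 backbone, average pooling;
    # see prepare_miniCelebA.py for the actual procedure.
    import numpy as np
    from keras_vggface.vggface import VGGFace
    from keras_vggface import utils

    # Pretrained VGGFace without the classifier head; global average pooling
    # collapses the last convolutional maps into one feature vector per image.
    model = VGGFace(model='vgg16', include_top=False,
                    input_shape=(224, 224, 3), pooling='avg')

    def extract_features(images):
        # images: array of shape (N, 224, 224, 3), RGB, float values
        x = utils.preprocess_input(images.astype('float32'), version=1)
        return model.predict(x)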

Usage

After preparing the necessary files and data, the examples folder should contain at least the following (a quick sketch of loading these files follows the list):

  • examples/data_utility.py
  • examples/train_miniCelebA.py
  • examples/data/miniCelebA_x_train.npy
  • examples/data/miniCelebA_y_train.npy
  • examples/data/miniCelebA_x_val.npy
  • examples/data/miniCelebA_y_val.npy
  • examples/data/miniCelebA_x_test.npy
  • examples/data/miniCelebA_y_test.npy
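These arrays can be inspected directly with numpy. The shape noted in the comment follows from the 20-identity, 100/30/30 split described above; the feature dimension depends on the extractor:

    # Load the prepared deep features; data_utility.py wraps this kind of
    # loading for the training script.
    import numpy as np

    x_train = np.load('data/miniCelebA_x_train.npy')
    y_train = np.load('data/miniCelebA_y_train.npy')
    x_val = np.load('data/miniCelebA_x_val.npy')
    y_val = np.load('data/miniCelebA_y_val.npy')
    x_test = np.load('data/miniCelebA_x_test.npy')
    y_test = np.load('data/miniCelebA_y_test.npy')

    # 20 identities x 100 training images = 2000 training samples
    print(x_train.shape, y_train.shape)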

The signature of train_miniCelebA.py is as follows:

python train_miniCelebA.py -m <model name> -c <computation device (cpu/gpu)>
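A command-line interface of this shape can be expressed with argparse. The sketch below only mirrors the signature above; it is a hypothetical reconstruction, not necessarily how train_miniCelebA.py is written:

    # Hypothetical argument parsing matching the signature above.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('-m', '--model', required=True,
                        help='model name, e.g. hemlgop or pop')
    parser.add_argument('-c', '--computation', required=True,
                        choices=['cpu', 'gpu'],
                        help='computation device')
    args = parser.parse_args()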

For example, to train HeMLGOP on CPU, we simply run:

python train_miniCelebA.py -m hemlgop -c cpu

or, to train POP on GPU, we simply run:

python train_miniCelebA.py -m pop -c gpu

For CPU, the script is configured to run 8 parallel processes; for GPU, it is configured to use 4 GPUs. Please adjust the configuration inside train_miniCelebA.py to suit your setup.
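As a rough illustration of what such a device configuration might look like inside the script (the parameter names below are assumptions; verify them against train_miniCelebA.py and the PyGOP documentation):

    # Assumed device configuration; the parameter name and value format are
    # guesses to be checked against your copy of train_miniCelebA.py.
    if args.computation == 'cpu':
        params['search_computation'] = ('cpu', 8)             # 8 parallel processes
    else:
        params['search_computation'] = ('gpu', [0, 1, 2, 3])  # 4 GPUs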

After the training process completes, the performance and the time taken will be written to result.txt in the same folder.