Welcome to PyGOP’s documentation!¶
PyGOP provides a reference implementation of existing algorithms based on the Generalized Operational Perceptron (GOP), built on the Keras and TensorFlow libraries. The implementation offers a user-friendly interface while allowing a high degree of customization, including user-defined operators, custom loss functions, and custom metric functions that require full-batch evaluation, such as precision, recall, or F1 score. In addition, PyGOP supports different computation environments (CPU/GPU) on both a single machine and a cluster using the SLURM job scheduler. What's more, since training GOP-based algorithms can take days, PyGOP can resume from what has already been learned in case the script is interrupted mid-run!
What is Generalized Operational Perceptron?¶
Generalized Operational Perceptron (GOP) is an artificial neuron model proposed to replace the traditional McCulloch-Pitts neuron model. While the standard perceptron only performs a linear transformation followed by non-linear thresholding, the GOP model encapsulates a diversity of both linear and non-linear operations (with the traditional perceptron as a special case). Each GOP is characterized by learnable synaptic weights and an operator set comprising three types of operations: a nodal operation, a pooling operation, and an activation operation. These three operations loosely resemble the neuronal activity in a biological (mammalian) learning system, in which each neuron conducts electrical signals through three distinct operations:
- Modification of the input signal from the synapse connection in the Dendrites.
- Pooling of the modified input signals in the Soma.
- Sending pulses when the pooled potential exceeds a limit in the Axon hillock.
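The three operations above can be illustrated with a minimal NumPy sketch of a single GOP neuron. This is not PyGOP's API; the operator choices below (multiplicative nodal, summation pooling, sigmoid activation) are hypothetical picks that reduce the GOP to a classic perceptron, its special case:

```python
import numpy as np

def nodal_multiplication(x, w):
    # Nodal operation: modify each input signal with its synaptic weight
    # (elementwise multiplication -- the traditional perceptron choice).
    return x * w

def pool_sum(z):
    # Pooling operation: combine the modified input signals (here: summation).
    return np.sum(z)

def activation_sigmoid(y):
    # Activation operation: non-linear thresholding of the pooled potential.
    return 1.0 / (1.0 + np.exp(-y))

def gop_neuron(x, w, b, nodal, pool, activation):
    # A GOP applies its three operators in sequence: nodal -> pooling -> activation.
    return activation(pool(nodal(x, w)) + b)

x = np.array([0.5, -1.0, 2.0])   # input signals
w = np.array([0.2, 0.4, 0.1])    # learnable synaptic weights
out = gop_neuron(x, w, 0.0, nodal_multiplication, pool_sum, activation_sigmoid)
```

With these particular operators the neuron computes `sigmoid(sum(x * w) + b)`; swapping in other nodal, pooling, or activation operators yields the richer transformations a GOP can express.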
By defining a set of nodal, pooling, and activation operators, each GOP can select the operators best suited to the problem at hand. Learning a GOP-based network thus involves finding the suitable operator sets as well as updating the synaptic weights. The authors of GOP proposed the Progressive Operational Perceptron (POP) algorithm to progressively learn GOP-based networks. Later, the Heterogeneous Multilayer Generalized Operational Perceptron (HeMLGOP) algorithm and its variants (HoMLGOP, HeMLRN, HoMLRN) were proposed to learn heterogeneous architectures of GOPs with an efficient operator set search procedure. In addition, a fast version of POP (POPfast) was proposed, together with memory extensions (POPmemO, POPmemH) that augment POPfast by incorporating a memory path.
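To make the idea of operator set search concrete, here is a toy sketch, again not PyGOP's API: with small hypothetical libraries of nodal, pooling, and activation operators, the search exhaustively evaluates every combination and keeps the one with the lowest loss (real algorithms such as HeMLGOP also train the synaptic weights for each candidate set, which is omitted here for brevity):

```python
import itertools
import numpy as np

# Hypothetical operator libraries (illustrative, not PyGOP's built-in sets).
nodal_ops = {
    "mult": lambda x, w: x * w,
    "exp": lambda x, w: np.exp(x * w) - 1.0,
}
pool_ops = {
    "sum": lambda z: np.sum(z, axis=-1),
    "max": lambda z: np.max(z, axis=-1),
}
act_ops = {
    "sigmoid": lambda y: 1.0 / (1.0 + np.exp(-y)),
    "tanh": np.tanh,
}

def evaluate(nodal, pool, act, X, y, w):
    # Loss of one candidate operator set with fixed weights w
    # (a stand-in for training the weights and measuring validation loss).
    pred = act(pool(nodal(X, w)))
    return np.mean((pred - y) ** 2)

def search_operator_set(X, y, w):
    # Exhaustive search over all operator combinations; return the names
    # of the (nodal, pooling, activation) triple with the lowest loss.
    best_names, best_loss = None, np.inf
    for nn, pn, an in itertools.product(nodal_ops, pool_ops, act_ops):
        loss = evaluate(nodal_ops[nn], pool_ops[pn], act_ops[an], X, y, w)
        if loss < best_loss:
            best_names, best_loss = (nn, pn, an), loss
    return best_names

X = np.array([[0.5, -1.0], [2.0, 0.3]])  # two samples, two inputs
y = np.array([0.2, 0.8])                 # targets
w = np.array([0.1, 0.2])                 # fixed synaptic weights
best = search_operator_set(X, y, w)
```

The algorithms listed below differ mainly in how they organize this search (progressive vs. layer-wise, heterogeneous vs. homogeneous operator choices) and whether weights are fully trained or randomized during the search.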
- Change Log
- Quick Start
- Data Feeding Mechanism
- Common Interface
- Computation Environments
- Progressive Operational Perceptron (POP)
- Heterogeneous Multilayer Operational Perceptron (HeMLGOP)
- Homogeneous Multilayer Operational Perceptron (HoMLGOP)
- Heterogeneous Multilayer Randomized Network (HeMLRN)
- Homogeneous Multilayer Randomized Network (HoMLRN)
- Fast Progressive Operational Perceptron (POPfast)
- Progressive Operational Perceptron with Memory (POPmem)