What is a `"Python"` layer in caffe?

Caffe has a layer type "Python".

For instance, this layer type can be used as a loss layer.
On other occasions it is used as an input layer.

What is this layer type?
How can this layer be used?

Rauch answered 27/12, 2016 at 11:17 Comment(0)

Prune's and Bharat's answers give the overall purpose of a "Python" layer: a general-purpose layer that is implemented in python rather than C++.

I intend this answer to serve as a tutorial for using a "Python" layer.


A Tutorial for "Python" layer

What is a "Python" layer?

Please see the excellent answers of Prune and Bharat.

Pre-requisite

In order to use a "Python" layer you need to compile caffe with the flag

WITH_PYTHON_LAYER := 1

set in 'Makefile.config'.

How to implement a "Python" layer?

A "Python" layer should be implemented as a python class derived from caffe.Layer base class. This class must have the following four methods:

import caffe
class my_py_layer(caffe.Layer):
  def setup(self, bottom, top):
    pass

  def reshape(self, bottom, top):
    pass

  def forward(self, bottom, top):
    pass

  def backward(self, top, propagate_down, bottom):
    pass

What are these methods?

def setup(self, bottom, top): This method is called once when caffe builds the net. It should check that the number of inputs (len(bottom)) and the number of outputs (len(top)) are as expected.
You should also allocate the layer's internal parameters here (i.e., self.add_blobs()); see this thread for more information.
This method has access to self.param_str - a string passed from the prototxt to the layer. See this thread for more information.

def reshape(self, bottom, top): This method is called whenever caffe reshapes the net. This function should allocate the outputs (each of the top blobs). The outputs' shape is usually related to the bottoms' shape.

def forward(self, bottom, top): Implements the forward pass from bottom to top.

def backward(self, top, propagate_down, bottom): This method implements backpropagation: it propagates the gradients from top to bottom. propagate_down is a Boolean vector of length len(bottom) indicating to which of the bottoms the gradient should be propagated.

You can find some more information about the bottom and top inputs in this post.
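
To make the four methods concrete, here is a minimal sketch of a complete layer (a toy example of my own, not one of the linked examples): it outputs its single input multiplied by two.

import caffe

class ScaleByTwoLayer(caffe.Layer):
  """A toy "Python" layer: top[0] = 2 * bottom[0]."""

  def setup(self, bottom, top):
    # verify the number of inputs/outputs once, when the net is built
    if len(bottom) != 1 or len(top) != 1:
      raise Exception("ScaleByTwoLayer expects exactly one bottom and one top")

  def reshape(self, bottom, top):
    # the output has the same shape as the input
    top[0].reshape(*bottom[0].data.shape)

  def forward(self, bottom, top):
    # y = 2 * x
    top[0].data[...] = 2.0 * bottom[0].data

  def backward(self, top, propagate_down, bottom):
    # dL/dx = 2 * dL/dy; propagate only if caffe asks for it
    if propagate_down[0]:
      bottom[0].diff[...] = 2.0 * top[0].diff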

Examples
You can see some examples of simplified python layers here, here and here.
An example of a "moving average" output layer can be found here.

Trainable parameters
A "Python" layer can have trainable parameters (like "Conv", "InnerProduct", etc.).
You can find more information on adding trainable parameters in this thread and this one. There's also a very simplified example in caffe git.
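
As a very rough sketch (assuming your caffe build lets a python layer allocate parameter blobs via self.blobs.add_blob, as discussed in the threads above; treat the exact API as an assumption and check the linked example in caffe git), a layer with a single trainable scalar might look like:

import caffe
import numpy as np

class TrainableScaleLayer(caffe.Layer):
  def setup(self, bottom, top):
    # allocate one trainable blob holding a single scalar, initialized to 1
    self.blobs.add_blob(1)
    self.blobs[0].data[...] = 1.0

  def reshape(self, bottom, top):
    top[0].reshape(*bottom[0].data.shape)

  def forward(self, bottom, top):
    # y = w * x, where w is the trainable scalar
    top[0].data[...] = self.blobs[0].data[0] * bottom[0].data

  def backward(self, top, propagate_down, bottom):
    # dL/dw = sum(dL/dy * x); dL/dx = w * dL/dy
    self.blobs[0].diff[...] = np.sum(top[0].diff * bottom[0].data)
    if propagate_down[0]:
      bottom[0].diff[...] = self.blobs[0].data[0] * top[0].diff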

How to add a "Python" layer in a prototxt?

See Bharat's answer for details.
You need to add the following to your prototxt:

layer {
  name: 'rpn-data'
  type: 'Python'  
  bottom: 'rpn_cls_score'
  bottom: 'gt_boxes'
  bottom: 'im_info'
  bottom: 'data'
  top: 'rpn_labels'
  top: 'rpn_bbox_targets'
  top: 'rpn_bbox_inside_weights'
  top: 'rpn_bbox_outside_weights'
  python_param {
    module: 'rpn.anchor_target_layer'  # python module name where your implementation is
    layer: 'AnchorTargetLayer'   # the name of the class implementation
    param_str: "'feat_stride': 16"   # optional parameters to the layer
  }
}
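
Note that param_str reaches your layer as a plain string; parsing it is up to your code. Here is one possible sketch for reading the param_str above in your layer's setup (the parsing scheme is my own choice, not something caffe mandates):

import ast
import caffe

class my_py_layer(caffe.Layer):
  def setup(self, bottom, top):
    # self.param_str holds the literal string "'feat_stride': 16";
    # wrap it in braces and evaluate it as a python dict literal
    params = ast.literal_eval('{' + self.param_str + '}')
    self.feat_stride = params['feat_stride']  # -> 16
  # reshape/forward/backward omitted in this sketch...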

How to add a "Python" layer using pythonic NetSpec interface?

It's very simple:

import caffe
from caffe import layers as L

ns = caffe.NetSpec()
# define layers here...
ns.rpn_labels, ns.rpn_bbox_targets, \
  ns.rpn_bbox_inside_weights, ns.rpn_bbox_outside_weights = \
    L.Python(ns.rpn_cls_score, ns.gt_boxes, ns.im_info, ns.data, 
             name='rpn-data',
             ntop=4, # tell caffe to expect four output blobs
             python_param={'module': 'rpn.anchor_target_layer',
                           'layer': 'AnchorTargetLayer',
                           'param_str': '"\'feat_stride\': 16"'})
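
The resulting NetSpec can then be written out as a prototxt file, for instance:

# serialize the generated net definition (the file name is just an example)
with open('my_net.prototxt', 'w') as f:
    f.write(str(ns.to_proto()))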

How to use a net with a "Python" layer?

Invoking python code from caffe is nothing you need to worry about: caffe uses the boost API to call python code from compiled C++.
What do you need to do?
Make sure the python module implementing your layer is in $PYTHONPATH, so that caffe can find it when it imports it.
For instance, if your module my_python_layer.py is in /path/to/my_python_layer.py then

PYTHONPATH=/path/to:$PYTHONPATH $CAFFE_ROOT/build/tools/caffe train -solver my_solver.prototxt

should work just fine.
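
The same applies if you load the net from python rather than the caffe binary; a minimal sketch (paths and file names here are just placeholders):

import sys
sys.path.insert(0, '/path/to')  # make my_python_layer importable

import caffe
caffe.set_mode_cpu()
net = caffe.Net('my_net.prototxt', caffe.TEST)  # the "Python" layers are constructed here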

How to test my layer?

You should always test your layer before putting it to use.
Testing the forward function is entirely up to you, as each layer has a different functionality.
Testing the backward method is easy: since this method only implements the gradient of forward, it can be tested numerically and automatically!
Check out the test_gradient_for_python_layer testing utility:

import numpy as np
from test_gradient_for_python_layer import test_gradient_for_python_layer

# set the inputs
input_names_and_values = [('in_cont', np.random.randn(3, 4)),
                          ('in_binary', np.random.binomial(1, 0.4, (3, 1)))]
output_names = ['out1', 'out2']
py_module = 'folder.my_layer_module_name'
py_layer = 'my_layer_class_name'
param_str = 'some params'
propagate_down = [True, False]

# call the test
test_gradient_for_python_layer(input_names_and_values, output_names, 
                               py_module, py_layer, param_str, 
                               propagate_down)

# you are done!

Special Notice

It is worth noting that python code runs on the CPU only. Thus, if you plan to have a "Python" layer in the middle of your net and you are using a GPU, you will see a significant degradation in performance. This happens because caffe needs to copy blobs from the GPU to the CPU before calling the python layer, and then copy them back to the GPU to proceed with the forward/backward pass.
This degradation is far less significant if the python layer is either an input layer or the topmost loss layer.
Update: On Sep 19th, 2017 PR #5904 was merged into master. This PR exposes the GPU pointers of blobs via the python interface, so you may access blob._gpu_data_ptr and blob._gpu_diff_ptr directly from python, at your own risk.

Rauch answered 5/1, 2017 at 9:37 Comment(5)
Thank you very much for the great explanation! Will a python layer also work on a system where no python is installed? (Can I deploy just the caffe binaries then?) - Ogrady
@Ogrady I think you need the Python libraries for this to work. - Rauch
@Rauch I think there is a bug in the pyloss layer github.com/BVLC/caffe/blob/master/examples/pycaffe/layers/… I think the last line should be bottom[i].diff[...] = sign * top[0].diff[0] * self.diff / bottom[i].num Am I right? Thanks. - Cordilleras
@Cordilleras It does seem odd that top.diff is not taken into account. You may open an issue in github to investigate this point. - Rauch
@Rauch I have created a PR here github.com/BVLC/caffe/pull/5407 - Cordilleras

Very simply, it's a layer in which you provide the implementation code, rather than using one of the pre-defined types -- which are all backed by efficient functions.

If you want to define a custom loss function, go ahead: write it yourself, and create the layer with type Python. If you have non-standard input needs, perhaps some data-specific pre-processing, no problem: write it yourself, and create the layer with type Python.

Bryannabryansk answered 27/12, 2016 at 18:35 Comment(2)
I don't think I agree with "it's a layer in which you provide the implementation code, rather than using one of the pre-defined types". You can implement your own C++ and CUDA layers also. - Dextroamphetamine
Right ... but the existence of other user-defined vehicles doesn't negate that phrase. It's a layer, not the only possible type of layer. - Bryannabryansk

Python layers are different from C++ layers, which need to be compiled, have their parameters added to the proto file, and finally be registered in layer_factory. If you write a python layer, you don't need to worry about any of these things. Layer parameters can be defined as a string, which is accessible as a string in python. For example: if you have a parameter in a layer, you can access it using self.param_str, provided param_str was defined in your prototxt file. Like other layers, you need to define a class with the following functions:

  • Setup - Initialize your layer using parameters obtained from layer variables
  • Forward - Compute the output of the layer from its inputs
  • Backward - Given the prediction and gradients from the next layer, compute the gradients for the previous layer
  • Reshape - Reshape your blob if needed

Prototxt example:

layer {
  name: 'rpn-data'
  type: 'Python'
  bottom: 'rpn_cls_score'
  bottom: 'gt_boxes'
  bottom: 'im_info'
  bottom: 'data'
  top: 'rpn_labels'
  top: 'rpn_bbox_targets'
  top: 'rpn_bbox_inside_weights'
  top: 'rpn_bbox_outside_weights'
  python_param {
    module: 'rpn.anchor_target_layer'
    layer: 'AnchorTargetLayer'
    param_str: "'feat_stride': 16"
  }
}

Here, the name of the layer is rpn-data; bottom and top are the input and output details of the layer, respectively. python_param defines the parameters of the Python layer. 'module' specifies the file name of your layer. If the file called 'anchor_target_layer.py' is located inside a folder called 'rpn', the parameter would be 'rpn.anchor_target_layer'. The 'layer' parameter is the name of your class; in this case it is 'AnchorTargetLayer'. 'param_str' is a parameter string for the layer, which here contains the value 16 for the key 'feat_stride'.
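
To see how 'module' and 'layer' map to python names: for this prototxt, caffe essentially performs the import below, so the folder 'rpn' must be an importable python package (i.e., contain an __init__.py) somewhere on $PYTHONPATH:

# what caffe effectively does when it builds the net:
# 'module' -> the python module to import, 'layer' -> the class to instantiate
from rpn.anchor_target_layer import AnchorTargetLayer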

Unlike C++/CUDA layers, Python layers do not work in a multi-GPU setting in caffe as of now, so that is a disadvantage of using them.

Dextroamphetamine answered 3/1, 2017 at 21:31 Comment(0)
