How to make a custom activation function with only Python in Tensorflow?

Suppose you need to make an activation function that cannot be built using only pre-defined tensorflow building blocks. What can you do?

So in Tensorflow it is possible to make your own activation function. But it is quite complicated: you have to write it in C++ and recompile all of tensorflow [1] [2].

Is there a simpler way?

Layout asked 7/10, 2016 at 16:8 Comment(2)
See also: How do you create a custom activation function with Keras? – Lakitalaks
It is hard to have absolute freedom with any software, but if you give us an idea what activation function (function family) you are trying to create, somebody might be able to help you. – Avouch

Yes, there is!

Credit: It was hard to find the information and get it working, but here is an example copying the principles and code found here and here.

Requirements: Before we start, there are two requirements for this to succeed. First, you need to be able to write your activation as a function on numpy arrays. Second, you have to be able to write the derivative of that function, either as a function in Tensorflow (easier) or, in the worst case, as a function on numpy arrays.

Writing Activation function:

So let's take for example this function which we would want to use as an activation function:

def spiky(x):
    r = x % 1
    if r <= 0.5:
        return r
    else:
        return 0

Which looks as follows: [Spiky Activation plot]

The first step is making it into a numpy function; this is easy:

import numpy as np
np_spiky = np.vectorize(spiky)
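
As a quick sanity check (a minimal sketch with made-up input values), the vectorized function now works element-wise on arrays:

print(np_spiky(np.array([0.2, 0.7, 1.2, 1.7])))
# roughly [0.2  0.   0.2  0. ], up to floating-point noise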

Now we should write its derivative.

Gradient of Activation: In our case it is easy: it is 1 if x mod 1 <= 0.5 and 0 otherwise. So:

def d_spiky(x):
    r = x % 1
    if r <= 0.5:
        return 1
    else:
        return 0
np_d_spiky = np.vectorize(d_spiky)

Now for the hard part: making a TensorFlow function out of it.

Making a numpy function into a tensorflow function: We will start by making np_d_spiky into a tensorflow function. There is a function in tensorflow, tf.py_func(func, inp, Tout, stateful=stateful, name=name) [doc], which turns any numpy function into a tensorflow function, so we can use it:

import tensorflow as tf
from tensorflow.python.framework import ops

np_d_spiky_32 = lambda x: np_d_spiky(x).astype(np.float32)


def tf_d_spiky(x,name=None):
    with tf.name_scope(name, "d_spiky", [x]) as name:
        y = tf.py_func(np_d_spiky_32,
                        [x],
                        [tf.float32],
                        name=name,
                        stateful=False)
        return y[0]

tf.py_func acts on lists of tensors (and returns a list of tensors), which is why we have [x] (and return y[0]). The stateful option tells tensorflow whether the function always gives the same output for the same input (stateful=False), in which case tensorflow can simplify the graph; this is our case and will probably be the case in most situations. One thing to be careful of at this point is that numpy uses float64 but tensorflow uses float32, so you need to convert your function to use float32 before you can convert it to a tensorflow function, otherwise tensorflow will complain. This is why we need to make np_d_spiky_32 first.
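
As a quick check that the wrapper works (a minimal sketch, assuming the TF 1.x session API used throughout this answer), tf_d_spiky can already be evaluated on a tensor:

with tf.Session() as sess:
    x = tf.constant([0.2, 0.7, 1.2, 1.7])
    # the derivative is 1 where x mod 1 <= 0.5 and 0 elsewhere
    print(sess.run(tf_d_spiky(x)))  # expected roughly [1. 0. 1. 0.]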

What about the Gradients? The problem with only doing the above is that even though we now have tf_d_spiky which is the tensorflow version of np_d_spiky, we couldn't use it as an activation function if we wanted to because tensorflow doesn't know how to calculate the gradients of that function.

Hack to get Gradients: As explained in the sources mentioned above, there is a hack to define gradients of a function using tf.RegisterGradient [doc] and tf.Graph.gradient_override_map [doc]. Copying the code from harpone we can modify the tf.py_func function to make it define the gradient at the same time:

def py_func(func, inp, Tout, stateful=True, name=None, grad=None):
    
    # Need to generate a unique name to avoid duplicates:
    rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8))
    
    tf.RegisterGradient(rnd_name)(grad)  # see _MySquareGrad for grad example
    g = tf.get_default_graph()
    with g.gradient_override_map({"PyFunc": rnd_name}):
        return tf.py_func(func, inp, Tout, stateful=stateful, name=name)

Now we are almost done. The only thing is that the grad function we need to pass to the above py_func function needs to take a special form: it takes in the operation and the gradient flowing in from the operations that come after it, and it returns the gradient to propagate backward through the operation.
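
For intuition, here is a hypothetical example of that form for a simple squaring operation y = x*x (along the lines of the _MySquareGrad example mentioned in the code comment above): the incoming gradient is just multiplied by the local derivative 2x.

def my_square_grad(op, grad):
    # op gives access to the forward operation's inputs and outputs,
    # grad is the gradient arriving from the rest of the graph
    x = op.inputs[0]
    return grad * 2 * x  # chain rule: dL/dx = dL/dy * dy/dx, with dy/dx = 2x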

Gradient Function: So for our spiky activation function that is how we would do it:

def spikygrad(op, grad):
    x = op.inputs[0]

    n_gr = tf_d_spiky(x)
    return grad * n_gr  

The activation function has only one input, that is why x = op.inputs[0]. If the operation had many inputs, we would need to return a tuple, one gradient for each input. For example, if the operation was a - b, the gradient with respect to a is +1 and with respect to b is -1, so we would have return +1*grad, -1*grad (a short sketch of this two-input case follows the next code block). Notice that we need to return tensorflow functions of the input; that is why we need tf_d_spiky, np_d_spiky would not have worked because it cannot act on tensorflow tensors. Alternatively we could have written the derivative using tensorflow functions:

def spikygrad2(op, grad):
    x = op.inputs[0]
    r = tf.mod(x,1)
    n_gr = tf.to_float(tf.less_equal(r, 0.5))
    return grad * n_gr  
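
And, to illustrate the multi-input case mentioned above, a hypothetical gradient function for an operation computing a - b would return one gradient per input:

def sub_grad(op, grad):
    # one gradient per input: d(a - b)/da = +1, d(a - b)/db = -1
    return +1 * grad, -1 * grad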

Combining it all together: Now that we have all the pieces, we can combine them all together:

np_spiky_32 = lambda x: np_spiky(x).astype(np.float32)

def tf_spiky(x, name=None):
    
    with tf.name_scope(name, "spiky", [x]) as name:
        y = py_func(np_spiky_32,
                        [x],
                        [tf.float32],
                        name=name,
                        grad=spikygrad)  # <-- here's the call to the gradient
        return y[0]

And now we are done, and we can test it.

Test:

with tf.Session() as sess:

    x = tf.constant([0.2,0.7,1.2,1.7])
    y = tf_spiky(x)
    tf.initialize_all_variables().run()
    
    print(x.eval(), y.eval(), tf.gradients(y, [x])[0].eval())

[ 0.2 0.69999999 1.20000005 1.70000005] [ 0.2 0. 0.20000005 0.] [ 1. 0. 1. 0.]

Success!

Layout answered 7/10, 2016 at 16:8 Comment(8)
@lahwran this is not really an activation function that you would want to use in real life. It's just an example of how to implement a custom activation function if you need to do that. – Layout
totally, which is why I was curious whether you'd gotten it to work :p – Binion
yes it works :) but I didn't try using it in a network on a real learning problem; I needed to make a much more complicated activation function than that for my purpose and that one learned, but for the post here I only put a toy activation function which I didn't try to learn with. – Layout
ah yeah. I was mostly curious because cycles in the activation function seem like they'd be pretty terrible, and I was curious whether you'd managed to get something with cycles to learn. – Binion
awesome! Note for people that currently want to use your method: you should replace op.scope by tf.name_scope because the former is deprecated. op.scope takes its arguments as op.scope(values, name, "default_name"), whereas the tf.name_scope argument order is tf.name_scope(name, default_name, values), so instead of ops.op_scope([x], name, "spiky") one should use tf.name_scope(name, "spiky", [x]) – Yang
@Layout does TensorFlow use GPU acceleration on custom functions? That is, will this activation be applied in parallel to many tensor elements across CUDA cores? – Landan
@RohanSaxena yes. – Layout
@Layout Clearest explanation of creating a custom tensorflow function I have seen so far - thank you! – Marchland

Why not simply use the functions that are already available in tensorflow to build your new function?

For the spiky function in your answer, this could look as follows:

def spiky(x):
    # use float constants so the dtypes match x (assumed float32)
    r = tf.floormod(x, tf.constant(1.0))
    cond = tf.less_equal(r, tf.constant(0.5))
    # zeros_like keeps the same shape as x, which tf.where requires
    return tf.where(cond, r, tf.zeros_like(x))

I would consider this substantially easier (there is not even a need to compute any gradients), and unless you want to do really exotic things, I can hardly imagine that tensorflow does not provide the building blocks for building highly complex activation functions.
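
As a usage sketch (the input shape and layer size below are made-up, and this assumes the TF 1.x tf.layers API), such a function can then be passed directly wherever an activation is expected:

inputs = tf.placeholder(tf.float32, shape=[None, 10])
# pass the primitive-built spiky directly as the layer's activation
hidden = tf.layers.dense(inputs, units=128, activation=spiky)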

Julienne answered 22/7, 2017 at 19:51 Comment(9)
Yes, indeed, spiky can be done with tf primitives, but spiky is just a simple example, so as not to get overly confused by the complexity of the function which I really wanted to implement. The function I actually wanted to implement could unfortunately not be implemented with tf primitives. – Layout
The whole point of the question is: what do you do when you can't formulate the activation function with tf primitives? – Layout
@Layout I already expected this, but it is not clear from your question. Because of the popularity of this question, I thought it might be a good idea to point to this solution as well (for people with little experience with tensorflow trying to create their own activation functions). – Julienne
Very useful answer, except you might want to use the shape of tensor x, like so: def spiky(x): r = tf.floormod(x, tf.constant(1, shape=x.shape)) cond = tf.less_equal(r, tf.constant(0.5, shape=x.shape)) return tf.where(cond, r, tf.constant(0, shape=x.shape)), otherwise you may get this kind of error: ValueError: Shape must be rank xx but is rank xx for 'cond_xx/Switch' (op: 'Switch') – Semiskilled
where to go from here? would be nice to have a full example on how to implement it with only "primitives" as you call them – Heisel
@ShavedMan There is a full example in my answer. I am not sure what could be missing... – Julienne
maybe I am uncommonly noob, but I have never written my own activation function, so I don't know how to put it into my layer – Heisel
It's been too long since I used tensorflow, but it should be pretty much the same as using regular activation functions. However, if you do not know how to use activation functions in any framework, I'm pretty sure you shouldn't be writing custom activation functions for it (the reasonable choices should be included in the framework). – Julienne
@MrTsjolder I have an idea for an activation function. I just want to know when and how to tell it the derivative and such. If with this answer you just need to put the name of the function in activation=<function>, I will try it out. Thanks. – Heisel
