I was wondering if [having different activation functions on each layer] is a practical choice, or whether I should just use one activation function per net?
Short answer: it depends.
Longer answer: I'm trying to think of why you would want to have multiple activation functions. You don't say in your question, so I'll answer at a more theoretical level.
General Advice/Guidance
Neural networks are just approximations of a mathematical function, and the correct design will come from answering the following questions:
- How close does the approximation need to be, and how close can you train your network to approximate the function?
- How well does the network generalize to datasets that it was not trained on? How well does it need to generalize?
Here's an extra one that I think is relevant to your question:
- How fast does the network need to perform? How does your choice of activation function hinder performance?
If you answer these questions, you'll have a better idea about your specific case.
My Opinion
Building a neural network with multiple activation functions is really muddying the waters and making the system more complicated than it needs to be. When I think of building good software, one of the first things I think of is cohesive design. In other words, does the system make sense as a whole or is it doing too much?
Pro tip: Don't build software Rube Goldberg Machines.
If you want multiple activation functions in the same network, that is not cohesive in my opinion. If your problem really calls for this for some reason, then rethink the problem, and perhaps design a system of multiple separate neural networks, each serving its own purpose with its own architecture (including its own choice of activation function).
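To make the trade-off concrete, here is a minimal, hypothetical sketch of the "mixed" design being discussed: a tiny forward pass where each layer carries its own activation function. The weights, sizes, and function names are all made up for illustration; the point is only to show that per-layer activations are mechanically easy, which is exactly why the decision should be driven by the design questions above rather than by what the code allows.

```python
import math

# Hypothetical toy network: each layer is (weights, bias, activation),
# so nothing stops you from mixing activations -- the question is
# whether you *should*. All numbers below are made up for illustration.

def relu(v):
    return [max(0.0, x) for x in v]

def tanh(v):
    return [math.tanh(x) for x in v]

def dense(v, weights, bias):
    # weights: one row per output unit
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def forward(x, layers):
    # Allowing a different activation per layer is the "mixed" design.
    for weights, bias, act in layers:
        x = act(dense(x, weights, bias))
    return x

# A mixed-activation network: tanh hidden layer, relu output layer.
mixed = [
    ([[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1], tanh),
    ([[1.0, -1.0]],             [0.0],      relu),
]
print(forward([1.0, 2.0], mixed))  # → [0.0]
```

Note how the mixing hides inside the `layers` list: a reader of `forward` alone can't tell what the network computes without inspecting the data. Splitting the problem into separate single-activation networks keeps each one's behavior legible on its own.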