"Pretty" Continuous Integration for Python
This is a slightly vain question, but BuildBot's output isn't particularly nice to look at.

For example, compared to..

..and others, BuildBot looks rather archaic.

I'm currently playing with Hudson, but it is very Java-centric (although with this guide, I found it easier to set up than BuildBot, and it produced more info).

Basically: are there any Continuous Integration systems aimed at Python that produce lots of shiny graphs and the like?


Update: Since this was written, the Jenkins project has replaced Hudson as the community version of the package. The original authors have moved to that project as well. Jenkins is now a standard package on Ubuntu/Debian, RedHat/Fedora/CentOS, and others. The update below is still essentially correct; only the starting point differs when using Jenkins.

Update: After trying a few alternatives, I think I'll stick with Hudson. Integrity was nice and simple, but quite limited. I think BuildBot is better suited to having numerous build slaves, rather than everything running on a single machine as I was using it.

Setting Hudson up for a Python project was pretty simple:

  • Download Hudson from http://hudson-ci.org/
  • Run it with java -jar hudson.war
  • Open the web interface on the default address of http://localhost:8080
  • Go to Manage Hudson, Plugins, click "Update" or similar
  • Install the Git plugin (I had to set the git path in the Hudson global preferences)
  • Create a new project, enter the repository, SCM polling intervals and so on
  • Install nosetests via easy_install if it isn't already installed
  • In a build step, add nosetests --with-xunit --verbose
  • Check "Publish JUnit test result report" and set "Test report XMLs" to **/nosetests.xml
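As a sanity check for the build step above, here is a minimal, hypothetical test module of the kind nosetests will collect (the module and function names are made up, not from any real project); --with-xunit then records the results in nosetests.xml for the JUnit report:

```python
# tests/test_example.py -- illustrative only; names are placeholders.
# nosetests discovers functions whose names start with "test_".

def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0
```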

That's all that's required. You can set up email notifications, and the plugins are worth a look. A few I'm currently using for Python projects:

  • SLOCCount plugin to count lines of code (and graph it!) - you need to install sloccount separately
  • Violations to parse the PyLint output (you can setup warning thresholds, graph the number of violations over each build)
  • Cobertura can parse the coverage.py output. Nosetests can gather coverage while running your tests, using nosetests --with-coverage (this writes the output to **/coverage.xml)
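As an aside, the Cobertura plugin only reads the XML; if you want a build step of your own to act on the numbers, a Cobertura-style coverage.xml is easy to inspect. A small sketch (the 80% threshold is just an assumption of mine, not anything the plugins require):

```python
# Sketch: read the overall line-coverage rate from a Cobertura-style
# coverage.xml, e.g. to fail a custom build step when coverage slips.
import xml.etree.ElementTree as ET

def line_rate(xml_text):
    """Overall line coverage (0.0-1.0) from Cobertura-format XML."""
    root = ET.fromstring(xml_text)            # root element is <coverage ...>
    return float(root.attrib["line-rate"])

# A build wrapper could then do something like:
#   if line_rate(open("coverage.xml").read()) < 0.8:
#       raise SystemExit("coverage below 80%")
```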
Tryptophan answered 22/10, 2008 at 12:49 Comment(3)
Great question, I am looking into similar things just right now. If you go one route, can you share your experience with the rest of us?Homologue
Don't know if it was availabe when you wrote this: Use the Chuck Norris plugin for Hudson to further enhance control over your stuff!Mufinella
Update for 2011/2012: Those considering Hudson should be using Jenkins, the open source continuation of the Hudson project (Hudson is now controlled by Oracle)Bravar
41

You might want to check out Nose and the Xunit output plugin. You can have it run your unit tests, and coverage checks with this command:

nosetests --with-xunit --enable-cover

That'll be helpful if you want to go the Jenkins route, or if you want to use another CI server that has support for JUnit test reporting.

Similarly, you can capture the output of pylint using the Violations plugin for Jenkins.
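For anyone curious what the Violations plugin is actually consuming: pylint's parseable format is one message per line, so tallying by message code is trivial. A rough sketch (the line format shown matches the older pylint -f parseable output; newer pylint versions format differently):

```python
# Sketch: count pylint "parseable" report lines by message code -- roughly
# the raw numbers the Violations plugin graphs for you.
import re

# e.g.  mypackage/mod.py:42: [C0103, some_func] Invalid name "x"
LINE_RE = re.compile(r'^[^:]+:\d+: \[([A-Z]\d+)')

def count_violations(report_text):
    """Tally a pylint parseable-format report by message code."""
    counts = {}
    for line in report_text.splitlines():
        match = LINE_RE.match(line)
        if match:
            code = match.group(1)
            counts[code] = counts.get(code, 0) + 1
    return counts
```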

Automate answered 20/3, 2009 at 20:13 Comment(3)
Nose now includes the xunit plugin by default - nosetests --with-xunitTryptophan
So how does one run the auditing from Pylint then? When I do nosetests --with-xunit --enable-audit I get nosetests: error: no such option: --enable-auditBreadwinner
Modernised answer, the NoseXUnit stuff is now builtin and renamed from the unfortunate-when-downcased --with-nosexunit to --with-xunit.Tryptophan
10

Don't know if it would do: Bitten is made by the guys who write Trac and is integrated with Trac. Apache Gump is the CI tool used by Apache. It is written in Python.

Nickels answered 22/10, 2008 at 13:46 Comment(0)
9

We've had great success with TeamCity as our CI server, using nose as our test runner. The TeamCity plugin for nosetests gives you pass/fail counts and a readable display of failed tests (which can be e-mailed). You can even see details of the test failures while your stack is running.

It, of course, supports things like running on multiple machines, and it's much simpler to set up and maintain than BuildBot.

Draw answered 23/10, 2008 at 1:30 Comment(0)
8

Buildbot's waterfall page can be considerably prettified. Here's a nice example: http://build.chromium.org/buildbot/waterfall/waterfall

Portillo answered 8/1, 2010 at 9:16 Comment(0)
7

I guess this thread is quite old, but here is my take on it with Hudson:

I decided to go with pip and set up a repo (the painful-to-get-working but nice-looking eggbasket), which Hudson auto-uploads to on a successful test run. Here is my rough-and-ready script, for use with a Hudson "execute script" build step like /var/lib/hudson/venv/main/bin/hudson_script.py -w $WORKSPACE -p my.package -v $BUILD_NUMBER; just put **/coverage.xml, pylint.txt and nosetests.xml in the config bits:

#!/var/lib/hudson/venv/main/bin/python
import os
import re
import subprocess
import logging
import optparse

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')

#venvDir = "/var/lib/hudson/venv/main/bin/"

UPLOAD_REPO = "http://ldndev01:3442"

def call_command(command, cwd, ignore_error_code=False):
    try:
        logging.info("Running: %s" % command)
        status = subprocess.call(command, cwd=cwd, shell=True)
        if not ignore_error_code and status != 0:
            raise Exception("Last command failed")

        return status

    except Exception:
        logging.exception("Could not run command %s" % command)
        raise

def main():
    usage = "usage: %prog [options]"
    parser = optparse.OptionParser(usage)
    parser.add_option("-w", "--workspace", dest="workspace",
                      help="workspace folder for the job")
    parser.add_option("-p", "--package", dest="package",
                      help="the package name i.e., back_office.reconciler")
    parser.add_option("-v", "--build_number", dest="build_number",
                      help="the build number, which will get put at the end of the package version")
    options, args = parser.parse_args()

    if not options.workspace or not options.package:
        raise Exception("Need both args, do --help for info")

    venvDir = options.package + "_venv/"

    #find out if venv is there
    if not os.path.exists(venvDir):
        #make it
        call_command("virtualenv %s --no-site-packages" % venvDir,
                     options.workspace)

    #install the venv/make sure its there plus install the local package
    call_command("%sbin/pip install -e ./ --extra-index %s" % (venvDir, UPLOAD_REPO),
                 options.workspace)

    #make sure pylint, nose and coverage are installed
    call_command("%sbin/pip install nose pylint coverage epydoc" % venvDir,
                 options.workspace)

    #make sure we have an __init__.py
    #this shouldn't be needed if the packages are set up correctly
    #modules = options.package.split(".")
    #if len(modules) > 1: 
    #    call_command("touch '%s/__init__.py'" % modules[0], 
    #                 options.workspace)
    #do the nosetests
    test_status = call_command("%sbin/nosetests %s --with-xunit --with-coverage --cover-package %s --cover-erase" % (venvDir,
                                                                                     options.package.replace(".", "/"),
                                                                                     options.package),
                 options.workspace, True)
    #produce coverage report -i for ignore weird missing file errors
    call_command("%sbin/coverage xml -i" % venvDir,
                 options.workspace)
    #move it so that the code coverage plugin can find it
    call_command("mv coverage.xml %s" % (options.package.replace(".", "/")),
                 options.workspace)
    #run pylint
    call_command("%sbin/pylint --rcfile ~/pylint.rc -f parseable %s > pylint.txt" % (venvDir, 
                                                                                     options.package),
                 options.workspace, True)

    #remove old dists so we only have the newest at the end
    call_command("rm -rfv %s" % (options.workspace + "/dist"),
                 options.workspace)

    #if the build passes upload the result to the egg_basket
    if test_status == 0:
        logging.info("Success - uploading egg")
        upload_bit = "upload -r %s/upload" % UPLOAD_REPO
    else:
        logging.info("Failure - not uploading egg")
        upload_bit = ""

    #create egg
    call_command("%sbin/python setup.py egg_info --tag-build=.0.%s --tag-svn-revision --tag-date sdist %s" % (venvDir,
                                                                                                              options.build_number,
                                                                                                              upload_bit),
                 options.workspace)

    call_command("%sbin/epydoc --html --graph all %s" % (venvDir, options.package),
                 options.workspace)

    logging.info("Complete")

if __name__ == "__main__":
    main()

When it comes to deploying stuff you can do something like:

pip -E /location/of/my/venv/ install my_package==X.Y.Z --extra-index http://my_repo

And then people can develop stuff using:

pip -E /location/of/my/venv/ install -e ./ --extra-index http://my_repo

This stuff assumes you have a repo structure per package, with a setup.py and dependencies all set up; then you can just check out the trunk and run this stuff on it.
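For what it's worth, the per-package layout that assumption implies is just a normal setuptools project; a skeletal setup.py (every name and version pin here is a placeholder, not from the script above) looks like:

```python
# setup.py -- skeletal sketch; name, version and dependencies are placeholders.
from setuptools import setup, find_packages

setup(
    name="my.package",
    version="0.1",
    packages=find_packages(),
    install_requires=[
        "somedependency>=1.0",  # hypothetical runtime dependency
    ],
)
```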

I hope this helps someone out.

------update---------

I've added epydoc, which fits in really nicely with Hudson. Just add javadoc to your config with the html folder.

Note that pip doesn't support the -E flag properly these days, so you have to create your venv separately.
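A sketch of that replacement workflow, assuming virtualenv is on the PATH (paths, the package name, and the --extra-index-url spelling are illustrative): create the venv first, then call that venv's own pip instead of passing -E. The command construction is split out as a pure function so it can be checked without touching the filesystem.

```python
import subprocess

def pip_command(venv_dir, requirement, extra_index=None):
    """Build the pip invocation for an existing venv (pure, easy to test)."""
    cmd = ["%s/bin/pip" % venv_dir, "install", requirement]
    if extra_index:
        cmd += ["--extra-index-url", extra_index]
    return cmd

def install_into_venv(venv_dir, requirement, extra_index=None):
    # create the venv separately, then use its own pip (no -E needed)
    subprocess.check_call(["virtualenv", venv_dir])
    subprocess.check_call(pip_command(venv_dir, requirement, extra_index))
```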

Glassine answered 25/2, 2011 at 15:25 Comment(1)
This answer is very useful and has lots of detail in regards to the internals of Python CI, something you won't get for free from Jenkins or whatever. Thanks!Avicenna
6

Atlassian's Bamboo is also definitely worth checking out. The entire Atlassian suite (JIRA, Confluence, FishEye, etc) is pretty sweet.

Medan answered 28/6, 2011 at 18:8 Comment(0)
3

Another one: Shining Panda is a hosted CI tool for Python.

Nickels answered 20/12, 2011 at 8:28 Comment(0)
3

If you're considering a hosted CI solution, and doing open source, you should look into Travis CI as well; it has very nice integration with GitHub. While it started as a Ruby tool, they added Python support a while ago.

Gottschalk answered 28/4, 2012 at 12:43 Comment(1)
From the Travis CI org webpage: "Since June 15th, 2021, the building on travis-ci.org is ceased. Please use travis-ci.com from now on."Bersagliere
2

Signal is another option. You can learn more about it, and also watch a video, here.

Pigeontoed answered 29/3, 2010 at 4:5 Comment(0)
2

I would consider CircleCi - it has great Python support, and very pretty output.

Strunk answered 8/11, 2012 at 16:41 Comment(0)
1

Continuum's Binstar is now able to trigger builds from GitHub and can compile for Linux, OS X and Windows (32/64). The neat thing is that it really allows you to closely couple distribution and continuous integration. That's crossing the t's and dotting the i's of integration. The site, workflow and tools are really polished, and AFAIK conda is the most robust and pythonic way of distributing complex Python modules, where you need to wrap and distribute C/C++/Fortran libraries.

Missy answered 25/11, 2014 at 19:1 Comment(0)
0

We have used bitten quite a bit. It is pretty and integrates well with Trac, but it is a pain in the butt to customize if you have any nonstandard workflow. Also there just aren't as many plugins as there are for the more popular tools. Currently we are evaluating Hudson as a replacement.

Futures answered 6/2, 2010 at 15:3 Comment(0)
0

Check rultor.com. As this article explains, it uses Docker for every build. Thanks to that, you can configure whatever you like inside your Docker image, including Python.

Deventer answered 3/8, 2014 at 17:27 Comment(0)
0

Little disclaimer: I've actually had to build a solution like this for a client that wanted a way to automatically test and deploy any code on a git push, plus manage the issue tickets via git notes. This also led to my work on the AIMS project.

One could easily just set up a bare node system that has a build user and manage the builds through make(1), expect(1), crontab(1)/systemd.unit(5), and incrontab(1). One could even go a step further and use Ansible and Celery for distributed builds with a GridFS/NFS file store.

Although, I would not expect anyone other than a greybeard UNIX guy or a Principal-level engineer/architect to actually go this far. It just makes for a nice idea and a potential learning experience, since a build server is nothing more than a way to arbitrarily execute scripted tasks in an automated fashion.

Aquilegia answered 29/9, 2015 at 19:14 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.