How to export wikis from FogBugz 6 to (almost) any other wiki (final destination: Confluence)?

We have a FogBugz 6 installation, with a good deal of wiki content in place. We're transitioning to use Atlassian products (JIRA and Confluence), so we'd like to get that wiki content into Confluence. How would you approach this?

Unfortunately, FogBugz doesn't appear to provide any kind of wiki export functionality, and Confluence doesn't provide any FogBugz wiki import.

FogBugz does have an API, but it's a little light on the details w.r.t. accessing wiki content. We don't really care about past revisions of pages (just content, links, and images/attachments), so it's not clear that the API gets us any further than scraping the FB wikis with wget or something and working with the HTML and images/attachments from there.

Confluence has a pretty full-featured content import utility that supports a number of source wikis:

  • TWiki
  • PmWiki
  • DokuWiki
  • MediaWiki
  • MoinMoin
  • JotSpot
  • TikiWiki
  • JSPWiki
  • SharePoint
  • SWiki
  • VQWiki
  • XWiki
  • Trac

No FogBugz option there, but if we could export the FogBugz wiki content into one of the above wikis, then we could likely use the Confluence multi-wiki importer from there.

Alternatively, we could use wget to scrape the FogBugz wiki content, and then find a way to get the resulting static HTML + images + attachments either into Confluence directly or into one of the wikis above as a stepping stone to Confluence.

Thoughts?

Jainism answered 7/5, 2009 at 12:34 Comment(2)
I think it's kind of funny that this gets asked here. – Persiflage
I don't think you need to feel bad for Joel. FogBugz is a great tool, and we've enjoyed using it (and still do) -- but requirements change. shrug I'm sure Joel's world domination will continue apace, even without us. ;-) – Jainism

A colleague ended up figuring this one out, and the process turned out to be generally applicable to other web content we wanted to pull into Confluence as well. In broad strokes, the process involved:

  1. Using wget to suck all of the content out of FogBugz (configured so that images and attachments were downloaded properly, and links to them and to other pages were properly relativized).
  2. Using a simple XSLT transform to strip away the "template" content (e.g. logos, control/navigation links, etc.) that surrounded the body of each page (a rough sketch of steps 1 and 2 follows this list).
  3. (optionally) Using a Perl module to convert the resulting HTML fragments into Confluence's markup format.
  4. Using the Confluence command line interface to push up all of the page, image, and attachment data.
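
For steps 1 and 2, the commands looked roughly like the following. Treat this as a sketch only: the FogBugz hostname, the cookies.txt file, the mirrored directory name, and the strip-template.xsl stylesheet are all placeholders/assumptions you'd need to adapt to your own installation.

    # Step 1: mirror the FogBugz wiki, pulling in images/attachments
    # (--page-requisites) and rewriting links so they work as relative
    # references on disk (--convert-links). The URL and cookie file are
    # placeholders for your own FogBugz site and session.
    wget --mirror \
         --page-requisites \
         --convert-links \
         --adjust-extension \
         --no-parent \
         --load-cookies cookies.txt \
         "https://fogbugz.example.com/default.asp?W1"

    # Step 2: strip the surrounding "template" chrome (logo, navigation,
    # etc.) from each mirrored page. strip-template.xsl is a hypothetical
    # stylesheet that copies only the element wrapping the article body
    # and discards everything else.
    mkdir -p stripped
    for page in fogbugz.example.com/*.html; do
        xsltproc --html strip-template.xsl "$page" > "stripped/$(basename "$page")"
    done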

Note that I said "optionally" in #3 above. That is because the Confluence CLI has two relevant modes: it can create new pages directly, in which case it expects Confluence markup already, or it can create new pages from HTML, which it converts to Confluence markup itself. In some cases the Confluence CLI converted the HTML just fine; for other data sources we needed the Perl module. A rough example of the CLI push is below.
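
The push with the Confluence CLI looked roughly like this. The server URL, credentials, space key "FBWIKI", and the converted/ directory are placeholders, and the action/flag names are from memory of the Bob Swift Confluence CLI -- double-check them against your CLI version's own help output before relying on them.

    # Illustrative only: adjust server, credentials, space key, and paths.
    CLI="./confluence.sh --server https://confluence.example.com --user admin --password secret"

    # Create one Confluence page per converted markup file, using the
    # file name (minus extension) as the page title.
    for page in converted/*.txt; do
        title="$(basename "$page" .txt)"
        $CLI --action addPage --space "FBWIKI" --title "$title" --file "$page"
    done

    # Attach images and other files to the page they belong to.
    $CLI --action addAttachment --space "FBWIKI" --title "Some Page" --file images/diagram.png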

Jainism answered 6/6, 2009 at 16:27 Comment(1)
Instead of wget-ing HTML, why not get the raw text from the FogBugz database? select sHeadline, sBody from wikipage ... – Columelliform
