How to extract just plain text from .doc & .docx files? [closed]

Does anyone know of anything they can recommend for extracting just the plain text from a .doc or .docx file?

I've found this - are there any other suggestions?

Tompion answered 15/4, 2011 at 3:12 Comment(5)
This is a perfect fit for Software Recommendations. It should be transferred there.Woodbine
If we have Software Recommendations, why not transfer it there? I also searched for software for similar tasks and did not find the best answer there. I can recommend pandoc as the best solution; it even converts tables correctly. So I suggest reopening the question.Risorgimento
You obviously aren't on a Mac, but if you were you could use 'textutil' at the command line to quickly get plain text from various proprietary document types.Denticulate
This question is being discussed on Meta.Proteolysis
@Taryn: care to explain why this Q is off-topic but #8252720 is not?Salahi

If you want the pure plain text (my requirement), then all you need is:

unzip -p some.docx word/document.xml | sed -e 's/<[^>]\{1,\}>//g; s/[^[:print:]]\{1,\}//g'

I found this at Command Line Fu.

It unzips the .docx file, pulls out the actual document XML, and then strips all the XML tags. Obviously, all formatting is lost.
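
If you also want to keep paragraph breaks, the comments below suggest turning each closing paragraph tag into a newline before stripping the rest. Here is a minimal sketch of that, wrapped in a hypothetical helper function; it assumes GNU sed, which expands \n in the replacement (on BSD/macOS sed you would need gsed or a literal newline instead):

docxcat() {
    # emit a newline for each closing paragraph tag, then strip the
    # remaining XML tags and any other non-printable characters
    unzip -p "$1" word/document.xml \
        | sed -e 's/<\/w:p>/\n/g; s/<[^>]\{1,\}>//g; s/[^[:print:]\n]\{1,\}//g'
}

# usage: docxcat some.docx > some.txt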

Melodist answered 2/9, 2014 at 9:46 Comment(9)
I like this command, but often newlines are still useful data to have in the final version. Therefore I used the following command instead: unzip -p document.docx word/document.xml | sed -e 's/<\/w:p>/\n/g; s/<[^>]\{1,\}>//g; s/[^[:print:]\n]\{1,\}//g' Note the additional sed argument, replacing XML representations of newlines with the actual newline character, and I edited the last sed argument to not strip newline characters. This makes the above command far more useful for diff-ing Word documents.Hypaethral
Thanks Rob! @Jeff: I agree but the following command works better for me in practice: unzip -p document.docx word/document.xml | sed -e 's/<\/w:p>/ /g; s/<[^>]\{1,\}>/ /g; s/[^[:print:]]\{1,\}/ /g'Flavone
Very nice. Is it also possible to edit the XML data inside the Word document without corrupting it? And how?Etching
How does this fare with non-ASCII characters? Especially the more esoteric character sets?Mg
@Mg the first bit of the command gets the raw XML, so that will work fine. The second bit gets all the non-XML-tag strings and separates them with a new line. So as long as sed does not barf on esoteric character sets you should be fine. Please post a reply if you find that is not the case.Melodist
Good idea for recovering corrupted files.Duley
this doesn't preserve newlinesTeniers
I agree with Jeff; besides, Rob's suggestion does lose some text (text close to recurring starts of lines seems to be suppressed).Johst
@jeff-mcjunkin nice snippet, thanks a lot! I think you should add s/<w:br\/>/\n/g; too ;)Distorted

LibreOffice

One option is LibreOffice/OpenOffice in headless mode (make sure all other instances of LibreOffice are closed first):

libreoffice --headless --convert-to "txt:Text (encoded):UTF8" mydocument.doc

For more details see e.g. this link: http://ask.libreoffice.org/en/question/2641/convert-to-command-line-parameter/

For a list of libreoffice filters see http://cgit.freedesktop.org/libreoffice/core/tree/filter/source/config/fragments/filters

Since the OpenOffice command-line syntax is a bit complicated, there is a handy wrapper which can make the process easier: unoconv.
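
For batch conversion, a rough sketch (assuming the libreoffice binary, called soffice on some installs, and unoconv are on your PATH; --outdir sends the generated .txt files to a separate directory):

# convert every .doc/.docx in the current directory to plain text
mkdir -p txt
libreoffice --headless --convert-to "txt:Text (encoded):UTF8" --outdir ./txt *.doc *.docx

# the unoconv equivalent
unoconv -f txt *.docx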

Apache POI

Another option is Apache POI — a well-supported Java library which, unlike antiword, can read, create and convert .doc, .docx, .xls, .xlsx, .ppt and .pptx files.

Here is the simplest possible Java code for converting a .doc or .docx document to plain text:

import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.IOException;

import org.apache.poi.POITextExtractor;
import org.apache.poi.extractor.ExtractorFactory;
import org.apache.poi.openxml4j.exceptions.OpenXML4JException;
import org.apache.xmlbeans.XmlException;

public class WordToTextConverter {
    public static void main(String[] args) {
        try {
            convertWordToText(args[0], args[1]);
        } catch (ArrayIndexOutOfBoundsException aiobe) {
            System.out.println("Usage: java WordToTextConverter <word_file> <text_file>");
        }
    }

    public static void convertWordToText(String src, String dest) {
        try {
            FileInputStream fs = new FileInputStream(src);
            // ExtractorFactory inspects the binary stream and picks the right
            // extractor (.doc or .docx), so the same code handles both formats
            final POITextExtractor extractor = ExtractorFactory.createExtractor(fs);
            FileWriter fw = new FileWriter(dest);
            fw.write(extractor.getText());
            fw.flush();
            fs.close();
            fw.close();

        } catch (IOException | OpenXML4JException | XmlException e) {
            e.printStackTrace();
        }
    }
}


Maven dependencies (pom.xml):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>my.wordconv</groupId>
<artifactId>my.wordconv.converter</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
    <dependency>
        <groupId>org.apache.poi</groupId>
        <artifactId>poi</artifactId>
        <version>3.17</version>
    </dependency>
    <dependency>
        <groupId>org.apache.poi</groupId>
        <artifactId>poi-ooxml</artifactId>
        <version>3.17</version>
    </dependency>
    <dependency>
        <groupId>org.apache.poi</groupId>
        <artifactId>poi-scratchpad</artifactId>
        <version>3.17</version>
    </dependency>
</dependencies>
</project>

NOTE: You will need to add the Apache POI libraries to the classpath. On Ubuntu/Debian the libraries can be installed with sudo apt-get install libapache-poi-java — this will install them under /usr/share/java. For other systems you'll need to download the library and unpack the archive to a folder that you should use instead of /usr/share/java. If you use Maven/Gradle (the recommended option), then include the org.apache.poi dependencies as shown in the pom.xml snippet above.

The same code will work for both .doc and .docx as the required converter implementation will be chosen by inspecting the binary stream.

Compile the class above (assuming it's in the default package and the Apache POI jars are under /usr/share/java; the classpath is quoted so that Java, rather than the shell, expands the * wildcard):

javac -cp "/usr/share/java/*:." WordToTextConverter.java

Run the conversion:

java -cp "/usr/share/java/*:." WordToTextConverter doc.docx doc.txt

There is also a clonable Gradle project which pulls in all the necessary dependencies and generates a wrapper shell script (via gradle installDist).

Pacificas answered 3/9, 2012 at 10:52 Comment(7)
If you're going to add Java options into the mix, I'd like to mention 'my' docx4j (which also handles pptx, xlsx). For text extraction, you'd use github.com/plutext/docx4j/blob/master/src/main/java/org/docx4j/…Chloramine
See also question 1686 on Ask LibreOffice about running the command line conversion in parallel with a running LibreOffice instance: ask.libreoffice.org/en/question/1686/…Monreal
When I tried using libreoffice to convert some docx files, I got this weird error Error: Please reverify input parameters..., which disappeared when I switched to --convert-to "txt:Text (encoded):UTF8", so I'd recommend that (even if you don't have non-ascii characters).Myrticemyrtie
This is the ideal approach IMO. But to get it working in OSX, I had to uninstall the GUI-installed version, then run brew install libreoffice. Then, the command that worked was soffice --headless ... instead of libreoffice --headless .... Although this question is closed, it's the very first google result, so it might be worth adding this to the answer to help us hapless searchers.Mamoun
@senderle: no need to uninstall the existing GUI-installed version — in that scenario the binary is just not available in $PATH; you can still call it on macos e.g. with /Applications/LibreOffice.app/Contents/MacOS/soffice --headless --helpPacificas
@Pacificas true, but I like to cede as much control of my path to package managers as possible. With homebrew, it Just Works.Mamoun
@senderle: fair enough; brew cask info libreoffice points to the formula at github.com/Homebrew/homebrew-cask/blob/master/Casks/… where you can see it additionally puts a wrapper script under /usr/local/bin/soffice. It's useful to know what exactly is going on just in case the formula gets removed, or in case you need a newer version than the one provided by brew.Pacificas

Try Apache Tika. It supports most document formats (every MS Office format, OpenOffice/LibreOffice formats, PDF, etc.) using Java-based libraries (among others, Apache POI). It's very simple to use:

java -jar tika-app-1.4.jar --text ./my-document.doc
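
To convert a whole directory, a simple shell loop is enough (a sketch that assumes tika-app-1.4.jar sits in the current directory):

# extract text from every .doc/.docx, writing one .txt per input file
for f in *.doc *.docx; do
    java -jar tika-app-1.4.jar --text "$f" > "${f%.*}.txt"
done
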
Mccarron answered 2/1, 2014 at 14:45 Comment(0)

Try "antiword" or "antiword-xp-rb"

My favorite is antiword:

http://www.winfield.demon.nl/

And here's a similar project which claims support for docx:

https://github.com/rainey/antiword-xp-rb/wiki
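
Basic antiword usage is a one-liner; it writes the plain text to stdout (a sketch for a .doc input, since plain antiword does not handle .docx):

antiword document.doc > document.txt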

Haupt answered 15/4, 2011 at 3:14 Comment(1)
I have used (the former) antiword many times, but it does not work with docx. From its page: "Antiword converts the binary files from Word 2, 6, 7, 97, 2000, 2002 and 2003 to plain text and to PostScript"Henpeck

I find wv to be better than catdoc or antiword. It can deal with .docx and convert to text or html. Here is a function I added to my .bashrc to temporarily view the file in the terminal. Change it as required.

# open a Word document in less (e.g. worl document.doc)
worl() {
    DOC=$(mktemp /tmp/output.XXXXXXXXXX)
    wvText "$1" "$DOC"
    less "$DOC"
    rm "$DOC"
}
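
For a one-off conversion you can also call wvText directly; as in the function above, it takes the source document and the output file as its two arguments:

wvText report.docx report.txt
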
Maite answered 31/10, 2013 at 11:29 Comment(2)
For those on OSX, you can brew install wv && brew install elinks.Cambridge
Works a treat and supports .doc and .docxMathematician

I recently dealt with this issue and found the OpenOffice/LibreOffice command-line tools to be unreliable in production (thousands of docs processed, dozens concurrently).

Ultimately, I built a lightweight wrapper, DocRipper, which is much faster and grabs all the text from .doc, .docx and .pdf files without formatting. DocRipper uses Antiword, grep and pdftotext to extract the text and return it.

Irate answered 23/7, 2014 at 16:22 Comment(0)
