How to extract text from a PDF file?
Asked Answered
B

35

367

I'm trying to extract the text included in this PDF file using Python.

I'm using the PyPDF2 package (version 1.27.2), and have the following script:

import PyPDF2

with open("sample.pdf", "rb") as pdf_file:
    read_pdf = PyPDF2.PdfFileReader(pdf_file)
    number_of_pages = read_pdf.getNumPages()
    page = read_pdf.pages[0]
    page_content = page.extractText()
print(page_content)

When I run the code, I get the following output which is different from that included in the PDF document:

 ! " # $ % # $ % &% $ &' ( ) * % + , - % . / 0 1 ' * 2 3% 4
5
 ' % 1 $ # 2 6 % 3/ % 7 / ) ) / 8 % &) / 2 6 % 8 # 3" % 3" * % 31 3/ 9 # &)
%

How can I extract the text as is in the PDF document?

Boise answered 17/1, 2016 at 11:16 Comment(9)
Copy the text using a good PDF viewer - Adobe's canonical Acrobat Reader, if possible. Do you get the same result? The difference is not that the text is different, but the font is - the character codes map to other values. Not all PDFs contain the correct data to restore this.Obcordate
I tried another document and it worked. Yes, it seems the issue is with the PDF itselfBoise
That PDF contains a character CMap table, so the restrictions and work-arounds discussed in this thread are relevant - #4203914.Arioso
The PDF indeed contains a correct CMAP so it is trivial to convert the ad hoc character mapping to plain text. However, it takes additional processing to retrieve the correct order of text. Mac OS X's Quartz PDF renderer is a nasty piece of work! In its original rendering order I get "m T’h iuss iisn ga tosam fopllloew DalFo dnogc wumithe ntht eI tutorial"... Only after sorting by x coordinates I get a far more likely correct result: "This is a sample PDF document I’m using to follow along with the tutorial".Obcordate
#32667898Goodkin
Pandas users (in particular) interested in table extraction must check bottom answers (Tabula and Camelot).Kukri
PyPDF2 adds random whitespaces between/in words. very hard to process.Collateral
PyPDF2 recently got way better text extraction! Give it a second try :-)Nutter
Still getting random whitespaces between words... Using PyPDF version 2.11.1Bloodshot
H
314

I was looking for a simple solution to use for Python 3.x and Windows. There doesn't seem to be support from textract, which is unfortunate, but if you are looking for a simple solution for Windows/Python 3, check out the tika package; it is really straightforward for reading PDFs.

Tika-Python is a Python binding to the Apache Tika™ REST services allowing Tika to be called natively in the Python community.

from tika import parser # pip install tika

raw = parser.from_file('sample.pdf')
print(raw['content'])

Note that Tika is written in Java so you will need a Java runtime installed.

Horatio answered 7/2, 2018 at 21:43 Comment(12)
I tested pypdf2, tika and tried and failed to install textract and pdftotext. Pypdf2 returned 99 words while tika returned all 858 words from my test invoice. So I ended up going with tika.Oddity
I keep getting a "RuntimeError: Unable to start Tika server" error.Hoch
If you need to run this on all the PDF files in a directory (recursively), take this scriptMarlinemarlinespike
This is very slow as it runs a Java REST web-server in localhost port 9998 under the hoods.Anuria
For anyone having the "Unable to start Tika server" error: I solved it by installing the latest version of Java as suggested here, which I did on Mac OS X with brew following this answerDejection
As I am behind a firewall, tika is of no use to me, because it contacts an outside serverTuque
It downloads a tika-server.jar 76 MB file into C:\Users\User\AppData\Local\Temp. Is there a way to make this permanent if I clean temp later? It also requires a JAVA vm installed, is that right?Gladiator
"RuntimeError: Unable to start Tika server" error <-- Same errorEarp
I got this error when running your code - RuntimeError: Unable to start Tika server. could you help me?Spatiotemporal
RuntimeError: Unable to start Tika server was solved after actually installing Java. I installed Java 8 Update 291 - 8.0.2910.10 and Java 8 Update 291 (64-bit) - 8.0.2910.10.Wayward
@Oddity PyPDF2 improved a lot. Could you please check again + update your comment?Nutter
search.maven.org/remotecontent?filepath=org/apache/tika/…Lucila
N
207

pypdf recently improved a lot. Depending on the data, it is on-par or better than pdfminer.six.

pymupdf / tika / PDFium extract text a bit better than pypdf, but the difference has become rather small (mostly in where a new line is set). Their main advantage is that they are way faster. But they are not pure Python, which can mean that you cannot execute them in some environments, and some might have licenses that are too restrictive for your use.

Have a look at the benchmark. This benchmark mainly considers English texts, but also German ones. It does not include:

  • Anything special regarding tables (just that the text is there, not about the formatting)
  • Arabic texts (RTL languages)
  • Mathematical formulas.

That means if your use-case requires those points, you might perceive the quality differently.

Having said that, the results from November 2022:

(Quality and speed benchmark charts omitted; see the linked benchmark for the figures.)

pypdf

I became the maintainer of pypdf and PyPDF2 in 2022! 😁 The community improved the text extraction a lot in 2022. Give it a try :-)

from pypdf import PdfReader

reader = PdfReader("example.pdf")
text = ""
for page in reader.pages:
    text += page.extract_text() + "\n"

Please note that those packages are not maintained:

  • PyPDF2, PyPDF3, PyPDF4
  • pdfminer (without .six)

pymupdf

import fitz # install using: pip install PyMuPDF

with fitz.open("my.pdf") as doc:
    text = ""
    for page in doc:
        text += page.get_text()

print(text)

Other PDF libraries

  • pikepdf does not support text extraction (source)
Nutter answered 21/8, 2020 at 7:2 Comment(8)
However, there seems to be a problem with the order of the text from the PDF. Intuitively the text would read from top to bottom and left to right, but here it seems to show up in another orderEparchy
Except, it occasionally just can't find the text in a page...Philipphilipa
@Philipphilipa If you have an example PDF, please go ahead and create an issue: github.com/pymupdf/PyMuPDF/issues - the developer behind it is pretty activeNutter
This is the most light-weight answer I've seen so far. No java server necessary!Psychopathology
This is the latest working solution as of 23 Jan 2022.Illustrate
AttributeError: function/symbol 'ARC4_stream_init' not found in library 'C:\QGB\Anaconda3\lib\site-packages\Crypto\Util\..\Cipher_ARC4.cp37-win_amd64.pyd': error 0x7fLucila
That might be a pycryptodome issue: github.com/py-pdf/PyPDF2/issues/1192Nutter
I've struggled with PDFs in the past but this implementation of pypdf is a breeze. Absolutely fantastic work, thanks!Infarct
A
86

Use textract.

It supports many types of files including PDFs

import textract
text = textract.process("path/to/file.extension")
Abert answered 12/11, 2016 at 10:55 Comment(10)
Works for PDFs, epubs, etc - processes PDFs that even PDFMiner fails on.Stines
how to use it in aws lambda , I tried this but , import error occured fro textractRiemann
textract is a wrapper for Poppler:pdftotext (among others).Dialogist
@ArunKumar: To use anything in AWS Lambda that's not built-in, you have to include it and all extra dependencies, in your bundle.Cretin
@DavidBrown if you conda install swig before pip install pocketsphinx then pip install textract that seems to be the incantation that makes it work.Rearm
@DavidBrown: got error 'requests 2.21.0 has requirement chardet<3.1.0,>=3.0.2, but you'll have chardet 2.3.0 which is incompatible.' - tried to all versions of 'chardet'. Couldn't work in windows.Hester
Not recommending the 'textract' library. It is very difficult to run and only works on Mac; it is not working on Windows.Hester
textract is requiring me to downgrade to python 2.7 (from 3.7). No can do.Ilocano
it requires installation of extra packages in the system, but this library reads PDF like a magic. Note: it is better to add text.decode('utf-8') for non-ASCII documentsConvexoconvex
textract seems to be dead (source). Use either pdfminer.six directly or pymupdfNutter
C
64

Look at this code for PyPDF2<=1.26.0:

import PyPDF2
pdf_file = open('sample.pdf', 'rb')
read_pdf = PyPDF2.PdfFileReader(pdf_file)
page = read_pdf.getPage(0)
page_content = page.extractText()
print(page_content.encode('utf-8'))

The output is:

!"#$%#$%&%$&'()*%+,-%./01'*23%4
5'%1$#26%3/%7/))/8%&)/26%8#3"%3"*%313/9#&)
%

Using the same code to read a PDF from 201308FCR.pdf, the output is normal.

Its documentation explains why:

def extractText(self):
    """
    Locate all text drawing commands, in the order they are provided in the
    content stream, and extract the text.  This works well for some PDF
    files, but poorly for others, depending on the generator used.  This will
    be refined in the future.  Do not rely on the order of text coming out of
    this function, as it will change if this function is made more
    sophisticated.
    :return: a unicode string object.
    """
Clynes answered 20/1, 2016 at 4:0 Comment(4)
@VineeshTP: Are you getting anything for page_content? If yes, then see if it helps by using a different encoding other than (utf-8)Clynes
Best library I found for reading the pdf using python is 'tika'Hester
201308FCR.pdf not found.Sigfrid
@Matin Thoma is it possible to preserve the format, when extracting, say python code from a PDF?Eternal
P
48

After trying textract (which seemed to have too many dependencies), pypdf2 (which could not extract text from the PDFs I tested with) and tika (which was too slow), I ended up using pdftotext from xpdf (as already suggested in another answer) and just called the binary from Python directly (you may need to adapt the path to pdftotext):

import os, subprocess
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
args = ["/usr/local/bin/pdftotext",
        '-enc',
        'UTF-8',
        "{}/my-pdf.pdf".format(SCRIPT_DIR),
        '-']
res = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output = res.stdout.decode('utf-8')

There is also the pdftotext package, which does basically the same, but that assumes pdftotext in /usr/local/bin, whereas I am using this in AWS Lambda and wanted to use it from the current directory.

Btw: for using this on Lambda you need to put the binary and the libstdc++.so dependency into your Lambda function. I personally needed to compile xpdf. As instructions for this would blow up this answer, I put them on my personal blog.

Pledgee answered 13/3, 2018 at 20:30 Comment(4)
Oh my god, it works!! Finally, a solution that extracts the text in the correct order! I want to hug you for this answer! (Or if you don't like hugs, here's a virtual coffee/beer/...)Gile
glad it helped! Upvoting gives the same sensation as hugging, so I'm fine!Pledgee
simple ... gr8 out of box thinking!Vestige
Please give PyPDF2 another chance. We've improved it a lot :-)Nutter
M
19

I've tried many Python PDF converters, and I would like to update this review. Tika is one of the best. But PyMuPDF is good news, via the user @ehsaneha.

I wrote some code to compare them at: https://github.com/erfelipe/PDFtextExtraction I hope it helps you.

Tika-Python is a Python binding to the Apache Tika™ REST services allowing Tika to be called natively in the Python community.

from tika import parser

raw = parser.from_file("///Users/Documents/Textos/Texto1.pdf")
raw = str(raw)

safe_text = raw.encode('utf-8', errors='ignore')

safe_text = str(safe_text).replace("\n", "").replace("\\", "")
print('--- safe text ---' )
print( safe_text )
Mary answered 1/3, 2019 at 1:12 Comment(3)
special thanks for .encode('utf-8', errors='ignore')Mcnamee
AttributeError: module 'os' has no attribute 'setsid'Knitted
this worked for me, when opening the file in 'rb' mode with open('../path/to/pdf','rb') as pdf: raw = str(parser.from_file(pdf)) text = raw.encode('utf-8', errors='ignore')Southeastward
F
13

You may want to use the time-proven xpdf and derived tools to extract text instead, as PyPDF2 still seems to have various issues with text extraction.

The long answer is that there are many variations in how text is encoded inside a PDF, and extracting it may require decoding the PDF strings themselves, then mapping characters with a CMap, then analyzing the distances between words and letters, etc.

In case the PDF is damaged (i.e. it displays the correct text but copying it gives garbage) and you really need to extract text, then you may want to consider converting the PDF into images (using ImageMagick) and then using Tesseract to get the text from the images via OCR.
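A rough sketch of that OCR route (assuming ImageMagick's convert and the tesseract command-line tool are installed and on PATH; the file names are placeholders):

import glob
import subprocess

# Rasterize the damaged PDF to 300 DPI PNG images with ImageMagick
subprocess.run(["convert", "-density", "300", "damaged.pdf", "page-%03d.png"], check=True)

# OCR each rendered page with Tesseract; "stdout" makes tesseract print the recognized text
text = ""
for png in sorted(glob.glob("page-*.png")):
    result = subprocess.run(["tesseract", png, "stdout"], capture_output=True, text=True, check=True)
    text += result.stdout + "\n"

print(text)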

Flied answered 18/1, 2016 at 8:42 Comment(2)
-1 because the OP is asking for reading pdfs in Python, and although there is an xpdf wrapper for python it is poorly maintained.Ilocano
You might want to give PyPDF2 another shot (also mind the capitalization)Nutter
A
11

I found a solution here PDFLayoutTextStripper

It's good because it can keep the layout of the original PDF.

It's written in Java but I have added a Gateway to support Python.

Sample code:

from py4j.java_gateway import JavaGateway

gw = JavaGateway()
result = gw.entry_point.strip('samples/bus.pdf')

# result is a dict of {
#   'success': 'true' or 'false',
#   'payload': pdf file content if 'success' is 'true'
#   'error': error message if 'success' is 'false'
# }

print(result['payload'])

Sample output from PDFLayoutTextStripper: (screenshot omitted)

You can see more details here Stripper with Python

Allodium answered 7/5, 2019 at 1:54 Comment(1)
The best feature of this library is definitely its ability to (mostly) preserve the layout. The worst is that you need to stand up a gateway service in Java.Parcheesi
C
10

PyPDF2 in some cases ignores the white space and makes the resulting text a mess, but I use PyMuPDF and I'm really satisfied with it. You can use this link for more info.

Coaxial answered 4/8, 2018 at 16:38 Comment(5)
pymupdf is the best solution I observed, does not require additional C++ libraries like pdftotext or java like tikaTambour
pymupdf is really the best solution: no additional server or libraries, and it works with files where PyPDF2, PyPDF3 and PyPDF4 retrieve an empty string of text. Many thanks!Sacred
to install pymupdf, run pip install pymupdf==1.16.16. Using this specific version because today the newest version (17) is not working. I opted for pymupdf because it extracts text wrapping fields in new line char \n. So I'm extracting the text from pdf to a string with pymupdf and then I'm using my_extracted_text.splitlines() to get the text splitted in lines, into a list.Emelyemelyne
PyMuPDF was really surprising. Thanks.Mary
Page doesn't existPeruke
D
10

pdftotext is the best and simplest one! pdftotext also preserves the structure.

I tried PyPDF2, PDFMiner and a few others but none of them gave a satisfactory result.

Diatribe answered 3/4, 2019 at 12:16 Comment(4)
I get the following message when installing pdf2text: "Collecting PDFMiner (from pdf2text)", so I don't understand this answer now.Rajab
pdf2text and pdftotext are different. You can use the link from the answer.Diatribe
OK. That's a little bit confusing.Rajab
You might want to give PyPDF2 another shot. We've improved it a lot.Nutter
P
10

In 2020 the solutions above were not working for the particular PDF I was working with. Below is what did the trick. I am on Windows 10 and Python 3.8.

Test pdf file: https://drive.google.com/file/d/1aUfQAlvq5hA9kz2c9CyJADiY3KpY3-Vn/view?usp=sharing

#pip install pdfminer.six
import io

from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage


def convert_pdf_to_txt(path):
    '''Convert pdf content from a file path to text

    :path the file path
    '''
    rsrcmgr = PDFResourceManager()
    codec = 'utf-8'
    laparams = LAParams()

    with io.StringIO() as retstr:
        with TextConverter(rsrcmgr, retstr, codec=codec,
                           laparams=laparams) as device:
            with open(path, 'rb') as fp:
                interpreter = PDFPageInterpreter(rsrcmgr, device)
                password = ""
                maxpages = 0
                caching = True
                pagenos = set()

                for page in PDFPage.get_pages(fp,
                                              pagenos,
                                              maxpages=maxpages,
                                              password=password,
                                              caching=caching,
                                              check_extractable=True):
                    interpreter.process_page(page)

                return retstr.getvalue()


if __name__ == "__main__":
    print(convert_pdf_to_txt('C:\\Path\\To\\Test_PDF.pdf')) 
Partiality answered 31/7, 2020 at 11:18 Comment(7)
Excellent answer. There's an anaconda install as well. I installed it and had extracted text in < 5 minutes. [note: tika also worked, but pdfminer.six was much faster)Erlond
You are a lifesaver!Defection
In 2023, 3 lines of pypdf do the same: extract text with pypdfNutter
In 2024, many libraries can extract the text, but depending upon the original structure of the PDF -- particularly the use of tables -- the result will vary dramatically. 3 lines of code does not imply that the output from a given PDF will be coherent or useful.Parcheesi
I tested Jortega's code above, and it really struggled with data in tables, especially when there was a blank cell.Parcheesi
@Parcheesi If you can point me in the direction of where to get an example PDF like the one you are having trouble with I could see what I can put together.Partiality
Sorry, I can't. They are proprietary. Just test with PDFs that have a lot of tables. BTW, the irony in all this was that your method above produced the exact same output in my test cases as using this 1 line of code: text = extract_text(filepath). :-) Perhaps Martin was right.Parcheesi
B
9

The below code is a solution to the question in Python 3. Before running the code, make sure you have installed the pypdf library in your environment. If not installed, open the command prompt and run the following command (instead of pip you might need pip3):

pip install pypdf --upgrade

Solution Code using pypdf > 3.0.0:

import pypdf

reader = pypdf.PdfReader('sample.pdf')
for page in reader.pages:
    print(page.extract_text())
Brittaney answered 23/5, 2018 at 13:38 Comment(1)
How would u save all the content in one text file and use it for further analysisPutto
O
8

pdfplumber is one of the better libraries to read and extract data from pdf. It also provides ways to read table data and after struggling with a lot of such libraries, pdfplumber worked best for me.

Mind you, it works best for machine-written pdf and not scanned pdf.

import pdfplumber
with pdfplumber.open(r'D:\examplepdf.pdf') as pdf:
    first_page = pdf.pages[0]
    print(first_page.extract_text())
Oleviaolfaction answered 19/10, 2021 at 14:4 Comment(1)
This is nice, but I have a question on the format of the output. I want to save the result of the print into a pandas dataframe. Is that possible?Vinery
T
7

I've got a better workaround than OCR that maintains the page alignment while extracting the text from a PDF. It should be of help:

from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage
from io import StringIO

def convert_pdf_to_txt(path):
    rsrcmgr = PDFResourceManager()
    retstr = StringIO()
    codec = 'utf-8'
    laparams = LAParams()
    device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
    fp = open(path, 'rb')
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    password = ""
    maxpages = 0
    caching = True
    pagenos=set()


    for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages, password=password,caching=caching, check_extractable=True):
        interpreter.process_page(page)


    text = retstr.getvalue()

    fp.close()
    device.close()
    retstr.close()
    return text

text= convert_pdf_to_txt('test.pdf')
print(text)
Toxin answered 16/3, 2020 at 12:38 Comment(1)
Nb. The latest version no longer uses the codec arg . I fixed this by removing it i.e. device = TextConverter(rsrcmgr, retstr, laparams=laparams)Insight
N
6

A multi-page PDF can be extracted as text in a single stretch, instead of passing an individual page number as an argument, using the code below:

import PyPDF2

pdf_file = open('samples.pdf', 'rb')
read_pdf = PyPDF2.PdfFileReader(pdf_file)
number_of_pages = read_pdf.getNumPages()
for i in range(number_of_pages):
    page = read_pdf.getPage(i)
    page_content = page.extractText()
    print(page_content.encode('utf-8'))
Nuptial answered 22/6, 2018 at 10:13 Comment(1)
The only problem here is that the content of the new page overwrites the last onePutto
C
5

You can use PDFtoText https://github.com/jalan/pdftotext

pdftotext keeps the text formatting and indentation; it doesn't matter if you have tables.
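A minimal sketch of that package's usage, assuming it installs cleanly on your platform (it wraps Poppler) and using a placeholder file name:

import pdftotext  # pip install pdftotext (needs Poppler's development headers)

with open("sample.pdf", "rb") as f:
    pdf = pdftotext.PDF(f)

# pdf behaves like a list with one string per page
print("\n\n".join(pdf))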

Classicism answered 6/12, 2017 at 23:20 Comment(0)
W
5

As of 2021 I would like to recommend pdfreader, due to the fact that PyPDF2/3 seems to be troublesome now, and tika is actually written in Java and needs a JRE in the background. pdfreader is pythonic, currently well maintained, and has extensive documentation here.

Installation as usual: pip install pdfreader

Short example of usage:

from pdfreader import PDFDocument, SimplePDFViewer

# get raw document
fd = open(file_name, "rb")
doc = PDFDocument(fd)

# there is an iterator for pages
page_one = next(doc.pages())
all_pages = [p for p in doc.pages()]

# and even a viewer
fd = open(file_name, "rb")
viewer = SimplePDFViewer(fd)
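The snippet above stops short of actually pulling the text out. Assuming the API shown in pdfreader's documentation (render() and canvas.strings; treat the exact attribute names as an assumption if your version differs), getting the plain text of the first page might look like this:

# render the current page and collect its text strings
viewer.render()
page_text = "".join(viewer.canvas.strings)
print(page_text)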
Whiz answered 12/8, 2021 at 7:23 Comment(2)
On a note, installing pdfreader on Windows requires Microsoft C++ Build Tools installed on your system, whilst the answer below recommending pymupdf installed directly using pip without any extra requirement.Philipphilipa
I couldn't use it in a Jupyter notebook; it keeps crashing the kernelBerretta
E
4

If you want to extract text from a table, I've found tabula to be easily implemented, accurate, and fast:

to get a pandas dataframe:

import tabula

df = tabula.read_pdf('your.pdf')

df

By default, it ignores page content outside of the table. So far, I've only tested on a single-page, single-table file, but there are kwargs to accommodate multiple pages and/or multiple tables.
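For example, a small sketch using those keyword arguments (pages and multiple_tables are standard tabula-py options; the file name is a placeholder):

import tabula

# read every table on every page; returns a list of DataFrames
dfs = tabula.read_pdf("your.pdf", pages="all", multiple_tables=True)
for i, df in enumerate(dfs):
    print(f"table {i}: {df.shape}")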

install via:

pip install tabula-py
# or
conda install -c conda-forge tabula-py 

In terms of straight-up text extraction see: https://mcmap.net/q/87296/-how-to-extract-text-from-a-pdf-file

Erlond answered 21/9, 2020 at 2:12 Comment(2)
tabula is impressive. Of all the solutions I tested from this page, this is the only one that was able to maintain the order of rows and fields. There are still a few adjustments needed for complex tables, but since the output seems reproductible from one table to the other and is stored in a pandas.DataFrame it is easy to correct.Kukri
Also check Camelot.Kukri
C
3

Here is the simplest code for extracting text

code:

# importing required modules
import PyPDF2

# creating a pdf file object
pdfFileObj = open('filename.pdf', 'rb')

# creating a pdf reader object
pdfReader = PyPDF2.PdfFileReader(pdfFileObj)

# printing number of pages in pdf file
print(pdfReader.numPages)

# creating a page object
pageObj = pdfReader.getPage(5)

# extracting text from page
print(pageObj.extractText())

# closing the pdf file object
pdfFileObj.close()
Capuche answered 14/6, 2018 at 7:12 Comment(2)
Recommending 'tika'Hester
PyPDF2 / PyPDF3 / PyPDF4 are all dead. Use pymupdfNutter
C
3

You can simply do this using pytesseract and pdf2image. Refer to the following code. You can get more details from this article.

import os
from PIL import Image
from pdf2image import convert_from_path
import pytesseract

filePath = '021-DO-YOU-WONDER-ABOUT-RAIN-SNOW-SLEET-AND-HAIL-Free-Childrens-Book-By-Monkey-Pen.pdf'
doc = convert_from_path(filePath)

path, fileName = os.path.split(filePath)
fileBaseName, fileExtension = os.path.splitext(fileName)

for page_number, page_data in enumerate(doc):
    txt = pytesseract.image_to_string(page_data).encode("utf-8")
    print("Page # {} - {}".format(str(page_number), txt))
Cedilla answered 5/8, 2021 at 17:34 Comment(0)
B
2

Use pdfminer.six. Here is the doc: https://pdfminersix.readthedocs.io/en/latest/index.html

To convert pdf to text :

def pdf_to_text():
    from pdfminer.high_level import extract_text

    text = extract_text('test.pdf')
    print(text)
Belfry answered 3/1, 2021 at 19:31 Comment(1)
Order is not proper.Cheddite
U
0

I am adding code to accomplish this; it is working fine for me:

# This works in python 3
# required python packages
# tabula-py==1.0.0
# PyPDF2==1.26.0
# Pillow==4.0.0
# pdfminer.six==20170720

import os
import shutil
import struct  # needed by tiff_header_for_CCITT below
import warnings
from io import StringIO

import requests
import tabula
from PIL import Image
from PyPDF2 import PdfFileWriter, PdfFileReader
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.pdfpage import PDFPage

warnings.filterwarnings("ignore")


def download_file(url):
    local_filename = url.split('/')[-1]
    local_filename = local_filename.replace("%20", "_")
    r = requests.get(url, stream=True)
    print(r)
    with open(local_filename, 'wb') as f:
        shutil.copyfileobj(r.raw, f)

    return local_filename


class PDFExtractor():
    def __init__(self, url):
        self.url = url

    # Downloading File in local
    def break_pdf(self, filename, start_page=-1, end_page=-1):
        pdf_reader = PdfFileReader(open(filename, "rb"))
        # Reading each pdf one by one
        total_pages = pdf_reader.numPages
        if start_page == -1:
            start_page = 0
        elif start_page < 1 or start_page > total_pages:
            return "Start Page Selection Is Wrong"
        else:
            start_page = start_page - 1

        if end_page == -1:
            end_page = total_pages
        elif end_page < 1 or end_page > total_pages - 1:
            return "End Page Selection Is Wrong"
        else:
            end_page = end_page

        for i in range(start_page, end_page):
            output = PdfFileWriter()
            output.addPage(pdf_reader.getPage(i))
            with open(str(i + 1) + "_" + filename, "wb") as outputStream:
                output.write(outputStream)

    def extract_text_algo_1(self, file):
        pdf_reader = PdfFileReader(open(file, 'rb'))
        # creating a page object
        pageObj = pdf_reader.getPage(0)

        # extracting extract_text from page
        text = pageObj.extractText()
        text = text.replace("\n", "").replace("\t", "")
        return text

    def extract_text_algo_2(self, file):
        pdfResourceManager = PDFResourceManager()
        retstr = StringIO()
        la_params = LAParams()
        device = TextConverter(pdfResourceManager, retstr, codec='utf-8', laparams=la_params)
        fp = open(file, 'rb')
        interpreter = PDFPageInterpreter(pdfResourceManager, device)
        password = ""
        max_pages = 0
        caching = True
        page_num = set()

        for page in PDFPage.get_pages(fp, page_num, maxpages=max_pages, password=password, caching=caching,
                                      check_extractable=True):
            interpreter.process_page(page)

        text = retstr.getvalue()
        text = text.replace("\t", "").replace("\n", "")

        fp.close()
        device.close()
        retstr.close()
        return text

    def extract_text(self, file):
        text1 = self.extract_text_algo_1(file)
        text2 = self.extract_text_algo_2(file)

        if len(text2) > len(str(text1)):
            return text2
        else:
            return text1

    def extarct_table(self, file):

        # Read pdf into DataFrame
        try:
            df = tabula.read_pdf(file, output_format="csv")
        except:
            print("Error Reading Table")
            return

        print("\nPrinting Table Content: \n", df)
        print("\nDone Printing Table Content\n")

    def tiff_header_for_CCITT(self, width, height, img_size, CCITT_group=4):
        tiff_header_struct = '<' + '2s' + 'h' + 'l' + 'h' + 'hhll' * 8 + 'h'
        return struct.pack(tiff_header_struct,
                           b'II',  # Byte order indication: Little indian
                           42,  # Version number (always 42)
                           8,  # Offset to first IFD
                           8,  # Number of tags in IFD
                           256, 4, 1, width,  # ImageWidth, LONG, 1, width
                           257, 4, 1, height,  # ImageLength, LONG, 1, lenght
                           258, 3, 1, 1,  # BitsPerSample, SHORT, 1, 1
                           259, 3, 1, CCITT_group,  # Compression, SHORT, 1, 4 = CCITT Group 4 fax encoding
                           262, 3, 1, 0,  # Threshholding, SHORT, 1, 0 = WhiteIsZero
                           273, 4, 1, struct.calcsize(tiff_header_struct),  # StripOffsets, LONG, 1, len of header
                           278, 4, 1, height,  # RowsPerStrip, LONG, 1, lenght
                           279, 4, 1, img_size,  # StripByteCounts, LONG, 1, size of extract_image
                           0  # last IFD
                           )

    def extract_image(self, filename):
        number = 1
        pdf_reader = PdfFileReader(open(filename, 'rb'))

        for i in range(0, pdf_reader.numPages):

            page = pdf_reader.getPage(i)

            try:
                xObject = page['/Resources']['/XObject'].getObject()
            except:
                print("No XObject Found")
                return

            for obj in xObject:

                try:

                    if xObject[obj]['/Subtype'] == '/Image':
                        size = (xObject[obj]['/Width'], xObject[obj]['/Height'])
                        data = xObject[obj]._data
                        if xObject[obj]['/ColorSpace'] == '/DeviceRGB':
                            mode = "RGB"
                        else:
                            mode = "P"

                        image_name = filename.split(".")[0] + str(number)

                        print(xObject[obj]['/Filter'])

                        if xObject[obj]['/Filter'] == '/FlateDecode':
                            data = xObject[obj].getData()
                            img = Image.frombytes(mode, size, data)
                            img.save(image_name + "_Flate.png")
                            # save_to_s3(imagename + "_Flate.png")
                            print("Image_Saved")

                            number += 1
                        elif xObject[obj]['/Filter'] == '/DCTDecode':
                            img = open(image_name + "_DCT.jpg", "wb")
                            img.write(data)
                            # save_to_s3(imagename + "_DCT.jpg")
                            img.close()
                            number += 1
                        elif xObject[obj]['/Filter'] == '/JPXDecode':
                            img = open(image_name + "_JPX.jp2", "wb")
                            img.write(data)
                            # save_to_s3(imagename + "_JPX.jp2")
                            img.close()
                            number += 1
                        elif xObject[obj]['/Filter'] == '/CCITTFaxDecode':
                            if xObject[obj]['/DecodeParms']['/K'] == -1:
                                CCITT_group = 4
                            else:
                                CCITT_group = 3
                            width = xObject[obj]['/Width']
                            height = xObject[obj]['/Height']
                            data = xObject[obj]._data  # sorry, getData() does not work for CCITTFaxDecode
                            img_size = len(data)
                            tiff_header = self.tiff_header_for_CCITT(width, height, img_size, CCITT_group)
                            img_name = image_name + '_CCITT.tiff'
                            with open(img_name, 'wb') as img_file:
                                img_file.write(tiff_header + data)

                            # save_to_s3(img_name)
                            number += 1
                except:
                    continue

        return number

    def read_pages(self, start_page=-1, end_page=-1):

        # Downloading file locally
        downloaded_file = download_file(self.url)
        print(downloaded_file)

        # breaking PDF into number of pages in diff pdf files
        self.break_pdf(downloaded_file, start_page, end_page)

        # creating a pdf reader object
        pdf_reader = PdfFileReader(open(downloaded_file, 'rb'))

        # Reading each pdf one by one
        total_pages = pdf_reader.numPages

        if start_page == -1:
            start_page = 0
        elif start_page < 1 or start_page > total_pages:
            return "Start Page Selection Is Wrong"
        else:
            start_page = start_page - 1

        if end_page == -1:
            end_page = total_pages
        elif end_page < 1 or end_page > total_pages - 1:
            return "End Page Selection Is Wrong"
        else:
            end_page = end_page

        for i in range(start_page, end_page):
            # creating a page based filename
            file = str(i + 1) + "_" + downloaded_file

            print("\nStarting to Read Page: ", i + 1, "\n -----------===-------------")

            file_text = self.extract_text(file)
            print(file_text)
            self.extract_image(file)

            self.extarct_table(file)
            os.remove(file)
            print("Stopped Reading Page: ", i + 1, "\n -----------===-------------")

        os.remove(downloaded_file)


# I have tested on these 3 pdf files
# url = "http://s3.amazonaws.com/NLP_Project/Original_Documents/Healthcare-January-2017.pdf"
url = "http://s3.amazonaws.com/NLP_Project/Original_Documents/Sample_Test.pdf"
# url = "http://s3.amazonaws.com/NLP_Project/Original_Documents/Sazerac_FS_2017_06_30%20Annual.pdf"
# creating the instance of class
pdf_extractor = PDFExtractor(url)

# Getting desired data out
pdf_extractor.read_pages(15, 23)
Univalence answered 22/2, 2018 at 15:52 Comment(0)
O
0

You can download tika-app-xxx.jar (latest) from here.

Then put this .jar file in the same folder as your Python script file.

Then insert the following code in the script:

import os
import os.path

tika_dir=os.path.join(os.path.dirname(__file__),'<tika-app-xxx>.jar')

def extract_pdf(source_pdf:str,target_txt:str):
    os.system('java -jar '+tika_dir+' -t {} > {}'.format(source_pdf,target_txt))
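A hypothetical call (the file names are placeholders) would then be:

extract_pdf("report.pdf", "report.txt")  # writes the extracted text to report.txt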

The advantages of this method:

fewer dependencies. A single .jar file is easier to manage than a Python package.

multi-format support. The parameter source_pdf can be the path of any kind of document. (.doc, .html, .odt, etc.)

up-to-date. tika-app.jar is always released earlier than the relevant version of the tika Python package.

stable. It is far more stable and well-maintained (powered by Apache) than PyPDF.

disadvantage:

A headless JRE is necessary.

Oxcart answered 9/8, 2018 at 5:27 Comment(3)
totally not pythonic solution. If you recommend this, you should build a python package and have people import that. Don't recommend using command line executions of java code in python.Poulard
@MichaelTamillow, if writing code which is going to be uploaded into PyPI, I admit that it is not a good idea. However, if it is just a Python script with a shebang for temporary usage, it is not bad, is it?Oxcart
Well, the question isn't titled with "python" - so I think stating "here's how to do it in Java" is more acceptable than this. Technically, you can do whatever you want in Python. That's why it is both awesome and terrible. Temporary usage is a bad habit.Poulard
P
0

If you try it in Anaconda on Windows, PyPDF2 might not handle some PDFs with non-standard structure or Unicode characters. I recommend using the following code if you need to open and read a lot of PDF files - the text of all PDF files in the folder with relative path .//pdfs// will be stored in the list pdf_text_list.

from tika import parser
import glob

def read_pdf(filename):
    text = parser.from_file(filename)
    return(text)


all_files = glob.glob(".\\pdfs\\*.pdf")
pdf_text_list=[]
for i,file in enumerate(all_files):
    text=read_pdf(file)
    pdf_text_list.append(text['content'])

print(pdf_text_list)
Publicness answered 9/7, 2019 at 13:56 Comment(0)
A
0

To extract text from a PDF, use the code below:

import PyPDF2
pdfFileObj = open('mypdf.pdf', 'rb')

pdfReader = PyPDF2.PdfFileReader(pdfFileObj)

print(pdfReader.numPages)

pageObj = pdfReader.getPage(0)

a = pageObj.extractText()

print(a)
Antananarivo answered 13/1, 2020 at 18:31 Comment(1)
PyPDF2 / PyPDF3 / PyPDF4 are all dead. Use pymupdfNutter
Z
0

A more robust way, supposing there are multiple PDFs or just one!

import os
from PyPDF2 import PdfFileReader

mydir = # specify the path to the directory where your PDF or PDFs are

with open("myfile.txt", "w") as file1:
    for arch in os.listdir(mydir):
        archpath = os.path.join(mydir, arch)
        with open(archpath, 'rb') as pdfFileObj:
            pdfReader = PdfFileReader(pdfFileObj)
            pageObj = pdfReader.getPage(0)  # first page of each PDF
            ley = pageObj.extractText()
            file1.writelines(ley)
Zebedee answered 1/8, 2020 at 17:53 Comment(1)
All PyPDF derivates are dead as of 2021. Consider this answer outdated.Whiz
J
0

This question so far has 35 answers, and not one seems to mention whether the text extracted is the true text from the questioner's PDF page. Nor has anyone explained WHY.

For comparison, here is the RAW PDF code when decompressed (inflated, under the surface, by the PDF viewer). Thus, in some cases, this is what is extractable: the native "literal" plain text.

 BT 50 0 0 50 0 0 Tm /TT2 1 Tf [ (!) -0.3 (") -0.4 (#) -0.5 ($) -0.1 (%)
-0.1 (#) -0.5 ($) -0.1 (%) -0.1 (&%) -0.1 ($) -0.1 (&') 0.2 (\() -0.4 (\))
-0.5 (*) 0.4 (%) -0.1 (+) 0.4 (,) -0.2 (-) -0.5 (%) -0.1 (.) -0.4 (/) -0.3
(0) 0.1 (1) -0.4 (') 0.2 (*) 0.4 (2) -0.4 (3%) -0.1 (4) ] TJ ET

and

BT 50 0 0 50 0 0
Tm /TT2 1 Tf (5) Tj ET

and

BT 50 0 0 50 0 0 Tm /TT2 1 Tf [ (')
0.2 (%) -0.1 (1) -0.4 ($) -0.1 (#) -0.5 (2) -0.4 (6) 0.3 (%) -0.1 (3/) -0.3
(%) -0.1 (7) -0.2 (/) -0.3 (\)) -0.5 (\)) -0.5 (/) -0.3 (8) 0.2 (%) -0.1 (&\))
-0.5 (/) -0.3 (2) -0.4 (6) 0.3 (%) -0.1 (8) 0.2 (#) -0.5 (3") -0.4 (%) -0.1
(3") -0.4 (*) 0.4 (%) -0.1 (31) -0.4 (3/) -0.3 (9) 0.4 (#) -0.5 (&\)) ] TJ
ET

If you study PDF, you know that the body text is the bracketed text above; thus we can expect to extract this raw text coding.

! " # $ % # $ % &% $ &' ( ) * % + , - % . / 0 1 ' * 2 3% 4
5
'% 1 $ # 2 6 % 3/ % 7 / ) ) / 8 % &) / 2 6 % 8 # 3" % 3" * % 31 3/ 9 # &)

Compare that with the OP observation

! " # $ % # $ % &% $ &' ( ) * % + , - % . / 0 1 ' * 2 3% 4
5
' % 1 $ # 2 6 % 3/ % 7 / ) ) / 8 % &) / 2 6 % 8 # 3" % 3" * % 31 3/ 9 # &)
%

So my mistake: in my extraction I missed that final (%).

So what was the real problem with the output being "different from that included in the PDF document"?

Answer

When raw page text is placed in the page, it is binary-encoded as numeric data, which to our human eyes looks like the separate ANSI letters above, but the characters are encoded in the PDF, for simplicity, as single bytes. There is a secondary PDF-to-text "ToUnicode" step where the extractor has to convert those short codes into the conventional Unicode characters (here rendered in Calibri) seen on screen.
Here is that table:

24 beginbfrange
<21><21><0054>
<22><23><0068>
<24><24><0073>
<25><25><0009>
<26><26><0061>
<27><27><006d>
<28><28><0070>
<29><29><006c>
<2a><2a><0065>
<2b><2b><0050>
<2c><2c><0044>
<2d><2d><0046>
<2e><2e><0064>
<2f><2f><006f>
<30><30><0063>
<31><31><0075>
<32><32><006e>
<33><33><0074>
<34><34><0049>
<35><35><2019>
<36><36><0067>
<37><37><0066>
<38><38><0077>
<39><39><0072>
endbfrange

Most notably, the longer Unicode mapping is, in this case, mostly one-to-one with the more conventional ANSI codes <0000> to <00FF>. However, there is one odd one out, <35><35><2019>: ANSI 5 maps to Unicode 2019 (the right single quotation mark, ’), which is why that single 5 on its own has been isolated as a separate entry.

Also, what about that odd % on its own at the end that I missed; why might that be? Look up % and it is hex <25>, which in PDF source normally marks a comment, but here it converts to U+0009, which, very oddly, is a Character Tabulation, usually discarded when building a PDF and thus usually having no physical width.

So, using the ToUnicode values in a PDF-to-text conversion, we can expect the extraction to be re-coded into:

This is a sample PDF document I
’
m using to follow along with the tutorial
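To illustrate, here is a small Python sketch that applies the ToUnicode table above to the raw codes shown in an earlier answer (tabs are rendered as spaces purely for readability):

# ToUnicode mapping transcribed from the bfrange table above
cmap = {
    0x21: "T", 0x22: "h", 0x23: "i", 0x24: "s", 0x25: "\t",
    0x26: "a", 0x27: "m", 0x28: "p", 0x29: "l", 0x2A: "e",
    0x2B: "P", 0x2C: "D", 0x2D: "F", 0x2E: "d", 0x2F: "o",
    0x30: "c", 0x31: "u", 0x32: "n", 0x33: "t", 0x34: "I",
    0x35: "\u2019", 0x36: "g", 0x37: "f", 0x38: "w", 0x39: "r",
}

# the raw single-byte codes as extracted by PyPDF2
raw = '!"#$%#$%&%$&\'()*%+,-%./01\'*23%45\'%1$#26%3/%7/))/8%&)/26%8#3"%3"*%313/9#&)'

decoded = "".join(cmap.get(ord(ch), ch) for ch in raw)
print(decoded.replace("\t", " "))
# This is a sample PDF document I’m using to follow along with the tutorial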

But there seem to be other issues with that source!! (Remember, all those % characters have no width?)

Solution

We need to fix the file, and one very simple fix is to replace the tabs with spaces by changing the 2 bytes <0009> to <0020>, then resave to rebuild without error. Now extraction should be improved, but do convert with an ANSI-to-UTF-8 extraction such as:

pdftotext -layout -enc UTF-8 sample-fixed.pdf -


Jueta answered 9/2 at 22:12 Comment(0)
S
-1

PyPDF2 does work, but results may vary. I am seeing quite inconsistent results from its text extraction.

reader=PyPDF2.pdf.PdfFileReader(self._path)
eachPageText=[]
for i in range(0,reader.getNumPages()):
    pageText=reader.getPage(i).extractText()
    print(pageText)
    eachPageText.append(pageText)
Southwest answered 14/12, 2018 at 21:18 Comment(1)
PyPDF2 / PyPDF3 / PyPDF4 are all dead. Use pymupdfNutter
K
-1

Camelot seems a fairly powerful solution to extract tables from PDFs in Python.

At first sight it seems to achieve almost as accurate extraction as the tabula-py package suggested by CreekGeek, which is already waaaaay above any other posted solution as of today in terms of reliability, but it is supposedly much more configurable. Furthermore it has its own accuracy indicator (results.parsing_report), and great debugging features.

Both Camelot and Tabula provide the results as Pandas’ DataFrames, so it is easy to adjust tables afterwards.

pip install camelot-py

(Not to be confused with the camelot package.)

import camelot

df_list = []
results = camelot.read_pdf("file.pdf", ...)
for table in results:
    print(table.parsing_report)
    df_list.append(table.df)

It can also output results as CSV, JSON, HTML or Excel.

Camelot comes at the expense of a number of dependencies.

NB : Since my input is pretty complex with many different tables I ended up using both Camelot and Tabula, depending on the table, to achieve the best results.

Kukri answered 1/2, 2021 at 16:56 Comment(0)
T
-1

Disclaimer: I am the author of borb the library used in this answer.

Try out borb, a pure python PDF library

import typing  
from borb.pdf.document import Document  
from borb.pdf.pdf import PDF  
from borb.toolkit.text.simple_text_extraction import SimpleTextExtraction  


def main():

    # variable to hold Document instance
    doc: typing.Optional[Document] = None  

    # this implementation of EventListener handles text-rendering instructions
    l: SimpleTextExtraction = SimpleTextExtraction()  

    # open the document, passing along the array of listeners
    with open("input.pdf", "rb") as in_file_handle:  
        doc = PDF.loads(in_file_handle, [l])  
  
    # were we able to read the document?
    assert doc is not None  

    # print the text on page 0
    print(l.get_text(0))  

if __name__ == "__main__":
    main()

Tusk answered 4/8, 2021 at 7:19 Comment(3)
How do you get the total number of pages of the document with borb? (or how do you get the complete text directly?)Nutter
Failed to disclose that he is the author of said library.Parcheesi
Fixed. I must have forgotten in this post. Because I do typically mention it.Tusk
C
-1

This creates a new sheet for each PDF page, with the number of sheets set dynamically based on the number of pages in the document.

import PyPDF2 as p2
import xlsxwriter

pdfFileName = "sample.pdf"
pdfFile = open(pdfFileName, 'rb')
pdfread = p2.PdfFileReader(pdfFile)
number_of_pages = pdfread.getNumPages()
workbook = xlsxwriter.Workbook('pdftoexcel.xlsx')

for page_number in range(number_of_pages):
    print(f'Sheet{page_number}')
    pageinfo = pdfread.getPage(page_number)
    rawInfo = pageinfo.extractText().split('\n')

    row = 0
    column = 0
    worksheet = workbook.add_worksheet(f'Sheet{page_number}')

    for line in rawInfo:
        worksheet.write(row, column, line)
        row += 1
workbook.close()
Canadianism answered 9/10, 2021 at 10:40 Comment(0)
S
-1

Objectives: Extract text from PDF

Required Tools:

  1. Poppler for Windows: provides the pdftotext binary on Windows. For Anaconda: conda install -c conda-forge

  2. pdftotext utility to convert PDF to text.

Steps: Install Poppler. For Windows, add "xxx/bin/" to the env path, then pip install pdftotext

import pdftotext
 
# Load your PDF
with open("Target.pdf", "rb") as f:
    pdf = pdftotext.PDF(f)
 
# Save all text to a txt file.
with open('output.txt', 'w') as f:
    f.write("\n\n".join(pdf))
Santiago answered 27/12, 2021 at 15:52 Comment(0)
D
-1

Go through the official documentation; it is given there:

from PyPDF2 import PdfReader

reader = PdfReader("example.pdf")
page = reader.pages[0]
print(page.extract_text())
Donnydonnybrook answered 19/2, 2023 at 11:14 Comment(0)
D
-1

I will introduce another library that hasn't been mentioned yet, providing you with additional options. Extracting text from PDFs can also be achieved using IronPdf.

The IronPDF library can be added via pip. Use the command below to install IronPDF using pip:

pip install ironpdf

IronPDF Python relies on .NET 6.0, as its underlying technology. Therefore, it is necessary to have the .NET 6.0 SDK installed on your machine in order to use IronPDF Python.

from ironpdf import *
 
# Load existing PDF document
pdf = PdfDocument.FromFile("content.pdf")
 
# Extract text from PDF document
all_text = pdf.ExtractAllText()
 
# Extract text from specific page in the document
page_2_text = pdf.ExtractTextFromPage(1)

In the provided code snippet, the PDF document is imported, and a method is employed to extract text from the imported PDF document. This approach enables efficient text extraction from PDF files.

Library | Code example link

Dudek answered 9/8, 2023 at 4:54 Comment(0)
R
-9

How to extract text from a PDF file?

The first thing to understand is the PDF format. It has a public specification written in English: see ISO 32000-2:2017, and read the more than 700 pages of the PDF 1.7 specification. You certainly at least need to read the Wikipedia page about PDF.

Once you understand the details of the PDF format, extracting text is more or less easy (but what about text appearing in figures or images, e.g. its figure 1?). Don't expect to write a perfect text-extraction program alone in a few weeks....

On Linux, you might also use pdf2text which you could popen from your Python code.

In general, extracting text from a PDF file is an ill-defined problem. For a human reader, some text could be made (as in a figure) from different dots, or a photo, etc...

The Google search engine is capable of extracting text from PDF, but is rumored to need more than half a billion lines of source code. Do you have the necessary resources (in man power, in budget) to develop a competitor?

A possibility might be to print the PDF to some virtual printer (e.g. using GhostScript or Firefox), then to use OCR techniques to extract text.

I would recommend instead to work on the data representation which has generated that PDF file, for example on the original LaTeX code (or Lout code) or on OOXML code.

In all cases, you need to budget at least several person years of software development.

Ratfink answered 21/8, 2020 at 7:8 Comment(1)
This is not an answer. It says read this 700-page document and doesn't give an approach for actually addressing the question.Coxa
