Extracting text from HTML file using Python
I'd like to extract the text from an HTML file using Python. I want essentially the same output I would get if I copied the text from a browser and pasted it into notepad.

I'd like something more robust than using regular expressions that may fail on poorly formed HTML. I've seen many people recommend Beautiful Soup, but I've had a few problems using it. For one, it picked up unwanted text, such as JavaScript source. Also, it did not interpret HTML entities. For example, I would expect &#39; in HTML source to be converted to an apostrophe in text, just as if I'd pasted the browser content into notepad.

Update html2text looks promising. It handles HTML entities correctly and ignores JavaScript. However, it does not exactly produce plain text; it produces markdown that would then have to be turned into plain text. It comes with no examples or documentation, but the code looks clean.
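For reference, a minimal sketch of pushing html2text toward plain text by turning off its markdown features (ignore_links, ignore_emphasis, and body_width are options in the pip-installable package; the exact set may vary by version):

import html2text

h = html2text.HTML2Text()
h.ignore_links = True      # drop [text](url) link markup
h.ignore_emphasis = True   # drop *...* and _..._ emphasis markers
h.body_width = 0           # disable line re-wrapping
print(h.handle("<p>Hello, &#39;world&#39;!</p>"))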


Longwise answered 30/11, 2008 at 2:28 Comment(0)
258

The best piece of code I found for extracting text without picking up JavaScript or other unwanted things:

from urllib.request import urlopen
from bs4 import BeautifulSoup

url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
html = urlopen(url).read()
soup = BeautifulSoup(html, features="html.parser")

# kill all script and style elements
for script in soup(["script", "style"]):
    script.extract()    # rip it out

# get text
text = soup.get_text()

# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)

print(text)

You just have to install BeautifulSoup first:

pip install beautifulsoup4
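As noted in the comments below, recent versions of Beautiful Soup can do the whitespace cleanup in one call; a one-line sketch of the same idea:

text = soup.get_text(separator='\n', strip=True)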
Victual answered 7/7, 2014 at 19:18 Comment(13)
What if we want to select a particular line, say, line #3?Genagenappe
After going through a lot of stackoverflow answers, I feel like this is the best option for me. One problem I encountered is that lines were added together in some cases. I was able to overcome it by adding a separator in get_text function: text = soup.get_text(separator=' ')Homocyclic
Instead of soup.get_text() I used soup.body.get_text(), so that I don't get any text from the <head> element, such as the title.Kismet
I needed soup.getText()Malinowski
How to extract the &nbsp;,&lt; symbols in the contentHernando
For Python 3, from urllib.request import urlopenSpecies
This works great! Is there an easy way to extract all the links from the HTML as well, and keep them fairly in line with the corresponding text?Namaqualand
Perfect except for it doesn't break lines at <br>Doorsill
Actually you can achieve the same clean result without these manual loops just using two additional standard parameters: soup.get_text(separator='\n', strip=True)Spoor
this seems to be painfully slow, is there any way to do this faster?Joijoice
For faster processing I ended up using selectolax lib. It's pretty limited and produced output with additional spaces which I had to remove manually. But it seems to be working much much faster.Joijoice
I get the following error when using your code @PeYoTIL: Traceback (most recent call last): File "c:\Users\easy\Desktop\GreenMail\Main.py", line 15, in <module> soup = BeautifulSoup(html, features="html.parser") File "C:\Users\easy\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\bs4\__init__.py", line 311, in __init__ markup = markup.read() io.UnsupportedOperation: not readableSherellsherer
How can I add BeautifulSoup in Python 3.10Lactam
177

html2text is a Python program that does a pretty good job at this.

Gyrate answered 30/11, 2008 at 3:23 Comment(8)
but it's GPL 3.0, which means it may be incompatibleValentinevalentino
Amazing! Its author was Aaron Swartz, RIP.Echolocation
Did anyone find any alternatives to html2text because of GPL 3.0?Seibert
GPL not as bad as people want it to be. Aaron knew best.Promise
I tried both html2text and nltk but they didn't work for me. I ended up going with Beautiful Soup 4, which works beautifully (no pun intended).Fingerboard
I'm looking for a module for this. Is that what html2text is?Harmonious
This does not seem to work any more, any updates or suggestions?Narcoanalysis
I know this is not (AT ALL) the place, but I followed the link to Aaron's blog and GitHub profile and projects, and found myself very disturbed by the fact that there is no mention of his death; it's of course frozen in 2012, as if time stopped or he took a very long vacation. Very disturbing.Cosmo
107

NOTE: NLTK no longer supports the clean_html function

Original answer below, and an alternative in the comments section.
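The NotImplementedError that NLTK now raises points to the replacement; a minimal sketch of the modern equivalent with BeautifulSoup (see also the comments below):

from bs4 import BeautifulSoup

html = "<p>Some <b>example</b> markup</p>"
print(BeautifulSoup(html, "html.parser").get_text())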


Use NLTK

I wasted 4-5 hours fixing issues with html2text. Luckily, I came across NLTK.
It works magically.

import nltk
from urllib import urlopen  # Python 2; on Python 3 use: from urllib.request import urlopen

url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
html = urlopen(url).read()
raw = nltk.clean_html(html)  # removed in later NLTK versions; see the note above
print(raw)
Encaenia answered 20/11, 2011 at 12:34 Comment(8)
It just removes HTML markup and does not process any tags (such as <p> and <br/>) or entities.Interatomic
sometimes that is enough :)Clathrate
I want to up vote this a thousand times. I was stuck in regex hell, but lo, now I see the wisdom of NLTK.Noel
Apparently, clean_html is not supported anymore: github.com/nltk/nltk/commit/…Hallux
importing a heavy library like nltk for such a simple task would be too muchForehand
@Hallux From the source: raise NotImplementedError ("To remove HTML markup, use BeautifulSoup's get_text() function")Tavey
@ChrisArena Yes good call, I switched to BeautifulSoup because of this.Fingerboard
clean_html() and clean_url() is a cute function in NLTK that was dropped since BeautifulSoup does a better job and parsing markup language, see github.com/nltk/nltk/commit/… Here's BeautifulSoup's documentation: crummy.com/software/BeautifulSoup/bs4/docGuffaw
57

Found myself facing just the same problem today. I wrote a very simple HTML parser to strip incoming content of all markup, returning the remaining text with only a minimum of formatting.

from HTMLParser import HTMLParser  # Python 2; on Python 3: from html.parser import HTMLParser
from re import sub
from sys import stderr
from traceback import print_exc

class _DeHTMLParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.__text = []

    def handle_data(self, data):
        text = data.strip()
        if len(text) > 0:
            text = sub('[ \t\r\n]+', ' ', text)
            self.__text.append(text + ' ')

    def handle_starttag(self, tag, attrs):
        if tag == 'p':
            self.__text.append('\n\n')
        elif tag == 'br':
            self.__text.append('\n')

    def handle_startendtag(self, tag, attrs):
        if tag == 'br':
            self.__text.append('\n\n')

    def text(self):
        return ''.join(self.__text).strip()


def dehtml(text):
    try:
        parser = _DeHTMLParser()
        parser.feed(text)
        parser.close()
        return parser.text()
    except:
        print_exc(file=stderr)
        return text


def main():
    text = r'''
        <html>
            <body>
                <b>Project:</b> DeHTML<br>
                <b>Description</b>:<br>
                This small script is intended to allow conversion from HTML markup to 
                plain text.
            </body>
        </html>
    '''
    print(dehtml(text))


if __name__ == '__main__':
    main()
Gassing answered 21/10, 2010 at 13:14 Comment(3)
This seems to be the most straightforward way of doing this in Python (2.7) using only the default modules. Which is really silly, as this is such a commonly needed thing and there's no good reason why there isn't a parser for this in the default HTMLParser module.Phyllous
I don't think will convert html characters into unicode, right? For example, &amp; won't be converted into &, right?Eellike
For Python 3 use from html.parser import HTMLParserThermomagnetic
23

I know there are a lot of answers already, but the most elegant and Pythonic solution I have found is described, in part, here.

from bs4 import BeautifulSoup

text = ' '.join(BeautifulSoup(some_html_string, "html.parser").findAll(text=True))

Update

Based on Fraser's comment, here is a more elegant solution:

from bs4 import BeautifulSoup

clean_text = ' '.join(BeautifulSoup(some_html_string, "html.parser").stripped_strings)
Winding answered 6/10, 2016 at 15:8 Comment(3)
To avoid a warning, specify a parser for BeautifulSoup to use: text = ''.join(BeautifulSoup(some_html_string, "lxml").findAll(text=True))Winding
You can use the stripped_strings generator to avoid excessive white-space - i.e. clean_text = ''.join(BeautifulSoup(some_html_string, "html.parser").stripped_stringsDonalddonaldson
I would recomment ' '.join(BeautifulSoup(some_html_string, "html.parser").stripped_strings) with at least one space, otherwise a string such as Please click <a href="link">text</a> to continue is rendered as Please clicktextto continueCaesaria
14

Here is a version of xperroni's answer which is a bit more complete. It skips script and style sections and translates charrefs (e.g., &#39;) and HTML entities (e.g., &amp;).

It also includes a trivial plain-text-to-html inverse converter.

"""
HTML <-> text conversions.
"""
from HTMLParser import HTMLParser, HTMLParseError  # Python 2 (see the comments for a Python 3 port)
from htmlentitydefs import name2codepoint
import re

class _HTMLToText(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self._buf = []
        self.hide_output = False

    def handle_starttag(self, tag, attrs):
        if tag in ('p', 'br') and not self.hide_output:
            self._buf.append('\n')
        elif tag in ('script', 'style'):
            self.hide_output = True

    def handle_startendtag(self, tag, attrs):
        if tag == 'br':
            self._buf.append('\n')

    def handle_endtag(self, tag):
        if tag == 'p':
            self._buf.append('\n')
        elif tag in ('script', 'style'):
            self.hide_output = False

    def handle_data(self, text):
        if text and not self.hide_output:
            self._buf.append(re.sub(r'\s+', ' ', text))

    def handle_entityref(self, name):
        if name in name2codepoint and not self.hide_output:
            c = unichr(name2codepoint[name])
            self._buf.append(c)

    def handle_charref(self, name):
        if not self.hide_output:
            n = int(name[1:], 16) if name.startswith('x') else int(name)
            self._buf.append(unichr(n))

    def get_text(self):
        return re.sub(r' +', ' ', ''.join(self._buf))

def html_to_text(html):
    """
    Given a piece of HTML, return the plain text it contains.
    This handles entities and char refs, but not javascript and stylesheets.
    """
    parser = _HTMLToText()
    try:
        parser.feed(html)
        parser.close()
    except HTMLParseError:
        pass
    return parser.get_text()

def text_to_html(text):
    """
    Convert the given text to html, wrapping what looks like URLs with <a> tags,
    converting newlines to <br> tags and converting confusing chars into html
    entities.
    """
    def f(mo):
        t = mo.group()
        if len(t) == 1:
            return {'&':'&amp;', "'":'&#39;', '"':'&quot;', '<':'&lt;', '>':'&gt;'}.get(t)
        return '<a href="%s">%s</a>' % (t, t)
    return re.sub(r'https?://[^] ()"\';]+|[&\'"<>]', f, text)
Noella answered 7/5, 2013 at 16:4 Comment(3)
python 3 version: gist.github.com/Crazometer/af441bc7dc7353d41390a59f20f07b51Nilsson
In get_text, ''.join should be ' '.join. There should be an empty space, otherwise some of the texts will join together.Southsoutheast
Also, this will not catch ALL texts, except you include other text container tags like H1, H2 ...., span, etc. I had to tweak it for a better coverage.Southsoutheast
11

I know there are plenty of answers here already, but I think newspaper3k also deserves a mention. I recently needed to complete a similar task of extracting text from articles on the web, and this library has done an excellent job of achieving this so far in my tests. It ignores the text found in menu items and sidebars as well as any JavaScript that appears on the page, as the OP requests.

from newspaper import Article

article = Article(url)
article.download()
article.parse()
article.text

If you already have the HTML files downloaded you can do something like this:

article = Article('')
article.set_html(html)
article.parse()
article.text

It even has a few NLP features for summarizing the topics of articles:

article.nlp()
article.summary
Assize answered 18/2, 2018 at 13:36 Comment(0)
8

You can also use the html2text method in the stripogram library.

from stripogram import html2text
text = html2text(your_html_string)

To install stripogram run sudo easy_install stripogram

Porush answered 23/9, 2009 at 3:21 Comment(1)
This module, according to its pypi page, is deprecated: "Unless you have some historical reason for using this package, I'd advise against it!"Beyrouth
6

PyParsing does a great job. The PyParsing wiki was killed, so here is another location where there are examples of the use of PyParsing (example link). One reason for investing a little time in pyparsing is that its author has also written a very brief, very well-organized O'Reilly Short Cut manual that is also inexpensive.

Having said that, I use BeautifulSoup a lot and it is not that hard to deal with the entities issues, you can convert them before you run BeautifulSoup.
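For example, a minimal sketch of converting entities up front with the standard library before parsing (html.unescape is the Python 3 spelling; Python 2 used HTMLParser().unescape()):

import html
from bs4 import BeautifulSoup

raw = "Fish &amp; chips &#39;shop&#39;"
soup = BeautifulSoup(html.unescape(raw), "html.parser")
print(soup.get_text())  # Fish & chips 'shop'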

Good luck.

Underline answered 30/11, 2008 at 15:46 Comment(1)
The link is dead or soured.Revenue
6

There is the Pattern library for data mining.

http://www.clips.ua.ac.be/pages/pattern-web

You can even decide what tags to keep:

from pattern.web import URL, plaintext

s = URL('http://www.clips.ua.ac.be').download()
s = plaintext(s, keep={'h1':[], 'h2':[], 'strong':[], 'a':['href']})
print(s)
Allard answered 29/11, 2012 at 19:28 Comment(0)
6

If you need more speed and less accuracy, then you could use raw lxml.

import lxml.html as lh
from lxml.html.clean import clean_html

def lxml_to_text(html):
    doc = lh.fromstring(html)
    doc = clean_html(doc)
    return doc.text_content()
Edp answered 30/8, 2016 at 11:21 Comment(0)
5

This isn't exactly a Python solution, but it will convert text that JavaScript would generate into text, which I think is important (e.g. google.com). The browser Links (not Lynx) has a JavaScript engine and will convert source to text with the -dump option.

So you could do something like:

import subprocess
import tempfile

# os.tmpnam() returned only a name, not a writable file; use tempfile instead
with tempfile.NamedTemporaryFile('w', suffix='.html', delete=False) as f:
    f.write(html_source)
    fname = f.name
proc = subprocess.Popen(['links', '-dump', fname],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.DEVNULL)
text = proc.stdout.read()
Bimah answered 18/5, 2012 at 10:2 Comment(0)
5

If you want to automatically extract text passages from a webpage, there are some Python packages available, such as Trafilatura. As part of its benchmarking, several Python packages have been compared:

https://github.com/adbar/trafilatura#evaluation-and-alternatives
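A minimal usage sketch (fetch_url and extract are Trafilatura's documented entry points; check the current docs for options):

import trafilatura

downloaded = trafilatura.fetch_url("https://example.com/")
text = trafilatura.extract(downloaded)
print(text)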

Huddle answered 18/9, 2022 at 20:12 Comment(0)
4

Instead of the HTMLParser module, check out htmllib. It has a similar interface, but does more of the work for you. (It is pretty ancient, so it's not much help in terms of getting rid of JavaScript and CSS. You could make a derived class and add methods with names like start_script and end_style (see the Python docs for details), but it's hard to do this reliably for malformed HTML.) Anyway, here's something simple that prints the plain text to the console:

from htmllib import HTMLParser, HTMLParseError  # Python 2 only; htmllib was removed in Python 3
from formatter import AbstractFormatter, DumbWriter

p = HTMLParser(AbstractFormatter(DumbWriter()))
try:
    p.feed('hello<br>there')
    p.close()  # calling close is not usually needed, but let's play it safe
except HTMLParseError:
    print ':('  # the html is badly malformed (or you found a bug)
Selry answered 20/2, 2012 at 6:39 Comment(1)
NB: HTMLError and HTMLParserError should both read HTMLParseError. This works, but does a bad job of maintaining line breaks.Avid
4

I recommend a Python package called goose-extractor. Goose will try to extract the following information:

Main text of an article
Main image of the article
Any YouTube/Vimeo movies embedded in the article
Meta description
Meta tags

More: https://pypi.python.org/pypi/goose-extractor/
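A minimal usage sketch based on the package's README (treat the exact attribute names as something to verify against the version you install):

from goose import Goose

g = Goose()
article = g.extract(url='http://example.com/some-article')
print(article.title)
print(article.cleaned_text)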

Quillan answered 25/11, 2015 at 13:5 Comment(0)
4

Has anyone tried bleach.clean(html, tags=[], strip=True) with bleach? It's working for me.
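As a self-contained sketch (tags and strip are documented parameters of bleach.clean):

import bleach

text = bleach.clean("<p>Hello, <b>world</b>!</p>", tags=[], strip=True)
print(text)  # Hello, world!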

Tradesman answered 16/1, 2017 at 14:10 Comment(1)
Seems to work for me too, but they don't recommend using it for this purpose: "This function is a security-focused function whose sole purpose is to remove malicious content from a string such that it can be displayed as content in a web page." -> bleach.readthedocs.io/en/latest/clean.html#bleach.cleanRozele
4

Install html2text using

pip install html2text

then,

>>> import html2text
>>>
>>> h = html2text.HTML2Text()
>>> # Ignore converting links from HTML
>>> h.ignore_links = True
>>> print(h.handle("<p>Hello, <a href='http://earth.google.com/'>world</a>!"))
Hello, world!
Gatian answered 5/4, 2017 at 7:16 Comment(0)
4

What worked best for me is inscriptis.

https://github.com/weblyzard/inscriptis

import urllib.request
from inscriptis import get_text

url = "http://www.informationscience.ch"
html = urllib.request.urlopen(url).read().decode('utf-8')

text = get_text(html)
print(text)

The results are really good.

Axiom answered 6/4, 2018 at 3:14 Comment(0)
3

Beautiful Soup does convert HTML entities. It's probably your best bet considering HTML is often buggy and filled with Unicode and HTML encoding issues. This is the code I use to convert HTML to raw text:

import copy
import re

import BeautifulSoup  # BeautifulSoup 3 API


def getsoup(data, to_unicode=False):
    data = data.replace("&nbsp;", " ")
    # Fixes for bad markup I've seen in the wild.  Remove if not applicable.
    massage_bad_comments = [
        (re.compile(r'<!-([^-])'), lambda match: '<!--' + match.group(1)),
        (re.compile(r'<!WWWAnswer T[=\w\d\s]*>'), lambda match: '<!--' + match.group(0) + '-->'),
    ]
    myNewMassage = copy.copy(BeautifulSoup.BeautifulSoup.MARKUP_MASSAGE)
    myNewMassage.extend(massage_bad_comments)
    return BeautifulSoup.BeautifulSoup(data, markupMassage=myNewMassage,
        convertEntities=BeautifulSoup.BeautifulSoup.ALL_ENTITIES
                    if to_unicode else None)

remove_html = lambda c: getsoup(c, to_unicode=True).getText(separator=u' ') if c else ""
Eellike answered 30/11, 2012 at 8:23 Comment(0)
3

Another non-Python solution: LibreOffice:

soffice --headless --invisible --convert-to txt input1.html

The reason I prefer this one over other alternatives is that every HTML paragraph gets converted into a single text line (no line breaks), which is what I was looking for. Other methods require post-processing. Lynx does produce nice output, but not exactly what I was looking for. Besides, LibreOffice can be used to convert from all sorts of formats...
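If you need to drive it from Python, a minimal sketch wrapping the same command with subprocess (the converted file lands in the working directory as input1.txt):

import subprocess

subprocess.run(['soffice', '--headless', '--invisible',
                '--convert-to', 'txt', 'input1.html'], check=True)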

Neckar answered 11/12, 2015 at 4:11 Comment(0)
3

I had a similar question and actually used one of the answers with BeautifulSoup. The problem was that it was really slow. I ended up using a library called selectolax. It's pretty limited, but it works for this task. The only issue was that I had to manually remove unnecessary whitespace. But it seems to work much faster than the BeautifulSoup solution.

from selectolax.parser import HTMLParser

def get_text_selectolax(html):
    tree = HTMLParser(html)

    if tree.body is None:
        return None

    for tag in tree.css('script'):
        tag.decompose()
    for tag in tree.css('style'):
        tag.decompose()

    text = tree.body.text(separator='')
    text = " ".join(text.split()) # this will remove all the whitespaces
    return text
Joijoice answered 30/6, 2020 at 16:14 Comment(0)
2

Another option is to run the HTML through a text-based web browser and dump it. For example (using Lynx):

lynx -dump html_to_convert.html > converted_html.txt

This can be done within a python script as follows:

import subprocess

with open('converted_html.txt', 'w') as outputFile:
    subprocess.call(['lynx', '-dump', 'html_to_convert.html'], stdout=outputFile)

It won't give you exactly just the text from the HTML file, but depending on your use case it may be preferable to the output of html2text.

Higher answered 8/8, 2014 at 2:29 Comment(0)
2

@PeYoTIL's answer using BeautifulSoup and eliminating style and script content didn't work for me. I tried it using decompose instead of extract but it still didn't work. So I created my own which also formats the text using the <p> tags and replaces <a> tags with the href link. Also copes with links inside text. Available at this gist with a test doc embedded.

from bs4 import BeautifulSoup, NavigableString

def html_to_text(html):
    "Creates a formatted text email message as a string from a rendered html template (page)"
    soup = BeautifulSoup(html, 'html.parser')
    # Ignore anything in head
    body, text = soup.body, []
    for element in body.descendants:
        # We use type and not isinstance since comments, cdata, etc are subclasses that we don't want
        if type(element) == NavigableString:
            # We use the assumption that other tags can't be inside a script or style
            if element.parent.name in ('script', 'style'):
                continue

            # remove any multiple and leading/trailing whitespace
            string = ' '.join(element.string.split())
            if string:
                if element.parent.name == 'a':
                    a_tag = element.parent
                    # replace link text with the link
                    string = a_tag['href']
                    # concatenate with any non-empty immediately previous string
                    if (    type(a_tag.previous_sibling) == NavigableString and
                            a_tag.previous_sibling.string.strip() ):
                        text[-1] = text[-1] + ' ' + string
                        continue
                elif element.previous_sibling and element.previous_sibling.name == 'a':
                    text[-1] = text[-1] + ' ' + string
                    continue
                elif element.parent.name == 'p':
                    # Add extra paragraph formatting newline
                    string = '\n' + string
                text += [string]
    doc = '\n'.join(text)
    return doc
Underlet answered 6/12, 2016 at 15:6 Comment(3)
Thanks, this answer is underrated. For those of us who want to have a clean text representation that behaves more like a browser (ignoring newlines, and only taking paragraphs and line breaks into consideration), BeautifulSoup's get_text simply doesn't cut it.Monophony
@Monophony glad you found it useful, also thanks for the contrib. For anyone else, the gist linked has been enhanced quite a bit. What the OP seems to allude to is a tool which renders html to text, much like a text based browser like lynx. That's what this solution attempts. What most people are contributing are just text extractors.Underlet
Completely underrated indeed, wow, thank you! Will check the gist too.Humboldt
2

I've had good results with Apache Tika. Its purpose is the extraction of metadata and text from content, hence the underlying parser is tuned accordingly out of the box.

Tika can be run as a server, is trivial to run / deploy in a Docker container, and from there can be accessed via Python bindings.
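A minimal sketch using the tika Python bindings (parser.from_file and the 'content' key are the documented interface; the bindings start or connect to a Tika server behind the scenes):

from tika import parser  # pip install tika; needs Java for the bundled server

parsed = parser.from_file('page.html')
print(parsed['content'])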

Veg answered 7/5, 2018 at 11:7 Comment(0)
2

While a lot of people have mentioned using regexes to strip HTML tags, there are a lot of downsides.

For example:

<p>hello&nbsp;world</p>I love you

Should be parsed to:

hello world
I love you

Here's a snippet I came up with; you can customize it to your specific needs, and it works like a charm:

import re
import html

def html2text(htm):
    ret = html.unescape(htm)  # decode entities like &nbsp; and &amp;
    ret = ret.translate({
        8209: ord('-'),   # non-breaking hyphen
        8220: ord('"'),   # left double quotation mark
        8221: ord('"'),   # right double quotation mark
        160: ord(' '),    # non-breaking space
    })
    ret = re.sub(r"\s", " ", ret, flags = re.MULTILINE)
    ret = re.sub(r"<br>|<br />|</p>|</div>|</h\d>", "\n", ret, flags = re.IGNORECASE)
    ret = re.sub(r'<.*?>', ' ', ret, flags=re.DOTALL)
    ret = re.sub(r"  +", " ", ret)
    return ret
Goulder answered 21/1, 2019 at 19:30 Comment(0)
1

In Python 3.x you can do it very easily by importing the imaplib and email packages. Although this is an older post, maybe my answer can help newcomers.

import email
# self.imap is assumed to be an imaplib connection created elsewhere in the surrounding class
status, data = self.imap.fetch(num, '(RFC822)')
email_msg = email.message_from_bytes(data[0][1])
#email.message_from_string(data[0][1])

#If the message is multipart we only want the text version of the body; this walks the message and gets the body.

if email_msg.is_multipart():
    for part in email_msg.walk():
        if part.get_content_type() == "text/plain":
            body = part.get_payload(decode=True) #decode=True handles email-style MIME decoding (e.g., Base64, uuencode, quoted-printable)
            body = body.decode()
        elif part.get_content_type() == "text/html":
            continue

Now you can print the body variable and it will be in plain-text format :) If it is good enough for you then it would be nice to select it as the accepted answer.

Worldly answered 2/2, 2014 at 0:13 Comment(2)
This doesn't convert anything.Glowworm
This shows you how to extract a text/plain part from an email if somebody else put one there. It doesn't do anything to convert the HTML into plaintext, and does nothing remotely useful if you are trying to convert HTML from, say, a web site.Capriccio
1

In a simple way:

import re

html_text = open('html_file.html').read()
text_filtered = re.sub(r'<(.*?)>', '', html_text)

This code finds all parts of html_text that start with '<' and end with '>' and replaces every match with an empty string.

Biliary answered 2/6, 2016 at 15:4 Comment(0)
1

Here's the code I use on a regular basis.

from bs4 import BeautifulSoup
import urllib.request


def processText(webpage):

    # EMPTY LIST TO STORE PROCESSED TEXT
    proc_text = []

    try:
        news_open = urllib.request.urlopen(webpage)  # pass the URL string directly; the original called webpage.group(), which assumed a regex match object
        news_soup = BeautifulSoup(news_open, "lxml")
        news_para = news_soup.find_all("p", text = True)

        for item in news_para:
            # SPLIT WORDS, JOIN WORDS TO REMOVE EXTRA SPACES
            para_text = (' ').join((item.text).split())

            # COMBINE LINES/PARAGRAPHS INTO A LIST
            proc_text.append(para_text)

    except urllib.error.HTTPError:
        pass

    return proc_text

I hope that helps.

Habitant answered 25/10, 2017 at 0:8 Comment(0)
1

You can extract only text from HTML with BeautifulSoup:

from urllib.request import urlopen
from bs4 import BeautifulSoup

url = "https://www.geeksforgeeks.org/extracting-email-addresses-using-regular-expressions-python/"
con = urlopen(url).read()
soup = BeautifulSoup(con, 'html.parser')
texts = soup.get_text()
print(texts)
Viburnum answered 13/4, 2018 at 11:3 Comment(0)
1

Another example using BeautifulSoup4 in Python 2.7.9+

includes:

import urllib2
from bs4 import BeautifulSoup

Code:

def read_website_to_text(url):
    page = urllib2.urlopen(url)
    soup = BeautifulSoup(page, 'html.parser')
    for script in soup(["script", "style"]):
        script.extract() 
    text = soup.get_text()
    lines = (line.strip() for line in text.splitlines())
    chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
    text = '\n'.join(chunk for chunk in chunks if chunk)
    return str(text.encode('utf-8'))

Explained:

Read in the URL data as HTML (using BeautifulSoup), remove all script and style elements, and get just the text using .get_text(). Break it into lines and remove leading and trailing space on each, then break multi-headlines into a line each with chunks = (phrase.strip() for line in lines for phrase in line.split("  ")). Then, using text = '\n'.join, drop blank lines, and finally return the text encoded as UTF-8.


Yamashita answered 27/8, 2019 at 16:52 Comment(0)
1

Answer using Pandas to get table data from HTML.

If you want to extract table data quickly from HTML, you can use the read_html function; docs are here. Before using this function you should read the gotchas/issues surrounding the BeautifulSoup4/html5lib/lxml HTML parsing libraries.

import pandas as pd

http = r'https://www.ibm.com/docs/en/cmofz/10.1.0?topic=SSQHWE_10.1.0/com.ibm.ondemand.mp.doc/arsa0257.htm'
table = pd.read_html(http)
df = table[0]
df

The output is the first table on the page, rendered as a DataFrame.

There are a number of options that can be played with; see here and here.

Doby answered 6/9, 2021 at 10:37 Comment(0)
0

I am achieving it with something like this.

>>> import requests
>>> url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
>>> res = requests.get(url)
>>> text = res.text
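As a comment below points out, res.text still contains the markup, so you would still pass it through one of the extractors above; a minimal BeautifulSoup sketch:

>>> from bs4 import BeautifulSoup
>>> plain = BeautifulSoup(text, "html.parser").get_text()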
Amargo answered 7/8, 2016 at 17:27 Comment(2)
I am using python 3.4 and this code is working fine for me.Amargo
text would have html tags in itAsserted
0

The LibreOffice Writer comment has merit since the application can employ Python macros. It seems to offer multiple benefits both for answering this question and furthering the macro base of LibreOffice. If this resolution is a one-off implementation, rather than part of a greater production program, opening the HTML in Writer and saving the page as text would seem to resolve the issues discussed here.

Gonad answered 15/5, 2018 at 19:36 Comment(0)
0

Perl way (sorry mom, I'll never do it in production).

import re

def html2text(html):
    res = re.sub('<.*?>', ' ', html, flags=re.DOTALL | re.MULTILINE)
    res = re.sub('\n+', '\n', res)
    res = re.sub('\r+', '', res)
    res = re.sub('[\t ]+', ' ', res)
    res = re.sub('\t+', '\t', res)
    res = re.sub('(\n )+', '\n ', res)
    return res
Ennui answered 6/7, 2018 at 11:36 Comment(2)
This is bad practice for so many reason, for example &nbsp;Goulder
Yes! It's true! Don't do it anythere!Ennui
0

All the methods here did not work well with some websites: paragraphs generated by JS code were resistant to all of the above. Here is what eventually worked for me, inspired by this answer and this one.

The idea is to load the page in the webdriver and scroll to the end of the page to make the JS do its thing and generate/load the rest of the page. Then send keystroke commands to select all and copy the whole page:

import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import pyperclip
import time

driver = webdriver.Chrome()
driver.get("https://www.lazada.com.ph/products/nike-womens-revolution-5-running-shoes-black-i1262506154-s4552606107.html?spm=a2o4l.seller.list.3.6f5d7b6cHO8G2Y&mp=1&freeshipping=1")

# Scroll down to end of the page to let all javascript code load its content
lenOfPage = driver.execute_script("window.scrollTo(0, document.body.scrollHeight);var lenOfPage=document.body.scrollHeight;return lenOfPage;")
match=False
while(match==False):
        lastCount = lenOfPage
        time.sleep(1)
        lenOfPage = driver.execute_script("window.scrollTo(0, document.body.scrollHeight);var lenOfPage=document.body.scrollHeight;return lenOfPage;")
        if lastCount==lenOfPage:
            match=True

# copy from the webpage
element = driver.find_element_by_tag_name('body')
element.send_keys(Keys.CONTROL,'a')
element.send_keys(Keys.CONTROL,'c')
alltext = pyperclip.paste()
alltext = alltext.replace("\n", " ").replace("\r", " ")  # cleaning the copied text
print(alltext )

It is slow, but nothing else worked out.

UPDATE: A better method is to load the source of the page AFTER scrolling to the end of the page, using the inscriptis library:

from inscriptis import get_text
text = get_text(driver.page_source)

It still will not work with a headless driver (the page somehow detects that it is not really being displayed, and scrolling to the end will not make the JS code load its content), but at least we don't need the crazy copy/paste, which prevents us from running multiple scripts on a machine with a shared clipboard.

Tamtama answered 28/7, 2021 at 0:17 Comment(0)
0

I like using pyquery to solve this:

from pyquery import PyQuery as pq


def html_to_text(html):
    """Return a list of the visible utf8 text for some HTML string."""

    if not html:
        return []

    if not isinstance(html, pq):
        html = pq(html)

    skip = ['style', 'title', 'noscript', 'head', 'meta']

    text = []

    try:
        if html.tag and html.tag.lower() in skip:
            return []
    except AttributeError:
        pass

    try:
        style = dict([y.strip() for y in x.strip().split(":")] for x in html.attr.style.split(";") if x.strip())
        if style["display"].lower() == "none":
            return []
    except (AttributeError, KeyError):
        pass

    for el in html:
        try:
            if not el.tag or el.tag.lower() in skip:
                continue
        except AttributeError:
            continue

        for child in el.getchildren():
            text.extend(html_to_text(child))

        if not el.text:
            continue

        text.append(el.text)

    return text


print(" ".join(html_to_text("<p>test</p>")))
Aquacade answered 18/7, 2023 at 17:3 Comment(0)
0

I came here looking for answers about my own code, but I feel like I can help here (I hope you give some feedback on my code; it's my first one):

import requests
from bs4 import BeautifulSoup
import pandas as pd

#creating a fake header to avoid Forbbiden
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}

page = 'your-url'
pageTree = requests.get(page, headers=headers)
pageSoup = BeautifulSoup(pageTree.text, 'html.parser')

#here you can get all the code from a div or a td:
code = pageSoup.find_all("your-tag(e.g. div)", attrs={'class': 'your-class'})

#In my code, I was trying to get the numbers from a table with 82 rows, so I wrote this:
i = 1
text = []
while i < 83:
  #I had to do some math to get the specific text I wanted at code[6*i-1] below:
  clean_text = code[6*i-1].text
  #solving the problem with the encoding:
  get = clean_text.replace(u'\xa0', '')
  get = get.replace(u'-', '')
  text.append(get)
  i += 1  #advance the counter (missing in the original, which looped forever)

I believe that with this one you can get all the text you need, but it will be in a list of rows.

Pitchy answered 20/1 at 13:44 Comment(1)
Oh, I used pandas further in the code, should not be there. Sorry!Pitchy
