Convert UTF-8 with BOM to UTF-8 with no BOM in Python

Two questions here. I have a set of files which are usually UTF-8 with BOM. I'd like to convert them (ideally in place) to UTF-8 with no BOM. It seems like codecs.StreamRecoder(stream, encode, decode, Reader, Writer, errors) would handle this, but I don't see any good usage examples. Would this be the best way to handle this?
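From the docs it looks like StreamRecoder wraps a byte stream with a decoder on the input side and an encoder on the output side. Here is roughly what I have in mind (an untested sketch; the file name is just my sample):

import codecs

# Untested sketch: read UTF-8-with-BOM, get plain UTF-8 bytes back
with open("brh-m-157.json", "rb") as src:
    recoder = codecs.StreamRecoder(
        src,
        codecs.getencoder("utf-8"),     # encode: applied to data on read()
        codecs.getdecoder("utf-8"),     # decode: applied to data on write()
        codecs.getreader("utf-8-sig"),  # Reader: strips the BOM from the input
        codecs.getwriter("utf-8"),      # Writer: used on the write() path
    )
    data = recoder.read()               # UTF-8 bytes without the BOM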

source files:
Tue Jan 17$ file brh-m-157.json 
brh-m-157.json: UTF-8 Unicode (with BOM) text

Also, it would be ideal if we could handle different input encodings without knowing them explicitly (I've seen ASCII and UTF-16). It seems like this should all be feasible. Is there a solution that can take any known Python encoding and output UTF-8 without a BOM?

Edit 1: proposed solution from below (thanks!)

fp = open('brh-m-157.json','rw')
s = fp.read()
u = s.decode('utf-8-sig')
s = u.encode('utf-8')
print fp.encoding  
fp.write(s)

This gives me the following error:

IOError: [Errno 9] Bad file descriptor

Newsflash

I'm told in the comments that the mistake is opening the file with mode 'rw' instead of 'r+'/'r+b', so I should eventually re-edit my question and remove the solved part.
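For the record, a corrected version of that snippet based on the comments (open with 'r+b', seek back before writing, truncate at the end):

fp = open('brh-m-157.json', 'r+b')  # read/update in binary mode
s = fp.read()
u = s.decode('utf-8-sig')           # drops the BOM if present
s = u.encode('utf-8')
fp.seek(0)                          # rewind before rewriting in place
fp.write(s)
fp.truncate()                       # the content is now 3 bytes shorter
fp.close()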

Satori asked 17/1, 2012 at 16:37 Comment(2)
You need to open your file for reading plus updating, i.e., with the r+ mode. Add b too so that it will work on Windows as well without any funny line-ending business. Finally, you'll want to seek back to the beginning of the file and truncate it at the end; please see my updated answer.Swick
The (currently second) Python 3 answer is in need of upvotes.Oneness
163

This answer is for Python 2

Simply use the "utf-8-sig" codec:

fp = open("file.txt")
s = fp.read()
u = s.decode("utf-8-sig")

That gives you a unicode string without the BOM. You can then use

s = u.encode("utf-8")

to get a normal UTF-8 encoded string back in s. If your files are big, then you should avoid reading them all into memory. The BOM is simply three bytes at the beginning of the file, so you can use this code to strip them out of the file:

import os, sys, codecs

BUFSIZE = 4096
BOMLEN = len(codecs.BOM_UTF8)

path = sys.argv[1]
with open(path, "r+b") as fp:
    chunk = fp.read(BUFSIZE)
    if chunk.startswith(codecs.BOM_UTF8):
        i = 0
        chunk = chunk[BOMLEN:]
        while chunk:
            fp.seek(i)                    # back to the write position
            fp.write(chunk)               # rewrite the chunk 3 bytes earlier
            i += len(chunk)
            fp.seek(BOMLEN, os.SEEK_CUR)  # ahead to the next read position
            chunk = fp.read(BUFSIZE)
        fp.seek(-BOMLEN, os.SEEK_CUR)
        fp.truncate()                     # drop the 3 surplus bytes at the end

It opens the file, reads a chunk, and writes it back 3 bytes earlier than where it read it, rewriting the file in place. An easier solution is to write the shorter content to a new file, as in newtover's answer. That would be simpler, but it uses twice the disk space for a short period.
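A sketch of that variant, assuming a temporary file next to the original is acceptable (the helper name and .nobom suffix are mine):

import codecs
import os

BUFSIZE = 4096
BOMLEN = len(codecs.BOM_UTF8)

def strip_bom_to_copy(path):
    # Write a BOM-less copy of the file, then swap it into place.
    tmp_path = path + ".nobom"
    with open(path, "rb") as src, open(tmp_path, "wb") as dst:
        chunk = src.read(BUFSIZE)
        if chunk.startswith(codecs.BOM_UTF8):
            chunk = chunk[BOMLEN:]
        while chunk:
            dst.write(chunk)
            chunk = src.read(BUFSIZE)
    os.rename(tmp_path, path)  # use os.replace on Python 3.3+ for Windows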

As for guessing the encoding, you can loop through the encodings from most to least specific:

def decode(s):
    for encoding in "utf-8-sig", "utf-16":
        try:
            return s.decode(encoding)
        except UnicodeDecodeError:
            continue
    return s.decode("latin-1") # will always work

A UTF-16 encoded file won't decode as UTF-8, so we try UTF-8 first ('utf-8-sig' also strips a BOM if one is there). If that fails, we try UTF-16. Finally, we use Latin-1; this will always work, since all 256 byte values are legal in Latin-1. You may want to return None instead in this case, since it's really a fallback and your code might want to handle it more carefully (if it can).

Swick answered 17/1, 2012 at 16:47 Comment(2)
Hmm, I updated the question in edit #1 with sample code but I'm getting a bad file descriptor. Thanks for any help; trying to figure this out.Satori
I got AttributeError: 'str' object has no attribute 'decode' (Python 3, where read() already returns str). So I used with open(filename, encoding='utf-8-sig') as f_content: followed by doc = f_content.read(), and it worked for me.Nugget
92

In Python 3 it's quite easy: read the file and rewrite it with utf-8 encoding:

s = open(bom_file, mode='r', encoding='utf-8-sig').read()
open(bom_file, mode='w', encoding='utf-8').write(s)
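The same thing with context managers, if you would rather not rely on the interpreter to close the files (equivalent behaviour, just more explicit):

with open(bom_file, mode='r', encoding='utf-8-sig') as src:
    s = src.read()
with open(bom_file, mode='w', encoding='utf-8') as dst:
    dst.write(s)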
Autonomy answered 23/10, 2015 at 2:57 Comment(0)
8
import codecs
import shutil
import sys

s = sys.stdin.read(3)                      # a UTF-8 BOM is exactly 3 bytes
if s != codecs.BOM_UTF8:
    sys.stdout.write(s)                    # not a BOM, so keep those bytes

shutil.copyfileobj(sys.stdin, sys.stdout)  # stream the rest through unchanged
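Note this assumes Python 2, where sys.stdin and sys.stdout carry bytes. A Python 3 adaptation (mine, not part of the original answer) goes through the binary buffers instead:

import codecs
import shutil
import sys

# Python 3: sys.stdin/sys.stdout are text streams, so use the raw buffers
s = sys.stdin.buffer.read(3)
if s != codecs.BOM_UTF8:
    sys.stdout.buffer.write(s)

shutil.copyfileobj(sys.stdin.buffer, sys.stdout.buffer)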
Mouldy answered 17/1, 2012 at 17:3 Comment(2)
Can you explain how this code works? $ remove_bom.py < input.txt > output.txt Am I right?Cailean
@guneysus, yes, exactlyMouldy
6

I found this question because I was having trouble with configparser.ConfigParser().read(fp) when opening files with a UTF-8 BOM header.

For those looking for a way to handle the header so that ConfigParser can open the config file instead of reporting the error File contains no section headers, open the file like this:

configparser.ConfigParser().read(config_file_path, encoding="utf-8-sig")

This can save you a lot of effort by making it unnecessary to remove the BOM header from the file.

(I know this sounds unrelated, but hopefully this could help people struggling like me.)
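A self-contained version of the same idea; settings.ini here is a hypothetical file that may or may not start with a BOM ('utf-8-sig' handles both cases):

import configparser

parser = configparser.ConfigParser()
parser.read("settings.ini", encoding="utf-8-sig")  # BOM stripped if present
print(parser.sections())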

Communicant answered 7/9, 2017 at 0:46 Comment(2)
As I was first working with try/except: this also opens UTF-8 files that have no BOM just fine.Zollie
A similar issue with secedit /export, which creates UTF-16 LE BOM files. The following works for them: config.read('sec.cfg', encoding="utf-16")Downstate
5

This is my implementation for converting any kind of encoding to UTF-8 without a BOM and replacing Windows line endings with the universal format:

import codecs
import os

import chardet  # third-party: pip install chardet


def utf8_converter(file_path, universal_endline=True):
    '''
    Convert any type of file to UTF-8 without BOM
    and using universal endline by default.

    Parameters
    ----------
    file_path : string, file path.
    universal_endline : boolean (True),
                        by default convert endlines to universal format.
    '''

    # Fix file path
    file_path = os.path.realpath(os.path.expanduser(file_path))

    # Read from file
    file_open = open(file_path, 'rb')  # binary, so chardet sees the raw bytes
    raw = file_open.read()
    file_open.close()

    # Decode
    raw = raw.decode(chardet.detect(raw)['encoding'])
    # Remove windows end line
    if universal_endline:
        raw = raw.replace('\r\n', '\n')
    # Encode to UTF-8
    raw = raw.encode('utf8')
    # Remove BOM
    if raw.startswith(codecs.BOM_UTF8):
        raw = raw.replace(codecs.BOM_UTF8, '', 1)

    # Write to file
    file_open = open(file_path, 'wb')  # binary keeps the bytes exactly as encoded
    file_open.write(raw)
    file_open.close()
    return 0
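Usage is just utf8_converter('path/to/file.txt') (a hypothetical path); note that chardet is a third-party package and its guess can be wrong on short or unusual inputs.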
Anam answered 14/5, 2014 at 8:4 Comment(0)
4

You can use codecs.

import codecs
with open("test.txt", "rb") as filehandle:  # binary, so the BOM comparison works
    content = filehandle.read()
if content[:3] == codecs.BOM_UTF8:
    content = content[3:]
print(content.decode("utf-8"))
Knutson answered 9/2, 2015 at 10:44 Comment(1)
Not a usable snippet at all as originally posted (what is filehandle? also, codecs.BOM_UTF8 gave a syntax error).Earleneearley
1

In Python 3 you should add encoding='utf-8-sig':

with open(file_name, mode='a', encoding='utf-8-sig') as csvfile:
    csvfile.writelines(rows)

That's it.

Cognition answered 21/8, 2022 at 4:46 Comment(1)
This is exactly the same answer as this one from 2015.Postnasal
