UnicodeDecodeError: 'charmap' codec can't decode byte X in position Y: character maps to <undefined>

I'm trying to get a Python 3 program to do some manipulations with a text file filled with information. However, when trying to read the file I get the following error:

Traceback (most recent call last):  
  File "SCRIPT LOCATION", line NUMBER, in <module>  
    text = file.read()
  File "C:\Python31\lib\encodings\cp1252.py", line 23, in decode  
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2907500: character maps to `<undefined>`  

After reading this Q&A, see How to determine the encoding of text if you need help figuring out the encoding of the file you are trying to open.

Twentieth answered 10/2, 2012 at 18:43 Comment(2)
For the same error, this solution helped me: solution of charmap errorIndifferentism
See Processing Text Files in Python 3 to understand why you get this error.Abeyant

The file in question is not using the CP1252 encoding. It's using another encoding. Which one you have to figure out yourself. Common ones are Latin-1 and UTF-8. Since 0x90 doesn't actually mean anything in Latin-1, UTF-8 (where 0x90 is a continuation byte) is more likely.

You specify the encoding when you open the file:

file = open(filename, encoding="utf8")
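
If you cannot tell which encoding is right, one common pattern is to try a few likely candidates in order and keep the first one that decodes without an error. A minimal sketch (the function name, the file name and the candidate list are only illustrative, and a successful decode only means no exception was raised, not that the guess is correct):

def read_with_fallback(filename, encodings=("utf-8", "cp1252", "latin-1")):
    # Try each candidate in turn; latin-1 never raises, so it acts as the last-resort fallback.
    for enc in encodings:
        try:
            with open(filename, encoding=enc) as f:
                return f.read(), enc
        except UnicodeDecodeError:
            continue
    raise ValueError(f"none of {encodings} could decode {filename}")

text, used_encoding = read_with_fallback("myfile.txt")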
Parik answered 10/2, 2012 at 18:53 Comment(7)
if you're using Python 2.7, and getting the same error, try the io module: io.open(filename,encoding="utf8")Teraterai
+1 for specifying the encoding on read. p.s. is it supposed to be encoding="utf8" or is it encoding="utf-8" ?Venusian
@1vand1ng0: of course Latin-1 works; it'll work for any file regardless of what the actual encoding of the file is. That's because all 256 possible byte values in a file have a Latin-1 codepoint to map to, but that doesn't mean you get legible results! If you don't know the encoding, even opening the file in binary mode instead might be better than assuming Latin-1.Bordeaux
I get the OP error even though the encoding is already specified correctly as UTF-8 (as shown above) in open(). Any ideas?Geostrophic
The suggested encoding string should have a dash and therefore it should be: open(csv_file, encoding='utf-8') (as tested on Python3)Thinker
@Geostrophic It's not possible to get the same error as OP when using UTF-8, since UTF-8 doesn't have any undefined bytes. You must be getting a different error, like say UnicodeDecodeError: 'utf-8' codec can't decode byte 0x90 in position 0: invalid start byte.Blastocoel
@Thinker That's not necessary. 'utf8' is an alias for UTF-8. docsBlastocoel

If file = open(filename, encoding="utf-8") doesn't work, try
file = open(filename, errors="ignore") if you want to remove unneeded characters. (docs)
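
To see what errors="ignore" does to the data, compare it with errors="replace" on a small byte string (bytes.decode takes the same errors argument as open; the byte values below are made up for illustration):

raw = b"caf\xe9 \x90 ok"                        # neither \xe9 nor \x90 is valid UTF-8 here
print(raw.decode("utf-8", errors="ignore"))     # 'caf  ok'  - the offending bytes are silently dropped
print(raw.decode("utf-8", errors="replace"))    # 'caf\ufffd \ufffd ok' - replaced with U+FFFD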

Paramagnet answered 5/6, 2018 at 22:3 Comment(2)
Warning: This will result in data loss when unknown characters are encountered (which may be fine depending on your situation).Barkeeper
using file = open(filename, errors="ignore") will ignore any error and not display them in terminal. It doesn't solve the actual issue.Wolfsbane

Alternatively, if you don't need to decode the file, such as when uploading it to a website, use:

open(filename, 'rb')

where r = reading, b = binary
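
In binary mode you get bytes instead of str, which is all you need when the content is never interpreted as text, for example when hashing or re-uploading the file. A small sketch (the file name is a placeholder):

import hashlib

with open("myfile.txt", "rb") as f:   # bytes in, no decoding, so no UnicodeDecodeError
    data = f.read()

print(type(data))                              # <class 'bytes'>
print(hashlib.sha256(data).hexdigest())        # e.g. a checksum before uploading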

Undergird answered 8/7, 2019 at 23:50 Comment(3)
Perhaps emphasize that the b will produce bytes instead of str data. Like you note, this is suitable if you don't need to process the bytes in any way.Commanding
The top two answers didn't work, but this one did. I was trying to read a dictionary of pandas dataframes and kept getting errors.Bedell
@Bedell Please see stackoverflow.com/questions/436220. Every text file has a particular encoding, and you have to know what it is in order to use it properly. The common guesses won't always be correct.Robles

TLDR: Try: file = open(filename, encoding='cp437')

Why? When one uses:

file = open(filename)
text = file.read()

Python assumes the file uses the same code page as the current environment (cp1252 in the case of the opening post) and tries to decode it to its own default UTF-8. If the file contains byte values not defined in this code page (like 0x90), we get a UnicodeDecodeError. Sometimes we don't know the encoding of the file, sometimes the file's encoding may not be handled by Python (e.g. cp790), and sometimes the file contains mixed encodings.

If such characters are unneeded, one may decide to replace them with question marks, using:

file = open(filename, errors='replace')

Another workaround is to use:

file = open(filename, errors='ignore')

The offending characters are then simply dropped, but other errors will be masked too.

A very good solution is to specify the encoding, yet not just any encoding (like cp1252), but one which maps every single-byte value (0..255) to a character (like cp437 or latin1):

file = open(filename, encoding='cp437')

Codepage 437 is just an example. It is the original DOS encoding. All codes are mapped, so there are no errors while reading the file, no errors are masked out, the characters are preserved (not quite left intact but still distinguishable) and one can check their ord() values.

Please note that this advice is just a quick workaround for a nasty problem. The proper solution is to use binary mode, although it is not as quick.
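
To locate the suspicious characters after reading the file this way, you can scan for anything outside plain ASCII and print the ord() values mentioned above. A small sketch, reusing filename from the snippet above:

with open(filename, encoding='cp437') as file:
    text = file.read()

# Report every character outside plain ASCII with its position and code point.
# (With latin1 the code point equals the original byte value; with cp437 it may differ.)
for position, character in enumerate(text):
    if ord(character) > 127:
        print(position, hex(ord(character)), repr(character))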

Eudemonism answered 8/11, 2019 at 18:14 Comment(5)
Probably you should emphasize even more that randomly guessing at the encoding is likely to produce garbage. You have to know the encoding of the data.Commanding
There are many encodings that "have all characters defined" (you really mean "map every single-byte value to a character"). CP437 is very specifically associated with the Windows/DOS ecosystem. In most cases, Latin-1 (ISO-8859-1) will be a better starting guess.Robles
@Commanding - The solution is a quick workaround for a nasty error, allowing to check what is going on. Sometimes there is a garbage character placed inside a big, perfectly encoded text. Using the encoding of the character would break the decoding of the rest of the text. What's more, the encoding may be not handled by Python (e.g. cp790). Still, today I would rather use binary mode and handle the decoding myself.Eudemonism
@Karl Knechtel - Yes, your phrase is better. I am going to edit my text.Eudemonism
Thanks for the update. However, the part about decoding cp1252 "to its own default UTF-8" is still weird. Python is decoding cp1252 into the internal string representation, which is not UTF-8 (or necessarily any standard Unicode representation, although Python strings are defined to be Unicode).Commanding

As an extension to @LennartRegebro's answer:

If you can't tell what encoding your file uses, the solution above does not work (it's not utf8), and you find yourself merely guessing, there are online tools that you can use to identify the encoding. They aren't perfect, but usually work just fine. After you figure out the encoding, you should be able to use the solution above.

EDIT: (Copied from comment)

The quite popular text editor Sublime Text has a command to display the encoding, if it has been set:

  1. Go to View -> Show Console (or Ctrl+`)

  2. Type view.encoding() into the field at the bottom and hope for the best (I was unable to get anything but Undefined, but maybe you will have better luck...)
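
If you would rather guess programmatically, the third-party chardet package (or charset-normalizer) produces a best guess with a confidence score. It is only a heuristic, so sanity-check the result; the sketch below assumes chardet is installed (pip install chardet) and uses a placeholder file name:

import chardet  # third-party: pip install chardet

with open("myfile.txt", "rb") as f:   # read raw bytes; do not decode yet
    raw = f.read()

guess = chardet.detect(raw)           # e.g. {'encoding': 'Windows-1252', 'confidence': 0.73, ...}
print(guess)

if guess["encoding"] is not None:     # detection can fail and return None
    text = raw.decode(guess["encoding"])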

Merino answered 22/3, 2016 at 16:12 Comment(4)
Some text editors will provide this information as well. I know that with vim you can get this via :set fileencoding (from this link)Selfimprovement
Sublime Text, also -- open up the console and type view.encoding().Tannin
alternatively, you can open your file with notepad. 'Save As' and you shall see a drop-down with the encoding usedTurgent
Please see stackoverflow.com/questions/436220 for more details on the general task.Robles

Stop wasting your time: just add encoding="cp437" and errors='ignore' to your code, for both read and write:

open('filename.csv', encoding="cp437", errors='ignore')
open(file_name, 'w', newline='', encoding="cp437", errors='ignore')

Godspeed

Rummer answered 1/6, 2020 at 21:54 Comment(2)
Before you apply that, be sure that you want your 0x90 to be decoded to 'É'. Check b'\x90'.decode('cp437').Kistna
This is absolutely horrible advice. Code page 437 is a terrible guess unless your source data comes from an MS-DOS system from the 1990s, and ignoring errors is often the worst possible way to silence the warnings. It's like cutting the wires to the "engine hot" and "fuel low" lights in your car to get rid of those annoying distractions.Commanding

def read_files(file_path):

    with open(file_path, encoding='utf8') as f:
        text = f.read()
        return text

OR (AND)

def write_files(text, file_path):
    # write text to file_path as UTF-8 bytes, dropping characters that cannot be encoded
    with open(file_path, 'wb') as f:
        f.write(text.encode('utf8', 'ignore'))

OR

from docx import Document  # python-docx; file_path is assumed to be a pathlib.Path

document = Document()
document.add_heading(file_path.name, 0)
file_content = file_path.read_text(encoding='UTF-8')
document.add_paragraph(file_content)

OR

def read_text_from_file(cale_fisier):
    # cale_fisier ("file path" in Romanian) is a pathlib.Path
    text = cale_fisier.read_text(encoding='UTF-8')
    print("What I read: ", text)
    return text  # return the text that was read

def save_text_into_file(cale_fisier, text):
    with open(cale_fisier, "w", encoding='utf-8') as f:  # open the file; the with block closes it
        print("What I wrote: ", text)
        f.write(text)  # write the content to the file

OR

def read_text_from_file(file_path):
    with open(file_path, encoding='utf8', errors='ignore') as f:
        text = f.read()
        return text # return the text that was read


def write_to_file(text, file_path):
    with open(file_path, 'wb') as f:
        f.write(text.encode('utf8', 'ignore')) # write the content to the file

OR

import os
import glob

def change_encoding(fname, from_encoding, to_encoding='utf-8') -> None:
    '''
    Read the file at path fname with its original encoding (from_encoding)
    and rewrites it with to_encoding.
    '''
    with open(fname, encoding=from_encoding) as f:
        text = f.read()

    with open(fname, 'w', encoding=to_encoding) as f:
        f.write(text)
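
The glob import above suggests converting files in bulk; a hypothetical usage sketch (the *.txt pattern and the cp1252 source encoding are assumptions, adjust them to your data):

# Convert every .txt file in the current directory, assuming they are currently CP1252-encoded.
for fname in glob.glob('*.txt'):
    change_encoding(fname, from_encoding='cp1252')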
Cholent answered 29/9, 2022 at 8:38 Comment(0)

Before you apply the suggested solution, you can check which character the offending byte in your file (and in the error log) corresponds to, in this case 0x90: https://unicodelookup.com/#0x90/1 (or directly at the Unicode Consortium site http://www.unicode.org/charts/ by searching for 0x0090),

and then consider removing it from the file.
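
You can also do the lookup locally by decoding the single byte under a few candidate encodings and seeing which result makes sense for your data (the list of encodings below is arbitrary):

raw = b"\x90"   # the byte from the error message
for enc in ("cp1252", "latin-1", "cp437", "mac_roman"):
    try:
        print(enc, repr(raw.decode(enc)))
    except UnicodeDecodeError as err:
        print(enc, "-> undefined:", err)

# cp1252 raises (0x90 is undefined there), latin-1 yields an invisible control
# character, cp437 yields 'É' and mac_roman yields 'ê'.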

Kistna answered 6/8, 2020 at 16:5 Comment(1)
I have a web page at tripleee.github.io/8bit/#90 where you can look up the character's value in the various 8-bit encodings supported by Python. With enough data points, you can often infer a suitable encoding (though some of them are quite similar, and so establishing exactly which encoding the original writer used will often involve some guesswork, too).Commanding

The code below reads the file, decoding its content as UTF-8:

with open("./website.html", encoding="utf8") as file:
    contents = file.read()
Crake answered 14/9, 2023 at 14:6 Comment(0)

For me, opening the file with the utf16 encoding worked:

file = open('filename.csv', encoding="utf16")
Ghetto answered 21/2, 2021 at 11:31 Comment(1)
Like many of the other answers on this page, randomly guessing which encoding the OP is actually dealing with is mostly a waste of time. The proper solution is to tell them how to figure out the correct encoding, not offer more guesses (the Python documentation contains a list of all of them; there are many, many more which are not suggested in any answer here yet, but which could be correct for any random visitor). UTF-16 is pesky in that the results will often look vaguely like valid Chinese or Korean text if you don't speak the language.Commanding

For those working in Anaconda on Windows: I had the same problem, and Notepad++ helped me solve it.

Open the file in Notepad++. In the bottom right it will tell you the current file encoding. In the top menu, next to "View", locate "Encoding". Under "Encoding", go to "Character sets" and patiently look for the encoding that you need. In my case the encoding "Windows-1252" was found under "Western European".

Raisin answered 1/9, 2019 at 5:36 Comment(1)
Only the viewing encoding is changed in this way. In order to effectively change the file's encoding, change preferences in Notepad++ and create a new document, as shown here: superuser.com/questions/1184299/….Kistna

In newer versions of Python (starting with 3.7), you can add the interpreter option -Xutf8, which should fix your problem. If you use PyCharm, just go to Run > Edit Configurations (in the Configuration tab, change the value of the Interpreter options field to -Xutf8).

Or, equivalently, you can just set the environment variable PYTHONUTF8 to 1.
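
To confirm whether UTF-8 mode is actually active in the interpreter you are running (a quick check):

import sys

# 1 when UTF-8 mode is on (-Xutf8 or PYTHONUTF8=1), 0 otherwise
print(sys.flags.utf8_mode)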

Clarence answered 7/11, 2021 at 11:19 Comment(1)
This assumes that the source data is UTF-8, which is by no means a given.Commanding

If you are on Windows, the file may start with a UTF-8 BOM, indicating that it really is a UTF-8 file. As per https://bugs.python.org/issue44510, I used encoding="utf-8-sig" and the CSV file was read successfully.
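
A quick way to check for the BOM yourself and to see why utf-8-sig helps (the file name is a placeholder):

import codecs

with open("data.csv", "rb") as f:              # placeholder path
    has_bom = f.read(3) == codecs.BOM_UTF8     # BOM_UTF8 == b'\xef\xbb\xbf'
print("UTF-8 BOM present:", has_bom)

# 'utf-8-sig' strips the BOM if present; plain 'utf-8' would leave a stray
# '\ufeff' character at the start of the first line.
with open("data.csv", encoding="utf-8-sig") as f:
    first_line = f.readline()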

Flask answered 23/5, 2023 at 12:49 Comment(0)

For me, changing the MySQL character encoding to match my code helped to sort out the solution: photo=open('pic3.png',encoding=latin1)

Isisiskenderun answered 4/2, 2020 at 5:45 Comment(2)
Like many other random guesses, "latin-1" will remove the error, but will not guarantee that the file is decoded correctly. You have to know which encoding the file actually uses. Also notice that latin1 without quotes is a syntax error (unless you have a variable with that name, and it contains a string which represents a valid Python character encoding name).Commanding
In this particular example, the real problem is that a PNG file does not contain text at all. You should instead read the raw bytes (open('pic3.png', 'rb') where the b signifies binary mode).Commanding

This is an example of how I open and close a file with UTF-8, extracted from some recent code:

def traducere_v1_txt(translator, file):
    # base_path, input_lang and lxml1 are defined elsewhere in the original code
    data = []
    with open(f"{base_path}/{file}", "r", encoding='utf8', errors='ignore') as open_file:
        data = open_file.readlines()

    file_name = file.replace(".html", "")
    with open(f"Translated_Folder/{file_name}_{input_lang}.html", "w", encoding='utf8') as htmlfile:
        htmlfile.write(lxml1)
Fertilize answered 21/3, 2023 at 10:24 Comment(0)
