If you don't know the encoding, then to read binary input into a string in a Python 3 and Python 2 compatible way, use the ancient MS-DOS CP437 encoding:
import sys

PY3K = sys.version_info >= (3, 0)
lines = []
for line in stream:  # `stream` is a binary (bytes) file-like object
    if not PY3K:
        lines.append(line)
    else:
        lines.append(line.decode('cp437'))
Because the encoding is unknown, expect non-English symbols to translate to characters of cp437 (English characters are not translated, because they match in most single-byte encodings and in UTF-8).
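The reason cp437 is safe here is that it assigns a character to every one of the 256 byte values, so decoding never raises and re-encoding recovers the original bytes exactly. A minimal check:

```python
# cp437 defines a character for all 256 byte values, so decoding
# never raises and re-encoding is a lossless round trip.
data = bytes(bytearray(range(256)))  # works on Python 2 and 3

text = data.decode('cp437')
assert len(text) == 256
assert text.encode('cp437') == data  # lossless round trip

# ASCII bytes map to themselves; only high bytes become DOS symbols.
assert b'hello'.decode('cp437') == u'hello'
```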
Decoding arbitrary binary input to UTF-8 is unsafe, because you may get this:
>>> b'\x00\x01\xffsd'.decode('utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 2: invalid start byte
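If you still want real UTF-8 where the input allows it, a common pattern (a sketch, not part of the answer above; the function name is made up here) is to try strict UTF-8 first and fall back to cp437 only when it raises:

```python
def decode_best_effort(raw):
    """Try strict UTF-8 first; fall back to cp437, which accepts
    any byte sequence. Helper name is this sketch's own invention."""
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        return raw.decode('cp437')

valid = decode_best_effort(b'caf\xc3\xa9')      # valid UTF-8, decoded as such
junk = decode_best_effort(b'\x00\x01\xffsd')    # falls back to cp437
```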
The same applies to latin-1, which was popular (the default?) for Python 2. See the missing points in Codepage Layout - it is where Python chokes with the infamous ordinal not in range.
UPDATE 20150604: There are rumors that Python 3 has the surrogateescape error strategy for encoding stuff into binary data without data loss and crashes, but it needs conversion tests, [binary] -> [str] -> [binary], to validate both performance and reliability.
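The round trip the update asks about can be checked directly on Python 3: surrogateescape smuggles undecodable bytes through as lone surrogates (U+DC80–U+DCFF) and restores them on encode.

```python
raw = b'\x00\x01\xffsd'

# Undecodable bytes become lone surrogate code points ...
text = raw.decode('utf-8', 'surrogateescape')

# ... and encoding with the same handler restores them exactly.
assert text.encode('utf-8', 'surrogateescape') == raw

# The escaped code point for byte 0xff is U+DCFF.
assert text[2] == u'\udcff'
```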
UPDATE 20170116: Thanks to a comment by Nearoo - there is also a possibility to slash-escape all unknown bytes with the backslashreplace error handler. That works only on Python 3, so even with this workaround you will still get inconsistent output from different Python versions:
import sys

PY3K = sys.version_info >= (3, 0)
lines = []
for line in stream:
    if not PY3K:
        lines.append(line)
    else:
        lines.append(line.decode('utf-8', 'backslashreplace'))
See Python’s Unicode Support for details.
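On Python 3 the error-handler branch above behaves like this (note that backslashreplace on decode requires Python 3.5+):

```python
raw = b'\x80abc'

# Each undecodable byte is replaced by a literal \xNN escape;
# decodable bytes pass through unchanged.
text = raw.decode('utf-8', 'backslashreplace')
assert text == u'\\x80abc'

# Unlike surrogateescape, this is lossy: the escape is plain text,
# so re-encoding does not recover the original byte.
assert text.encode('utf-8') != raw
```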
UPDATE 20170119: I decided to implement a slash-escaping decode that works for both Python 2 and Python 3. It should be slower than the cp437 solution, but it should produce identical results on every Python version.
# --- preparation
import codecs

def slashescape(err):
    """codecs error handler. err is a UnicodeDecodeError instance.
    Returns a tuple with a replacement for the undecodable part of
    the input and the position where decoding should continue."""
    # err.start:err.end may span more than one byte (e.g. a truncated
    # multibyte sequence), so escape each byte separately; bytearray
    # yields ints on both Python 2 and 3, and %02x keeps the escape
    # zero-padded to two hex digits.
    thebytes = err.object[err.start:err.end]
    repl = u''.join(u'\\x%02x' % b for b in bytearray(thebytes))
    return (repl, err.end)

codecs.register_error('slashescape', slashescape)

# --- processing
stream = [b'\x80abc']
lines = []
for line in stream:
    lines.append(line.decode('utf-8', 'slashescape'))
# lines == [u'\\x80abc']
Why doesn't str(text_bytes) work? This seems bizarre to me. – Belita

str(text_bytes) can't specify the encoding. Depending on what's in text_bytes, text_bytes.decode('cp1250') might result in a very different string to text_bytes.decode('utf-8'). – Atiana

The str function does not convert to a real string anymore. One HAS to say an encoding explicitly, for some reason I am too lazy to read through why. Just convert it to utf-8 and see if your code works, e.g. var = var.decode('utf-8'). – Belita

unicode_text = str(bytestring, character_encoding) works as expected on Python 3, though unicode_text = bytestring.decode(character_encoding) is preferable, to avoid confusion with just str(bytes_obj), which produces a text representation for bytes_obj instead of decoding it to text: str(b'\xb6', 'cp1252') == b'\xb6'.decode('cp1252') == '¶' and str(b'\xb6') == "b'\\xb6'" == repr(b'\xb6') != '¶'. – Montenegro

Pass text=True to subprocess.run() or .Popen() and you'll get a string back, no need to convert bytes. Or specify encoding="utf-8" to either function. – Khmer

I'm not sure why an encoding is required for str(<bytes>), but maybe it's just for consistency with other str calls. I would have thought they could default to UTF-8 encoding, but maybe it's because Windows has too many funny encodings that it doesn't default to UTF-8; but I agree with you. – Picro

There is no guarantee that decode() is equivalent to decode("utf-8"). It often happens to be, but settings of the PYTHONIOENCODING or PYTHONCOERCECLOCALE environment variables can change that. See docs.python.org/3/using/cmdline.html#envvar-PYTHONIOENCODING – Birch
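The distinction the comments draw between decoding bytes and calling str() on a bytes object can be verified directly on Python 3:

```python
raw = b'\xb6'

# Two-argument str() decodes, same as bytes.decode():
assert str(raw, 'cp1252') == raw.decode('cp1252') == u'\xb6'  # '¶'

# One-argument str() on bytes returns the repr, not decoded text:
assert str(raw) == "b'\\xb6'" == repr(raw)
```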