I'm scraping some webpages using Selenium and BeautifulSoup. I'm iterating through a bunch of links, grabbing info, and then dumping it into a JSON file:
    for event in events:
        case = {'Artist': item['Artist'], 'Date': item['Date'], 'Time': item['Time'],
                'Venue': item['Venue'], 'Address': item['Address'],
                'Coordinates': item['Coordinates']}
        item[event] = case

    with open("testScrape.json", "w") as writeJSON:
        json.dump(item, writeJSON, ensure_ascii=False)
When I get to this link: https://www.bandsintown.com/e/100778334-jean-deaux-music-at-rickshaw-stop?came_from=257&utm_medium=web&utm_source=home&utm_campaign=event
The code breaks and I get the following error:
Traceback (most recent call last):
File "/Users/s/PycharmProjects/hi/BandsintownWebScraper.py", line 126, in <module>
json.dump(item, writeJSON, ensure_ascii=False)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 190, in dump
fp.write(chunk)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe6' in position 7: ordinal not in range(128)
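As far as I can tell, the failure isn't specific to json: in Python 2, a file object from plain open() only accepts byte strings, so writing a unicode string makes Python implicitly encode it with the default 'ascii' codec. A minimal sketch that reproduces the same error (the filename and string below are made up, not from the scraped page):

    # Python 2.7: plain open() gives a byte-oriented file, so write() implicitly
    # does .encode('ascii') on unicode input and fails on non-ASCII characters.
    with open("repro.txt", "w") as f:
        f.write(u'Skj\xe6rgaarden')  # hypothetical value containing u'\xe6'
    # -> UnicodeEncodeError: 'ascii' codec can't encode character u'\xe6' ...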
I've tried to use:
json.dump(item, writeJSON, ensure_ascii=False).decode('utf-8')
And:
json.dump(item, writeJSON, ensure_ascii=False).encode('utf-8')
With no success. I believe it is the ï character on the link that is causing this to fail. Can anyone give a brief run-down of what's happening, what encode/decode means, and how to fix this issue?
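For reference, a quick Python 2 interpreter sketch of what encode and decode actually do, using the character named in the traceback (u'\xe6' is 'æ'):

    >>> u'\xe6'                      # a unicode string (code points)
    u'\xe6'
    >>> u'\xe6'.encode('utf-8')      # encode: unicode -> bytes
    '\xc3\xa6'
    >>> '\xc3\xa6'.decode('utf-8')   # decode: bytes -> unicode
    u'\xe6'
    >>> u'\xe6'.encode('ascii')      # what the file write attempts implicitly
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    UnicodeEncodeError: 'ascii' codec can't encode character u'\xe6' in position 0: ordinal not in range(128)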
Comments:

...Latin1, CP1250. – Decerebrate

utf-8, latin1, etc. to use less space. Encoded chars may use 1 byte, others 2 or more bytes, so it uses less space than 8 bytes for every char. When you get it, you then have to convert (decode) it back to unicode so Python can use it. – Decerebrate

open(..., encode='utf-8') – Decerebrate
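A sketch of what that last suggestion seems to point at: open the file with an explicit encoding so the unicode is written out as UTF-8 (the keyword is encoding=, and on Python 2 it comes from io.open rather than the built-in open). The isinstance guard is an assumption to cover the case where json.dumps() hands back a byte string; item is the dict from the code above:

    import io
    import json

    # Sketch only, assuming Python 2.7: write the JSON as UTF-8 explicitly
    # instead of relying on the default 'ascii' codec.
    with io.open("testScrape.json", "w", encoding="utf-8") as writeJSON:
        text = json.dumps(item, ensure_ascii=False)
        if isinstance(text, str):        # dumps() may return a byte string
            text = text.decode("utf-8")  # io.open's file expects unicode
        writeJSON.write(text)

Alternatively, dropping ensure_ascii=False sidesteps the problem entirely, since json then escapes non-ASCII characters as \uXXXX and the output stays pure ASCII.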