Is there a way to remove the BOM from a UTF-8 encoded file?
Is there a way to remove the BOM from a UTF-8 encoded file?

I know that all of my JSON files are encoded in UTF-8, but the data-entry person who edited the JSON files saved them as UTF-8 with a BOM.

When I run my Ruby scripts to parse the JSON, they fail with an error. I don't want to manually open 58+ JSON files and convert them to UTF-8 without the BOM.

Maiduguri answered 16/2, 2011 at 1:15 Comment(1)
possible duplicate of How to avoid tripping over UTF-8 BOM when reading files – Bwana
30

So, the solution was to do a search and replace on the BOM via gsub! I forced the encoding of the string to UTF-8 and also forced the search pattern to be encoded as UTF-8.

I was able to derive a solution by looking at http://self.d-struct.org/195/howto-remove-byte-order-mark-with-ruby-and-iconv and http://blog.grayproductions.net/articles/ruby_19s_string

require 'json'

def read_json_file(file_name, index)
  # read the file and make sure the string is tagged as UTF-8
  content = File.read("#{file_name}\\game.json").force_encoding("UTF-8")

  # strip the UTF-8 BOM bytes (EF BB BF) before parsing
  content.gsub!("\xEF\xBB\xBF".force_encoding("UTF-8"), '')

  json = JSON.parse(content)

  print json
end
Maiduguri answered 16/2, 2011 at 2:0 Comment(4)
I would recommend sub!(/^\xEF\xBB\xBF/, '') as the BOM occurs only once, at the beginning of the string. – Creamcolored
I would actually suggest String#delete! over String#gsub!(..., '') – Nur
@Creamcolored your suggestion is good unless your setup defaults to UTF-8, or at least that's what I believe my problem is, because your command does not work for me. Just doing content.sub!(/^\xEF\xBB\xBF/, '') gives an "invalid UTF-8 byte sequence" error. – Johnson
@Johnson I think @Creamcolored simply meant using sub instead of gsub, but didn't necessarily mean to drop the force_encoding. – Horseplay
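
A minimal sketch of the anchored sub! variant discussed in the comments above. Matching the BOM as the character \uFEFF (which is what the bytes EF BB BF decode to in UTF-8) sidesteps the encoding error, since the pattern itself is then valid UTF-8; the file name game.json is just the example from the answer:

require 'json'

content = File.read("game.json", encoding: "UTF-8")
content.sub!(/\A\uFEFF/, "") # strip a leading BOM character, if present
json = JSON.parse(content)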
42

With Ruby >= 1.9.2 you can use the mode r:bom|utf-8.

This should work (I haven't tested it in combination with JSON):

require 'json'

json = nil # define the variable outside the block to keep the data
File.open('file.txt', "r:bom|utf-8"){|file|
  json = JSON.parse(file.read)
}

It doesn't matter whether the BOM is present in the file or not.


Andrew remarked that File#rewind can't be used together with the BOM mode.

If you need rewind functionality, you must remember the position and replace rewind with pos=:

# Prepare test file
File.open('file.txt', "w:utf-8"){|f|
  f << "\xEF\xBB\xBF" # add BOM
  f << 'some content'
}

# Read file and skip BOM if available
File.open('file.txt', "r:bom|utf-8"){|f|
  pos = f.pos         # position just after the skipped BOM
  p content = f.read  # read and print file content
  f.pos = pos         # f.rewind would go back to pos 0, before the BOM
  p content = f.read  # (re)read and print file content
}
Bwana answered 15/10, 2011 at 20:42 Comment(3)
One-liner for Ruby (takes file paths on STDIN): ruby -ne 'File.open($_.strip, "r:bom|utf-8") {|f| content=f.read(f.size); f.close; File.open($_.strip, "w").write(content) }' – Lukin
So this failed for me because I used File#rewind. At least in Ruby 1.9.2, using "bom|UTF-8" simply advances the file pointer when you open it, and this gets forgotten when you rewind. Seems to me like a poor implementation of open given this encoding parameter. – Mailable
@AndrewSchwartz You are right. I extended my answer with a workaround for your problem. – Bwana
13

You can also specify the encoding with the File.read and CSV.read methods; with these you pass only the encoding, not the read mode.

File.read(path, :encoding => 'bom|utf-8')
CSV.read(path, :encoding => 'bom|utf-8')
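
Tying this back to the question's JSON files, a small sketch along the same lines (game.json is just the example file name used elsewhere on this page):

require 'json'

json = JSON.parse(File.read('game.json', :encoding => 'bom|utf-8'))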
Ware answered 10/5, 2013 at 16:2 Comment(2)
it works well 10 years later – Sigfried
Nice and clean, it just feels like the right way! Also, it worked like a charm! – Kor
5

the "bom|UTF-8" encoding works well if you only read the file once, but fails if you ever call File#rewind, as I was doing in my code. To address this, I did the following:

def ignore_bom
  return unless @file.pos == 0
  first_char = @file.getc
  # push the character back unless it was the BOM (or the file was empty)
  @file.ungetc(first_char) if first_char && first_char != "\xEF\xBB\xBF".force_encoding("UTF-8")
end

which seems to work well. I'm not sure whether there are other similar characters to look out for, but they could easily be built into this method, which can be called any time you rewind or open.
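
For example, a hypothetical wrapper (ignore_bom and @file are from the snippet above) so that rewinding always lands just past any BOM:

def rewind_past_bom
  @file.rewind   # back to byte 0, BOM and all
  ignore_bom     # skip the BOM again if it is there
end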

Mailable answered 14/11, 2014 at 20:28 Comment(0)
2

Server-side cleanup of the UTF-8 BOM bytes that worked for me:

csv_text.gsub!("\xEF\xBB\xBF".force_encoding(Encoding::BINARY), '')
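
A sketch of how this might fit into a CSV import, assuming csv_text was read as raw bytes (e.g. with File.binread) rather than already tagged as UTF-8; the file name upload.csv is made up:

require 'csv'

csv_text = File.binread('upload.csv')  # ASCII-8BIT string, BOM bytes (if any) intact
csv_text.gsub!("\xEF\xBB\xBF".force_encoding(Encoding::BINARY), '')
rows = CSV.parse(csv_text.force_encoding('UTF-8'), headers: true)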
Digiovanni answered 28/8, 2018 at 19:36 Comment(0)
0

I just implemented this for the smarter_csv gem and want to share it in case someone comes across this issue.

The problem is to remove the byte sequence independently of the string encoding. The solution is to use the bytes and byteslice methods of the String class.

See: https://ruby-doc.org/core-3.1.1/String.html#method-i-bytes

UTF_8_BOM = %w[ef bb bf].freeze

def remove_bom(str)
  # compare the first three bytes (as hex) against the UTF-8 BOM
  str_as_hex = str.bytes.map { |x| x.to_s(16) }
  return str.byteslice(3..-1) if str_as_hex[0..2] == UTF_8_BOM

  str
end
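
A usage sketch (the literal strings are made up; .b gives a binary-encoded copy to show that the check does not depend on the string's declared encoding):

remove_bom("\xEF\xBB\xBFid,name".b)  # => "id,name"
remove_bom("id,name".b)              # => "id,name" (unchanged)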
Sura answered 19/3, 2023 at 7:59 Comment(0)
0

Server-side cleanup of the UTF-8 BOM that worked for me: csv_text.delete("\uFEFF")
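
"\uFEFF" is the single character the UTF-8 BOM bytes decode to, so on a string already tagged as UTF-8 this removes the BOM wherever it appears. A minimal illustration with a made-up sample string:

csv_text = "\uFEFFname,score\nalice,1\n"
csv_text.delete("\uFEFF")  # => "name,score\nalice,1\n"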

Deuced answered 18/7 at 13:11 Comment(3)
Thank you for this code snippet, which might provide some limited, immediate help. A proper explanation would greatly improve its long-term value by showing why this is a good solution to the problem and would make it more useful to future readers with other, similar questions. Please edit your answer to add some explanation, including the assumptions you’ve made. – Gombroon
@jasle This is a trivial deletion of the BOM character from the CSV file body before parsing. There is no need to describe anything here, as the topic of the open question and the tags clearly explain what is going on and why. – Deuced
Sorry, I disagree. A reference to documentation is always useful, for example. Maybe this article can change your mind: How To Answer Well – Gombroon
