Effective way to find any file's Encoding

Yes, this is a frequently asked question, and the matter is vague for me since I don't know much about it.

But I would like a very precise way to find a file's encoding, as precise as Notepad++ is.

Yolondayon answered 29/9, 2010 at 20:2 Comment(7)
possible duplicate of Java : How to determine the correct charset encoding of a streamTiffie
Which encodings? UTF-8 vs UTF-16, big vs little endian? Or are you referring to the old MSDos codepages, such as shift-JIS or Cyrillic etc?Durkheim
Another possible duplicate: #436720Tiffie
@Oded: Quote "The getEncoding() method will return the encoding which was set up (read the JavaDoc) for the stream. It will not guess the encoding for you.".Tessi
@dthorpe: Sorry, I wasn't specific; I don't know much about encoding formats. Finding any kind of encoding, basically.Tessi
For some background reading, joelonsoftware.com/articles/Unicode.html is a good read. If there is one thing you should know about text, it's that there is no such thing as plain text.Crashland
There is only one way to know for sure: find out from the sender/writer.Youthful

The StreamReader.CurrentEncoding property rarely returns the correct text file encoding for me. I've had greater success determining a file's endianness by analyzing its byte order mark (BOM). If the file does not have a BOM, this method cannot determine its encoding.

UPDATED 4/08/2020 to include UTF-32LE detection and return the correct encoding for UTF-32BE

/// <summary>
/// Determines a text file's encoding by analyzing its byte order mark (BOM).
/// Defaults to ASCII when detection of the text file's endianness fails.
/// </summary>
/// <param name="filename">The text file to analyze.</param>
/// <returns>The detected encoding.</returns>
public static Encoding GetEncoding(string filename)
{
    // Read the BOM
    var bom = new byte[4];
    using (var file = new FileStream(filename, FileMode.Open, FileAccess.Read))
    {
        file.Read(bom, 0, 4);
    }

    // Analyze the BOM
    if (bom[0] == 0x2b && bom[1] == 0x2f && bom[2] == 0x76) return Encoding.UTF7;
    if (bom[0] == 0xef && bom[1] == 0xbb && bom[2] == 0xbf) return Encoding.UTF8;
    if (bom[0] == 0xff && bom[1] == 0xfe && bom[2] == 0 && bom[3] == 0) return Encoding.UTF32; //UTF-32LE
    if (bom[0] == 0xff && bom[1] == 0xfe) return Encoding.Unicode; //UTF-16LE
    if (bom[0] == 0xfe && bom[1] == 0xff) return Encoding.BigEndianUnicode; //UTF-16BE
    if (bom[0] == 0 && bom[1] == 0 && bom[2] == 0xfe && bom[3] == 0xff) return new UTF32Encoding(true, true);  //UTF-32BE

    // We actually have no idea what the encoding is if we reach this point, so
    // you may wish to return null instead of defaulting to ASCII
    return Encoding.ASCII;
}
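
For completeness, a hypothetical call site for the method above (the path is made up):

var encoding = GetEncoding(@"C:\temp\sample.txt");
Console.WriteLine(encoding.EncodingName);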
Sporangium answered 9/10, 2013 at 22:30 Comment(17)
+1. This worked for me too (whereas detectEncodingFromByteOrderMarks did not). I used "new FileStream(filename, FileMode.Open, FileAccess.Read)" to avoid an IOException because the file is read-only.Honeyhoneybee
UTF-8 files can be without BOM, in this case it will return ASCII incorrectly.Nodal
@Sporangium I have tested it for UCS-2 Little Endian, but it returns ASCII only, which is a big issue for me as I have to reject the UCS but accept the ASCII. Help.Carlyncarlynn
This answer is wrong. Looking at the reference source for StreamReader, that implementation is what more people will want. They make new encodings rather than using the existing Encoding.Unicode objects, so equality checks will fail (which might rarely happen anyway because, for instance, Encoding.UTF8 can return different objects), but it (1) doesn't use the really weird UTF-7 format, (2) defaults to UTF-8 if no BOM is found, and (3) can be overridden to use a different default encoding.Benzel
i had better success with new StreamReader(filename, true).CurrentEncodingSnort
Strangely enough, if you actually make a UTF7Encoding object and perform GetPreamble() on it, you get an empty array...Downwash
By the way, this is missing little-endian UTF32, which must be tested before little-endian UTF-16 since it starts with the same two bytes.Downwash
Seems UTF7 is a gigantic mess; its preamble actually includes two bits of the first character. It can't be detected or decoded correctly with a simple preamble check like that.Downwash
@blacai to detect UTF-8 without BOM you may want to scan the file like this: https://mcmap.net/q/11406/-how-to-recognize-if-a-string-contains-unicode-charsJejune
@Jejune Thanks for the link. The problem I face would require to mix both solutions I suppose. I need to recognize whenever a file is UTF-8 and in case it is, it should not carry BOM...Lalapalooza
There is a fundamental error in the code; when you detect the big-endian UTF32 signature (00 00 FE FF), you return the system-provided Encoding.UTF32, which is a little-endian encoding (as noted here). And also, as noted by @Nyerguds, you still are not looking for UTF32LE, which has signature FF FE 00 00 (according to en.wikipedia.org/wiki/Byte_order_mark). As that user noted, because it is subsuming, that check must come before the 2-byte checks.Bachelor
Encoding.UTF32 is little-endian, therefore the if should be bom[0] == 0xff && bom[1] == 0xfe && bom[2] == 0Messing
Works only if BOM present!Heed
Thanks. I don't know if the BOM character list is ok and as complete as it can be, and if endianness is correct. But I find it much cleaner and understandable than other answers that deal with test characters and exception management. I have just removed the return Encoding.ASCII; that I don't like at all, I prefer to return null and deal with files without BOM outside of method.Romaineromains
Disagree with falling back to ASCII. The safer fallback is UTF-8. Running ASCII (even if it uses extended ASCII characters) through a UTF8 decoder almost always produces the right result because a realistic sequence of extended ASCII characters that would be interpreted as a valid UTF-8 character is exceedingly rare. See en.wikipedia.org/wiki/UTF-8.Mooneye
Is there a BOM for ISO-8859-1 and ISO-8859-2?Lamellirostral
file.Read(bom, 0, 4) does not guarantee that the first 4 bytes are read. By contract, it reads at least 1 and at most 4 (the given number of) bytes and returns the number of bytes actually read. This is a rather low-level method that needs to be used in a loop; best make an extension method that does this for you. Also make sure that the method does not throw an exception if the file's size is less than 4 bytes.Olympus
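
A minimal sketch of the read loop that last comment describes; the helper name ReadUpTo and its shape are mine, not part of the answer:

// requires: using System.IO;
private static int ReadUpTo(Stream stream, byte[] buffer)
{
    int total = 0;
    while (total < buffer.Length)
    {
        int read = stream.Read(buffer, total, buffer.Length - total);
        if (read == 0) break;   // end of stream before the buffer was full
        total += read;
    }
    return total;   // may be less than buffer.Length for very short files
}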

The following code works fine for me, using the StreamReader class:

  using (var reader = new StreamReader(fileName, defaultEncodingIfNoBom, true))
  {
      reader.Peek(); // you need this!
      var encoding = reader.CurrentEncoding;
  }

The trick is to use the Peek call; otherwise, .NET has not done anything (and it hasn't read the preamble, the BOM). Of course, if you use any other ReadXXX call before checking the encoding, it works too.

If the file has no BOM, then the defaultEncodingIfNoBom encoding will be used. There is also a StreamReader constructor overload without this argument (in this case, the encoding will by default be set to UTF8 before any read), but I recommend defining what you consider the default encoding in your context.

I have tested this successfully with files with BOM for UTF8, UTF16/Unicode (LE & BE) and UTF32 (LE & BE). It does not work for UTF7.
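
For reference, here is the same trick wrapped as a self-contained helper; this is a sketch only, and the method name is mine:

// requires: using System.IO; using System.Text;
static Encoding DetectEncodingFromBom(string fileName, Encoding defaultEncodingIfNoBom)
{
    using (var reader = new StreamReader(fileName, defaultEncodingIfNoBom, detectEncodingFromByteOrderMarks: true))
    {
        reader.Peek();   // forces the reader to consume the BOM, if any
        return reader.CurrentEncoding;
    }
}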

Kitti answered 22/5, 2015 at 9:59 Comment(13)
I get back what I set as the default encoding. Could I be missing something?Ridgway
@DRAM - this can happen if the file has no BOMKitti
Thanks @Simon Mourier. I didn't expect my pdf / any file would not have a BOM. This link #4520684 might be helpful for someone who tries to detect without a BOM.Ridgway
In powershell I had to run $reader.close(), or else it was locked from writing. foreach($filename in $args) { $reader = [System.IO.StreamReader]::new($filename, [System.Text.Encoding]::default,$true); $peek = $reader.Peek(); $reader.currentencoding | select bodyname,encodingname; $reader.close() }Merwin
@SimonMourier This does not work if encoding of the file is UTF-8 without BOMCentillion
@Centillion - If there's no BOM, you can't guarantee 100% what a file encoding is. That's why I added a defaultEncodingIfNoBom parameter in my sample code. It's up to you to decide what that could be depending on your context. It's often UTF-8 these days.Kitti
@OndraStarenko - ANSI encoding can only be detected as a fallback, because it's "no encoding". .NET code only detects encoding with BOMs.Kitti
This doesn't work for me either; it always returns UTF8, when I KNOW (cause I can see in different editors) that it's not that.Almund
@Almund - A file w/o a BOM can be seen as basically anything (you will get back what you passed as defaultEncodingIfNoBom). The fact that you know is irrelevant, only the binary is. You can post your file somewhere so we can investigate more.Kitti
If you use the overload which does not take a default encoding, UTF-8 will be used (not ANSI): new StreamReader(@"C:\Temp\File without BOM.txt", true).CurrentEncoding.EncodingName returns Unicode (UTF-8)Yearround
@Yearround - When I try that today it indeed returns UTF8 but I'm pretty sure I tried that back then and it was ANSI. I think it's related to Windows version, like Notepad which also used ANSI as default and now uses UTF8 (kimconnect.com/…). Anyway I updated my answer.Kitti
It's in the doc (learn.microsoft.com/en-us/dotnet/api/…) : > StreamReader defaults to UTF-8 encoding unless specified otherwise, instead of defaulting to the ANSI code page for the current system.Yearround
But in this answer (https://mcmap.net/q/11408/-what-character-encoding-is-used-by-streamreader-readtoend) it is said that there was an error in a previous version of the documentationYearround

Providing the implementation details for the steps proposed by @CodesInChaos:

1) Check if there is a Byte Order Mark

2) Check if the file is valid UTF8

3) Use the local "ANSI" codepage (ANSI as Microsoft defines it)

Step 2 works because most non-ASCII sequences in codepages other than UTF8 are not valid UTF8. https://mcmap.net/q/11407/-how-to-detect-the-character-encoding-of-a-text-file explains the tactic in more detail.

using System;
using System.IO;
using System.Text;

// Using encoding from BOM or UTF8 if no BOM found,
// check if the file is valid, by reading all lines
// If decoding fails, use the local "ANSI" codepage

public string DetectFileEncoding(Stream fileStream)
{
    var Utf8EncodingVerifier = Encoding.GetEncoding("utf-8", new EncoderExceptionFallback(), new DecoderExceptionFallback());
    using (var reader = new StreamReader(fileStream, Utf8EncodingVerifier,
           detectEncodingFromByteOrderMarks: true, leaveOpen: true, bufferSize: 1024))
    {
        string detectedEncoding;
        try
        {
            while (!reader.EndOfStream)
            {
                reader.ReadLine();
            }
            detectedEncoding = reader.CurrentEncoding.BodyName;
        }
        catch (Exception)
        {
            // Failed to decode the file using the BOM/UTF-8.
            // Assume it's the local ANSI code page.
            detectedEncoding = "ISO-8859-1";
        }
        // Rewind the stream
        fileStream.Seek(0, SeekOrigin.Begin);
        return detectedEncoding;
    }
}


[Test]
public void Test1()
{
    Stream fs = File.OpenRead(@".\TestData\TextFile_ansi.csv");
    var detectedEncoding = DetectFileEncoding(fs);

    using (var reader = new StreamReader(fs, Encoding.GetEncoding(detectedEncoding)))
    {
        // Consume your file
        var line = reader.ReadLine();
        ...
    }
}
Mitosis answered 7/9, 2018 at 18:9 Comment(5)
Thank you! This solved it for me. But I would prefer to use just reader.Peek() instead of while (!reader.EndOfStream) { var line = reader.ReadLine(); }Aurthur
reader.Peek() doesn't read the whole stream. I found that with larger streams, Peek() was inadequate. I used reader.ReadToEndAsync() instead.Manado
And what is Utf8EncodingVerifier?Mooneye
@PeterMoore It's an encoding for UTF-8, var Utf8EncodingVerifier = Encoding.GetEncoding("utf-8", new EncoderExceptionFallback(), new DecoderExceptionFallback()); It is used in the try block when reading a line. If the encoder fails to parse the provided text (the text is not encoded with UTF-8), Utf8EncodingVerifier will throw. The exception is caught, and we then know the text is not UTF-8 and default to ISO-8859-1Mitosis
I'm getting an exception with "Unable to translate bytes [E9] at index 313 from specified code page to Unicode." when using this code on an easy ANSI file.Almund

Check this.

UDE

This is a port of Mozilla Universal Charset Detector and you can use it like this...

public static void Main(String[] args)
{
    string filename = args[0];
    using (FileStream fs = File.OpenRead(filename)) {
        Ude.CharsetDetector cdet = new Ude.CharsetDetector();
        cdet.Feed(fs);
        cdet.DataEnd();
        if (cdet.Charset != null) {
            Console.WriteLine("Charset: {0}, confidence: {1}", 
                 cdet.Charset, cdet.Confidence);
        } else {
            Console.WriteLine("Detection failed.");
        }
    }
}
Illyria answered 9/10, 2017 at 12:3 Comment(6)
You should know that UDE is GPLTwentyfour
Ok if you are worried about the license then you can use this one. Licensed as MIT and you can use it for both open source and closed source software. nuget.org/packages/SimpleHelpers.FileEncodingFruiter
The license is MPL with a GPL option. The library is subject to the Mozilla Public License Version 1.1 (the "License"). Alternatively, it may be used under the terms of either the GNU General Public License Version 2 or later (the "GPL"), or the GNU Lesser General Public License Version 2.1 or later (the "LGPL").Landwehr
It appears this fork is currently the most active and has a nuget package UDE.Netstandard. github.com/yinyue200/udeLandwehr
Very useful library, it coped with a lot of different and unusual encodings! Thanks!Daffy
That's nice but I refuse to believe a bloated library is needed for something so simple.Mooneye

I'd try the following steps:

1) Check if there is a Byte Order Mark

2) Check if the file is valid UTF8

3) Use the local "ANSI" codepage (ANSI as Microsoft defines it)

Step 2 works because most non-ASCII sequences in codepages other than UTF8 are not valid UTF8.
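
A minimal sketch of step 2, assuming the file is small enough to read fully into memory; it relies on a strict decoder fallback, the same trick the implementation answer earlier on this page uses:

// requires: using System.IO; using System.Text;
byte[] bytes = File.ReadAllBytes(path);   // 'path' is assumed to point at the file
var strictUtf8 = Encoding.GetEncoding("utf-8",
    new EncoderExceptionFallback(), new DecoderExceptionFallback());
try
{
    strictUtf8.GetCharCount(bytes);   // decodes cleanly, so treat the file as UTF-8
}
catch (DecoderFallbackException)
{
    // invalid UTF-8, so fall back to the local ANSI code page (step 3)
}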

Mistymisunderstand answered 29/9, 2010 at 20:26 Comment(8)
This seems like the more correct answer, as the other answer does not work for me. One can do it with File.OpenRead and .Read-ing the first few bytes of the file.Cimbri
Step 2 is a whole bunch of programming work to check the bit patterns, though.Downwash
@Downwash The lazy approach is trying to parse it as UTF-8 and restart from the beginning when you get a decoding error. A bit ugly (exceptions for control flow) and of course the parsing needs to be side-effect free.Mistymisunderstand
I'm not sure decoding actually throws exceptions though, or if it just replaces the unrecognized sequences with '?'. I went with writing a bit pattern checking class, anyway.Downwash
When you create an instance of Utf8Encoding you can pass in an extra parameter that determines if an exception should be thrown or if you prefer silent data corruption.Mistymisunderstand
I like this answer. Most encodings (like 99% of your use cases probably) will be either UTF-8 or ANSI (Windows codepage 1252). You can check if the string contains the replacement character (0xFFFD) to determine if the encoding failed.Araucanian
You just list step 2 like it's simple. It's not.Mooneye
@PeterMoore As long as you're able and willing to restart at the beginning of the file if it's not UTF-8, it's simple.Mistymisunderstand

.NET is not very helpful, but you can try the following algorithm:

  1. try to find the encoding by BOM (byte order mark) ... very likely not to be found
  2. try parsing into different encodings

Here is the call:

var encoding = FileHelper.GetEncoding(filePath);
if (encoding == null)
    throw new Exception("The file encoding is not supported. Please choose one of the following encodings: UTF8/UTF7/iso-8859-1");

Here is the code:

public class FileHelper
{
    /// <summary>
    /// Determines a text file's encoding by analyzing its byte order mark (BOM) and, if none is found, by trying to parse it with different encodings.
    /// Defaults to UTF8 when detection of the text file's endianness fails.
    /// </summary>
    /// <param name="filename">The text file to analyze.</param>
    /// <returns>The detected encoding or null.</returns>
    public static Encoding GetEncoding(string filename)
    {
        var encodingByBOM = GetEncodingByBOM(filename);
        if (encodingByBOM != null)
            return encodingByBOM;

        // BOM not found :(, so try to parse characters into several encodings
        var encodingByParsingUTF8 = GetEncodingByParsing(filename, Encoding.UTF8);
        if (encodingByParsingUTF8 != null)
            return encodingByParsingUTF8;

        var encodingByParsingLatin1 = GetEncodingByParsing(filename, Encoding.GetEncoding("iso-8859-1"));
        if (encodingByParsingLatin1 != null)
            return encodingByParsingLatin1;

        var encodingByParsingUTF7 = GetEncodingByParsing(filename, Encoding.UTF7);
        if (encodingByParsingUTF7 != null)
            return encodingByParsingUTF7;

        return null;   // no encoding found
    }

    /// <summary>
    /// Determines a text file's encoding by analyzing its byte order mark (BOM)  
    /// </summary>
    /// <param name="filename">The text file to analyze.</param>
    /// <returns>The detected encoding.</returns>
    private static Encoding GetEncodingByBOM(string filename)
    {
        // Read the BOM
        var byteOrderMark = new byte[4];
        using (var file = new FileStream(filename, FileMode.Open, FileAccess.Read))
        {
            file.Read(byteOrderMark, 0, 4);
        }

        // Analyze the BOM
        if (byteOrderMark[0] == 0x2b && byteOrderMark[1] == 0x2f && byteOrderMark[2] == 0x76) return Encoding.UTF7;
        if (byteOrderMark[0] == 0xef && byteOrderMark[1] == 0xbb && byteOrderMark[2] == 0xbf) return Encoding.UTF8;
        if (byteOrderMark[0] == 0xff && byteOrderMark[1] == 0xfe && byteOrderMark[2] == 0 && byteOrderMark[3] == 0) return Encoding.UTF32; //UTF-32LE (must be checked before UTF-16LE, which shares the first two bytes)
        if (byteOrderMark[0] == 0xff && byteOrderMark[1] == 0xfe) return Encoding.Unicode; //UTF-16LE
        if (byteOrderMark[0] == 0xfe && byteOrderMark[1] == 0xff) return Encoding.BigEndianUnicode; //UTF-16BE
        if (byteOrderMark[0] == 0 && byteOrderMark[1] == 0 && byteOrderMark[2] == 0xfe && byteOrderMark[3] == 0xff) return new UTF32Encoding(true, true); //UTF-32BE (Encoding.UTF32 is little-endian)

        return null;    // no BOM found
    }

    private static Encoding GetEncodingByParsing(string filename, Encoding encoding)
    {            
        var encodingVerifier = Encoding.GetEncoding(encoding.BodyName, new EncoderExceptionFallback(), new DecoderExceptionFallback());

        try
        {
            using (var textReader = new StreamReader(filename, encodingVerifier, detectEncodingFromByteOrderMarks: true))
            {
                while (!textReader.EndOfStream)
                {                        
                    textReader.ReadLine();   // in order to increment the stream position
                }

                // all text parsed ok
                return textReader.CurrentEncoding;
            }
        }
        catch (Exception) { }   // this encoding failed to parse the file

        return null;    // no valid encoding found
    }
}
Erbil answered 9/7, 2019 at 12:4 Comment(0)

The solution proposed by @nonoandy is really interesting; I have successfully tested it and it seems to work perfectly.

The NuGet package needed is Microsoft.ProgramSynthesis.Detection (version 8.17.0 at the moment).

I suggest using EncodingTypeUtils.GetDotNetName instead of a switch for getting the Encoding instance:

using System.Text;
using Microsoft.ProgramSynthesis.Detection.Encoding;

...

public Encoding? DetectEncoding(Stream stream)
{
    try
    {
        if (stream.CanSeek)
        {
            // Read from the beginning if possible
            stream.Seek(0, SeekOrigin.Begin);
        }

        // Detect encoding type (enum)
        var encodingType = EncodingIdentifier.IdentifyEncoding(stream);
        
        // Get the corresponding encoding name to be passed to System.Text.Encoding.GetEncoding
        var encodingDotNetName = EncodingTypeUtils.GetDotNetName(encodingType);

        if (!string.IsNullOrEmpty(encodingDotNetName))
        {
            return Encoding.GetEncoding(encodingDotNetName);
        }
    }
    catch (Exception e)
    {
        // Handle exception (log, throw, etc...)
    }

    // In case of error return null or a default value
    return null;
}
Rox answered 27/12, 2022 at 12:57 Comment(0)

Look here for C#:

https://msdn.microsoft.com/en-us/library/system.io.streamreader.currentencoding%28v=vs.110%29.aspx

string path = @"path\to\your\file.ext";

using (StreamReader sr = new StreamReader(path, true))
{
    while (sr.Peek() >= 0)
    {
        Console.Write((char)sr.Read());
    }

    //Test for the encoding after reading, or at least
    //after the first read.
    Console.WriteLine("The encoding used was {0}.", sr.CurrentEncoding);
    Console.ReadLine();
    Console.WriteLine();
}
Optimum answered 28/6, 2016 at 13:31 Comment(0)

The following is my PowerShell code to determine whether some .cpp, .h, or .ml files are encoded in ISO-8859-1 (Latin-1) or UTF-8 without BOM; if neither, it assumes GB18030. I am Chinese and work in France; MSVC saves files as Latin-1 on a French computer and as GB on a Chinese computer, so this helps me avoid encoding problems when exchanging source files between my system and my colleagues'.

The approach is simple: if all characters are within x00-x7E, then ASCII, UTF-8 and Latin-1 are all the same. But if I read a non-ASCII file as UTF-8, the special replacement character � shows up, so I try reading it as Latin-1 instead. In Latin-1, the range between \x7F and \xAF is empty, while GB uses the full range x00-xFF; so if I find anything between the two, the file is not Latin-1.

The code is written in PowerShell, but it uses .NET, so it's easy to translate into C# or F# (a rough C# sketch follows the script).

$Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding($False)
foreach($i in Get-ChildItem .\ -Recurse -include *.cpp,*.h, *.ml) {
    $openUTF = New-Object System.IO.StreamReader -ArgumentList ($i, [Text.Encoding]::UTF8)
    $contentUTF = $openUTF.ReadToEnd()
    [regex]$regex = '�'
    $c=$regex.Matches($contentUTF).count
    $openUTF.Close()
    if ($c -ne 0) {
        $openLatin1 = New-Object System.IO.StreamReader -ArgumentList ($i, [Text.Encoding]::GetEncoding('ISO-8859-1'))
        $contentLatin1 = $openLatin1.ReadToEnd()
        $openLatin1.Close()
        [regex]$regex = '[\x7F-\xAF]'
        $c=$regex.Matches($contentLatin1).count
        if ($c -eq 0) {
            [System.IO.File]::WriteAllLines($i, $contentLatin1, $Utf8NoBomEncoding)
            $i.FullName
        } 
        else {
            $openGB = New-Object System.IO.StreamReader -ArgumentList ($i, [Text.Encoding]::GetEncoding('GB18030'))
            $contentGB = $openGB.ReadToEnd()
            $openGB.Close()
            [System.IO.File]::WriteAllLines($i, $contentGB, $Utf8NoBomEncoding)
            $i.FullName
        }
    }
}
Write-Host -NoNewLine 'Press any key to continue...';
$null = $Host.UI.RawUI.ReadKey('NoEcho,IncludeKeyDown');
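
As noted above, the heuristic translates readily to C#. A rough sketch (the method name is mine; on .NET Core, GB18030 requires registering CodePagesEncodingProvider first):

// requires: using System.IO; using System.Text;
static Encoding GuessEncoding(string path)
{
    // Read as UTF-8 first; invalid byte sequences decode to U+FFFD replacement characters
    string utf8Text = File.ReadAllText(path, new UTF8Encoding(false));
    if (utf8Text.IndexOf('\uFFFD') < 0)
        return new UTF8Encoding(false);

    // Not valid UTF-8: re-read as Latin-1 and look for code points in the \x7F-\xAF gap
    var latin1 = Encoding.GetEncoding("ISO-8859-1");
    string latin1Text = File.ReadAllText(path, latin1);
    foreach (char c in latin1Text)
        if (c >= '\x7F' && c <= '\xAF')
            return Encoding.GetEncoding("GB18030");   // gap bytes present, so assume GB

    return latin1;
}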
Arvid answered 15/6, 2017 at 14:21 Comment(0)

This seems to work well.

First create a helper method:

private static Encoding TestCodePage(Encoding testCode, byte[] byteArray)
{
    try
    {
        var encoding = Encoding.GetEncoding(testCode.CodePage, EncoderFallback.ExceptionFallback, DecoderFallback.ExceptionFallback);
        encoding.GetCharCount(byteArray);   // throws if the bytes are not valid in this encoding
        return testCode;
    }
    catch (Exception)
    {
        return null;
    }
}

Then create code to test the source. In this case, I've got a byte array I need to get the encoding of:

public static Encoding DetectCodePage(byte[] contents)
{
    if (contents == null || contents.Length == 0)
    {
        return Encoding.Default;
    }

    return TestCodePage(Encoding.UTF8, contents)
           ?? TestCodePage(Encoding.Unicode, contents)
           ?? TestCodePage(Encoding.BigEndianUnicode, contents)
           ?? TestCodePage(Encoding.GetEncoding(1252), contents) // Western European
           ?? TestCodePage(Encoding.GetEncoding(28591), contents) // ISO Western European
           ?? TestCodePage(Encoding.ASCII, contents)
           ?? TestCodePage(Encoding.Default, contents); // system default (ANSI code page on .NET Framework)
}
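
A hypothetical call site (the path is made up):

byte[] contents = File.ReadAllBytes(@"C:\temp\sample.txt");
Encoding encoding = DetectCodePage(contents);
Console.WriteLine(encoding.EncodingName);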
Hwang answered 18/4, 2021 at 22:33 Comment(2)
Hmmm... this returns Unicode for me when the file is ANSIChunk
@colmde Let us know if you were able to solve the issue!Hwang

I have tried a few different ways to detect encodings and hit issues with most of them.

I made the following, leveraging a Microsoft NuGet package, and it seems to work for me so far, but it needs a lot more testing.
Most of my testing has been on UTF-8, UTF-8 with BOM, and ANSI.

static void Main(string[] args)
{
    var path = Directory.GetCurrentDirectory() + "\\TextFile2.txt";
    List<string> contents = File.ReadLines(path, GetEncoding(path)).Where(w => !string.IsNullOrWhiteSpace(w)).ToList();

    int i = 0;
    foreach (var line in contents)
    {
        i++;
        Console.WriteLine(line);
        if (i > 100)
            break;
    }

}


public static Encoding GetEncoding(string filename)
{
    using (var file = new FileStream(filename, FileMode.Open, FileAccess.Read))
    {
        var detectedEncoding = Microsoft.ProgramSynthesis.Detection.Encoding.EncodingIdentifier.IdentifyEncoding(file);
        switch (detectedEncoding)
        {
            case Microsoft.ProgramSynthesis.Detection.Encoding.EncodingType.Utf8:
                return Encoding.UTF8;
            case Microsoft.ProgramSynthesis.Detection.Encoding.EncodingType.Utf16Be:
                return Encoding.BigEndianUnicode;
            case Microsoft.ProgramSynthesis.Detection.Encoding.EncodingType.Utf16Le:
                return Encoding.Unicode;
            case Microsoft.ProgramSynthesis.Detection.Encoding.EncodingType.Utf32Le:
                return Encoding.UTF32;
            case Microsoft.ProgramSynthesis.Detection.Encoding.EncodingType.Ascii:
                return Encoding.ASCII;
            case Microsoft.ProgramSynthesis.Detection.Encoding.EncodingType.Iso88591:
            case Microsoft.ProgramSynthesis.Detection.Encoding.EncodingType.Unknown:
            case Microsoft.ProgramSynthesis.Detection.Encoding.EncodingType.Windows1252:
            default:
                return Encoding.Default;
        }
    }
}
Socialization answered 18/6, 2022 at 3:30 Comment(0)

It may be useful:

string path = @"address/to/the/file.extension";

using (StreamReader sr = new StreamReader(path))
{ 
    Console.WriteLine(sr.CurrentEncoding);                        
}
Tootsie answered 11/6, 2019 at 19:48 Comment(1)
Regardless of the encoding of a file, this always evaluates to UTF-8, at least on my environment.Moselle
