How can I programmatically (not using vi) convert DOS/Windows newlines to Unix newlines?

The dos2unix and unix2dos commands are not available on certain systems. How can I emulate them with commands such as sed, awk, and tr?
You can use tr to convert from DOS to Unix; however, you can only do this safely if CR appears in your file only as the first byte of a CRLF byte pair. This is usually the case. You then use:

tr -d '\015' <DOS-file >UNIX-file

Note that the name DOS-file is different from the name UNIX-file; if you try to use the same name twice, you will end up with no data in the file.

You can't do it the other way round (with standard tr), because tr can only delete characters or map one character to another; it cannot replace one character with a two-character sequence.
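The safety condition above (CR appearing only as the first byte of a CRLF pair) can be checked before converting. A minimal sketch, assuming the sample file name dos.txt (the file and its contents are illustrative, not from the answer):

```shell
# Create a sample DOS-format file (illustrative only).
printf 'one\r\ntwo\r\n' > dos.txt

# Count all CR bytes, and count lines ending in CR (i.e. CRLF pairs).
crs=$(tr -dc '\r' < dos.txt | wc -c)
crlfs=$(grep -c $'\r$' dos.txt)

# If every CR is part of a CRLF pair, deleting CRs is safe.
if [ "$crs" -eq "$crlfs" ]; then
    tr -d '\015' < dos.txt > unix.txt
fi
```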
If you know how to enter carriage return into a script (Control-V, Control-M to enter Control-M), then:

sed 's/^M$//'     # DOS to Unix
sed 's/$/^M/'     # Unix to DOS

where the '^M' is the Control-M character. You can also use the bash ANSI-C Quoting mechanism to specify the carriage return:

sed $'s/\r$//'    # DOS to Unix
sed $'s/$/\r/'    # Unix to DOS
However, if you're going to have to do this very often (more than once, roughly speaking), it is far more sensible to install the conversion programs (e.g. dos2unix and unix2dos, or perhaps dtou and utod) and use them.

If you need to process entire directories and subdirectories, you can use zip:

zip -r -ll zipfile.zip somedir/
unzip zipfile.zip

This will create a zip archive with line endings changed from CRLF to LF. unzip will then put the converted files back in place (and ask you file by file; you can answer Yes-to-all). Credits to @vmsnomad for pointing this out.
tr -d '\015' <DOS-file >UNIX-file where DOS-file == UNIX-file just results in an empty file. The output file has to be a different file, unfortunately. – Generous

The sed option -i (for in-place) works; the limits are linked files and symlinks. The sort command has 'always' (since 1979, if not earlier) supported the -o option, which can list one of the input files. However, that is in part because sort must read all its input before it can write any of its output. Other programs sporadically support overwriting one of their input files. You can find a general-purpose program (script) to avoid problems in 'The UNIX Programming Environment' by Kernighan & Pike. – Hin

sed -i $'s/\r$//' filename - to edit in place. I am working on a machine that does not have access to the internet, so software installation is a problem. – Vitalism

tr -d '\015' < original_file > t && mv t original_file - basically works by creating a temp file, then overwriting the old one with it. – Zante

sed does not (by default; not sure you can change this?) recognise the escaped versions \r, \015, \x0d for carriage return; sed does recognise CR when entered with Ctrl-V Ctrl-M as described above (👍😊), which is OK for the command line; for scripts try sed "s/$(printf '\r')$//" (hat tip @twm), or fall back to tr, which recognises \r and \015. – Gillian

sponge can be found in moreutils: tr -d '\015' < original_file | sponge original_file. I use it daily. – Loophole

You can use find to identify the files that need changing (or otherwise create a list of file names; one hopes they don't have spaces and other unruly punctuation in the names) and then apply the script to the files. Using find … -exec sh script.sh {} + is pretty effective. The alternatives are legion. The find technique works with absurd names. – Hin

If you apply sed $'s/$/\r/' twice, it will have the CR twice. For scripting solutions I recommend the following: sed 's/^$/\r/;s/\([^\r]\)$/\1\r/g' For simplicity, I would state this as a third way to make the original point. – Avi

The zip method works really fine. But one thing that can be good to know is that zip preserves timestamps by default. Sometimes that's good, but if you want new timestamps, use -DD for folders and files; -D is only folders. On VMS it sets timestamps for folders by default, and -D is folders and files. – Mantle

You can use Vim programmatically with the option -c {command}:

DOS to Unix:

vim file.txt -c "set ff=unix" -c ":wq"

Unix to DOS:

vim file.txt -c "set ff=dos" -c ":wq"

"set ff=unix/dos" means change the fileformat (ff) of the file to the Unix/DOS end-of-line format; ":wq" means write the file to disk and quit the editor (which allows using the command in a loop).
Use:

tr -d "\r" < file

Take a look here for examples using sed:

# In a Unix environment: convert DOS newlines (CR/LF) to Unix format.
sed 's/.$//'             # Assumes that all lines end with CR/LF
sed 's/^M$//'            # In Bash/tcsh, press Ctrl-V then Ctrl-M
sed 's/\x0D$//'          # Works on ssed, gsed 3.02.80 or higher

# In a Unix environment: convert Unix newlines (LF) to DOS format.
sed "s/$/`echo -e \\\r`/"            # Command line under ksh
sed 's/$'"/`echo \\\r`/"             # Command line under bash
sed "s/$/`echo \\\r`/"               # Command line under zsh
sed 's/$/\r/'                        # gsed 3.02.80 or higher

Use sed -i for in-place conversion, e.g., sed -i 's/..../' file.
For a file whose lines end in \r only: tr "\r" "\n" < infile > outfile – Roswell

tr -d is featured more frequently and will not help in the "only \r" situation. – Rhnegative

The \r to \n mapping has the effect of double-spacing the files; each single CRLF line ending in DOS becomes \n\n in Unix. – Hin

Install dos2unix, then convert a file in-place with:
dos2unix <filename>

To output converted text to a different file, use:

dos2unix -n <input-file> <output-file>

You can install it on Ubuntu or Debian with:

sudo apt install dos2unix

or on macOS using Homebrew:

brew install dos2unix
Using AWK you can do:
awk '{ sub("\r$", ""); print }' dos.txt > unix.txt
Using Perl you can do:
perl -pe 's/\r$//' < dos.txt > unix.txt
…awk solution. – Piatt

This problem can be solved with standard tools, but there are sufficiently many traps for the unwary that I recommend you install the flip command, which was written over 20 years ago by Rahul Dhesi, the author of zoo. It does an excellent job converting file formats while, for example, avoiding the inadvertent destruction of binary files, which is a little too easy if you just race around altering every CRLF you see...
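flip may not be installable everywhere, but the safety idea it implements can be roughly sketched with standard tools: use file(1) as a crude text/binary heuristic and convert only files it reports as text. This is an illustrative assumption about the approach, not how flip itself works, and the file names are hypothetical:

```shell
# Sample files: one binary (contains a NUL byte), one text with CRLF endings.
printf 'BIN\0\r\n' > sample.bin
printf 'hello\r\n' > sample.txt

for f in sample.bin sample.txt; do
    # file -b prints a brief description; it mentions "text" only for
    # text-like contents, so binaries are skipped rather than mangled.
    if file -b -- "$f" | grep -q 'text'; then
        tr -d '\r' < "$f" > "$f.unix"
    fi
done
```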
If you don't have access to dos2unix, but can read this page, then you can copy/paste dos2unix.py from here.

#!/usr/bin/env python
"""\
convert dos linefeeds (crlf) to unix (lf)
usage: dos2unix.py <input> <output>
"""
import sys

if len(sys.argv[1:]) != 2:
    sys.exit(__doc__)

content = b''
outsize = 0
with open(sys.argv[1], 'rb') as infile:
    content = infile.read()
with open(sys.argv[2], 'wb') as output:
    for line in content.splitlines():
        outsize += len(line) + 1
        output.write(line + b'\n')

print("Done. Saved %s bytes." % (len(content) - outsize))

(Cross-posted from Super User.)
dos2unix converts all input files by default. Your usage implies the -n parameter. And the real dos2unix is a filter that reads from stdin and writes to stdout if no files are given. – Galicia

…python -- they apparently can't be bothered with backward compatibility, so it is python2 or python3 or ... – Permenter

The solutions posted so far only deal with part of the problem, converting DOS/Windows' CRLF into Unix's LF; the part they're missing is that DOS uses CRLF as a line separator, while Unix uses LF as a line terminator. The difference is that a DOS file (usually) won't have anything after the last line in the file, while a Unix file will. To do the conversion properly, you need to add that final LF (unless the file is zero-length, i.e. has no lines in it at all). My favorite incantation for this (with a little added logic to handle Mac-style CR-separated files, and not molest files that are already in Unix format) is a bit of Perl:

perl -pe 'if ( s/\r\n?/\n/g ) { $f=1 }; if ( $f || ! $m ) { s/([^\n])\z/$1\n/ }; $m=1' PCfile.txt

Note that this sends the Unixified version of the file to stdout. If you want to replace the file with a Unixified version, add Perl's -i flag.
It is super duper easy with PCRE!

As a script, or replace $@ with your files:

#!/usr/bin/env bash
perl -pi -e 's/\r\n/\n/g' -- "$@"

This will overwrite your files in place!

I recommend only doing this with a backup (version control or otherwise).

I chose this solution because it's easy to understand and adapt for me. FYI, this is what the switches do: -p assumes a "while input" loop, -i edits the input file in place, -e executes the following command. – Outgroup

I had just pondered that same question (on the Windows side, but equally applicable to Linux).
Surprisingly, nobody has mentioned a very much automated way of doing CRLF <-> LF conversion for text files: the good old zip -ll option (Info-ZIP):

zip -ll textfiles-lf.zip files-with-crlf-eol.*
unzip textfiles-lf.zip

NOTE: this creates a ZIP file preserving the original file names, but converting the line endings to LF. unzip then extracts the files as zip'ed, that is, with their original names (but with LF endings), thus prompting to overwrite the local original files, if any.

The relevant excerpt from zip --help:

zip --help
...
-l   convert LF to CR LF (-ll CR LF to LF)

Interestingly, in my Git Bash on Windows, sed ""
did the trick already:

$ echo -e "abc\r" > tst.txt
$ file tst.txt
tst.txt: ASCII text, with CRLF line terminators
$ sed -i "" tst.txt
$ file tst.txt
tst.txt: ASCII text

My guess is that sed ignores the CRs when reading lines from the input and always writes Unix line endings to the output.

…sed "" will not do the trick, though. – Permittivity

An even simpler AWK solution, without a program:
awk -v ORS='\r\n' '1' unix.txt > dos.txt
Technically, '1' is your program, because AWK requires one when given the option.
Alternatively, an internal solution is:
while IFS= read -r line;
do printf '%s\n' "${line%$'\r'}";
done < dos.txt > unix.txt
awk -v RS='\r\n' '1' dos.txt > unix.txt – Euphemize

…the awk or sed solution. Also, you must use while IFS= read -r line to faithfully preserve the input lines; otherwise leading and trailing whitespace is trimmed (alternatively, use no variable name in the read command and work with $REPLY). – Piatt

…$READ), read just splits on line endings, and you can just use echo instead of printf (echo is more likely to be a builtin, and it's generally faster). So, using Ctrl-V+Ctrl-M to type the \r, one can simply do while read -r; do echo "${REPLY%^M}"; done < file > file.fixed and it's about the same speed as sed. – Insipience

For Mac OS X, if you have Homebrew installed (http://brew.sh/):
brew install dos2unix
for csv in *.csv; do dos2unix -c mac "${csv}"; done

Make sure you have made copies of the files, as this command will modify the files in place.

The -c mac option makes the switch compatible with OS X.

…-c mac, which is for converting pre-OS X CR-only newlines. You want to use that mode only for files to and from Mac OS 9 or before. – Marylouisemaryly

sed -i.bak --expression='s/\r\n/\n/g' <file_path>
Since the question mentions sed, this is the most straightforward way to use sed to achieve this. The expression says replace all carriage-returns and line-feeds with just line-feeds only. That is what you need when you go from Windows to Unix. I verified it works.
…--in-place mydosfile.txt to the end (or piping to a file). The end result was that the file still had CRLF. I was testing on a Graviton (AArch64) EC2 instance. – Nowt

sed 's/\r\n/\n/g' does not match anything. Refer to can-sed-replace-new-line-characters. – Thebes

Just complementing @Jonathan Leffler's excellent answer: if you have a file with mixed line endings (LF and CRLF) and you need to normalize to CRLF (DOS), use the following commands in sequence...
# DOS to Unix
sed -i $'s/\r$//' "<YOUR_FILE>"
# Unix to DOS (normalized)
sed -i $'s/$/\r/' "<YOUR_FILE>"
NOTE: If you have a file with mixed line endings (LF and CRLF), the second command above alone will cause a mess.
If you need to convert to LF (Unix), the first command alone will be enough...
# DOS to Unix
sed -i $'s/\r$//' "<YOUR_FILE>"
Thanks! 🤗
[Ref(s).: https://mcmap.net/q/81620/-how-can-i-normalize-the-eol-character-in-java ]
perl -pe 's/\r\n/\n/; s/([^\n])\z/$1\n/ if eof' PCfile.txt
Based on Gordon Davisson's answer.
One must consider the possibility of [noeol]...
You can use AWK. Set the record separator (RS) to a regular expression that matches all possible newline characters, and set the output record separator (ORS) to the Unix-style newline character.

awk 'BEGIN{RS="\r|\n|\r\n|\n\r";ORS="\n"}{print}' windows_or_macos.txt > unix.txt

…(git diff shows ^M, edited in vim) – Beaverbrook

awk 'BEGIN{RS="\r\n";ORS=""}{print}' dosfile > unixfile fixed that issue, but it still does not fix the missing EOL on the last line. – Permenter

This worked for me:
tr "\r" "\n" < sampledata.csv > sampledata2.csv
On Linux, it's easy to convert ^M (Ctrl+M) to *nix newlines (^J) with sed.

It will be something like this on the CLI, and there will actually be a line break in the text. However, the \ passes that ^J along to sed:

sed 's/^M/\
/g' < ffmpeg.log > new.log

You get this by using ^V (Ctrl+V), ^M (Ctrl+M) and \ (backslash) as you type:

sed 's/^V^M/\^V^J/g' < ffmpeg.log > new.log
As an extension to Jonathan Leffler's Unix to DOS solution, to safely convert to DOS when you're unsure of the file's current line endings:
sed '/^M$/! s/$/^M/'
This checks that the line does not already end in CRLF before converting to CRLF.
I made a script based on the accepted answer, so you can convert files directly without being left with a second file to remove and rename afterwards.

convert-crlf-to-lf() {
    file="$1"
    tr -d '\015' <"$file" >"$file"2
    rm -f "$file"
    mv "$file"2 "$file"
}

Just make sure that if you have a file like "file1.txt", a "file1.txt2" doesn't already exist, or it will be overwritten. I use it as a temporary place to store the file.
With Bash 4.2 and newer you can use something like this to strip the trailing CR, which only uses Bash built-ins:
if [[ "${str: -1}" == $'\r' ]]; then
str="${str:: -1}"
fi
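A complete file-to-file loop built on that check might look like the following sketch (assuming Bash 4.2+; the || [ -n "$str" ] clause also copes with a missing final newline, and the file names are illustrative):

```shell
printf 'a\r\nb\r\n' > dos.txt       # sample input (illustrative)

while IFS= read -r str || [ -n "$str" ]; do
    # Strip one trailing CR, if present, using only Bash built-ins.
    if [[ "${str: -1}" == $'\r' ]]; then
        str="${str:: -1}"
    fi
    printf '%s\n' "$str"
done < dos.txt > unix.txt
```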
I tried
sed 's/^M$//' file.txt
on OS X as well as several other methods (Fixing Dos Line Endings or http://hintsforums.macworld.com/archive/index.php/t-125.html). None worked, and the file remained unchanged (by the way, Ctrl + V, Enter was needed to reproduce ^M
). In the end I used TextWrangler. It's not strictly command line, but it works and it doesn't complain.
…dos2unix using your package manager; it really is much simpler and does exist on most platforms. – Eurypterid