How can I delete a newline if it is the last character in a file?
C

23

181

I have some files from which I'd like to delete the final newline, if a newline is the last character in the file. od -c shows me that the command I run does write the file with a trailing newline:

0013600   n   t  >  \n

I've tried a few tricks with sed but the best I could think of isn't doing the trick:

sed -e '$s/\(.*\)\n$/\1/' abc

Any ideas how to do this?

Cyme answered 31/10, 2009 at 10:42 Comment(8)
newline is only one character for unix newlines. DOS newlines are two characters. Of course, literal "\n" is two characters. Which are you actually looking for?Phidippides
Although the representation might be \n, in Linux it is one characterScaly
Can you elaborate on why you want to do this? Text files are supposed to end with an end-of-line, unless they are entirely empty. It seems strange to me that you'd want to have such a truncated file?Pint
The usual reason for doing something like this is to delete a trailing comma from the last line of a CSV file. Sed works well, but newlines have to be treated differently.Scaly
Yeah this is for Linux so thanks for correcting that newline is just one character. Fixed in post.Gaily
Please never delete the final newline in a file of newline-terminated lines. It screws up all kinds of things.Cavanagh
@ThomasPadron-McCarthy "In computing, for every good reason there is to do something there exists a good reason not to do it and vice versa." -Jesus -- "you shouldn't do that" is a horrible answer no matter the question. The correct format is: [how to do it] but [why it may be a bad idea]. #sacrilegeTorrance
One reason to remove the trailing newline is if you're piping the string to somewhere else, and you can't have a trailing newline.Lamed
S
241
perl -pe 'chomp if eof' filename >filename2

or, to edit the file in place:

perl -pi -e 'chomp if eof' filename

[Editor's note: -pi -e was originally -pie, but, as noted by several commenters and explained by @hvd, the latter doesn't work.]

This was described as a 'perl blasphemy' on the awk website I saw.

But, in a test, it worked.
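
For example, a quick sanity check (the file name here is just a throwaway example):

printf 'foo\nbar\n' > test.txt
perl -pe 'chomp if eof' test.txt | od -c    # the output ends in "b   a   r" with no trailing \n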

Scaly answered 31/10, 2009 at 10:55 Comment(12)
You can make it safer by using chomp. And it beats slurping the file.Oar
Blasphemy though it is, it works very well. perl -i -pe 'chomp if eof' filename. Thank you.Gaily
The funny thing about blasphemy and heresy is it's usually hated because it's correct. :)Bunn
It's not pretty, but it works. Give it up for the Swiss Army chainsaw.Glace
Small correction: you can use perl -pi -e 'chomp if eof' filename, to edit a file in-place instead of creating a temporary fileShirt
perl -pie 'chomp if eof' filename -> Can't open perl script "chomp if eof": No such file or directory; perl -pi -e 'chomp if eof' filename -> worksUnpolite
Why is -pie "blasphemy," and why doesn't it behave the same as -pi -e? Anyone know?Doble
@KyleStrand I don't know about the "blasphemy" part other than perhaps the mere fact of recommending perl could be considered blasphemy on an awk website, but the reason -pie and -pi -e don't work the same way is that the -i option takes an optional argument. -pie uses e as the argument to -i, specifying the backup suffix, and then interprets 'chomp if eof' as a filename, since it isn't preceded by an -e option. -pi -e omits the argument for -i, and allows -e to be treated as an option.Senility
This also turns \r\n to \nCammie
isn't all of Perl blasphemy? ;)Bathsheeb
cat foo | perl -pe 'chomp if eof' removes the newline from foo, but git status still reports a diff. Maybe the file was \r\n and perl just removes the \n?Elaterium
@Bunn A fun read about thatOrchidectomy
V
69

You can take advantage of the fact that shell command substitutions remove trailing newline characters:

Simple form that works in bash, ksh, zsh:

printf %s "$(< in.txt)" > out.txt

Portable (POSIX-compliant) alternative (slightly less efficient):

printf %s "$(cat in.txt)" > out.txt
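
For example, a quick way to see the effect (in.txt / out.txt are just placeholder names):

printf 'a\nb\n' > in.txt
printf %s "$(< in.txt)" > out.txt
wc -c in.txt out.txt    # out.txt is one byte shorter: the trailing newline is gone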

Note: with either form the entire input file is read into memory, and command substitution strips all trailing newlines, not just the last one.

A guide to the other answers:

  • If Perl is available, go for the accepted answer - it is simple and memory-efficient (doesn't read the whole input file at once).

  • Otherwise, consider ghostdog74's Awk answer - it's obscure, but also memory-efficient; a more readable equivalent (POSIX-compliant) is:

  • awk 'NR > 1 { print prev } { prev=$0 } END { ORS=""; print }' in.txt

  • Printing is delayed by one line so that the final line can be handled in the END block, where it is printed without a trailing \n due to setting the output-record separator (ORS) to an empty string.

  • If you want a verbose, but fast and robust solution that truly edits in-place (as opposed to creating a temp. file that then replaces the original), consider jrockway's Perl script.

Vesicant answered 27/8, 2012 at 19:55 Comment(0)
L
57

You can do this with head from GNU coreutils; it supports counts relative to the end of the file. So to leave off the last byte, use:

head -c -1

To test for an ending newline you can use tail and wc. The following example saves the result to a temporary file and subsequently overwrites the original:

if [[ $(tail -c1 file | wc -l) == 1 ]]; then
  head -c -1 file > file.tmp
  mv file.tmp file
fi

You could also use sponge from moreutils to do "in-place" editing:

[[ $(tail -c1 file | wc -l) == 1 ]] && head -c -1 file | sponge file

You can also make a general reusable function by stuffing this in your .bashrc file:

# Example:  remove-last-newline < multiline.txt
function remove-last-newline(){
    local file=$(mktemp)
    cat > "$file"
    if [[ $(tail -c1 "$file" | wc -l) == 1 ]]; then
        head -c -1 "$file" > "$file.tmp"
        mv "$file.tmp" "$file"
    fi
    cat "$file"
}

Update

As noted by KarlWilbur in the comments and used in Sorentar's answer, truncate --size=-1 can replace head -c-1 and supports in-place editing.
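
For example, the sponge one-liner above could then be written as an in-place sketch (assuming GNU coreutils truncate):

[[ $(tail -c1 file | wc -l) == 1 ]] && truncate -s -1 file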

Lamonica answered 25/9, 2012 at 9:2 Comment(4)
Best solution of all so far. Uses a standard tool that really every Linux distribution has, and is concise and clear, without any sed or perl wizardry.Xylography
Nice solution. One change is that I think I'd use truncate --size=-1 instead of head -c -1 since it just resizes the input file rather than reading in the input file, writing it out to another file, then replacing the original with the output file.Timtima
Note that head -c -1 will remove the last character regardless if it is a newline or not, that's why you have to check whether the last character is a newline before you remove it.Lamed
Unfortunately does not work on Mac. I suspect it doesn't work on any BSD variant.Acidulous
P
20
head -n -1 abc > newfile
tail -n 1 abc | tr -d '\n' >> newfile

Edit 2:

Here is an awk version (corrected) that doesn't accumulate a potentially huge array:

awk '{if (line) print line; line=$0} END {printf $0}' abc

Phidippides answered 31/10, 2009 at 10:59 Comment(9)
Good original way to think about it. Thanks Dennis.Gaily
You are correct. I defer to your awk version. It takes two offsets (and a different test) and I only used one. However, you could use printf instead of ORS.Phidippides
you can make the output a pipe with process substitution: head -n -1 abc | cat <(tail -n 1 abc | tr -d '\n') | ...Heinrick
@BCoates: That doesn't do the same thing. Yours only gives the last line (without a newline). The OP wants the whole file with only the last newline removed. Your pipeline would work like this: head -n -1 ifscomma && cat <(tail -n 1 ifscomma | tr -d '\n') or head -n -1 ifscomma | cat - <(tail -n 1 ifscomma | tr -d '\n'). In the latter one, the hyphen causes cat to concatenate what comes across the pipe with the output of the process substitution. Otherwise, the output of head would be ignored.Phidippides
I forgot to edit the name of the file in my previous comment to change it from the test file I was using to the sample name "abc": s/ifscomma/abc/gPhidippides
This worked faster than the perl command on a 1MB file for me. Great thanks!Tresa
Using -c instead of -n for head and tail should be even faster.Midi
For me, head -n -1 abc removed the last actual line of the file, leaving a trailing newline; head -c -1 abc seemed to work betterSkewback
just another variant on the "one liner" with cat examples show in the comments (no pipe necessary, using only process substitution, feel free to change the head or tail options as necessary): cat <(head -n -1 abc) <(tail -n 1 abc | tr -d '\n')Vogel
A
11

gawk

awk '{q=p;p=$0}NR>1{print q}END{ORS = ""; print p}' file
Andesite answered 31/10, 2009 at 11:21 Comment(4)
Still looks like a lot of characters to me... learning it slowly :). Does the job though. Thanks ghostdog.Gaily
awk '{ prev_line = line; line = $0; } NR > 1 { print prev_line; } END { ORS = ""; print line; }' file this should be easier to read.Zante
How about: awk 'NR>1 {print p} {p=$0} END {printf $0}' file.Ahl
@sorontar The first argument to printf is the format argument. Thus if the input file had something that could be interpreted as a format specifier like %d, you'd get an error. A fix would be to change it to printf "%s" $0Crawley
A
9

A fast solution is to use the GNU utility truncate:

[ -z $(tail -c1 file) ] && truncate -s-1 file

The test will be true if the file does have a trailing newline.

The removal is very fast and truly in place: no new file is needed, and the check reads only one byte from the end of the file (tail -c1).
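
One caveat worth hedging on: because the command substitution is unquoted, a file whose last character is a space or tab would also pass the test (word splitting discards the whitespace) and lose that character. A slightly safer variant:

[ -z "$(tail -c1 file)" ] && truncate -s -1 file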

Ahl answered 13/11, 2016 at 1:55 Comment(2)
truncate: missing file operandBegone
it's just missing the trailing filename in the example, i.e., [ -z $(tail -c1 filename) ] && truncate -s -1 filename (also, in reply to the other comment, the truncate command does not work with stdin, a filename is required)Vogel
D
8

A very simple method for single-line files, requiring GNU echo from coreutils:

/bin/echo -n $(cat $file)
Doubletime answered 14/6, 2016 at 22:24 Comment(4)
This is a decent way if it's not too expensive (repetitive).Protect
This has issues when \n is present, as it gets converted to a newline.Corroboree
Also seems to work for multi-line files if the $(...) is quotedLamonica
definitely need to quote that... /bin/echo -n "$(cat infile)" Also, I'm not sure what the max len of echo or the shell would be across os/shell versions/distros (I was just googling this & it was a rabbit hole), so I'm not sure how portable (or performant) it actually would be for anything other than small files -- but for small files, great.Vogel
S
7

If you want to do it right, you need something like this:

use autodie qw(open sysseek sysread truncate);

my $file = shift;
open my $fh, '+>>', $file;
my $pos = tell $fh;
sysseek $fh, $pos - 1, 0;
sysread $fh, my $buf, 1 or die 'No data to read?';

if($buf eq "\n"){
    truncate $fh, $pos - 1;
}

We open the file for reading and appending; opening for appending means that the file position starts at the end of the file. We then get the numerical position of the end of the file with tell. We use that number to seek back one character, and then we read that one character. If it's a newline, we truncate the file just before that newline; otherwise, we do nothing.

This runs in constant time and constant space for any input, and doesn't require any more disk space, either.
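
A usage sketch, assuming the script is saved under a name of your choosing (here strip-last-newline.pl, purely as an example):

perl strip-last-newline.pl somefile.txt
wc -c somefile.txt    # one byte smaller if the file ended in a newline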

Struble answered 2/11, 2009 at 0:12 Comment(2)
but that has the disadvantage of not resetting ownership/permissions for the file...err, wait...Actiniform
Verbose, but both fast and robust - seems to be the only true in-place file-editing answer here (and since it may not be obvious to everyone: this is a Perl script).Vesicant
M
6

Here is a nice, tidy Python solution. I made no attempt to be terse here.

This modifies the file in-place, rather than making a copy of the file and stripping the newline from the last line of the copy. If the file is large, this will be much faster than the Perl solution that was chosen as the best answer.

It truncates a file by two bytes if the last two bytes are CR/LF, or by one byte if the last byte is LF. It does not attempt to modify the file if the last byte(s) are not (CR)LF. It handles errors. Tested in Python 2.6.

Put this in a file called "striplast" and chmod +x striplast.

#!/usr/bin/python

# strip newline from last line of a file


import sys

def trunc(filename, new_len):
    try:
        # open with mode "append" so we have permission to modify
        # cannot open with mode "write" because that clobbers the file!
        f = open(filename, "ab")
        f.truncate(new_len)
        f.close()
    except IOError:
        print "cannot write to file:", filename
        sys.exit(2)

# get input argument
if len(sys.argv) == 2:
    filename = sys.argv[1]
else:
    filename = "--help"  # wrong number of arguments so print help

if filename == "--help" or filename == "-h" or filename == "/?":
    print "Usage: %s <filename>" % sys.argv[0]
    print "Strips a newline off the last line of a file."
    sys.exit(1)


try:
    # must have mode "b" (binary) to allow f.seek() with negative offset
    f = open(filename, "rb")
except IOError:
    print "file does not exist:", filename
    sys.exit(2)


SEEK_EOF = 2
f.seek(-2, SEEK_EOF)  # seek to two bytes before end of file

end_pos = f.tell()

line = f.read()
f.close()

if line.endswith("\r\n"):
    trunc(filename, end_pos)
elif line.endswith("\n"):
    trunc(filename, end_pos + 1)

P.S. In the spirit of "Perl golf", here's my shortest Python solution. It slurps the whole file from standard input into memory, strips all newlines off the end, and writes the result to standard output. Not as terse as the Perl; you just can't beat Perl for little tricky fast stuff like this.

Remove the "\n" from the call to .rstrip() and it will strip all white space from the end of the file, including multiple blank lines.

Put this into "slurp_and_chomp.py" and then run python slurp_and_chomp.py < inputfile > outputfile.

import sys

sys.stdout.write(sys.stdin.read().rstrip("\n"))
Millenary answered 2/11, 2009 at 19:49 Comment(1)
os.path.isfile() will tell you about file presence. Using try/except might catch a lot of different errors :)Bugger
A
5

Yet another perl WTDI:

perl -i -p0777we's/\n\z//' filename
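
Unbundled, the same switches read a little more easily (the behaviour is identical, as far as I can tell):

# -0777 slurps the whole file as one record, -p prints it back,
# -i edits in place, -w enables warnings
perl -w -i -p -0777 -e 's/\n\z//' filename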
Actiniform answered 1/11, 2009 at 2:27 Comment(0)
M
3
$  perl -e 'local $/; $_ = <>; s/\n$//; print' a-text-file.txt

See also Match any character (including newlines) in sed.

Moffit answered 31/10, 2009 at 10:54 Comment(5)
That takes out all the newlines. Equivalent to tr -d '\n'Phidippides
This works well too, probably less blasphemous than pavium's.Gaily
Sinan, although Linux and Unix might define text files to end with a newline, Windows poses no such requirement. Notepad, for example, will write only the characters you type without adding anything extra at the end. C compilers might require a source file to end with a line break, but C source files aren't "just" text files, so they can have extra requirements.Hellbent
in that vein, most javascript/css minifiers will remove trailing newlines, and yet produce text files.Actiniform
@Rob Kennedy and @ysth: There is an interesting argument there as to why such files are not actually text files and such.Oar
G
3
perl -pi -e 's/\n$// if(eof)' your_file
Galenism answered 22/3, 2013 at 10:54 Comment(1)
Effectively the same as the accepted answer, but arguably clearer in concept to non-Perl users. Note that there's no need for the g or the parentheses around eof: perl -pi -e 's/\n$// if eof' your_file.Vesicant
T
2

Using dd:

file='/path/to/file'
[[ "$(tail -c 1 "${file}" | tr -dc '\n' | wc -c)" -eq 1 ]] && \
    printf "" | dd  of="${file}" seek=$(($(stat -f "%z" "${file}") - 1)) bs=1 count=1
    #printf "" | dd  of="${file}" seek=$(($(wc -c < "${file}") - 1)) bs=1 count=1
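
Note that stat -f "%z" is the BSD/macOS spelling; with GNU coreutils the size could instead come from stat -c '%s' (or the wc -c line already shown), e.g.:

printf "" | dd of="${file}" seek=$(($(stat -c '%s' "${file}") - 1)) bs=1 count=1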
Theotheobald answered 3/5, 2010 at 16:39 Comment(0)
F
2

Assuming Unix line endings and that you only want to remove the last newline, this works.

sed -e '${/^$/d}'

It will not work on multiple newlines...

* Works only if the last line is a blank line.

Fortress answered 3/5, 2010 at 16:50 Comment(1)
Here's a sed solution that works even for a non-blank last line: stackoverflow.com/a/52047796Lamed
L
2

This is a good solution if you need it to work with pipes/redirection instead of reading from or writing to a file. It works with single or multiple lines, and whether or not there is a trailing newline.

# with trailing newline
echo -en 'foo\nbar\n' | sed '$s/$//' | head -c -1

# still works without trailing newline
echo -en 'foo\nbar' | sed '$s/$//' | head -c -1

# read from a file
sed '$s/$//' myfile.txt | head -c -1

Details:

  • head -c -1 truncates the last character of the string, regardless of what the character is. So if the string does not end with a newline, then you would be losing a character.
  • So to address that problem, we add another command that will add a trailing newline if there isn't one: sed '$s/$//' . The first $ means only apply the command to the last line. s/$// means substitute the "end of the line" with "nothing", which is basically doing nothing. But it has a side effect of adding a trailing newline if there isn't one.

Note: Mac's default head does not support the -c option. You can do brew install coreutils and use ghead instead.

Lamed answered 27/8, 2018 at 22:36 Comment(0)
M
1

Yet another answer FTR (and my favourite!): echo/cat the thing you want to strip and capture the output through backticks. The final newline will be stripped. For example:

# Sadly, outputs newline, and we have to feed the newline to sed to be portable
echo thingy | sed -e 's/thing/sill/'

# No newline! Happy.
out=`echo thingy | sed -e 's/thing/sill/'`
printf %s "$out"

# Similarly for files:
file=`cat file_ending_in_newline`
printf %s "$file" > file_no_newline
Maple answered 11/4, 2012 at 13:55 Comment(1)
I found the cat-printf combo out by accident (was trying to get the opposite behavior). Note that this will remove ALL trailing newlines, not just the last.Kovach
B
1

ruby:

ruby -ne 'print $stdin.eof ? $_.strip : $_'

or:

ruby -ane 'q=p;p=$_;puts q if $.>1;END{print p.strip!}'
Barretter answered 18/5, 2016 at 14:21 Comment(0)
M
1

POSIX SED:

sed '${/^$/d}' file

$ - match last line


{ COMMANDS } - A group of commands may be enclosed between { and } characters. This is particularly useful when you want a group of commands to be triggered by a single address (or address-range) match.
Millpond answered 25/8, 2016 at 10:2 Comment(1)
I think this will only remove it if the last line is blank. It will not remove the trailing newline if the last line is not blank. For example, echo -en 'a\nb\n' | sed '${/^$/d}' will not remove anything. echo -en 'a\nb\n\n' | sed '${/^$/d}' will remove since the entire last line is blank.Lamed
R
0

The only time I've wanted to do this is for code golf, and then I've just copied my code out of the file and pasted it into an echo -n 'content'>file statement.

Rubirubia answered 1/11, 2009 at 11:57 Comment(1)
Halfway there; complete approach here.Vesicant
A
0
sed ':a;/^\n*$/{$d;N;};/\n$/ba' file
Andesite answered 20/3, 2010 at 7:47 Comment(1)
Works, but removes all trailing newlines.Vesicant
E
0

I had a similar problem, but was working with a Windows file and needed to keep those CRLFs -- my solution on Linux:

sed 's/\r//g' orig | awk '{if (NR>1) printf("\r\n"); printf("%s",$0)}' > tweaked
Eleanor answered 22/6, 2012 at 10:17 Comment(0)
P
0
sed -n "1 x;1 !H
$ {x;s/\n*$//p;}
" YourFile

Should remove any last occurrence of \n in the file. It does not work on huge files (due to sed's buffer limitation).

Pentameter answered 4/11, 2013 at 13:46 Comment(0)
G
0

Here's a simple solution that uses sed. Your version of sed needs to support the -z option.

       -z, --null-data

              separate lines by NUL characters

It can be used either in a pipe or to edit the file in place with the -i option:

sed -ze 's/\n$//' file
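
For example, to edit in place with GNU sed:

sed -z -i 's/\n$//' file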
Gandhiism answered 6/5, 2022 at 8:56 Comment(0)
