Split one file into multiple files based on delimiter
I have one file with -| as the delimiter after each section. I need to create separate files for each section using Unix.

Example of the input file:

wertretr
ewretrtret
1212132323
000232
-|
ereteertetet
232434234
erewesdfsfsfs
0234342343
-|
jdhg3875jdfsgfd
sjdhfdbfjds
347674657435
-|

Expected result in File 1

wertretr
ewretrtret
1212132323
000232
-|

Expected result in File 2

ereteertetet
232434234
erewesdfsfsfs
0234342343
-|

Expected result in File 3

jdhg3875jdfsgfd
sjdhfdbfjds
347674657435
-|
Own answered 3/7, 2012 at 15:7 Comment(3)
Are you writing a program or do you want to do this using command line utilities?Niello
using command line utilities will be preferable..Own
You could use awk, it would be easy to write a 3 or 4 line program to do it. Unfortunately I am out of practice.Horten

A one-liner, no programming (except the regexp, etc.):

csplit --digits=2  --quiet --prefix=outfile infile "/-|/+1" "{*}"

tested on: csplit (GNU coreutils) 8.30

Notes about usage on Apple Mac

"For OS X users, note that the version of csplit that comes with the OS doesn't work. You'll want the version in coreutils (installable via Homebrew), which is called gcsplit." — @Danial

"Just to add, you can get the version for OS X to work (at least with High Sierra). You just need to tweak the args a bit csplit -k -f=outfile infile "/-\|/+1" "{3}". Features that don't seem to work are the "{*}", I had to be specific on the number of separators, and needed to add -k to avoid it deleting all outfiles if it can't find a final separator. Also if you want --digits, you need to use -n instead." — @Pebbl
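
As a sketch of the command in action on the question's sample input (GNU csplit assumed; infile and outfile are just illustrative names):

```shell
# Recreate the sample input from the question.
printf '%s\n' wertretr ewretrtret 1212132323 000232 '-|' \
              ereteertetet 232434234 erewesdfsfsfs 0234342343 '-|' \
              jdhg3875jdfsgfd sjdhfdbfjds 347674657435 '-|' > infile

# Split after every line matching -| : "+1" places the cut below the
# matched line, and "{*}" repeats the pattern as long as it matches.
csplit --digits=2 --quiet --prefix=outfile infile "/-|/+1" "{*}"
```

This produces outfile00, outfile01, and outfile02 matching the three expected files (plus a trailing empty piece, which --elide-empty-files suppresses, as a comment below notes).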

Horten answered 3/7, 2012 at 16:7 Comment(9)
+1 - shorter: csplit -n2 -s -b outfile infile "/-|/+1" "{*}"Cacique
@Cacique I did it in long, so that no explanation was needed.Horten
I suggest to add --elide-empty-files, otherwise there will be a empty file at the end.Beat
Just for those who wonder what the parameters mean: --digits=2 controls the number of digits used to number the output files (2 is default for me, so not necessary). --quiet suppresses output (also not really necessary or asked for here). --prefix specifies the prefix of the output files (default is xx). So you can skip all the parameters and will get output files like xx12.Powerhouse
I have updated the question to include the un-read comments about apple mac.Horten
wooop. it works on Mojave you are a scholar and a gentleman.. one question, how do u add a suffix like .html to the outfile?Telepathy
@Telepathy I had to look this up when I answered (I remembered it from my training, but still had to look it up). The first thing to note is that Unix does not have suffixes like .html (that is, there is nothing special about the .). Try these options: --quiet --prefix="" --suffix-format="prefix%02d.html" Note I got rid of --digits and set --prefix to "" (empty).Horten
where do you look this up.. seems my version isn't recognising these options.. i cant find any clear guide on this stuffTelepathy
man csplit. You may need the Gnu version. Gnu likes to improve the tools.Horten
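
The --suffix-format idea from the comments, spelled out as a sketch (GNU csplit assumed; the names part and .html are illustrative choices):

```shell
# Minimal sample input standing in for the question's file.
printf '%s\n' one '-|' two '-|' > infile

# -b/--suffix-format replaces the default %02d numbering, so the output
# names become part00.html, part01.html, ...
csplit --quiet --prefix=part --suffix-format="%02d.html" infile "/-|/+1" "{*}"
```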
awk '{f="file" NR; print $0 " -|"> f}' RS='-\\|'  input-file

Explanation (edited):

RS is the record separator, and this solution uses a gnu awk extension which allows it to be more than one character. NR is the record number.

The print statement prints a record followed by " -|" into a file that contains the record number in its name.
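
A sketch of the one-liner run against a shortened version of the question's input (gawk assumed, since a multi-character RS is a GNU extension):

```shell
# Two sections separated by -| ; awk splits records on the regex RS
# and writes each record, with the delimiter re-appended, to file1, file2, ...
printf '%s\n' wertretr 000232 '-|' ereteertetet 0234342343 '-|' > input-file
awk '{f="file" NR; print $0 " -|" > f}' RS='-\\|' input-file
```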

Rentier answered 3/7, 2012 at 16:4 Comment(13)
How well does this work on really big files (> 3 GB)? I'm not familiar with awk.Pash
Could you please explain the different parts? What is RS? What is NR?Seizing
RS is the record separator, and this solution uses a gnu awk extension which allows it to be more than one character. NR is the record number. The print statement prints a record followed by " -|" into a file that contains the record number in its name.Rentier
@rzetterbeg This should work well with large files. awk processes the file one record at a time, so it only reads as much as it needs to. If the first occurrence of the record separator shows up very late in the file, it may be a memory crunch since one whole record must fit into memory. Also, note that using more than one character in RS is not standard awk, but this will work in gnu awk.Rentier
For me it split 3.3 GB in 31.728sBunnell
How to customize the file extension (e.g. file1.txt, file2.txt, etc)?Tocopherol
@ccf The filename is just the string on the right side of the >, so you can construct it however you like. eg, print $0 "-|" > "file" NR ".txt"Rentier
throws error: awk: syntax error at source line 1 context is {print $0 " -|"> "file" >>> NR <<< } awk: illegal statement at source line 1Telepathy
@Telepathy That is version dependent. You can do awk '{f="file" NR; print $0 " -|" > f}'Rentier
thank you, also how can i add an extension to the file name like '.html'?Telepathy
@Telepathy f="file" NR ".html"... It's not the cleanest syntax, but in awk you concatenate strings by placing them next to each other with no operator. Alternatively, you can use sprintfRentier
$ awk '{f="file" NR; ".html" print $0 " -|" > f}' x.txt awk: syntax error at source line 1 context is {f="file" NR; ".html" >>> print <Telepathy
also i get this error without adding that ".html": awk: file18 makes too many open files input record number 18, file x.txtTelepathy
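
Combining the fixes discussed in this thread — a per-record filename with an .html extension, plus close() to avoid the "too many open files" error (the close() call is my addition to the answer's one-liner):

```shell
printf '%s\n' aaa '-|' bbb '-|' > input-file
# Closing each file after writing keeps the number of open handles at one.
awk '{f="file" NR ".html"; print $0 " -|" > f; close(f)}' RS='-\\|' input-file
```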

Debian has csplit, but I don't know if that's common to all/most/other distributions. If not, though, it shouldn't be too hard to track down the source and compile it...

Aeri answered 3/7, 2012 at 15:42 Comment(3)
I agree. My Debian box says that csplit is part of gnu coreutils. So any Gnu operating system, such as all the Gnu/Linux distros will have it. Wikipedia also mentions 'The Single UNIX® Specification, Issue 7' on the csplit page, so I suspect you got it.Horten
Since csplit is in POSIX, I would expect it to be available on essentially all Unix-like systems.Gutbucket
Although csplit is POSIX, the problem (it seems, from a test on the Ubuntu system sitting in front of me) is that there is no obvious way to make it use a more modern regex syntax. Compare: csplit --prefix gold-data - "/^==*$/" vs csplit --prefix gold-data - "/^=+$/". At least GNU grep has -E.Laundes

I solved a slightly different problem, where the file contains a line giving the name of the file that the following text should go into. This Perl code does the trick for me:

#!/path/to/perl -w

#comment the line below for UNIX systems
use Win32::Clipboard;

# Get command line flags

#print ($#ARGV, "\n");
if($#ARGV == 0) {
    print STDERR "usage: ncsplit.pl --mff -- filename.txt [...] \n\nNote that no space is allowed between the '--' and the related parameter.\n\nThe mff is found on a line followed by a filename.  All of the contents of filename.txt are written to that file until another mff is found.\n";
    exit;
}

# this package sets the ARGV count variable to -1;

use Getopt::Long;
my $mff = "";
GetOptions('mff' => \$mff);

# set a default $mff variable
if ($mff eq "") {$mff = "-#-"};
print ("using file switch=", $mff, "\n\n");

while($_ = shift @ARGV) {
    if(-f "$_") {
    push @filelist, $_;
    } 
}

# Could be more than one file name on the command line, 
# but this version throws away the subsequent ones.

$readfile = $filelist[0];

open SOURCEFILE, "<$readfile" or die "File not found...\n\n";
#print SOURCEFILE;

while (<SOURCEFILE>) {
    /^$mff (.*$)/o;
    $outname = $1;
#   print $outname;
#   print "right is: $1 \n";

    if (/^$mff /) {
        open OUTFILE, ">$outname";
        print "opened $outname\n";
    }
    else {
        print OUTFILE "$_";
    }
}
Detailed answered 1/12, 2012 at 0:27 Comment(3)
Can you please explain why this code works? I have a similar situation to what you've described here - the required output file names are embedded inside the file. But I'm not a regular perl user so can't quite make sense of this code.Setaceous
The real beef is in the final while loop. If it finds the mff regex at beginning of line, it uses the rest of the line as the filename to open and start writing to. It never closes anything so it will run out of file handles after a few dozen.Accumbent
The script would actually be improved by removing most of the code before the final while loop and switching to while (<>)Accumbent
A
4

The following command works for me. Hope it helps.

awk 'BEGIN{file = 0; filename = "output_" file ".txt"}
    /-\|/ {getline; file++; filename = "output_" file ".txt"}
    {print $0 > filename}' input
Already answered 7/2, 2017 at 19:40 Comment(7)
This will run out of file handles after typically a few dozen files. The fix is to explicitly close the old file when you start a new one.Accumbent
@Accumbent how do you close it (beginner awk question). Can you provide an updated example?Domicile
@JesperRønn-Jensen This box is probably too small for any useful example but basically if (file) close(filename); before assigning a new filename value.Accumbent
aah found out how to close it: ; close(filename). Really simple, but it really fixes the example aboveDomicile
Thanks @Accumbent for the quick and helpful explanation :)Domicile
@JesperRønn-Jensen I rolled back your edit because you provided a broken script. Significant edits to other people's answers should probably be avoided -- feel free to post a new answer of your own (perhaps as a community wiki) if you think a separate answer is merited.Accumbent
throws error: awk: illegal primary in regular expression -| at source line number 2 context is >>> /-|/ <<<Telepathy

You can also use awk. I'm not very familiar with awk, but the following did seem to work for me. It generated part1.txt, part2.txt, part3.txt, and part4.txt. Do note that the last partn.txt file this generates is empty. I'm not sure how to fix that, but I'm sure it could be done with a little tweaking. Any suggestions, anyone?

awk_pattern file:

BEGIN{ fn = "part1.txt"; n = 1 }
{
   print > fn
   if (substr($0,1,2) == "-|") {
       close (fn)
       n++
       fn = "part" n ".txt"
   }
}

bash command:

awk -f awk_pattern input.file
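
A sketch of the whole thing end to end, with a cleanup step for the possible empty trailing part file (the cleanup is my addition, not part of the answer; GNU find assumed):

```shell
# Recreate the awk_pattern program from the answer.
cat > awk_pattern <<'EOF'
BEGIN{ fn = "part1.txt"; n = 1 }
{
   print > fn
   if (substr($0,1,2) == "-|") {
       close (fn)
       n++
       fn = "part" n ".txt"
   }
}
EOF

# Sample input with two sections, then run the program on it.
printf '%s\n' x '-|' y '-|' > input.file
awk -f awk_pattern input.file

# Remove any empty trailing part file.
find . -maxdepth 1 -name 'part*.txt' -empty -delete
```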

Niello answered 3/7, 2012 at 16:0 Comment(0)

Here's a Python 3 script that splits a file into multiple files, taking each output filename from its delimiter line. Example input file:

# Ignored

######## FILTER BEGIN foo.conf
This goes in foo.conf.
######## FILTER END

# Ignored

######## FILTER BEGIN bar.conf
This goes in bar.conf.
######## FILTER END

Here's the script:

#!/usr/bin/env python3

import os
import argparse

# global settings
start_delimiter = '######## FILTER BEGIN'
end_delimiter = '######## FILTER END'

# parse command line arguments
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input-file", required=True, help="input filename")
parser.add_argument("-o", "--output-dir", required=True, help="output directory")

args = parser.parse_args()

# read the input file
with open(args.input_file, 'r') as input_file:
    input_data = input_file.read()

# iterate through the input data by line
input_lines = input_data.splitlines()
while input_lines:
    # discard lines until the next start delimiter
    while input_lines and not input_lines[0].startswith(start_delimiter):
        input_lines.pop(0)

    # corner case: no delimiter found and no more lines left
    if not input_lines:
        break

    # extract the output filename from the start delimiter
    output_filename = input_lines.pop(0).replace(start_delimiter, "").strip()
    output_path = os.path.join(args.output_dir, output_filename)

    # open the output file
    print("extracting file: {0}".format(output_path))
    with open(output_path, 'w') as output_file:
        # while we have lines left and they don't match the end delimiter
        while input_lines and not input_lines[0].startswith(end_delimiter):
            output_file.write("{0}\n".format(input_lines.pop(0)))

        # remove end delimiter if present
        if input_lines:
            input_lines.pop(0)

Finally here's how you run it:

$ python3 script.py -i input-file.txt -o ./output-folder/
Lacedaemon answered 19/2, 2017 at 19:33 Comment(0)

Use csplit if you have it.

If you don't, but you have Python... don't use Perl.

Lazy reading of the file

Your file may be too large to hold in memory all at once - reading line by line may be preferable. Assume the input file is named "samplein":

$ python3 -c "from itertools import count
with open('samplein') as file:
    for i in count():
        firstline = next(file, None)
        if firstline is None:
            break
        with open(f'out{i}', 'w') as out:
            out.write(firstline)
            for line in file:
                out.write(line)
                if line == '-|\n':
                    break"
Vilipend answered 24/10, 2017 at 20:10 Comment(2)
This will read the entire file into memory, which means it will be inefficient or even fail for large files.Accumbent
@Accumbent I have updated the answer to handle very large files.Vilipend
cat file| ( I=0; echo -n "">file0; while read line; do echo $line >> file$I; if [ "$line" == '-|' ]; then I=$[I+1]; echo -n "" > file$I; fi; done )

and the formatted version:

#!/bin/bash
cat FILE | (
  I=0;
  echo -n "" > file0;
  while read line; 
  do
    echo $line >> file$I;
    if [ "$line" == '-|' ];
    then I=$[I+1];
      echo -n "" > file$I;
    fi;
  done;
)
Sherlock answered 3/7, 2012 at 15:49 Comment(6)
As ever, the cat is Useless.Accumbent
@Reishin The linked page explains in much more detail how you can avoid cat on a single file in every situation. There is a Stack Overflow question with more discussion (though the accepted answer is IMHO off); https://mcmap.net/q/11740/-useless-use-of-catAccumbent
The shell is typically very inefficient at this sort of thing anyway; if you can't use csplit, an Awk solution is probably much preferrable to this solution (even if you were to fix the problems reported by shellcheck.net etc; note that it doesn't currently find all the bugs in this).Accumbent
@Accumbent but if the task is to do it without awk, csplit and etc - only bash?Nightcap
Then the cat is still useless, and the rest of the script could be simplified and corrected a good deal; but it will still be slow. See e.g. #13763125Accumbent
just outputs one file, the same one but without the .txtTelepathy

Here is Perl code that will do it:

#!/usr/bin/perl
open(FI,"file.txt") or die "Input file not found";
$cur=0;
open(FO,">res.$cur.txt") or die "Cannot open output file $cur";
while(<FI>)
{
    print FO $_;
    if(/^-\|/)
    {
        close(FO);
        $cur++;
        open(FO,">res.$cur.txt") or die "Cannot open output file $cur"
    }
}
close(FO);
Linton answered 3/7, 2012 at 16:0 Comment(0)

This is the sort of problem I wrote context-split for: http://stromberg.dnsalias.org/~strombrg/context-split.html

$ ./context-split -h
usage:
./context-split [-s separator] [-n name] [-z length]
        -s specifies what regex should separate output files
        -n specifies how output files are named (default: numeric)
        -z specifies how long numbered filenames (if any) should be
        -i include line containing separator in output files
        operations are always performed on stdin
Bendwise answered 3/7, 2012 at 17:17 Comment(2)
Uh, this looks like essentially a duplicate of the standard csplit utility. See @richard's answer.Accumbent
This is actually the best solution imo. I've had to split a 98G mysql dump and csplit for some reason eats up all my RAM, and is killed. Even though it should only need to match one line at the time. Makes no sense. This python script works much better and doesn't eat up all the ram.Radom

Try this python script:

import os
import argparse

delimiter = '-|'

parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input-file", required=True, help="input txt")
parser.add_argument("-o", "--output-dir", required=True, help="output directory")

args = parser.parse_args()

counter = 1
output_filename = 'part-'+str(counter)
with open(args.input_file, 'r') as input_file:
    for line in input_file.read().split('\n'):
        if delimiter in line:
            counter = counter+1
            output_filename = 'part-'+str(counter)
            print('Section '+str(counter)+' Started')
        else:
            #skips empty lines (change the condition if you want empty lines too)
            if line.strip() :
                output_path = os.path.join(args.output_dir, output_filename+'.txt')
                with open(output_path, 'a') as output_file:
                    output_file.write("{0}\n".format(line))

ex:

python split.py -i ./to-split.txt -o ./output-dir

Caines answered 2/12, 2022 at 15:58 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.