Generate dummy files in bash
I'd like to generate dummy files in bash. The content doesn't matter; random content would be nice, but a file filled with the same byte is also acceptable.

My first attempt was the following command:

rm dummy.zip;
touch dummy.zip;
x=0;
while [ $x -lt 100000 ];
do echo a >> dummy.zip;
  x=`expr $x + 1`;
done;

The problem was its poor performance. I'm using Git Bash on Windows, so it might be much faster under Linux, but the script is obviously not optimal.

Could you suggest a quicker and nicer way to generate dummy (binary) files of a given size?

Bouffant answered 21/3, 2012 at 7:39 Comment(0)

You can try the head command:

$ head -c 100000 /dev/urandom >dummy
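
The size argument can be parameterized; a minimal sketch (the size variable name is mine) for a file of a given byte count:

size=$((10 * 1024 * 1024))               # 10 MiB, just an example
head -c "$size" /dev/urandom > dummy.bin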
Endogenous answered 21/3, 2012 at 7:45 Comment(3)
Is /dev/urandom available on GitBash+Windows?Yon
@Yon As far as I can tell: yes.Broida
But /dev/urandom is slow if you want to create GBs of dataAramen

You may use dd for this purpose:

dd if=/dev/urandom bs=1024 count=5 of=dummy
  • if = input file
  • of = output file
  • bs = block size
  • count = number of blocks
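
The resulting file size is bs × count, so the command above writes 5 × 1024 = 5120 bytes. A sketch (the 100 MiB figure is only an example) for a larger file:

dd if=/dev/urandom of=dummy bs=1M count=100   # 100 × 1 MiB = 100 MiB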

Note that

 x=`expr $x + 1`;

isn't the most efficient way to do calculations in bash. Do integer arithmetic in double parentheses:

 x=$((x+1))

But for an incremented counter in a loop, the for loop was invented:

x=0;
while [ $x -lt 100000 ];
do echo a >> dummy.zip;
  x=`expr $x + 1`;
done;

in contrast to:

for  ((x=0; x<100000; ++x))
do
    echo a 
done >> dummy.zip 

Here are three things to note:

  • unlike the [ case, you don't need spaces inside the parentheses.
  • you may use a prefix (or postfix) increment here: ++x
  • the redirection to the file is pulled out of the loop. Instead of 100,000 opening and closing steps, the file is only opened once.

But there is an even simpler form of the for loop:

for x in {1..100000}
do
    echo a 
done >> dummy.zip 
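
To sanity-check the result of any of these loops, a quick byte count (assuming the usual coreutils wc) helps; each echo a writes two bytes, "a" plus a newline:

wc -c dummy.zip   # expect 200000 bytes for 100000 iterations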
Detergency answered 21/3, 2012 at 8:2 Comment(2)
dd is unavailable in GitBash on Windows, but thanks for the nice solutionBouffant
Here are some native ports for Win32 of the GNU-utils, very useful things among them (grep, sed, cat, tac, wc, ...), even if you choose to take another solution.Detergency

This will generate a text file 100,000 bytes in size:

yes 123456789 | head -n 10000 > dummy.file
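
Each line here is 10 bytes ("123456789" plus a newline), so 10,000 lines give exactly 100,000 bytes. For sizes that aren't a multiple of the line length, a sketch (the size variable is hypothetical) using head -c instead:

size=100000
yes 123456789 | head -c "$size" > dummy.file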
Baking answered 21/3, 2012 at 13:21 Comment(0)

If your file system is ext4, btrfs, xfs, or ocfs2, and you don't care about the content, you can use fallocate. It's the fastest method if you need big files.

fallocate -l 100KB dummy_100KB_file

See "Quickly create a large file on a Linux system?" for more details.

Titi answered 3/4, 2015 at 14:36 Comment(1)
this is the method. The rest (dd, cat, yes, etc.) are just workaroundsOb
$ openssl rand -out random.tmp 1000000
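
The trailing argument is the number of random bytes to write, so a target size can be computed inline; a sketch (the 10 MiB value is just an example):

openssl rand -out dummy_10MB.bin $((10 * 1024 * 1024))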
Endanger answered 11/3, 2015 at 13:25 Comment(0)

Possibly

dd if=/dev/zero of=dummy10MBfile bs=1M count=10
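
Reading /dev/zero avoids the random-number generator, so it is much faster for large files. If a sparse file is acceptable, dd can even skip writing altogether; a sketch (not from the original answer, assuming GNU dd, which truncates the output to the seek offset):

dd if=/dev/zero of=sparse10MB bs=1 count=0 seek=10M   # 10 MiB apparent size, almost no blocks on disk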
Guadalupeguadeloupe answered 3/9, 2018 at 13:2 Comment(0)

echo "To print the word in sequence from the file" c=1 for w in cat file do echo "$c . $w" c = expr $c +1 done

Henotheism answered 14/6, 2013 at 17:9 Comment(0)

Easy way:

Create a file named test and put the single line "test" in it.

Then execute:

 cat test >> test

Ctrl+C after a minute will result in plenty of gigabytes :)
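
A bounded variant of the same doubling idea (a sketch; the 100 MiB target and the temporary file name are mine) that stops at a target size instead of needing Ctrl+C, and sidesteps GNU cat's "input file is output file" check by writing through a temp file:

printf 'test\n' > test
while [ "$(wc -c < test)" -lt $((100 * 1024 * 1024)) ]; do
    cat test test > test.tmp && mv test.tmp test   # double the file each pass
done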

Marga answered 3/2, 2017 at 17:35 Comment(2)
I'm guessing the OP was looking for a non-manual way of doing this.Unpolite
This was originally needed for precise testing, "nice way to generate dummy (binary) files of given size". Your solution is creative, but not precise. But thanks :-)Bouffant
