I don't think there is a single Unix command that does this out of the box, but you can build a small shell script around the comm and grep commands, as shown in the following example:
#!/bin/bash
# Prepare 200 (small) test files
rm -f data-*.txt
for i in {1..200} ; do
    echo "${i}" >> "data-${i}.txt"
    # common line
    echo "foo common line" >> "data-${i}.txt"
done
# Get the common lines between the first two files.
# Any two files out of the set will do, ideally the smallest ones.
# Note: comm requires sorted input; the test files generated above
# happen to be sorted already.
comm -12 data-1.txt data-2.txt > common_lines
# Now grep through the remaining files for those lines
for file in data-{3..200}.txt ; do
    # For each remaining file, reduce common_lines to those
    # lines which are also found in that file
    grep -Fxf common_lines "${file}" > tmp_common_lines \
        && mv tmp_common_lines common_lines
done
# Print the common lines
cat common_lines
The same approach can be used for bigger files. It will take longer, but the memory consumption stays modest, because grep only has to hold the (shrinking) common_lines file in memory, never the large files themselves.
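For real files that are not already sorted, keep in mind that comm needs sorted input while grep -Fxf does not. A minimal sketch of the same approach for arbitrary files (the names file1 ... fileN below are placeholders for your actual files) could sort the two seed files on the fly with process substitution:
#!/bin/bash
# Sketch only: file1 ... fileN are placeholder names.
# comm needs sorted input, so sort the two seed files on the fly;
# grep -Fxf does not care about line order, so the rest can stay as-is.
comm -12 <(sort file1) <(sort file2) > common_lines
for file in file3 file4 fileN ; do
    grep -Fxf common_lines "${file}" > tmp_common_lines \
        && mv tmp_common_lines common_lines
done
cat common_lines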