Working with input splits (Hadoop)

I have a .txt file as follows:


This is xyz

This is my home

This is my PC

This is my room

This is ubuntu PC xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxxxxxxxxxxxxxxxxxxx


(ignoring the blank line after each record)

I have set the block size to 64 bytes. What I am trying to check is whether a single record can end up broken across two blocks.

Now logically, since the block size is 64 bytes, uploading the file to HDFS should create 3 blocks of 64, 64 and 27 bytes respectively, which it does. Also, since the size of the first block is 64 bytes, it should contain only the following data:


This is xyz

This is my home

This is my PC

This is my room

Th


Now I want to see whether the first block really looks like this. If I browse HDFS via the browser and download the file, it downloads the entire file, not a single block.

So I decided to run a MapReduce job that would display only the record values (setting reducers=0, making the mapper output context.write(null, record_value), and changing the default output delimiter to "").
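
Roughly, the job looks like this (a simplified sketch, not my exact code; it uses NullWritable for the null key and sets the textoutputformat separator to an empty string, and the class names are made up):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RecordEchoJob {

    // Mapper that emits only the record value, with no key.
    public static class RecordEchoMapper
            extends Mapper<LongWritable, Text, NullWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(NullWritable.get(), value);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Suppress the tab that TextOutputFormat normally puts between key and value.
        conf.set("mapreduce.output.textoutputformat.separator", "");
        Job job = Job.getInstance(conf, "record-echo");
        job.setJarByClass(RecordEchoJob.class);
        job.setMapperClass(RecordEchoMapper.class);
        job.setNumReduceTasks(0);   // map-only job
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}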

While running the job, the job counters show 3 splits, which is expected, but after completion, when I check the output directory, it shows 3 mapper output files, of which 2 are empty and the first one contains the entire content of the file as-is.

Can anyone help me with this? Is it possible that newer versions of Hadoop handle incomplete records automatically?

Leister answered 16/3, 2017 at 18:44 Comment(2)
Are you using any combiner class? – Duquette
No. Just a mapper which displays the records. @Duquette – Leister

Steps followed to reproduce the scenario
1) Created a file sample.txt with the content below, total size ~153B

cat sample.txt

This is xyz
This is my home
This is my PC
This is my room
This is ubuntu PC xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxxxxxxxxxxxxxxxxxxx

2) Added the property to hdfs-site.xml

<property>
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <value>10</value>
</property>

and loaded into HDFS with block size as 64B.

hdfs dfs -Ddfs.bytes-per-checksum=16 -Ddfs.blocksize=64 -put sample.txt /

This created three blocks of sizes 64B, 64B and 25B.

Content in Block0:

This is xyz
This is my home
This is my PC
This is my room
This i

Content in Block1:

s ubuntu PC xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xx

Content in Block2:

xx xxxxxxxxxxxxxxxxxxxxx

3) A simple mapper.py

#!/usr/bin/env python
# Identity mapper (Python 2): echo every line read from stdin.
import sys

for line in sys.stdin:
    print line

4) Hadoop Streaming with 0 reducers:

yarn jar hadoop-streaming-2.7.1.jar -Dmapreduce.job.reduces=0 -file mapper.py -mapper mapper.py -input /sample.txt -output /splittest

The job ran with 3 input splits, invoking 3 mappers, and generated 3 output files, with one file holding the entire content of sample.txt and the other two being 0B files.

hdfs dfs -ls /splittest

-rw-r--r--   3 user supergroup          0 2017-03-22 11:13 /splittest/_SUCCESS
-rw-r--r--   3 user supergroup        168 2017-03-22 11:13 /splittest/part-00000
-rw-r--r--   3 user supergroup          0 2017-03-22 11:13 /splittest/part-00001
-rw-r--r--   3 user supergroup          0 2017-03-22 11:13 /splittest/part-00002

The file sample.txt is divided into 3 input splits, which are assigned to the mappers as follows:

mapper1: start=0, length=64B
mapper2: start=64, length=64B
mapper3: start=128, length=25B

This only determines which portion of the file a mapper has to read; it is not necessarily exact. What a mapper actually reads is determined by the FileInputFormat and its record boundaries, here TextInputFormat.

It uses LineRecordReader to read the content of each split, with \n as the delimiter (line boundary). For an uncompressed file, the lines are read by each mapper as explained below.

For the mapper whose start index is 0, line reading starts at the beginning of the split. If the split ends with a \n, reading ends at the split boundary; otherwise it looks for the first \n past the end of the split assigned to it (here 64B), so that it never processes a partial line.

For all the other mappers (start index != 0), the reader checks whether the character just before the start index (start - 1) is a \n. If it is, it reads from the start of the split; otherwise it skips the content between its start index and the first \n it encounters in that split (because that content is handled by the previous mapper) and starts reading after that \n.

Here, mapper1 (start index 0) begins with Block0, whose split ends in the middle of a line. It therefore continues reading that line, which consumes all of Block1, and since Block1 contains no \n character, mapper1 keeps reading until it finds a \n, consuming all of Block2 as well. That is how the entire content of sample.txt ended up in a single mapper's output.

For mapper2 (start index != 0), the character preceding its start index is not a \n, so it skips ahead to the next \n; there is none within its split, so it ends up with no content and an empty mapper output. mapper3 hits the identical scenario.
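
To make this concrete, here is a small, self-contained simulation of the rule above (plain Java, not Hadoop's actual LineRecordReader source) that applies it to sample.txt and prints the records each split would produce:

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Simulation of the split-boundary rule described above (a simplification,
// not the real LineRecordReader code).
public class SplitSimulation {

    // Return the records a mapper assigned the split [start, start+length) would read.
    static List<String> recordsForSplit(byte[] file, int start, int length) {
        List<String> records = new ArrayList<>();
        int end = start + length;
        int pos = start;
        if (start != 0) {
            // Skip up to and including the first '\n' at or after (start - 1).
            // If the byte at start - 1 is '\n', nothing real is skipped; otherwise the
            // partial line belongs to the previous mapper and is discarded here.
            pos = start - 1;
            while (pos < file.length && file[pos] != '\n') pos++;
            pos++;                                   // step past the '\n'
        }
        // Read whole lines as long as the line *starts* inside the split; the last
        // line may run past 'end', pulling bytes from the next block(s).
        while (pos < file.length && pos < end) {
            int lineStart = pos;
            while (pos < file.length && file[pos] != '\n') pos++;
            records.add(new String(file, lineStart, pos - lineStart, StandardCharsets.UTF_8));
            pos++;                                   // step past the '\n'
        }
        return records;
    }

    public static void main(String[] args) {
        String text = "This is xyz\n"
                + "This is my home\n"
                + "This is my PC\n"
                + "This is my room\n"
                + "This is ubuntu PC xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxxxxxxxxxxxxxxxxxxx\n";
        byte[] file = text.getBytes(StandardCharsets.UTF_8);     // ~153B, as above
        int[][] splits = { { 0, 64 }, { 64, 64 }, { 128, file.length - 128 } };
        for (int[] s : splits) {
            System.out.println("split start=" + s[0] + " len=" + s[1]
                    + " -> " + recordsForSplit(file, s[0], s[1]));
        }
    }
}

For this input it prints all five records for the first split and an empty list for the other two, which matches the single non-empty part file above.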


Try changing the content of sample.txt like this to see different results:
This is xyz
This is my home
This is my PC
This is my room
This is ubuntu PC xxxx xxxx xxxx xxxx 
xxxx xxxx xxxx xxxx xxxx xxxx xxxx 
xxxxxxxxxxxxxxxxxxxxx
Amplexicaul answered 24/3, 2017 at 20:32 Comment(5)
So in a way, does that mean we need not worry about partial records? – Leister
Yes, it is well handled by the Hadoop FileInputFormats and their corresponding RecordReaders. – Amplexicaul
So if we are defining a custom format we need to come up with logic like this, but if the format is "Text" we're good, right? – Leister
Okay. Thank you so much for your input. This'll help a lot. Thanks again! – Leister
Give a read of this blog post to implement a custom input format. – Amplexicaul
  1. Use the following command to get the block list for your file on HDFS

    hdfs fsck PATH -files -blocks -locations

where PATH is the full HDFS path where your file is located.

  2. The output (shown partially below) will be something like this:

    Connecting to namenode via http://ec2-54-235-1-193.compute-1.amazonaws.com:50070/fsck?ugi=student6&files=1&blocks=1&locations=1&path=%2Fstudent6%2Ftest.txt FSCK started by student6 (auth:SIMPLE) from /172.31.11.124 for path /student6/test.txt at Wed Mar 22 15:33:17 UTC 2017 /student6/test.txt 22 bytes, 1 block(s): OK 0. BP-944036569-172.31.11.124-1467635392176:blk_1073755254_14433 len=22 repl=1 [DatanodeInfoWithStorage[172.31.11.124:50010,DS-4a530a72-0495-4b75-a6f9-75bdb8ce7533,DISK]]

  3. Copy the block name from that output, the blk_1073755254_14433 portion in the example above, excluding the trailing _14433 generation stamp.

  4. Go to the Linux file system on your datanode, to the directory where the blocks are stored (pointed to by the dfs.datanode.data.dir parameter of hdfs-site.xml), and search the entire subtree under that location for a filename containing the string you just copied. That tells you which subdirectory under dfs.datanode.data.dir holds a file with that string in its name (exclude any filename with a .meta suffix). Once you have located such a file, you can run the Linux cat command on it to see your file contents (see the sketch after this list).

  5. Remember that although the file is an HDFS file, under the covers it is actually stored on the Linux file system, and each block of the HDFS file is a separate Linux file. The block file is named after the block id you copied in step 3.
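
If you prefer to script steps 4 and 5 rather than searching and cat-ing by hand, a small hypothetical helper along these lines works too (both arguments are placeholders: pass your dfs.datanode.data.dir directory and the block name you copied):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Walk a datanode data directory, find the Linux file backing one HDFS block
// (e.g. blk_1073755254, ignoring the .meta file) and print its contents.
public class CatBlock {
    public static void main(String[] args) throws IOException {
        Path dataDir = Paths.get(args[0]);   // your dfs.datanode.data.dir value
        String blockName = args[1];          // block id copied from the fsck output
        try (Stream<Path> files = Files.walk(dataDir)) {
            files.filter(p -> p.getFileName().toString().equals(blockName))
                 .findFirst()
                 .ifPresent(p -> {
                     try {
                         System.out.println("Block file: " + p);
                         // The block is plain data on the local disk, so just dump it.
                         System.out.print(new String(Files.readAllBytes(p), StandardCharsets.UTF_8));
                     } catch (IOException e) {
                         throw new UncheckedIOException(e);
                     }
                 });
        }
    }
}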

Phono answered 22/3, 2017 at 15:42 Comment(1)
I located the blocks in the local file system, and the division of a single record does take place, but why isn't the mapper output like that? I mean, since it's a simple mapper, it should give 3 mapper outputs with the data of each individual block, but that doesn't happen. – Leister
