Amazon EMR - What is the need for Task nodes when we have Core nodes?

I have been learning about Amazon EMR lately, and as far as I know an EMR cluster lets us choose three node types:

  1. Master, which runs the primary Hadoop daemons such as the NameNode, JobTracker, and ResourceManager.
  2. Core, which runs the DataNode and TaskTracker daemons.
  3. Task, which runs only the TaskTracker daemon.

My question is: why does EMR provide task nodes at all, when Hadoop suggests that the DataNode and TaskTracker daemons should live on the same node? What is Amazon's logic here? You can keep data in S3, stream it to HDFS on the core nodes, and do the processing there; sharing data from HDFS out to task nodes would only increase I/O overhead. As far as I know, in Hadoop the TaskTrackers run on the DataNodes that hold the data blocks for a given task, so why run TaskTrackers on separate nodes?

Unasked answered 7/1, 2017 at 8:23

According to AWS documentation [1]

The node types in Amazon EMR are as follows:

Master node: A node that manages the cluster by running software components to coordinate the distribution of data and tasks among other nodes for processing. The master node tracks the status of tasks and monitors the health of the cluster. Every cluster has a master node, and it's possible to create a single-node cluster with only the master node.

Core node: A node with software components that run tasks and store data in the Hadoop Distributed File System (HDFS) on your cluster. Multi-node clusters have at least one core node.

Task node: A node with software components that only runs tasks and does not store data in HDFS. Task nodes are optional.

According to AWS documentation [2]

Task nodes are optional. You can use them to add power to perform parallel computation tasks on data, such as Hadoop MapReduce tasks and Spark executors.

Task nodes don't run the Data Node daemon, nor do they store data in HDFS.

Some use cases are:

  • You can use task nodes to process data streamed from S3. In this case network I/O doesn't increase, since the data being used isn't on HDFS.
  • Task nodes can be added or removed freely because they run no HDFS daemons and therefore hold no data. Core nodes do run HDFS daemons, so repeatedly adding and removing them is not good practice.
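As a rough illustration, the split between node roles shows up directly in the instance-group shape that boto3's `run_job_flow` call accepts. This is only a sketch: the group names, instance types, and counts below are illustrative assumptions, and the request is constructed but never sent.

```python
# Instance-group layout for an EMR cluster, in the shape expected by
# boto3's run_job_flow. Names, types, and counts are example values.
instance_groups = [
    {"Name": "Primary", "InstanceRole": "MASTER",
     "InstanceType": "m5.xlarge", "InstanceCount": 1},
    # Core nodes: run tasks AND store HDFS blocks.
    {"Name": "Core", "InstanceRole": "CORE",
     "InstanceType": "m5.xlarge", "InstanceCount": 2},
    # Task nodes: run tasks only, no HDFS storage, so this group
    # can grow or shrink without any data rebalancing.
    {"Name": "Task", "InstanceRole": "TASK",
     "InstanceType": "m5.xlarge", "InstanceCount": 4},
]

# Launching for real would look roughly like this (needs AWS credentials):
# import boto3
# emr = boto3.client("emr")
# emr.run_job_flow(Name="demo", ReleaseLabel="emr-6.15.0",
#                  Instances={"InstanceGroups": instance_groups})

# Only the core group contributes HDFS storage capacity:
hdfs_groups = [g["Name"] for g in instance_groups
               if g["InstanceRole"] == "CORE"]
print(hdfs_groups)  # ['Core']
```

The point the list above makes falls out of the structure: resizing the `TASK` group touches no stored data, while resizing the `CORE` group changes HDFS capacity.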

Resources:

[1] https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-overview.html#emr-overview-clusters

[2] https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-master-core-task-nodes.html#emr-plan-task

Weever answered 6/1, 2020 at 16:2

One use case is running Spot Instances as task nodes. If the Spot price is cheap enough, it can be worthwhile to add some extra compute power to your EMR cluster. This is best suited to non-critical tasks, since Spot capacity can be reclaimed at any time.
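In the same boto3 `run_job_flow` request shape, a Spot-backed task group might look like the sketch below; the bid price and instance count are made-up values, not recommendations.

```python
# Hypothetical Spot-backed task instance group (boto3 run_job_flow shape).
spot_task_group = {
    "Name": "SpotTasks",
    "InstanceRole": "TASK",   # task role: no HDFS blocks on these nodes
    "InstanceType": "m5.xlarge",
    "InstanceCount": 8,
    "Market": "SPOT",         # bid on spare capacity instead of On-Demand
    "BidPrice": "0.10",       # illustrative max price, USD per hour
}

# If AWS reclaims these instances, the cluster loses compute but no
# stored data -- which is why Spot fits task nodes better than core nodes.
print(spot_task_group["Market"], spot_task_group["InstanceRole"])
```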

Crabbing answered 24/3, 2017 at 14:2

The reason Hadoop suggests running the DataNode and TaskTracker daemons on the same nodes is to keep processing power as close to the data as possible.

But rack-level optimization also comes into play in a multi-node cluster. In my view, AWS reduces the I/O overhead by provisioning task nodes in the same rack as the DataNodes.

And the reason to provide task nodes is that we often need more processing capacity over our data than storage capacity: we usually want more TaskTrackers than DataNodes. So AWS gives you a way to scale processing independently, using whole nodes that still benefit from rack-level optimization.

And the approach you describe for getting data into your cluster (S3 plus core nodes only) is a good option if you want good performance, but only on a transient cluster.

Stanza answered 26/2, 2020 at 8:18
This advantage is to a degree lost on AWS, as much of the data is stored on EBS volumes, which are not local. – Giffie
  • Traditional Hadoop assumes all your workload requires high I/O; with EMR you can choose the instance type based on your workload. For high-I/O needs (for example, up to 100 Gbps) go with C-type or R-type instances, and you can use placement groups. Keep your core-to-task node ratio at 1:5 or lower; this keeps I/O optimal, and if you want higher throughput, choose C or R instances for both core and task nodes.

  • The advantage of task nodes is that they can scale up and down faster, which minimizes compute cost. A traditional Hadoop cluster is hard to scale in either direction, since the worker nodes are also part of HDFS. Task nodes are optional, since core nodes can run map and reduce tasks themselves.

  • Core nodes take longer to scale up or down depending on the running tasks, hence the option of task nodes for quicker autoscaling.
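The 1:5 core-to-task heuristic above can be captured in a toy check. The function and its default threshold are just a restatement of this answer's rule of thumb, not anything from an AWS API:

```python
def core_task_ratio_ok(core_count, task_count, max_tasks_per_core=5):
    """Heuristic from the answer above: at most ~5 task nodes per core
    node, so the cores serving HDFS/S3 traffic don't become the bottleneck."""
    return task_count <= core_count * max_tasks_per_core

print(core_task_ratio_ok(2, 10))  # True: exactly 1:5
print(core_task_ratio_ok(2, 12))  # False: cores likely over-subscribed
```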

Reference: https://aws.amazon.com/blogs/big-data/best-practices-for-resizing-and-automatic-scaling-in-amazon-emr/

Selfinductance answered 15/1, 2020 at 22:41
