hadoop-yarn Questions
2
Questions about Spark memory overhead have been asked multiple times on SO, and I went through most of them. However, after going through multiple blogs, I got confused.
Below are the questions I have
whether ...
Shrewmouse asked 24/8, 2020 at 12:39
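For reference, the overhead in question is the off-heap allowance YARN adds on top of each executor's heap when sizing the container. Below is a minimal sketch of the relevant settings, assuming the Spark 2.3+ property name (older releases call it spark.yarn.executor.memoryOverhead) and purely illustrative sizes:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: the YARN container for an executor is roughly
//   spark.executor.memory + spark.executor.memoryOverhead
// where the overhead defaults to max(384 MB, 10% of executor memory).
// The 4g/1g values below are illustrative, not recommendations.
object MemoryOverheadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("memory-overhead-sketch")
      .config("spark.executor.memory", "4g")          // executor heap
      .config("spark.executor.memoryOverhead", "1g")  // off-heap allowance (Spark 2.3+ name)
      .getOrCreate()

    // ... job logic ...
    spark.stop()
  }
}
```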
4
Solved
I am running a Spark application (version 1.6.0) on a Hadoop cluster with Yarn (version 2.6.0) in client mode. I have a piece of code that runs a long computation, and I want to kill it if it takes...
Incorporeal asked 25/9, 2016 at 10:17
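One way to do this from the driver is to run the computation under a job group and cancel the group on timeout; setJobGroup/cancelJobGroup exist in Spark 1.x as well as newer versions. A sketch, with a placeholder computation and a hypothetical 30-minute limit:

```scala
import java.util.concurrent.{Callable, Executors, TimeUnit, TimeoutException}
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: submit the long computation from a worker thread under a job group,
// then cancel the group if it exceeds the timeout.
object TimedJobSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("timed-job-sketch"))
    val pool = Executors.newSingleThreadExecutor()

    val future = pool.submit(new Callable[Double] {
      override def call(): Double = {
        // interruptOnCancel = true lets Spark interrupt running tasks
        sc.setJobGroup("long-computation", "cancellable job", true)
        sc.parallelize(1 to 1000000).map(_.toDouble).sum()  // placeholder work
      }
    })

    try {
      println("Result: " + future.get(30, TimeUnit.MINUTES))  // hypothetical limit
    } catch {
      case _: TimeoutException => sc.cancelJobGroup("long-computation")
    } finally {
      pool.shutdown()
      sc.stop()
    }
  }
}
```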
4
In my case I am using ESLint on Next.js with eslint-config-next. I use Bun as a package manager, but I found instances of this issue on npm and Yarn as well.
I didn't add any new package but after r...
Saiff asked 11/11, 2023 at 7:0
2
Solved
I couldn't figure out what the difference is between the Spark driver and the application master. Basically, what are their responsibilities in running an application; who does what?
In client mode, client machine has...
Topography asked 16/9, 2020 at 6:54
4
I am evaluating YARN for a project. I am trying to get the simple distributed shell example to work. I have gotten the application to the SUBMITTED phase, but it never starts. This is the informati...
Essa asked 30/3, 2018 at 17:59
11
As per the installation instructions for Yarn v2, they want you to install using npm install -g yarn. So I ran sudo npm install -g yarn on Ubuntu 20.04. But after I do that, it says command not found.
❯ su...
Weft asked 27/1, 2021 at 19:33
8
I am trying to load a database with 1 TB of data into Spark on AWS using the latest EMR. The running time is so long that it doesn't finish even in 6 hours, and after running for 6h30m I get some err...
Andantino asked 2/7, 2016 at 0:39
4
I have a Hadoop cluster with 5 nodes, each of which has 12 cores and 32 GB of memory. I use YARN as the MapReduce framework, so I have the following settings for YARN:
yarn.nodemanager.resource.cpu-vco...
Collide asked 1/11, 2015 at 0:33
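As a worked example of how such settings translate into containers (the numbers here are assumptions, not the asker's actual values), YARN fits containers on a node by both memory and vcores and effectively takes the smaller count:

```scala
// Worked sketch with assumed numbers: how many containers fit on one node.
object ContainerMathSketch {
  def main(args: Array[String]): Unit = {
    val nodeMemoryMb    = 24 * 1024  // yarn.nodemanager.resource.memory-mb (leaving ~8 GB for OS/daemons)
    val nodeVcores      = 12         // yarn.nodemanager.resource.cpu-vcores
    val containerMb     = 4 * 1024   // memory requested per container
    val containerVcores = 2          // vcores requested per container

    val byMemory = nodeMemoryMb / containerMb    // 6
    val byCores  = nodeVcores / containerVcores  // 6
    println(s"containers per node = min($byMemory, $byCores) = ${math.min(byMemory, byCores)}")
  }
}
```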
3
Cannot find module 'nx/src/config/workspaces' on creating a React app on existing Nx project
I am getting the following error when I add the React app to the already existing Nx project on my system:
Require stack:
- C:\Users\HP\OneDrive\Documents\amagi-qtc\node_modules\@nrwl\devkit\index.j...
Asthenosphere asked 20/5, 2022 at 7:52
7
Solved
After reinstalling my Kubuntu 18 I tried to run my @vue/cli 4.0.5 / vuex 3 app
and got the error: error Missing list of packages to add to your project
serge@AtHome:/mnt/_work_sdb8/wwwroot/lar/VApps/v...
Jamila asked 30/9, 2020 at 6:8
4
I have Yarn (package manager) already installed on my machine, but I now have to install Apache Hadoop. When I tried doing that with brew install hadoop, I got the error -
Error: Cannot install ha...
Abb asked 1/3, 2019 at 1:20
1
At first, I read this article, which says spark.dynamicAllocation.maxExecutors will have value equal to num-executors if spark.dynamicAllocation.maxExecutors is not explicitly set. However, from th...
Bravissimo asked 7/7, 2020 at 5:28
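For reference, the Spark configuration docs list the default for spark.dynamicAllocation.maxExecutors as infinity when it is not set. A sketch of the related knobs with illustrative values, assuming the external shuffle service is enabled on the YARN NodeManagers:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: the dynamic-allocation settings in question, with illustrative values.
object DynamicAllocationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dynamic-allocation-sketch")
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.shuffle.service.enabled", "true")       // needed on YARN so shuffle data survives executor removal
      .config("spark.dynamicAllocation.minExecutors", "1")
      .config("spark.dynamicAllocation.initialExecutors", "2")
      .config("spark.dynamicAllocation.maxExecutors", "20")  // defaults to infinity if unset
      .getOrCreate()

    // ... job logic ...
    spark.stop()
  }
}
```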
2
I'm trying to execute a Spark job on an AWS cluster of 6 c4.2xlarge nodes, and I don't know why Spark is killing the executors...
Any help will be appreciated.
Here the spark submit command:
. /usr/...
Bartram asked 25/5, 2017 at 14:50
3
Solved
My Spark application reads 3 files of 7 MB, 40 MB, and 100 MB, performs many transformations, and stores output to multiple directories.
Spark version CDH1.5
MASTER_URL=yarn-cluster
NUM_EXECUTORS=15
EXECUTOR_MEMORY...
Danonorwegian asked 28/3, 2018 at 11:22
2
Solved
I'm new to Spark on YARN and don't understand the relation between the YARN Containers and the Spark Executors. I tried out the following configuration, based on the results of the yarn-utils.py sc...
Kokoschka asked 12/7, 2016 at 14:19
0
My issue
I am trying to view my Spark application's logs on the Spark History Server after they have completed, but I am seeing a "Failed redirect" page on the History UI. My Spark applications run...
Uigur asked 14/3, 2023 at 10:35
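One prerequisite worth checking (a sketch, not necessarily the cause of the redirect failure): the application must write event logs to a location the History Server also reads via spark.history.fs.logDirectory. The HDFS path below is a placeholder.

```scala
import org.apache.spark.sql.SparkSession

// Sketch: write event logs so completed applications can be rendered by the
// Spark History Server. The directory is an assumed placeholder and must match
// the history server's spark.history.fs.logDirectory.
object EventLogSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-log-sketch")
      .config("spark.eventLog.enabled", "true")
      .config("spark.eventLog.dir", "hdfs:///spark-logs")  // placeholder path
      .getOrCreate()

    // ... job logic ...
    spark.stop()
  }
}
```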
5
Solved
I am getting:
Application application_1427711869990_0001 failed 2 times due to AM Container for appattempt_1427711869990_0001_000002 exited with exitCode: -1000 due to: Not able to initialize user...
Unconventionality asked 1/4, 2015 at 17:52
5
Solved
I run my Spark application on a YARN cluster. In my code I use the number of available cores of the queue to create partitions on my dataset:
Dataset ds = ...
ds.coalesce(config.getNumberOfCores());
My qu...
Dialectal asked 20/11, 2017 at 18:50
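One alternative to passing the queue's core count through application config is to ask the running context for its parallelism; a sketch assuming defaultParallelism (roughly the total executor cores once executors have registered) is an acceptable partition count:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: size partitions from what the application actually received on YARN
// rather than a hardcoded queue capacity.
object CoalesceByCoresSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("coalesce-by-cores-sketch").getOrCreate()
    import spark.implicits._

    val ds = (1 to 100000).toDS()  // placeholder dataset
    val parallelism = spark.sparkContext.defaultParallelism
    val compacted = ds.coalesce(parallelism)
    println(s"partitions = ${compacted.rdd.getNumPartitions}")
    spark.stop()
  }
}
```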
5
Solved
I am a Spark/YARN newbie and ran into exitCode=13 when I submitted a Spark job to the YARN cluster. When the Spark job runs in local mode, everything is fine.
The command I used is:
/usr/hdp/current/...
Bowing asked 10/4, 2016 at 20:50
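A frequently reported cause of exitCode 13 (offered here as a sketch, not a diagnosis of this case) is a master hardcoded in the application, e.g. setMaster("local"), conflicting with --master yarn on spark-submit; leaving the master to spark-submit avoids that particular clash.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: no setMaster() in code; the master comes from spark-submit
// (--master yarn --deploy-mode cluster, for example).
object SubmitFriendlyApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("submit-friendly-app")
    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 10).sum())
    sc.stop()
  }
}
```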
9
I'm trying to understand the relationship of the number of cores and the number of executors when running a Spark job on YARN.
The test environment is as follows:
Number of data nodes: 3
Data no...
Fictionalize asked 8/7, 2014 at 0:46
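For reference, the three knobs this question relates are executor count, cores per executor, and memory per executor; a sketch with illustrative values (total concurrent task slots ≈ instances × cores):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch with illustrative values: 6 executors × 4 cores = up to 24 tasks in parallel.
object CoresVsExecutorsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("cores-vs-executors-sketch")
      .set("spark.executor.instances", "6")  // --num-executors
      .set("spark.executor.cores", "4")      // --executor-cores
      .set("spark.executor.memory", "8g")    // --executor-memory
    val sc = new SparkContext(conf)

    // ... job logic ...
    sc.stop()
  }
}
```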
1
I'm getting this weird exception. I'm using Spark 1.6.0 on Hadoop 2.6.4 and submitting Spark job on YARN cluster.
16/07/23 20:05:21 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exceptio...
Salford asked 24/7, 2016 at 3:8
4
Solved
I am new to Apache Spark, and I just learned that Spark supports three types of cluster:
Standalone - meaning Spark will manage its own cluster
YARN - using Hadoop's YARN resource manager
Me...
Syck asked 22/2, 2015 at 23:44
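For reference, the cluster manager is selected by the master URL, usually supplied via spark-submit --master rather than hardcoded; a sketch with placeholder host names:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: typical master URLs (host names are placeholders).
//   "local[*]"           everything in one JVM, no cluster manager
//   "spark://host:7077"  Spark standalone manager
//   "yarn"               Hadoop YARN (cluster found via HADOOP_CONF_DIR)
//   "mesos://host:5050"  Apache Mesos
object MasterUrlSketch {
  def main(args: Array[String]): Unit = {
    val master = sys.env.getOrElse("SPARK_MASTER", "local[*]")  // hypothetical env override
    val spark = SparkSession.builder().appName("master-url-sketch").master(master).getOrCreate()
    println(s"running against: ${spark.sparkContext.master}")
    spark.stop()
  }
}
```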
5
Solved
I have enabled logs in the yarn-site.xml file, and I restarted YARN by running:
sudo service hadoop-yarn-resourcemanager restart
sudo service hadoop-yarn-nodemanager restart
I ran my applicat...
Goby asked 9/3, 2017 at 2:46
4
Solved
The spark docs have the following paragraph that describes the difference between yarn client and yarn cluster:
There are two deploy modes that can be used to launch Spark applications on YARN. ...
Collaborationist asked 13/12, 2016 at 15:11
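A small probe (a sketch; the deployMode accessor is available on SparkContext in Spark 2.x) makes the difference visible: in yarn-client mode the driver host is the submitting machine, while in yarn-cluster mode it is the node running the ApplicationMaster container.

```scala
import java.net.InetAddress
import org.apache.spark.sql.SparkSession

// Sketch: print where the driver actually runs. Deploy mode itself is chosen
// with spark-submit --deploy-mode {client|cluster}.
object DeployModeProbe {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("deploy-mode-probe").getOrCreate()
    val driverHost = InetAddress.getLocalHost.getHostName
    println(s"deployMode = ${spark.sparkContext.deployMode}, driver running on $driverHost")
    spark.stop()
  }
}
```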
3
Solved
On a 3-node Spark/Hadoop cluster, which scheduler (manager) will work efficiently?
Currently I am using the standalone manager, but for each Spark job I have to explicitly specify all resource parameters (e...
Exempt asked 4/8, 2015 at 9:58