#StackBounty: #apache-spark #yarn Does reducing the number of executor-cores consume less executor-memory?

Bounty: 50

My Spark job failed with the YARN error `Container killed by YARN for exceeding memory limits. 10.0 GB of 10 GB physical memory used`.
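For context, the 10 GB limit YARN enforces is the container size, which covers the executor's JVM heap (`spark.executor.memory`) plus an off-heap memory overhead. A minimal sketch of that arithmetic, assuming the default overhead formula from recent Spark-on-YARN versions (the larger of 384 MB or 10% of executor memory); the 9g heap is a hypothetical value, since the question does not state the exact setting that produced a 10 GB container:

```scala
// Hedged sketch: how YARN's container limit relates to executor memory.
// Assumes the documented default for the memory-overhead setting.
val executorMemoryMB = 9 * 1024                            // hypothetical 9g heap
val overheadMB = math.max(384, (executorMemoryMB * 0.10).toInt)
val containerLimitMB = executorMemoryMB + overheadMB       // what YARN enforces
println(s"container limit ~= $containerLimitMB MB")        // roughly 10 GB here
```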

On a hunch, I decreased the number of executor cores from 5 to 1, and the job ran successfully.

I did not increase `executor-memory`, because 10g was already the maximum my YARN cluster allows.

I just want to confirm my intuition: does reducing the number of executor cores consume less executor memory? If so, why?
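For reference, the two settings in question can be set when the session is built (they take effect when the SparkContext is created). A minimal Scala sketch assuming the values described above; the app name is hypothetical, and the config keys are standard Spark settings:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of the setup described in the question:
// executor memory pinned at the cluster maximum, cores reduced from 5 to 1.
val spark = SparkSession.builder()
  .appName("reduced-cores-job")            // hypothetical app name
  .config("spark.executor.memory", "10g")  // the cluster maximum here
  .config("spark.executor.cores", "1")     // reduced from 5
  .getOrCreate()
```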

