My Spark job failed with the YARN error:
Container killed by YARN for exceeding memory limits. 10.0 GB of 10 GB physical memory used.
Intuitively, I decreased the number of executor cores to 1, and the job ran successfully.
I did not increase the executor memory, since 10g was the maximum for my YARN cluster.
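For context, here is a sketch of the kind of spark-submit invocation involved (the application jar and the original core count are illustrative, not my exact values):

```shell
# Before: several cores per executor -> container killed by YARN
# spark-submit --executor-memory 10g --executor-cores 5 app.jar

# After: one core per executor -> job completed
spark-submit \
  --executor-memory 10g \
  --executor-cores 1 \
  app.jar
```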
I just wanted to confirm my intuition: does reducing the number of
executor-cores consume less
executor-memory? If so, why?