[Apache Spark] YARN preemption and fair schedulers

When running Spark 1.6 on YARN clusters, I ran into problems when YARN preempted Spark containers and the Spark job subsequently failed. This only happens occasionally, when YARN uses the fair scheduler and another queue with higher priority submits a job. After some research I found the solution: dynamic allocation.

Accessing preempted containers

As I understand it, each Spark container manages its temporary shuffle files on its own unless an external shuffle service is used. The external shuffle service is also required when dynamic allocation is active, which is essentially automatic resource management on the cluster. When there is no external shuffle service and a YARN container used by Spark is preempted, you can see something like this in the logs:

As you can see, the container is preempted first and for a while everything seems fine, until another node tries to access data on the preempted container, which leads to an IOException. Eventually the job fails.
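For the external shuffle service to be available at all, each NodeManager has to run it as an auxiliary service. A sketch of the yarn-site.xml entries, assuming the Spark YARN shuffle jar (spark-<version>-yarn-shuffle.jar) has been placed on the NodeManager classpath:

```xml
<!-- yarn-site.xml on every NodeManager -->
<property>
  <!-- register the Spark shuffle service next to the default MapReduce one -->
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <!-- class implementing the Spark external shuffle service -->
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```

After changing this, the NodeManagers have to be restarted so the shuffle service is picked up.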

Dynamic allocation

When Spark uses an external shuffle service, the service keeps control of all temporary shuffle files, so containers can be removed safely without running into these errors. How to enable dynamic allocation is described in the Spark documentation.
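A minimal sketch of the relevant settings, e.g. in spark-defaults.conf (the executor bounds are just example values, pick them to match your queue):

```properties
# tell Spark to use the external shuffle service running on the NodeManagers
spark.shuffle.service.enabled       true

# let Spark request and release executors on its own
spark.dynamicAllocation.enabled     true

# example bounds for the number of executors (assumption, tune per cluster)
spark.dynamicAllocation.minExecutors   1
spark.dynamicAllocation.maxExecutors   20
```

The same properties can also be passed per job via --conf on spark-submit.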

Here is some further information, and it seems like Spark 2.0 is not affected by this issue, so it may be an option to wait for the Spark 2.0 release and fix the problem with the upgrade.
