Accessing Cloud Data

Cleaning up After Failed Jobs

The S3A committers upload data in the tasks, completing the uploads only when the job is committed. If a job fails or is aborted, its uploads are left outstanding in an incomplete “pending” state.

Amazon still bills you for all data held in this “pending” state. The hadoop s3guard uploads command can be used to list and cancel such uploads. However, it is simplest to automate cleanup with a bucket lifecycle rule.
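For manual cleanup, the command can be invoked as below; the bucket name and path are placeholders to replace with your own job destination:

```shell
# List any incomplete multipart uploads under the destination path
# ("s3a://mybucket/output" is a placeholder).
hadoop s3guard uploads -list s3a://mybucket/output

# Abort uploads more than one day old; -force skips the confirmation prompt.
hadoop s3guard uploads -abort -days 1 -force s3a://mybucket/output
```

Scoping the command to the job's destination path avoids cancelling pending uploads belonging to jobs still running elsewhere in the bucket.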

  1. Go to the AWS S3 console.

  2. Find the bucket you are using as the destination of your work.

  3. Select the “management” tab.

  4. Select “add a new lifecycle rule”.

  5. Create a rule “cleanup uploads” with no filter and without any “transitions”. Configure an “Expiration” action of “Clean up incomplete multipart uploads”.

  6. Select a time limit for outstanding uploads, such as 1 Day.

  7. Review and confirm the lifecycle rule.
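The console steps above can equivalently be expressed as an S3 lifecycle configuration and applied with the AWS CLI. This is a sketch; the rule ID matches the name suggested above, and the bucket name in the command is a placeholder:

```json
{
  "Rules": [
    {
      "ID": "cleanup uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 1
      }
    }
  ]
}
```

Saved as lifecycle.json, this can be applied with aws s3api put-bucket-lifecycle-configuration --bucket MY-BUCKET --lifecycle-configuration file://lifecycle.json, where MY-BUCKET is your destination bucket. The empty “Filter” applies the rule to the whole bucket, matching the “no filter” console setting.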

You need to select a limit on how long uploads may remain outstanding. For Hadoop applications, this limit must exceed both the maximum time an application may spend writing to a single file and the maximum time a job may take to complete. If the timeout is shorter than either of these, jobs are likely to fail.

Once the rule is set, the cleanup is automatic.