Desired Skills and Experience
- Manage multiple large-scale Hadoop cluster environments, handling all Hadoop environment builds, including design, capacity planning, cluster setup, performance tuning, and ongoing monitoring and alerting.
- Contribute to the evolving architecture of our storage service to meet changing requirements for scaling, reliability, performance, manageability, and price.
- Ensure our testing capabilities protect our customers from regressions in a rapidly changing infrastructure.
- Plan capacity for and implement new and upgraded hardware and software releases, including the storage infrastructure; a back-of-the-envelope capacity sketch follows this list.
- 3+ years of professional experience supporting medium- to large-scale production Linux environments.
- 1+ years of experience working with Hadoop (Apache, CDH, or Hortonworks) and the related technology stack.
- Experience setting up and running production clusters.
- Experience proactively monitoring and fine-tuning clusters; see the monitoring sketch after this list.
- A deep understanding of Hadoop design principles, cluster connectivity, security, and the factors that affect distributed system performance.
- Solid understanding of configuration/state management tools (Puppet, Chef, Ansible).
- Expert experience with at least one of the following languages: Python, Perl, Ruby, or Bash.
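
A back-of-the-envelope sketch of the capacity planning mentioned above, in Python: it estimates the raw disk needed to hold a usable-data target on HDFS. The 3x replication factor, 25% free-space headroom, and 20% temp-space allowance are illustrative assumptions, not figures from this posting.

    # Rough HDFS capacity planning: raw disk needed for a usable-data target.
    # Replication, headroom, and temp-space ratios are assumed, not given here.
    def raw_capacity_tb(usable_tb, replication=3, headroom=0.25, temp_space=0.20):
        # Add space for intermediate/temp data, multiply by the replication
        # factor, then reserve free headroom for growth and rebalancing.
        with_temp = usable_tb * (1 + temp_space)
        return with_temp * replication / (1 - headroom)

    if __name__ == "__main__":
        # 100 TB usable -> 100 * 1.2 * 3 / 0.75 = 480 TB raw
        print(f"{raw_capacity_tb(100):.0f} TB raw for 100 TB usable")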
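
And a minimal sketch of the proactive monitoring bullet: poll the NameNode's JMX JSON servlet and flag high HDFS usage. It assumes a Hadoop 3.x NameNode (web port 9870; 2.x uses 50070); the hostname and 80% alert threshold are hypothetical. In practice the result would feed a monitoring/alerting system rather than print.

    # Poll the NameNode JMX servlet (JSON) and alert on HDFS fill level.
    # namenode.example.com and the 80% threshold are assumptions.
    import requests

    NAMENODE_JMX = "http://namenode.example.com:9870/jmx"  # 50070 on Hadoop 2.x
    USED_PCT_ALERT = 80.0

    def check_hdfs_capacity():
        # The FSNamesystem bean reports CapacityTotal/CapacityUsed in bytes.
        resp = requests.get(
            NAMENODE_JMX,
            params={"qry": "Hadoop:service=NameNode,name=FSNamesystem"},
            timeout=10,
        )
        resp.raise_for_status()
        bean = resp.json()["beans"][0]
        used_pct = 100.0 * bean["CapacityUsed"] / bean["CapacityTotal"]
        status = "ALERT" if used_pct >= USED_PCT_ALERT else "OK"
        print(f"{status}: HDFS is {used_pct:.1f}% full")  # hook paging in here

    if __name__ == "__main__":
        check_hdfs_capacity()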