Desired Skills and Experience

  • Working closely with the other technical teams within the Data and Infrastructure tribes, you will be part of the team responsible for making step changes to the Data Tribe’s live Hadoop cluster, which supports real-time and offline reporting tools.
  • You’ll also be working with our team to manage and enhance our build, repository, job-scheduling and monitoring platforms, and to create tooling that allows development teams to run their own environments.
  • Availability, stability and performance are key requirements of our systems, so you’ll be able to troubleshoot issues at the application, cluster and OS level.
  • You’ll have a passion for big data technologies and systems, keep up to date with the latest open source technology and best practice, and suggest areas where new tools or methods can help the business achieve its goals.
  • You’ll have worked in an Agile, autonomous team, handling both day-to-day troubleshooting and automation projects.
  • You’ll have strong RHEL administration skills, plus experience of Docker, AWS and CI tools such as Jenkins and Nexus. We expect you to have set up and used application and server performance monitoring tools such as Graphite and Nagios/Opsview in anger.
  • You’ll have strong scripting skills in Chef and Python, and will have worked with a formal SDLC backed by Git version control.
  • We’re happy to train you in the administration of the Hadoop toolset (e.g. HDFS, Hive, HBase, Sqoop, YARN, Spark) and data streaming technologies such as Kafka, but if you already have experience with those tools, that’s fantastic.