Desired Skills and Experience

  • Implement and configure specific solutions as part of the DevOps and Automation Strategy
  • Keep the DevOps process up to date and aligned with the Big Data delivery portfolio and Operating Model
  • Work proactively with the Data Engineering/QA and other delivery teams as one squad to ensure DevOps practices are in place at each stage of the delivery lifecycle
  • Contribute, together with the central Data Engineering/QA/Data Science team, to defining best practices for the agile development of applications running on the Big Data Platform within an efficient CI/CD pipeline
  • Proven experience designing, building and managing applications that process large amounts of data in a Hadoop ecosystem
  • Proven experience with performance-tuning applications on Hadoop and configuring Hadoop systems to maximise performance
  • Proven experience setting up and running the development lifecycle for agile software development projects in a CI/CD pipeline
  • Experience in deployment automation (e.g. Ansible), test automation, cloud implementations and/or containers (e.g. Docker, Kubernetes, OpenShift)
  • Consistent experience working in teams, particularly in a Scrum/Agile, “multi-skill” squad and in a multicultural environment
  • Strong ability in configuring/programming automation (YAML, Ansible playbooks and scripting)
  • Deep knowledge of CI/CD tools such as Jenkins, Go, Maven and GitHub
  • Some experience with cloud implementations and/or the use of virtual machines/containers (e.g. Docker, Kubernetes)
  • Full understanding of, and consistent experience with, the entire software delivery lifecycle and agile methodology
  • Experience using Spark, YARN, Hive and Oozie

Apply