As part of a broader team of engineers and administrators, you will have front-to-back engineering and management ownership of tools and infrastructure used across the company. While this role has a Big Data focus, you will also work regularly with Ansible, Jenkins, AWS, Linux administration, and more.

You will be responsible for keeping systems scaled, patched, and available according to our operational and security requirements, while participating in an on-call roster.

Engineering ad-hoc tools and systems to automate all routine tasks will be a key duty. You will collaborate with developers to ensure that application components are compatible with, and optimised for, our production environment, and maintain proper workflow tracking (Jira) and documentation (Confluence).

Desired Skills and Experience

  • Understanding of cloud computing platforms (e.g. AWS, Google App Engine)
  • Multi-data-center cluster setup, management, and troubleshooting of distributed technologies
  • Good knowledge of DevOps tools such as Chef, Jenkins, Ansible, Puppet, and Docker
  • Experience deploying, managing, and monitoring Hadoop ecosystems (HDFS, YARN, Spark, Cassandra, …)
  • Experience with Hadoop platform security and Hadoop data governance
  • Working knowledge of cluster management and monitoring tools (YARN, Mesos, Myriad, Ambari, Ankush, Cloudera Manager)
  • Deep understanding of how to design high-performance data models for multiple NoSQL data stores (file stores, wide-column databases, key-value stores, etc.)
  • Hands-on experience with Drill, Hive, Impala, Presto, and similar tools for SQL-like exploration of large-scale data sets

Perks and Benefits

  • Café and restaurant
  • Air-conditioned cinema
  • Free breakfast and lunch buffets, snacks and ice-cold drinks
  • Sports club
  • Fitness studio
  • Many extra perks and benefits