Desired Skills and Experience

Responsibilities

  • Maintenance and optimization of the Hadoop infrastructure; contact person for standard IT
  • Administration and support of the various components of the Hadoop stack (Pig, Spark, Impala, HBase, Hive, etc.)
  • Support of testing and commissioning
  • Design, implementation and maintenance of the internal Hadoop cluster
  • Preparation and support of operational concepts for uninterrupted service of the cluster
  • Implementation of security functions in the cluster
  • Connecting the cluster to AWS for further scaling
  • Developing and maintaining a stand-alone cloud-based processing chain

Requirements

  • Experience with Linux systems (preferably Debian, CentOS or other Debian-based distributions), ideally including shell/Bash scripting
  • Experience with standard database systems
  • Basic understanding of Big Data and related leading-edge technologies, including cloud computing and Apache Hadoop platforms (e.g. Cloudera, Hortonworks, Amazon EMR) and tools (Hive, HBase, Pig)
  • Knowledge of scripting languages for automation (Perl, Python, PowerShell, VB)
  • Experience with HP server operation and management tools such as iLO
  • Analytical thinking in complex IT environments
  • Solution-oriented way of working in a team
  • Good command of English; German is an advantage

What We Offer

  • An interesting, varied and challenging working environment
  • Personal and professional development opportunities from the start
  • Continuous coaching by experienced consultants and participation in professional training