DataOps SRE

With Apple in Hyderabad - IN

Posted on April 11, 2020

About this job

Job type: Full-time
Role: System Administrator
Industry: Consumer Electronics
Company size: 10k+ people
Company type: Public

Technologies

apache-spark, hadoop, java

Job description

The Ad Platforms SRE Team is seeking a Data SRE with a background in large-scale data infrastructure engineering across public and private data centres. You will use a modern, distributed data technology stack to automate solutions and optimise outcomes, focusing on data infrastructure engineering at massive scale. At Apple, we work every single day to build products that enrich people’s lives. Our Advertising Platforms group makes it possible for people around the world to easily access informative and visionary content on their devices while helping publishers and developers promote and monetize their work. Our technology and services power advertising in Apple News. Our platforms are highly performant, deployed at scale, and setting new standards for enabling effective advertising while protecting user privacy. The people here at Apple don’t just build products; they build the kind of wonder that’s revolutionized entire industries. It’s the diversity of those people and their ideas that encourages the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it. Imagine what you could do here.

  • Design, deploy, scale, and maintain our Big Data infrastructure
  • Apply in-depth knowledge of capacity planning and troubleshooting for HDFS, YARN/MapReduce, HBase, etc.
  • Manage data in Spark and Hadoop environments using scripts and automation
  • Automate, deploy, and operate data pipelines and facilities to monitor all aspects of the data pipeline
  • Implement SQL queries in Presto, Hive, and Spark SQL
  • Communicate and address build, deployment, and operational issues as they come up
  • Collaborate with and support a variety of different teams (engineering, quality, management, SRE)
  • Participate in an on-call schedule
  • Work simultaneously on multiple projects competing for your time and understand how to prioritize accordingly

Skills & requirements

  • Sound knowledge of OS and TCP/IP network fundamentals
  • 5+ years of expertise in designing and engineering large (500+ node) clusters and their ecosystem (Hive, Pig, Spark, HDFS, HBase, Airflow, Oozie, Sqoop, Flume, ZooKeeper, etc.)
  • Experience in running Big Data infrastructure on premises and on AWS
  • Ability to code well in at least one language (Shell, Ruby, Python, Java, Perl)
  • Experience with Presto, Impala, and Spark preferred.
  • Good knowledge of Impala, Kafka, Spark Streaming, and Cassandra.
  • Cloudera Certified Administrator for Apache Hadoop (CCAH) a plus.
  • Good work attitude and tenacious troubleshooting skills desired.
  • Multi-datacenter deployment experience a plus.

Bachelor's degree in Computer Science or equivalent is required. Master's degree preferred.

    • Excellent verbal and written communication skills.
    • Ability to perform analysis, define new processes and drive technology initiatives and projects.
    • Experience in developing and executing a well-defined approach to implementing change in a global environment.

Apply here