Desired Skills and Experience
- You will build, administer and scale data pipelines that process hundreds of billions of messages a day across multiple data centres
- You will be comfortable navigating the following technology stack: Java/Scala, Golang, .NET, .NET Core, Kafka, scripting (Bash/Python), Hadoop, Elasticsearch
- You will develop and expand upon existing frameworks that are used by teams throughout Agoda to produce messages to the data pipeline
- You will build and manage data ingestion into multiple systems (Hadoop, Elasticsearch, other distributed systems)
- You will build tools that monitor the data pipeline against strict data-accuracy SLAs
- You will fix production problems
- You will profile systems for performance, self-recovery and stability
- You will collaborate with other teams and departments
- You will automate system tasks via code as needed
- You will explore available new technologies that improve upon our quality of data, processes and data flow
- You will develop quality software through design review, code reviews and test driven development
- B.Sc. in Computer Science / Information Systems / Computer Engineering or related field
- You have two or more years of industry experience, preferably at a tech company
- A passion for Big Data (petabytes' worth)
- Good knowledge of data architecture principles
- You have operational experience debugging production issues
- An experienced coder who can stand your ground, with a track record of building purposeful systems that are flexible, well-tested, maintainable and scalable
- You're detail-oriented, considering every outcome of a particular decision
- You have no issues being on-call and working at odd hours as needed
- You can communicate fluently in technical English, both verbally and in writing
- Good understanding of how Kafka works
- Kafka administration experience
- Understands concepts relating to Schema Registry and schema evolution
- Experience working with serialization formats such as Protocol Buffers, Avro or Thrift
- Proficient with Elasticsearch
- Experience with data ingestion from Kafka into Hadoop, Elasticsearch and other distributed systems
- Strong systems administration skills in Linux
- Worked on or contributed to an open source project
Apply