Senior Python Data Engineer

With JPMorgan Chase & Co. in Jersey City, NJ, US


Posted on January 13, 2021

About this job

Job type: Full-time
Role: System Administrator
Industry: Financial Services
Company size: 10k+ people
Company type: Public

Technologies

python, apache, continuous-integration

Job description

In our Software Engineering Group we look first and foremost for experienced people who are passionate about solving business problems through innovation and engineering practices. You will apply your depth of knowledge and expertise to all aspects of the software development lifecycle and partner daily with your many stakeholders to stay focused on common goals. We embrace a culture of experimentation and constantly strive for improvement and learning. You'll work in a collaborative, trusting, thought-provoking environment, one that encourages diversity of thought and creative solutions that are in the best interests of our customers globally.

J.P. Morgan Chase is a global institution that prides itself on the power of scale, offering first-class financial products and services to its clients (and its clients' clients) across the spectrum of consumer, commercial and institutional needs. For the Corporate & Investment Bank (CIB) specifically, clients include hedge funds, governments, institutional investors and corporations around the world, each made up of individuals who interact with our employees in large volumes by phone, email and digital chat every day.

The Client Intelligence team's mission is to leverage those large datasets of communications to power cutting-edge applications and analytical capabilities within the CIB. As a Senior Python Data Engineer, you will work at the intersection of business analytics, data warehousing and software engineering, and will be responsible for building the foundational data layer critical to the Client Intelligence platform's success. You'll take the lead on relevant projects, backed by an organization that provides the support and mentorship you need to learn and grow.
Responsibilities
• Design and architect the next-generation data pipelines, data lake and data warehouse for the Client Intelligence team. Build and communicate a technical vision to the team and its stakeholders

• Build large-scale batch, ETL and real-time data pipelines using cloud and on-premises data technologies such as Redshift, Athena, DBT, Python, Apache Airflow and Apache Kafka (a minimal, illustrative pipeline sketch follows this list)

• Define best practices for data processing, data modeling and warehouse development across our team and group

• Develop the vision and map the strategy to provide proactive solutions and enable stakeholders to extract insights and value from data

• Understand end-to-end data interactions and dependencies across complex data pipelines and transformations, and how they impact business decisions

• Coach and mentor team members as applicable
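
For illustration only, the sketch below shows what a minimal batch ETL pipeline of the kind described above might look like as an Apache Airflow DAG. It assumes Airflow 2.x, and the DAG id, task names and schedule are hypothetical placeholders, not part of the role description.

```python
# Hypothetical sketch of a daily batch ETL pipeline as an Airflow 2.x DAG.
# All identifiers (dag_id, task names, schedule) are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw communications data from a source system (placeholder).
    pass


def transform():
    # Clean and model the raw data for the warehouse (placeholder).
    pass


def load():
    # Load the modeled data into the warehouse, e.g. Redshift (placeholder).
    pass


with DAG(
    dag_id="client_intelligence_daily_etl",  # hypothetical name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in sequence: extract, then transform, then load.
    extract_task >> transform_task >> load_task
```
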
Qualifications
Preferred qualifications

• Expertise in data warehouse / data lake architectures such as Redshift, Snowflake, BigQuery, Impala, Presto and Athena

• Experience with workflow orchestration tools such as Apache Airflow

• Knowledge of data transformation and collection tools such as DBT or Fivetran

• Hands-on experience with stream processing platforms such as Kafka, Kinesis, Flink, Beam, Dataflow

• Advanced knowledge of columnar and serialization data formats such as JSON, XML, Arrow, Parquet, Protobuf, Thrift and Avro (see the brief sketch after this list)

• Strong experience with container technologies such as Docker and Kubernetes

• Experience writing infrastructure as code with Terraform

• Experience with CI/CD systems (e.g. Jenkins) and with automation and DevOps best practices

• Advanced knowledge of the AWS ecosystem, including S3, Glue, Redshift, Athena, Kinesis, MSK, IAM, Batch, ECS, EKS, etc.
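
As a brief aside on the serialization formats mentioned above, the sketch below uses pyarrow (an assumption on our part, not a stated requirement of the role) to write and read a columnar Parquet file; the column names and values are invented for illustration.

```python
# Hypothetical sketch: columnar serialization with Apache Arrow / Parquet via pyarrow.
# Column names and values are illustrative only.
import pyarrow as pa
import pyarrow.parquet as pq

# Build an in-memory Arrow table from Python columns.
table = pa.table({
    "client_id": [1, 2, 3],
    "channel": ["phone", "email", "chat"],
    "message_count": [120, 45, 300],
})

# Write the table to a Parquet file (columnar storage, compressed by default).
pq.write_table(table, "interactions.parquet")

# Read it back; the schema and column types are preserved.
restored = pq.read_table("interactions.parquet")
print(restored.schema)
```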

Minimum requirements

• BS/BA degree or equivalent experience in computer science or engineering

• Significant hands-on experience in building a data warehouse / data lake and data pipelines

• Expert-level skills in SQL, data integration, data modeling and data architecture

• Expert-level skills in Python, its standard library and its package ecosystem, including Pytest, Tox, Pandas, Requests, Pylint, Boto3 and Jinja (a short example follows)
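
To make the Python ecosystem expectation concrete, here is a small, purely hypothetical example of the kind of work implied: a Pandas transformation covered by a Pytest unit test. The function name and columns are invented for illustration.

```python
# Hypothetical sketch: a small Pandas transformation with a Pytest test.
# Function and column names are invented for illustration.
import pandas as pd


def normalize_channels(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with a missing client id and lower-case the channel column."""
    out = df.dropna(subset=["client_id"]).copy()
    out["channel"] = out["channel"].str.lower()
    return out


def test_normalize_channels():
    raw = pd.DataFrame({
        "client_id": [1, None, 3],
        "channel": ["Phone", "Email", "CHAT"],
    })
    result = normalize_channels(raw)
    # The row with a missing client id is dropped; channels are lower-cased.
    assert list(result["channel"]) == ["phone", "chat"]
    assert result["client_id"].notna().all()
```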

Soft skills

• Leadership and ability to influence the team's direction

• Mentoring: help your junior teammates achieve their goals and grow

• Curiosity, creativity, resourcefulness and a collaborative spirit

• Clear and effective verbal and written communication skills

• Demonstrated ability to work on multi-disciplinary teams with diverse backgrounds

• Interest in problems related to the financial services domain (specific past experience in the domain is not required)

JPMorgan Chase & Co., one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. In accordance with applicable law, we make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as any mental health or physical disability needs.

Equal Opportunity Employer/Disability/Veterans

Apply here