Thursday 30 August 2018

Need Sr Data Engineer with Hadoop, Workflow

My client is looking for a Senior Data Engineer in the Nashville location.

 

Job Title: Senior Data Engineer

Job Location: Nashville, Tennessee

Rate: Open

Experience: 9+ Years

No OPT. Only H1B/GC candidates should apply.

Need H1B copy, DL, and passport number for client submission.

 

Please send your resume with contact details and rate.

 

Experience & Skills Required:

1.    Extensive experience with the Hadoop (or similar) ecosystem (MapReduce, YARN, HDFS, Hive, Spark, Presto, Pig, HBase, Parquet)

2.    Proficient in at least one SQL dialect (MySQL, PostgreSQL, SQL Server, Oracle)

3.    Good understanding of SQL engines and the ability to conduct advanced performance tuning

4.    Strong skills in a scripting language (Python, Ruby, Perl, Bash)

5.    Experience with workflow management tools (Airflow, Oozie, Azkaban, UC4) — see the sketch after this list

6.    Comfortable working directly with data analysts to bridge business requirements and data engineering
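
To illustrate item 5, here is a minimal workflow sketch, assuming Airflow 2.x; the DAG name, schedule, and extract/load callables are hypothetical placeholders, not taken from this posting.

```python
# Minimal Airflow DAG sketch (assumes Airflow 2.x). All names here
# (daily_events_pipeline, extract_events, load_to_hive) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events():
    # Placeholder: pull a day's raw events from a hypothetical source.
    print("extracting events")


def load_to_hive():
    # Placeholder: write the extracted batch to a hypothetical Hive table.
    print("loading into Hive")


with DAG(
    dag_id="daily_events_pipeline",  # hypothetical pipeline name
    start_date=datetime(2018, 8, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_events)
    load = PythonOperator(task_id="load", python_callable=load_to_hive)

    extract >> load  # extract must finish before load starts
```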

 

Job Description:

As a Data Engineer, you will be part of an early-stage team that builds the data transport, collection, and storage layers and exposes services that make data a first-class citizen.

We are looking for a Data Engineer who is passionate about and motivated to make an impact in creating a robust, scalable data platform.

In this role, you will own the company's core data pipeline that powers the client's top-line metrics. You will also leverage your data expertise to help evolve data models in various components of the data stack, and you will architect, build, and launch highly scalable, reliable data pipelines to support the client's growing data processing and analytics needs. Your efforts will unlock business and user-behavior insights, leveraging huge amounts of client data to fuel teams such as Analytics, Data Science, Marketplace, and many others.

 

Responsibilities:

1.    Own the core company data pipeline, scaling up the data processing flow to meet rapid data growth.

2.    Consistently evolve the data model and data schema based on business and engineering needs.

3.    Implement systems that track data quality and consistency (see the sketch after this list).

4.    Develop tools that support self-service data pipeline (ETL) management.

5.    Tune SQL and MapReduce jobs to improve data processing performance.
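
For responsibility 3, here is a minimal data-quality check sketch in PySpark; the table and column names (warehouse.events, event_id, user_id) are hypothetical assumptions, not from this posting.

```python
# Minimal PySpark data-quality sketch; table/column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

events = spark.table("warehouse.events")  # hypothetical Hive table

# Count rows and nulls in required columns in a single pass.
report = events.agg(
    F.count("*").alias("row_count"),
    F.sum(F.col("event_id").isNull().cast("int")).alias("null_event_ids"),
    F.sum(F.col("user_id").isNull().cast("int")).alias("null_user_ids"),
).collect()[0]

# Count primary keys that appear more than once.
duplicates = (
    events.groupBy("event_id").count().filter(F.col("count") > 1).count()
)

# Fail the run loudly if an invariant is violated, so the scheduler
# surfaces the problem instead of propagating bad data downstream.
assert report["null_event_ids"] == 0, "event_id must never be null"
assert duplicates == 0, "event_id must be unique"
print(f"checked {report['row_count']} rows: OK")
```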

 

David

Work: 678 899 6878

david@ittstar.com

 
