Thursday 31 May 2018

Job requirement for Big Data Engineer

Job Title: Big Data Engineer

Primary Skills: Spark, Hadoop Ecosystem, ETL, Python/Scala/Java

Location: Santa Clara, CA

Duration: 12+ months

Job Description:

·         At least 3 years of experience working with the Hadoop ecosystem and big data technologies

·         Build data pipelines and ETL from heterogeneous sources into Hadoop using Kafka, Flume, Sqoop, Spark Streaming, etc. (see the sketch after this list)

·         Experience with batch processing and real-time data streaming

·         Ability to quickly adapt to new big data frameworks and open-source tools as the project demands

·         Knowledge of design strategies for developing a scalable, resilient, always-on data lake

·         Experience with Agile (Scrum) development methodology

·         Strong development/automation skills

·         Must be very comfortable reading and writing Python, Scala, or Java code

·         Bachelor's or Master's degree
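As a rough illustration of the pipeline work described above, here is a minimal Spark Structured Streaming sketch in Scala that reads events from a Kafka topic and lands them in HDFS as Parquet. The broker address, topic name, and HDFS paths are placeholders, and the job assumes the spark-sql-kafka connector package is on the classpath.

import org.apache.spark.sql.SparkSession

// Hypothetical pipeline: stream events from a Kafka topic into HDFS as Parquet.
object KafkaToHdfsPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("KafkaToHdfsPipeline")
      .getOrCreate()

    // Subscribe to a Kafka topic; broker and topic names are placeholders.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

    // Land the raw events in the data lake, checkpointing for fault tolerance.
    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/lake/events")
      .option("checkpointLocation", "hdfs:///checkpoints/events")
      .start()

    query.awaitTermination()
  }
}

Flume and Sqoop cover similar ingestion duties for log data and relational sources, respectively, feeding the same Hadoop data lake.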

