Friday, November 6, 2020

Big Data (Kafka) Solution Architect

Hi,
 
This is Sravan Kumar from Vedainfo, and I have the below requirement with our esteemed client. Please let me know if you are comfortable with it, or if you have any consultant to submit.
 
Title: Big Data (Kafka) Solution Architect
Location: Denver, CO
Duration: 12 months

Solution Architect:
- Key ask is that the resource should be very experienced in Kafka and real-time streaming.
Experience Level: >15 years; technically hands-on.
Job Duties and Responsibilities
Primary responsibilities fall into the following categories:
· Deploy enterprise-ready, secure and compliant data-oriented solutions leveraging Data Warehouse, Big Data and Machine Learning frameworks
· Optimize data engineering and machine learning pipelines
· Review architectural designs to ensure consistency and alignment with the defined target architecture, and adherence to established architecture standards
· Support data and cloud transformation initiatives
· Contribute to our cloud strategy based on prior experience
· Understand the latest technologies in a rapidly innovative marketplace
· Independently work with stakeholders across the organization to deliver point solutions and strategic solutions
· Assist solution providers with the definition and implementation of technical and business strategies
Skills - Experience and Requirements
A successful Solution Lead will have the following:
· Should have prior experience in working as a Data warehouse/Big Data architect.
· Experience with the advanced Apache Spark processing framework and Spark programming languages such as Scala, Python, or advanced Java, with sound knowledge of shell scripting.
· Should have experience in both functional programming and Spark SQL programming, dealing with processing terabytes of data.
· Specifically, this experience must be in writing Big Data engineering jobs for large-scale data integration in AWS. Prior experience in writing Machine Learning data pipelines in Spark is an added advantage.
· Advanced SQL experience including SQL performance tuning is a must.
· Should have worked on other big data frameworks such as MapReduce, HDFS, Hive/Impala, AWS Athena.
· Experience in logical and physical table design in a Big Data environment to suit processing frameworks.
· Knowledge of using, setting up, and tuning resource management frameworks such as YARN, Mesos, or Spark standalone.
· Experience in writing Spark Streaming jobs (producers/consumers) using Apache Kafka or AWS Kinesis is required.
· Should have knowledge of a variety of data platforms such as Redshift, S3, Teradata, HBase, MySQL/PostgreSQL, and MongoDB.
· Experience with AWS services such as EMR, Glue, S3, Athena, DynamoDB, IAM, Lambda, CloudWatch, and Data Pipeline.
· Must have used these technologies to deploy solutions in the areas of Big Data and Machine Learning.
· Experience in AWS cloud transformation projects is required.
· Telecommunication experience is an added advantage.
· Should have in-depth experience handling large volumes of streaming data, and should be strong in Kafka and real-time streaming.
· Deep understanding of the following AWS services from a development and architecture perspective, including limitations and development best practices with respect to performance and security: S3, Lambda, DynamoDB, Kinesis, Managed Streaming for Apache Kafka (MSK).




Thanks
Sravan Kumar
sravan@us.vedainfo.com
310-929-1147

Certified Women Owned Minority Business Enterprise (WMBE)
3868 Carson Street, Suite 204, Torrance, CA 90503 | Offices: USA, India, Australia

