Hi,

Please find the job description below and let me know if any consultant is available.

Client: Altimetrik
Job Title: Lead Hadoop Admin
Location: San Francisco, California (Hybrid)
Position Type: Contract
Duration: 6 months+

Job Description
· Responsible for implementation and ongoing administration of Hadoop infrastructure.
· Responsible for cluster maintenance, troubleshooting, and monitoring, following proper backup & recovery strategies.
· Provisioning and managing the life cycle of multiple clusters such as EMR & EKS.
· Infrastructure monitoring, logging & alerting with Prometheus/Grafana/Splunk.
· Performance tuning of Hadoop clusters and Hadoop workloads, and capacity planning at the application/queue level.
· Responsible for memory management and queue allocation/distribution in Hadoop/Cloudera environments.
· Should be able to scale clusters in production and have experience with 18/5 or 24/5 production environments.
· Monitor Hadoop cluster connectivity and security; handle file system (HDFS) management and monitoring.
· Investigate and analyze new technical possibilities, tools, and techniques that reduce complexity, create a more efficient and productive delivery process, or create better technical solutions that increase business value.
· Involved in fixing issues, performing root cause analysis (RCA), and suggesting solutions for infrastructure/service components.
· Responsible for meeting Service Level Agreement (SLA) targets and collaboratively ensuring team targets are met.
· Ensure all changes to production systems are planned and approved in accordance with the Change Management process.
· Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
· Maintain central dashboards for all system, data, utilization, and availability metrics.

JD:
· Basic to intermediate experience with Spark (a brief illustrative sketch follows this list).
· Good experience with SQL; should be able to understand and implement performance optimization.
· Memory management experience.
· Queue allocation and distribution experience in Hadoop/Cloudera environments.
· Should be able to scale clusters in production and have experience with 18/5 or 24/5 production environments.
· Good with one programming language.
· Good exposure to Hive and Hadoop systems, with exposure to monitoring tools and to Sqoop, Oozie, and other external tools.
· Familiar with patch upgrades.
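To illustrate the queue allocation, memory management, and Spark SQL tuning called out above, here is a minimal PySpark sketch of the kind of configuration this role works with. The queue name, memory sizes, executor counts, and table names are illustrative assumptions, not details from the client.

# Minimal sketch of Spark-on-YARN queue and memory settings (values assumed).
# Intended to be submitted with: spark-submit --master yarn this_script.py
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("queue-and-memory-tuning-sketch")
    .config("spark.yarn.queue", "analytics")         # hypothetical capacity-scheduler queue
    .config("spark.executor.memory", "8g")           # assumed per-executor heap
    .config("spark.executor.memoryOverhead", "1g")   # assumed off-heap overhead
    .config("spark.executor.cores", "4")
    .config("spark.executor.instances", "10")        # assumed executor count
    .config("spark.sql.shuffle.partitions", "200")   # assumed shuffle parallelism
    .getOrCreate()
)

# A typical SQL workload to profile: filter early, join, aggregate, then inspect
# the physical plan before tuning further. The Hive tables are hypothetical.
events = spark.table("events")
users = spark.table("users")
daily_counts = (
    events.filter("event_date = '2024-01-01'")
          .join(users, "user_id")
          .groupBy("country")
          .count()
)
daily_counts.explain()   # check the physical plan (joins, shuffles, pushed filters)
daily_counts.show()

spark.stop()

Settings like these are typically managed per queue in the YARN capacity scheduler and tracked on the Prometheus/Grafana dashboards the role maintains.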
Abhishek Chellamala
Cool-minds INC
Office: 310-929-1616 EXT 113 | Direct: +1 310-589-4470 | E-Mail: Abhishek@cool-minds.com

VEDAINFO INC
Office: 310-929-1616 EXT 113 | Direct: +1 310-589-4470 | E-Mail: Abhishek@vedainfo.com | www.vedainfo.com