- 8+ years of experience with the Hadoop ecosystem and Big Data technologies
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, HBase, Hive, Scala, Spark, Kafka, Presto)
- Experience in Scala is a must
- Experience building stream processing systems using solutions such as Spark Streaming
- Experience with other open-source technologies such as Druid, Elasticsearch, and Logstash, and with CI/CD and cloud-based deployments, is a plus
- Ability to adapt conventional big data frameworks and tools to new use cases
Must Have Skills (Top 3 technical skills only)
1. Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, HBase, Hive, Scala, Spark, Kafka, Presto)
2. Building data pipelines and ETL from heterogeneous sources: you will build data ingestion from various source systems into Hadoop using Kafka, Flume, Sqoop, Spark Streaming, etc. (see the sketch after this list)
3. Strong development automation skills; must be very comfortable reading and writing Scala, Python, or Java code
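To illustrate the kind of pipeline skill #2 describes, here is a minimal Scala sketch of a Spark Structured Streaming job that ingests events from Kafka and lands them on HDFS. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic name, and HDFS paths are placeholders, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession

object KafkaToHdfs {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hdfs-ingest")
      .getOrCreate()

    // Read a continuous stream of records from a Kafka topic
    // (broker and topic names are hypothetical placeholders).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Land the raw events on HDFS as Parquet; the checkpoint
    // location lets the query recover after a restart.
    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/raw/events")
      .option("checkpointLocation", "hdfs:///checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```

A production pipeline would add schema parsing of the Kafka value, partitioning of the output, and monitoring, but the read-transform-write shape above is the core pattern behind the Kafka/Flume/Sqoop/Spark Streaming ingestion work listed here.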