Design and develop scalable data pipelines and ETL processes to handle large datasets efficiently.
Collaborate with data scientists and analysts to gather requirements and deliver data solutions.
Maintain and optimize existing data systems, ensuring data quality and integrity.
Implement data security and governance practices across data processing workflows.
Utilize big data technologies such as Hadoop, Spark, or Kafka for data processing.
Perform data modeling and database design to improve data storage and retrieval performance.
Monitor and troubleshoot data processing jobs and ensure timely data delivery.
Prepare documentation and architecture diagrams for data solutions.
Education: Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
Experience: 7-9 years of experience as a Data Engineer or similar role.
Proficiency in SQL and database management systems such as MySQL, PostgreSQL, or Oracle.
Hands-on experience with cloud platforms like AWS, Azure, or GCP.
Familiarity with programming languages such as Python, Java, or Scala.
Solid understanding of data warehousing concepts and data modeling techniques.
Experience with data visualization tools such as Tableau or Power BI is a plus.
Strong problem-solving skills and the ability to work in a fast-paced environment.
Excellent communication skills to collaborate with cross-functional teams.