Big Data Developer
Job details
Hiring is happening!

Job Title: Data Engineer
Location: Kochi/Trivandrum only
Job Type: Full-Time (Hybrid)
Experience: 3+ years

Job Description:
As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable and efficient big data processing pipelines and distributed computing systems. You will collaborate with cross-functional teams to understand data requirements, develop data solutions, and ensure the quality and integrity of our data infrastructure.

Key Responsibilities:
*Design, develop, and maintain scalable and efficient big data processing pipelines and distributed computing systems.
*Implement data ingestion, processing, and transformation processes to support various analytical and machine learning use cases.
*Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
*Develop data models, schemas, and structures to support business needs.
*Optimize and tune data pipelines for performance, scalability, and reliability.
*Monitor and troubleshoot pipeline performance issues, identifying and resolving bottlenecks.
*Ensure data quality, integrity, and security throughout the pipeline and across all data solutions, implementing data validation and error-handling mechanisms.
*Document design decisions, technical specifications, and data workflows.
*Stay updated on emerging technologies and best practices in big data processing and analytics, incorporating them into our data engineering practices.

Requirements:
*Expertise in data modeling, data warehousing concepts, data governance best practices, and ETL processes.
*Understanding of distributed computing principles and experience with distributed data processing frameworks such as Apache Spark or Hadoop.
*Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes.
*Good understanding of data warehouse platforms like Snowflake, Redshift, etc.
Skills:
*Proficiency in distributed data processing frameworks such as Apache Spark or Hadoop.
*Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
*Strong knowledge of data modeling, data warehousing, and ETL processes.
*Experience with data warehouse platforms such as Snowflake or Redshift.
*Excellent problem-solving skills and the ability to troubleshoot complex data pipeline issues.
*Strong communication skills and the ability to work collaboratively with cross-functional teams.

Send your applications to sivaraj.c@litmus7.com