Data Engineer
Job details
JD: Databricks
Mandate: Advanced (4-6 yrs)

Key Responsibilities:
* Design, develop, and optimize data pipelines and ETL workflows using Databricks.
* Implement scalable data integration solutions for large datasets across diverse data sources.
* Build and maintain data architectures for real-time and batch processing.
* Collaborate with data scientists, analysts, and stakeholders to deliver high-quality data solutions.
* Monitor, troubleshoot, and optimize data workflows for performance and cost-efficiency.
* Ensure data governance, security, and compliance in all processes.

Qualifications:
* Clear understanding of Data Lake, Data Warehouse, and Lakehouse concepts.
* Strong background in data engineering practices, cloud platforms, and big data processing frameworks.
* Strong hands-on experience with Databricks, including Spark and Delta Lake.
* 3+ years of hands-on Spark experience (mandatory) using Databricks, Azure/AWS, and associated data services.
* 3+ years of hands-on experience with SQL, Unix and advanced Unix shell scripting, and Kafka.
* Expertise in Python, Java, or Scala.
* Experience with Git, SVN, build tools such as Ant and Maven, and CI/CD pipelines.
* Hands-on experience with file transfer mechanisms (NDM, SFTP, etc.).
* Knowledge of schedulers such as Airflow and TWS.
* Strong problem-solving skills and the ability to work in an agile environment.

Educational Background:
* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Shift: 1 pm to 10 pm / 3 pm to 12 am
Location: Pune (Hybrid)