Azure Big Data Engineer- New

Full-time at Tiger Analytics (Online)
Posted on April 26, 2024

Job details

Job Description

Tiger Analytics is a global AI and analytics consulting firm. With data and technology at the core of our solutions, our 2800+ strong team is solving problems that eventually impact the lives of millions globally. Our culture is modeled around expertise and respect, with a team-first mindset. Headquartered in Silicon Valley, you'll find our delivery centers across the globe and offices in multiple cities across India, the US, UK, Canada, and Singapore, including a substantial remote global workforce. We're Great Place to Work-Certified™. Working at Tiger Analytics, you'll be at the heart of an AI revolution. You'll work with teams that push the boundaries of what is possible and build solutions that energize and inspire.

Job Title: Azure Big Data Engineer
Location: Chennai | Hyderabad | Bangalore

Curious about the role? What would your typical day look like?

As a Big Data Engineer (Azure), you will build and learn about a variety of analytics solutions and platforms — data lakes, modern data platforms, data fabric solutions, etc. — using different open-source, big data, and cloud technologies on Microsoft Azure. On a typical day, you might:

  • Design and build scalable, metadata-driven data ingestion pipelines (for batch and streaming datasets)
  • Conceptualize and execute high-performance data processing for structured and unstructured data, and data harmonization
  • Schedule, orchestrate, and validate pipelines
  • Design exception handling and log monitoring for debugging
  • Ideate with your peers to make decisions about the tech stack and tools
  • Interact and collaborate with multiple teams (Consulting, Data Science, and App Dev) and various stakeholders to meet deadlines and bring analytical solutions to life

What do we expect?

  • 4 to 9 years of total IT experience, with 2+ years in big data engineering and Microsoft Azure
  • Experience in implementing a Data Lake with technologies like Azure Data Factory (ADF), PySpark, Databricks, ADLS, and Azure SQL Database
  • A comprehensive foundation with working knowledge of Azure Synapse Analytics, Event Hub & Streaming Analytics, Cosmos DB, and Purview
  • A passion for writing high-quality code that is modular, scalable, and free of bugs (debugging skills in SQL, Python, or Scala/Java)
  • Enthusiasm to collaborate with various stakeholders across the organization and take complete ownership of deliverables
  • Experience in using big data technologies like Hadoop, Spark, Airflow, NiFi, Kafka, Hive, Neo4j, and Elasticsearch
  • An adept understanding of different file formats like Delta Lake, Avro, Parquet, JSON, and CSV
  • Good knowledge of building and designing REST APIs, with real-time experience working on Data Lake or Lakehouse projects
  • Experience in supporting BI and Data Science teams in consuming data in a secure and governed manner
  • Certifications like Data Engineering on Microsoft Azure (DP-203) or Databricks Certified Developer (DE) are a valuable addition

You are important to us, so let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply — there might be a suitable or unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.

Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry.

Job Requirements

  • Mandatory: Azure Data Factory (ADF), PySpark, Databricks, ADLS, Azure SQL Database
  • Optional: Azure Synapse Analytics, Event Hub & Streaming Analytics, Cosmos DB, and Purview
  • Strong programming, unit testing, and debugging skills in SQL, Python, or Scala/Java
  • Some experience using big data technologies like Hadoop, Spark, Airflow, NiFi, Kafka, Hive, Neo4j, and Elasticsearch
  • Good understanding of different file formats like Delta Lake, Avro, Parquet, JSON, and CSV
  • Experience working in Agile projects and following DevOps processes with technologies like Git, Jenkins, and Azure DevOps
  • Good to have:
  • Experience working on Data Lake and Lakehouse projects
  • Experience building REST services and implementing service-oriented architectures
  • Experience supporting BI and Data Science teams in consuming data in a secure and governed manner
  • Certifications like Data Engineering on Microsoft Azure (DP-203) or Databricks Certified Developer (DE)

Apply safely

To stay safe in your job search, learn about common scams, and get free expert advice, we recommend that you visit SAFERjobs, a non-profit, joint industry and law enforcement organization working to combat job scams.
