Data Engineer

Full time at ISSC in India
Posted on February 21, 2025

Job details

Job Title: Data Engineer
Website:
Location: Udyog Vihar, Phase-V, Gurugram
Job Type: Full-Time
Work Arrangement: 5 days working in office (no hybrid/work from home)
Compensation: As per industry standards
Address: Udyog Vihar, Phase-V, Gurgaon

Company Overview: With the world changing constantly and rapidly, the future will be full of realigned priorities. You are keen to strengthen your firm's profitability and reputation by retaining existing clients and winning more in the market. At ISSC we have the resources to ensure your team has access to the right skills to deliver effective assurance and IT advisory while you build and scale your team onshore to meet your clients' broader assurance needs. By offshoring part of the routine, less complex audit work to ISSC, you free up capacity in your own organization for areas that require more face time with your clients, including your quest to win new ones. Having the right ISSC team on your side will be vital as you pursue your growth plans, and it is in this role that your ISSC team stands apart. We offer a compelling case for becoming your key partner for the future.

Position Summary: We are seeking a skilled and detail-oriented Data Engineer to join our team. As a Data Engineer, you will be responsible for developing and optimizing data pipelines, managing data architecture, and ensuring that data is easily accessible, reliable, and secure. You will work closely with data scientists, analysts, and other stakeholders to gather requirements and deliver data solutions that support business intelligence and analytics initiatives. The ideal candidate possesses strong data manipulation skills, a keen eye for detail, and the ability to work with diverse datasets. This role plays a crucial part in ensuring the quality and integrity of our data, enabling informed decision-making across the organization.

Responsibilities:

Data Pipeline Development:

  • Design, develop, and maintain scalable data pipelines to process, transform, and move large datasets across multiple platforms.
  • Ensure data integrity, reliability, and quality across all pipelines.
Data Architecture and Infrastructure:
  • Architect and manage the data infrastructure, including databases, warehouses, and data lakes.
  • Implement solutions to optimize storage and retrieval of both structured and unstructured data.
Data Integration and Management:
  • Integrate data from various sources (e.g., APIs, databases, third-party providers) into a unified system.
  • Manage ETL (Extract, Transform, Load) processes to clean, enrich, and make data ready for analysis.
Data Security and Compliance:
  • Ensure data governance, privacy, and compliance with security standards (e.g., GDPR, HIPAA).
  • Implement robust access controls and encryption protocols.
Collaboration:
  • Work closely with data scientists, analysts, and business stakeholders to gather requirements and deliver high-performance data solutions.
  • Collaborate with DevOps and software engineering teams to deploy and maintain the data infrastructure in a cloud or on-premises environment.
Performance Tuning:
  • Monitor and improve the performance of databases and data pipelines to ensure low-latency data availability.
  • Troubleshoot and resolve issues in the data infrastructure.
Documentation and Best Practices:
  • Maintain detailed documentation of data pipelines, architecture, and processes.
  • Follow industry best practices for data engineering, including version control and continuous integration.
Skills/Requirements:

Technical Skills:
  • Proficiency in programming languages such as Python and SQL.
  • Solid experience with big data technologies such as Apache Spark, Hadoop, Kafka, and Flink.
  • Experience with cloud data platforms (AWS, Azure).
  • Familiarity with databases (SQL and NoSQL), data warehousing solutions (e.g., Snowflake, Redshift), and ETL tools (e.g., Airflow, Talend).
Data Modeling and Database Design:
  • Expertise in designing data models and relational database schemas.
Problem-Solving:
  • Strong analytical and problem-solving skills, with the ability to handle complex data issues.
Version Control and Automation:
  • Experience with CI/CD pipelines and version control tools like Git.
Professional Qualifications:
  • 5-7 years of relevant experience.
  • BTech in Statistics, Information Technology, or a related field.

Other Benefits:
  • Free meal
  • 1 happy hour every week
  • 3 offsites a year
  • 1 spa session every week

