AWS Engineer
Job details
The ideal candidate is passionate about learning, data, scalability, and agility. You'll leverage your strong collaboration skills and data architecture expertise, along with a deep understanding of cloud technologies, to design and manage complex data systems. By ensuring these systems are scalable and efficient, you'll transform data into actionable intelligence. If you love creating robust data solutions and working with cutting-edge cloud technologies, this role is for you.

Primary Skills: We are seeking an AWS Data Engineer with Python and PostgreSQL.

About the Role:
Experience: 4+ years
Location: Bangalore/Pune/Hyderabad

Job Description: AWS Data Engineer

Job Summary: We are seeking a Data Engineer with expertise in PostgreSQL database development, AWS RDS, and ETL pipelines using AWS Glue and Lambda. The ideal candidate will be responsible for designing, optimizing, and managing data pipelines, database schemas, and data transformations to support our analytics and business applications.

Key Responsibilities:
- Database Development & Optimization:
  - Design, develop, and optimize PostgreSQL databases to ensure high performance and scalability.
  - Write complex SQL queries, stored procedures, functions, and triggers for data processing.
  - Apply query performance tuning, indexing, and partitioning strategies to improve database efficiency.
  - Implement database monitoring, backups, and maintenance for high availability and disaster recovery. (See the illustrative sketch below.)
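As a rough, non-authoritative sketch of this kind of PostgreSQL development work, the snippet below uses psycopg2 to create a hypothetical partitioned table, a supporting index, and a trigger function. The table, columns, and connection string are invented placeholders, not part of this posting.

```python
# Illustrative only: hypothetical schema objects showing the kind of
# PostgreSQL work described above (partitioning, indexing, triggers).
import psycopg2  # assumes psycopg2 is installed and a database is reachable

DDL = """
-- Range-partitioned table to keep large fact data manageable
CREATE TABLE IF NOT EXISTS orders (
    order_id    bigint,
    customer_id bigint,
    created_at  timestamptz NOT NULL,
    amount      numeric(12, 2),
    updated_at  timestamptz DEFAULT now()
) PARTITION BY RANGE (created_at);

CREATE TABLE IF NOT EXISTS orders_2024
    PARTITION OF orders
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

-- Index to support frequent customer lookups
CREATE INDEX IF NOT EXISTS idx_orders_customer_created
    ON orders (customer_id, created_at);

-- Trigger function to maintain updated_at on every change
-- (row-level triggers on partitioned tables require PostgreSQL 13+)
CREATE OR REPLACE FUNCTION touch_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS trg_orders_touch ON orders;
CREATE TRIGGER trg_orders_touch
    BEFORE UPDATE ON orders
    FOR EACH ROW EXECUTE FUNCTION touch_updated_at();
"""

def apply_ddl(dsn: str) -> None:
    """Apply the example DDL against a PostgreSQL/RDS instance."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(DDL)

if __name__ == "__main__":
    # Placeholder connection string for illustration only
    apply_ddl("postgresql://user:password@localhost:5432/analytics")
```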
- AWS RDS & Cloud Infrastructure:
  - Manage and optimize AWS RDS PostgreSQL instances, including replication, scaling, and automated failover.
  - Implement database security best practices, including IAM roles, encryption, and access controls.
  - Automate database maintenance tasks using AWS Lambda, Step Functions, or scheduled jobs. (An example sketch follows.)
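As a minimal sketch of this kind of automation, assuming a Python Lambda function triggered on an EventBridge schedule, the handler below takes a manual snapshot of an RDS PostgreSQL instance via boto3. The instance identifier and environment variable name are placeholders.

```python
# Illustrative sketch: automating a routine RDS maintenance task with a
# scheduled AWS Lambda function. Identifiers below are placeholders.
import datetime
import os

import boto3

rds = boto3.client("rds")

def lambda_handler(event, context):
    """Create a manual snapshot of an RDS PostgreSQL instance.

    Intended to run on an EventBridge schedule; the instance identifier
    is read from an environment variable.
    """
    instance_id = os.environ.get("DB_INSTANCE_ID", "analytics-postgres")
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    snapshot_id = f"{instance_id}-manual-{stamp}"

    response = rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=instance_id,
    )
    return {"snapshot": response["DBSnapshot"]["DBSnapshotIdentifier"]}
```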
- ETL Development with AWS Glue & Lambda:
  - Develop and maintain serverless ETL pipelines using AWS Glue (PySpark, Python).
  - Work with Glue Crawlers, the Glue Catalog, and data lake architectures to process structured and semi-structured data.
  - Use AWS Lambda for event-driven transformations, real-time processing, and automation.
  - Optimize Glue jobs for cost-efficiency and performance by tuning memory allocation, partitions, and job parameters. (A sample job skeleton follows.)
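A minimal Glue job skeleton in PySpark, illustrating the catalog-to-data-lake flow described above. It only runs inside an AWS Glue job environment, and the database, table, and S3 locations are hypothetical.

```python
# Illustrative AWS Glue (PySpark) job skeleton; database, table, and S3
# path names are placeholders for whatever real pipelines would use.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read semi-structured source data registered by a Glue Crawler
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders_json"
)

# Basic cleanup: drop null keys and keep only the fields downstream needs
cleaned = (
    source.toDF()
    .dropna(subset=["order_id"])
    .select("order_id", "customer_id", "created_at", "amount")
)

# Write partitioned Parquet back to the data lake for analytics
(
    cleaned.write.mode("overwrite")
    .partitionBy("created_at")
    .parquet("s3://example-data-lake/curated/orders/")
)

job.commit()
```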
- Data Integration & Automation:
  - Integrate data from multiple sources such as S3, DynamoDB, APIs, and external databases.
  - Design CDC (Change Data Capture) strategies for near real-time data processing.
  - Implement error handling, logging, and monitoring for ETL processes using CloudWatch, SNS, and Step Functions.
  - Work with DevOps teams to automate deployments using CI/CD pipelines (CodePipeline, Terraform, or CloudFormation). (A sample handler sketch follows.)
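A hedged sketch of an event-driven Lambda transformation with basic logging and SNS alerting, in the spirit of the error-handling and monitoring bullets above. The bucket names, topic ARN environment variable, and record fields are assumptions for illustration.

```python
# Illustrative event-driven Lambda sketch: react to a new object in S3,
# apply a small transformation, and publish failures to an SNS topic.
# Bucket names, the topic ARN, and field names are placeholders.
import json
import logging
import os

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3 = boto3.client("s3")
sns = boto3.client("sns")

def lambda_handler(event, context):
    try:
        # S3 put-event structure: one or more records with bucket/key
        # (keys may need URL-decoding in practice)
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            logger.info("Processing s3://%s/%s", bucket, key)

            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            rows = json.loads(body)

            # Trivial transformation: keep only rows with an amount
            curated = [r for r in rows if r.get("amount") is not None]

            s3.put_object(
                Bucket=os.environ.get("CURATED_BUCKET", "example-curated"),
                Key=f"curated/{key}",
                Body=json.dumps(curated).encode("utf-8"),
            )
        return {"status": "ok"}
    except Exception:
        # Log to CloudWatch and notify via SNS before surfacing the failure
        logger.exception("ETL transformation failed")
        sns.publish(
            TopicArn=os.environ["ALERT_TOPIC_ARN"],
            Subject="Lambda ETL failure",
            Message=f"Failed processing event: {json.dumps(event)[:1000]}",
        )
        raise
```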