Job details
Job Title: Big Data Engineer (Python & PySpark)
Location: Bangalore
Experience: 5+ years

Key Responsibilities:
- Develop scalable data pipelines using Python and PySpark (a brief sketch follows this list).
- Build and optimize Apache Spark jobs for data transformation and aggregation.
- Work with Big Data tools: Hadoop, HDFS, Hive, Kafka, and large datasets.
- Implement cloud-based solutions on Google Cloud Platform (GCP).
- Ensure performance tuning and scalability of Big Data applications.
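To give a concrete sense of the pipeline work described above, here is a minimal, illustrative PySpark sketch. The bucket paths, column names (user_id, event_ts), and the daily-count aggregation are hypothetical placeholders, not details taken from the role itself.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Read raw events (path and schema are placeholders).
events = spark.read.parquet("gs://example-bucket/raw/events/")

# Transform: drop invalid rows and derive an event date.
cleaned = (
    events
    .filter(F.col("user_id").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Aggregate: daily event counts per user.
daily_counts = (
    cleaned
    .groupBy("event_date", "user_id")
    .agg(F.count("*").alias("event_count"))
)

# Write the curated output, partitioned for downstream consumers.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "gs://example-bucket/curated/daily_counts/"
)
```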
Required Skills:
- Proficiency in Python and PySpark.
- Strong understanding of Apache Spark (RDDs, DataFrames, tuning); see the sketch after this list.
- Experience with Big Data technologies and GCP.
- Background in handling large-scale distributed systems.
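As a quick reference for the Spark skills listed above, the following is a minimal sketch contrasting the RDD and DataFrame APIs and showing two common tuning levers. The sample data and partition count are invented for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-concepts").getOrCreate()

# RDD API: low-level, functional transformations.
rdd = spark.sparkContext.parallelize([("a", 1), ("b", 2), ("a", 3)])
sums = rdd.reduceByKey(lambda x, y: x + y)  # -> [("a", 4), ("b", 2)]

# DataFrame API: declarative, optimized by the Catalyst planner.
df = spark.createDataFrame([("a", 1), ("b", 2), ("a", 3)], ["key", "value"])
agg = df.groupBy("key").sum("value")

# Common tuning levers: partition layout and caching of reused results.
agg = agg.repartition(8, "key")  # control parallelism / shuffle layout
agg.cache()                      # avoid recomputation across actions
agg.count()                      # action that materializes the cache
```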