Big Data Engineer
Job details
Job Title: Big Data Developer

Responsibilities:
- Good understanding of concurrent software systems and of building them in a way that is scalable, maintainable, and robust.
- Experience in designing application solutions in the Hadoop ecosystem.
- Deep understanding of Hive, HDFS, YARN, Spark, Spark SQL, Scala, and PySpark.
- Knowledge of HDFS file formats and their use cases (e.g., Parquet, ORC, SequenceFile).
- Good knowledge of data warehousing systems.
- Experience in a scripting language (shell, Python).
- Experience with the Hortonworks distribution and an understanding of SQL execution engines (Tez, MapReduce).
- Java/REST services/Maven experience is an added advantage.
- Control-M development and monitoring of resource utilization using Grafana.
- Strong skills in creating automation scripts in Jenkins, and knowledge of automating builds, test frameworks, application configuration, etc.
- Experience in implementing scalable applications with fully automated deployment and control using Bitbucket, Jenkins, ADO, etc.

Skillset:
- Mandatory: Big Data – Hive, HDFS, Spark, Scala, PySpark.
- Good to have: Schedulers (Control-M); ETL tool – Dataiku; Unix/shell scripting; knowledge of integration services such as FileIT/MQ; CI/CD tools – Jenkins, Jira, ADO DevOps tools suite; Oracle.