VP, Data & Cloud Engineer, Team Lead, Data Technology, Technology & Operations
Job Details
Business Function
Group Technology and Operations (T&O) enables and empowers the bank with an efficient, nimble and resilient infrastructure through a strategic focus on productivity, quality & control, technology, people capability and innovation. In Group T&O, we manage the majority of the Bank's operational processes and aspire to delight our business partners through our multiple banking delivery channels.
Responsibilities
As a Data & Cloud Engineer, you'll be a Data Engineering Specialist and help us discover the information hidden in vast amounts of data. You'll be part of the team that builds a hybrid cloud data platform supporting dynamic data replication across different environments and leveraging cloud-native technologies.
- Design and implement key components of a highly scalable, distributed data collection and analysis system built to handle petabytes of data in the cloud.
- Work with architects of the analytics system and help adopt best practices in backend infrastructure and distributed computing that support Machine Learning and Generative AI workloads.
- Manage a team of engineers building the cloud components of the ADA Data Platform.
- Implement core Agile practices, leveraging cloud-native architecture patterns, Test-Driven Development, and continuous integration/continuous delivery (CI/CD).
- Continuously discover, evaluate and adopt new cloud technologies to maximize analytical system performance.
Requirements
- 8+ years of experience in one or more areas of big data, cloud and/or cloud-native application development.
- Enterprise experience building complex solutions within AWS and experience with architecting AWS production workloads for Business and IT operations, across a hybrid cloud environment.
- Demonstrable, strong cloud architecture or development experience, specifically with Amazon Web Services (AWS), backed by an Associate- or Specialty-level certification.
- Familiarity with Large Language Models and Generative AI (e.g. Retrieval-Augmented Generation, RAG).
- Hands-on experience with containers and container orchestrators (Kubernetes, OpenShift).
- Development experience in Java/Python and pride in producing clean, maintainable code.
- Experience with infrastructure and Infrastructure as Code (IaC) is a must.
- Experience with Spark and different data formats (Avro, Parquet, CarbonData).
- Experience using high-throughput, distributed message queueing systems such as Kafka.
- Familiarity with tracing/observability solutions, e.g. OpenTracing, OpenTelemetry, Zipkin
- Mastery of key development tools such as Git, and familiarity with collaboration tools such as Jira and Confluence or similar.
- Experience with distributed databases, such as Cassandra, and the key issues affecting their performance and reliability.
- Experience with SQL engines (e.g. MySQL, PostgreSQL)
- The ability to work with loosely defined requirements, exercising your analytical skills to clarify open questions, share your approach, and build and test elegant solutions in weekly sprint/release cycles.