Data and DevOps Engineer
Job details
Job Title: Data & DevOps Engineer

About:
Our client is a pioneering company that employs AI-driven technology to provide businesses with rapid concept and campaign validation. Its state-of-the-art platform uses AI and advanced techniques to generate virtual environments that simulate real-world consumer behavior. They are seeking an experienced Data & DevOps Engineer to lead the team, manage projects, and help scale this unique platform.

Responsibilities:
1. Design, build, and maintain scalable data pipelines for generative-AI-powered applications and real-time data ingestion systems.
2. Collaborate with the AI engineering teams to build robust, well-architected software systems, ensuring smooth integration with both relational and non-relational databases.
3. Optimize and scale database solutions to handle high-volume data flows, ensuring efficient performance and reliability.
4. Scale and maintain infrastructure using containerization (Docker, Kubernetes) to deploy, manage, and monitor AI applications.
5. Set up and maintain monitoring and observability stacks (e.g., Grafana, Prometheus, the ELK stack) to ensure the stability and performance of the data pipelines.
6. Work with cloud services (AWS, GCP, Azure) to scale data pipelines and infrastructure, ensuring cost-effectiveness and reliability.
7. Automate infrastructure deployments using CI/CD pipelines.

Requirements:
1. 5+ years of experience in data engineering, with a focus on building and maintaining scalable data pipelines and database integrations.
2. Proficiency in Python (preferred) or a similar programming language, especially for data processing (e.g., Pandas, Spark).
3. Experience with SQL and relational databases (e.g., PostgreSQL), including performance optimization and scalability.
4. Strong understanding of data engineering principles, including data integration, pipeline reliability, and scaling to handle high-volume data.
5. Solid understanding of software engineering principles, including modularity, scalability, version control, and testing for building robust, maintainable systems.
6. Hands-on experience with cloud services (AWS, GCP, or Azure) for data architecture, infrastructure scaling, and cost management.
7. Experience with containerization (Docker) and container orchestration (Kubernetes) for deploying and managing applications.
8. Solid foundation in statistics and data analysis to support data-driven decision-making.
9. Ability to learn new technologies quickly and thrive in a fast-changing startup environment.

Nice-to-Have Qualifications:
1. Experience with non-relational databases (e.g., MongoDB).
2. Understanding of MLOps/LLMOps practices, particularly around deploying, monitoring, and scaling machine learning or large language models.
3. Experience with web scraping tools and libraries for real-time data ingestion.
4. Familiarity with monitoring and observability tools (e.g., Grafana, Prometheus, the ELK stack) to ensure the stability and performance of data-driven systems.
5. Familiarity with modern deep learning techniques, especially transformer models and their applications.
6. Familiarity with infrastructure-as-code (IaC) tools, such as Terraform or CloudFormation, for automating cloud infrastructure management.
Apply safely
To stay safe in your job search, learn about common scams, and get free expert advice, we recommend that you visit SAFERjobs, a non-profit, joint industry and law enforcement organization working to combat job scams.