Machine Learning Engineer
About This Opportunity
H2O.ai is at the forefront of the rapidly evolving MLOps landscape. We are leveraging MLOps to transform machine learning models from isolated, engineer-specific tools into robust, cloud-native services that are scalable and consistently available. Our approach is firmly rooted in Kubernetes, positioning our team at the cutting edge of cloud solutions. We are seeking a highly skilled and motivated Machine Learning Engineer to join our team in Australia. The successful candidate will have a strong background in machine learning and software engineering, with experience developing and deploying machine learning models in real-world applications. The ideal candidate will have a passion for working with data, a strong understanding of machine learning algorithms, techniques, and cloud technology, and the ability to communicate complex technical concepts to stakeholders.
What You Will Do
- Deliver technical professional services to customers.
- Work closely with H2O.ai data scientists to advise on and develop end-to-end machine learning solutions (from a data engineering perspective) that meet customer requirements.
- Integrate H2O products with customer data sources for model training.
- Integrate machine learning models and pipelines (Python and MOJO scoring pipelines) with customer systems for real-time and batch scoring, as well as model monitoring and operations.
- Implement end-to-end ML data-flow pipelines that streamline data science solutions to business problems.
- Implement AI-driven applications using the open-source H2O Wave SDK (see the sketch after this list).
- Gather and relay customer feedback, working with the H2O.ai engineering team to enhance our products with needed features.
- Ensure the scalability, reliability, and performance of deployed models by implementing appropriate monitoring, testing, and debugging processes.
- Work on GenAI tooling: LLMs, fine-tuning engines, retrieval-augmented generation (RAG), and GenAI applications.
- Be a trusted solutions advisor for our customers and partners.
- Communicate effectively with a diverse audience of internal and external stakeholders: engineers, business people, partners, and executives.
- Translate business cases and requirements into value-driven technical solutions by architecting machine learning workflows and systems, from data ingestion to model deployment.
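To give a flavor of the Wave work mentioned above, here is a minimal sketch of an H2O Wave app. The route, card layout, and card contents are illustrative assumptions, not part of the role description.

```python
# A minimal sketch of an H2O Wave app (pip install h2o-wave); the route and
# card contents below are illustrative, not taken from this posting.
from h2o_wave import Q, app, main, ui  # `main` must be imported for `wave run`

@app('/demo')
async def serve(q: Q):
    # Render a single markdown card; a real app would surface model outputs here.
    q.page['hello'] = ui.markdown_card(
        box='1 1 3 2',
        title='Hello, Wave',
        content='A placeholder card for an AI-driven app.',
    )
    await q.page.save()
```

Run it with `wave run app` after starting the Wave server; each browser session gets its own `q` context.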
What We Are Looking For
- Bachelor's or higher degree in computer science/engineering, data science, statistics, or a related field.
- Experience building data pipelines and ETL datasets, preferably on big data platforms.
- Excellent understanding of and hands-on experience with big data tools such as Hadoop, Spark, and Kafka.
- Excellent knowledge of SQL and experience working with relational databases.
- Understanding of various NoSQL database types and their application scenarios.
- Proficiency in Python or R for data science; Java, Bash scripting, Scala, and Go are a plus.
- Experience writing REST APIs using microservice frameworks in Python or Java (a minimal sketch follows this list).
- Experience containerizing services (i.e., creating Docker images).
- Understanding of Kubernetes-based application development.
- Experience visualizing and presenting exploratory data analysis (EDA) to stakeholders using H2O Wave (a plus), other standard data visualization libraries in the Python and R stacks, or Tableau/Power BI.
- Experience with post-production model monitoring tools such as H2O MLOps (a plus) or MLflow.
- Understanding of a variety of machine learning techniques (supervised/unsupervised learning, clustering, decision trees, neural networks, etc.) and their real-world advantages, drawbacks, and tuning approaches.
- Understanding of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.) in practical applications.
- Experience working in a customer-facing environment providing technical services.
- Excellent communication skills (proficient in spoken and written English). Additional languages are a plus.
- An amicable attitude, an aptitude for independently investigating and solving technical problems, a drive to learn and master new technologies, and a maker mindset.
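As an illustration of the REST scoring work listed above, here is a minimal sketch of a real-time scoring endpoint using FastAPI. The model artifact path and flat feature schema are hypothetical stand-ins.

```python
# A minimal sketch of a real-time scoring microservice with FastAPI.
# `model.pkl` and the flat feature list are hypothetical stand-ins.
import pickle
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load a pickled, scikit-learn-style model once at startup.
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)

class Features(BaseModel):
    values: List[float]

@app.post('/score')
def score(features: Features) -> dict:
    # Wrap the single row in a batch of one and return the prediction.
    prediction = model.predict([features.values])
    return {'prediction': prediction.tolist()}
```

Served with `uvicorn app:app`, this exposes POST /score accepting a JSON body like {"values": [...]}; a batch-scoring variant would accept a list of rows instead.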
Nice to Have
- Experience with big data technologies such as Hadoop, Spark, or NoSQL databases.
- Familiarity with DevOps practices and tools such as Git, Jenkins, ArgoCD.
- Knowledge of data privacy and security principles.
- Experience with natural language processing or computer vision tasks.
- Experience with model interpretability techniques such as feature importance, partial dependence plots, or SHAP values (see the sketch after this list).
- Experience with infrastructure-as-code (IaC) technologies such as Terraform or AWS CloudFormation.
- Experience with containerization and orchestration technologies such as Docker and Kubernetes.
- Experience debugging and troubleshooting containerized workloads, and familiarity with tools such as kubectl and Helm.
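For the interpretability point above, here is a minimal sketch of computing SHAP values for a tree model. The dataset and model are illustrative stand-ins, assuming the shap and scikit-learn packages are installed.

```python
# A minimal sketch of SHAP-based model interpretability; the dataset and
# model are illustrative, not taken from this posting.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Rank features by mean absolute SHAP value (global feature importance).
shap.summary_plot(shap_values, X)
```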
What We Offer
- Market-leading total rewards.
- Remote-friendly culture.
- Flexible working environment.
- Be part of a world-class team.
- Career growth.