AI Research Engineer (Model Evaluation)
Description
Join Tether and Shape the Future of Digital Finance

At Tether, we are not just building products; we are pioneering a global financial revolution. Our solutions enable seamless integration of reserve-backed tokens across blockchains, empowering businesses worldwide. Transparency and trust are at the core of everything we do.

Innovate with Tether
- Tether Finance: Our product suite features the trusted stablecoin USDT and digital asset tokenization services.
- Tether Power: We promote sustainable Bitcoin mining using eco-friendly practices.
- Tether Data: We develop AI and P2P solutions like KEET for secure data sharing.
- Tether Education: We democratize digital learning to empower individuals globally.
- Tether Evolution: We push technological boundaries to merge innovation with human potential.

Why Join Us?
Our remote, global team is passionate about fintech innovation. If you have excellent English skills and want to contribute to a leading platform, Tether is your place. Are you ready to be part of the future?

About the job:
As part of our AI model team, you will develop evaluation frameworks for AI models across their lifecycle, focusing on metrics such as accuracy, latency, and robustness. You will work on models ranging from resource-efficient architectures to multi-modal ones, collaborating with cross-functional teams to implement evaluation pipelines and dashboards and to set industry standards for AI model quality.

Responsibilities:
- Develop and deploy evaluation frameworks that assess models during pre-training, post-training, and inference, tracking KPIs such as accuracy, latency, and memory usage (a minimal illustrative sketch of such a harness follows the requirements list).
- Curate datasets and design benchmarks to measure model robustness and improvements.
- Collaborate with product, engineering, and operations teams to align evaluation metrics with business goals, presenting findings via dashboards and reports.
- Analyze evaluation data to identify bottlenecks and propose optimizations for performance and resource efficiency.
- Conduct experiments to refine evaluation methodologies, staying current with emerging techniques that enhance model reliability.

Minimum requirements:
- A degree in Computer Science or a related field; a PhD in NLP, Machine Learning, or a similar discipline is preferred, together with a strong R&D track record.
- Experience designing and evaluating AI models at various stages of their lifecycle, with proficiency in evaluation frameworks that assess accuracy, convergence, and robustness.
- Strong programming skills and experience building scalable evaluation pipelines, including familiarity with performance metrics such as latency, throughput, and memory footprint.
- Ability to run iterative experiments and stay current with new techniques that improve benchmarking practices.
- Experience working with cross-functional teams, translating technical insights into actionable recommendations.
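For illustration only: a minimal sketch of the kind of evaluation harness the responsibilities describe, tracking accuracy, mean latency, and peak memory over a labeled dataset. The model callable, the toy dataset, and every name below are hypothetical stand-ins chosen for this sketch, not part of Tether's actual stack.

    # Minimal sketch of an evaluation harness tracking the KPIs named in the
    # posting (accuracy, latency, memory). The model callable and the toy
    # dataset are hypothetical stand-ins, not Tether's actual pipeline.
    import time
    import tracemalloc
    from statistics import mean
    from typing import Callable, Sequence, Tuple

    def evaluate(model: Callable[[str], str],
                 dataset: Sequence[Tuple[str, str]]) -> dict:
        """Run model over (input, expected) pairs and report simple KPIs."""
        latencies, correct = [], 0
        tracemalloc.start()  # track peak Python-level allocations
        for prompt, expected in dataset:
            t0 = time.perf_counter()
            prediction = model(prompt)
            latencies.append(time.perf_counter() - t0)
            correct += prediction == expected
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        return {
            "accuracy": correct / len(dataset),
            "mean_latency_s": mean(latencies),
            "peak_mem_bytes": peak,
        }

    if __name__ == "__main__":
        echo_model = lambda prompt: prompt.upper()  # hypothetical model
        toy_set = [("a", "A"), ("b", "B"), ("c", "X")]
        print(evaluate(echo_model, toy_set))

Running the script prints a metrics dict (here, accuracy 2/3 plus timing and memory figures); a production pipeline would replace the stand-ins with real model endpoints and benchmark suites and feed the resulting KPIs into the dashboards and reports the posting mentions.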
Posted: 18th June 2025, 7:56 am
Application Deadline: N/A