
Machine Learning Engineer - Trust and Safety
- On-site
- Santa Monica, California, United States
- $120,000 - $160,000 per year
- Trust and Safety
Job description
Who We Are
At Favorited, we are redefining mobile live-streaming as a fully interactive, gamified experience. We’re dedicated to fostering deeper connections between creators and their communities through play, and ensuring that creators are compensated well in the process.
About the Role
The Machine Learning Engineer, Trust & Safety will play a crucial role in developing proactive detection and automation systems to enhance safety on our platform. This position requires expertise in machine learning-based content classification, including experience with third-party solutions such as Hive, Anthropic, or similar providers. You will work closely with engineers and the Trust & Safety team to build scalable solutions that detect harmful content, automate enforcement, and improve user safety in real time.
Apply to this position if you:
Enjoy solving complex problems and building scalable machine learning systems.
Are passionate about creating a safer online environment through AI and automation.
Have experience working with Trust & Safety teams to deploy ML-based moderation tools.
Are comfortable working in a fast-paced startup environment with shifting priorities.
Want to contribute to the development of proactive detection models for harmful content.
Are excited to work in a real-time environment.
Who You Are
Passionate about applying machine learning to real-world safety challenges.
Detail-oriented and analytical, with strong problem-solving skills.
Excited by the opportunity to take ownership of mission-critical ML systems.
A quick learner who thrives in an evolving tech landscape.
Comfortable working cross-functionally with engineering, product, and safety teams.
What You Will Do
As a Machine Learning Engineer, Trust & Safety at Favorited, you will play a key role in shaping our content moderation and automation systems. Your work will directly impact how we detect harmful content and protect our community.
Develop and deploy machine learning models for content classification, abuse detection, and proactive moderation.
Implement automated enforcement mechanisms to improve response times and accuracy.
Work with third-party ML tools like Hive, Anthropic, or other AI-powered classification models to enhance detection capabilities.
Optimize existing Trust & Safety automation pipelines for real-time intervention.
Collaborate with engineers and T&S analysts to refine detection thresholds and enforcement logic.
Analyze trends in violative content and user behavior to continuously improve detection accuracy.
Research and implement state-of-the-art ML techniques for Trust & Safety applications.
Improve model performance and scalability to handle high-volume real-time data streams.
Work closely with data scientists, backend engineers, and policy teams to ensure ML models align with platform policies and enforcement strategies.
What We Are Looking For
We are looking for a skilled Machine Learning Engineer who can build, optimize, and deploy models for Trust & Safety applications. The ideal candidate will have a strong technical background in ML-based classification systems and a passion for online safety.
Experience & Skills:
3-5+ years of experience in Machine Learning, AI, or Data Science.
Hands-on experience with ML-based content classification, ideally using tools like Hive, Anthropic, or similar AI moderation models.
Strong programming skills in Python, with experience in frameworks such as TensorFlow, PyTorch, or scikit-learn.
Experience working with large-scale datasets and real-time ML inference pipelines.
Familiarity with NLP, computer vision, or multi-modal AI models for content moderation.
Proficiency in working with cloud-based ML infrastructure (AWS, GCP, or Azure).
Understanding of Trust & Safety challenges, online abuse detection, and content moderation policies.
Strong problem-solving skills and ability to iterate quickly in a fast-paced environment.
Bonus Points:
Experience working in Trust & Safety, moderation, or anti-fraud ML systems.
Knowledge of graph-based ML models or anomaly detection techniques.
Experience with real-time ML inference and streaming data processing (Kafka, Spark, etc.).
Prior work with ethically aligned AI moderation tools that reduce bias in ML models.
Experience with streaming analytics and optimizing model performance for near-real-time and on-device (client-side) inference.
Where You’ll Work
This is a full-time on-site position based in Santa Monica, CA.
Benefits
Unlimited PTO to prioritize work-life balance.
401(k) plan to help you invest in your future.
Comprehensive health insurance to support your well-being.
Paid company holidays for time to recharge.
Competitive salary that values your expertise and contributions.
At Favorited, we value the creativity and hard work of every team member. Join us as we redefine mobile live-streaming and build a safer, more engaging platform for creators and communities.