
Machine Learning Engineer - Trust and Safety
- On-site
- Santa Monica, California, United States
- $120,000 - $160,000 per year
- Trust and Safety
Job description
At favorited, we believe that digital communities should be more than just spaces to watch content. Backed by a16z and other heavy hitters in tech and entertainment, we’re redefining mobile live-streaming as a fully interactive, gamified experience. Our platform is a place to connect, engage, and play; it empowers creators by enhancing audience participation, fostering deeper connections, and ensuring that they are compensated fairly for their work.
Our work culture is intense and isn’t for everyone. But if you are a self-starter who wants to build the future of social interaction alongside others who excel in their disciplines and expect the same from you, there’s no better place to be.
The Problems You’ll Solve
The Machine Learning Engineer, Trust & Safety, will play a crucial role in developing proactive detection and automation systems that enhance safety on our platform. This position requires expertise in machine-learning-based content classification, including experience with third-party solutions such as Hive, Anthropic, or similar models. You will work closely with engineers and the Trust & Safety team to build scalable solutions that detect harmful content, automate enforcement, and improve user safety in real time.
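For a concrete (and purely illustrative) flavor of this work, here is a minimal sketch of message classification using the Anthropic Python SDK, one of the third-party options named above. The model alias, label set, and prompt are assumptions for the sketch, not our production setup.

```python
# A minimal, illustrative sketch of third-party content classification.
# The model alias, label set, and prompt are assumptions, not a
# description of favorited's production configuration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def classify_message(text: str) -> str:
    """Ask a hosted model for a single policy label for a chat message."""
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # assumed model alias
        max_tokens=10,
        system=(
            "You are a content-moderation classifier. Reply with exactly "
            "one label: SAFE, HARASSMENT, SPAM, or NSFW."
        ),
        messages=[{"role": "user", "content": text}],
    )
    return response.content[0].text.strip()

# e.g. classify_message("you're the worst streamer ever") might return "HARASSMENT"
```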
Responsibilities:
Develop and deploy machine learning models for content classification, abuse detection, and proactive moderation.
Implement automated enforcement mechanisms to improve response times and accuracy.
Work with third-party ML tools like Hive, Anthropic, or other AI-powered classification models to enhance detection capabilities.
Optimize existing Trust & Safety automation pipelines for real-time intervention.
Collaborate with engineers and T&S analysts to refine detection thresholds and enforcement logic (see the sketch after this list).
Analyze trends in violative content and user behavior to continuously improve detection accuracy.
Research and implement state-of-the-art ML techniques for Trust & Safety applications.
Improve model performance and scalability to handle high-volume real-time data streams.
Work closely with data scientists, backend engineers, and policy teams to ensure ML models align with platform policies and enforcement strategies.
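As referenced above, here is a minimal, hypothetical sketch of how detection thresholds can map classifier scores to enforcement actions. The categories, threshold values, and action names are illustrative assumptions.

```python
# A hypothetical sketch of threshold-based enforcement logic.
# Categories, thresholds, and actions are illustrative assumptions.
ENFORCEMENT_THRESHOLDS = {
    # category: (human_review_threshold, auto_action_threshold)
    "harassment": (0.70, 0.95),
    "spam":       (0.80, 0.99),
}

def decide_action(category: str, score: float) -> str:
    """Map a classifier confidence score to an enforcement decision."""
    review_at, act_at = ENFORCEMENT_THRESHOLDS[category]
    if score >= act_at:
        return "auto_remove"        # high confidence: act immediately
    if score >= review_at:
        return "queue_for_review"   # medium confidence: route to a human
    return "no_action"
```

Tuning cut-offs like these is a joint exercise between engineering and T&S analysts: raising the auto-action threshold trades response time for precision.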
Apply to this position if you:
Enjoy solving complex problems and building scalable machine learning systems.
Are passionate about creating a safer online environment through AI and automation.
Have experience working with Trust & Safety teams to deploy ML-based moderation tools.
Are comfortable working in a fast-paced startup environment with shifting priorities.
Want to contribute to the development of proactive detection models for harmful content.
Are excited to work in a real-time environment.
What We’re Looking For
We are looking for a skilled Machine Learning Engineer who can build, optimize, and deploy models for Trust & Safety applications. The ideal candidate will have a strong technical background in ML-based classification systems and a passion for online safety.
Who You Are
Passionate about applying machine learning to real-world safety challenges.
Detail-oriented and analytical, with strong problem-solving skills.
Excited by the opportunity to take ownership of mission-critical ML systems.
A quick learner who thrives in an evolving tech landscape.
Comfortable working cross-functionally with engineering, product, and safety teams.
Minimum Requirements:
3-5+ years of experience in Machine Learning, AI, or Data Science.
Hands-on experience with ML-based content classification, ideally using tools like Hive, Anthropic, or similar AI moderation models.
Strong programming skills in Python and hands-on experience with ML frameworks such as TensorFlow, PyTorch, or scikit-learn.
Experience working with large-scale datasets and real-time ML inference pipelines.
Familiarity with NLP, computer vision, or multi-modal AI models for content moderation.
Proficiency in working with cloud-based ML infrastructure (AWS, GCP, or Azure).
Understanding of Trust & Safety challenges, online abuse detection, and content moderation policies.
Strong problem-solving skills and ability to iterate quickly in a fast-paced environment.
Preferred Qualifications:
Experience working in Trust & Safety, moderation, or anti-fraud ML systems.
Knowledge of graph-based ML models or anomaly detection techniques.
Experience with real-time ML inference and streaming data processing (Kafka, Spark, etc.); a brief sketch follows this list.
Prior work with ethically aligned AI moderation tools to reduce bias in ML models.
Experience with streaming analytics and with optimizing models for near-real-time and on-device (client-side) inference.
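As noted in the list above, here is a hedged sketch of streaming moderation over Kafka using the kafka-python client. Topic names, the score threshold, and the placeholder model are assumptions, not a description of our stack.

```python
# A hedged sketch of real-time moderation over a Kafka stream.
# Topic names, the threshold, and the placeholder scorer are assumptions.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "chat-messages",                       # hypothetical input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def abuse_score(text: str) -> float:
    """Placeholder for a trained classifier returning an abuse probability."""
    return 0.0  # a real model would run inference here

for record in consumer:
    msg = record.value
    if abuse_score(msg["text"]) > 0.9:     # assumed enforcement threshold
        producer.send("enforcement-actions", {
            "user_id": msg["user_id"],
            "action": "hide_message",      # hypothetical action name
        })
```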
Salary & Benefits
Compensation: $120k - $160k
Benefits Include:
Unlimited PTO to prioritize work-life balance.
401(k) plan to invest in your future.
Comprehensive health insurance to support your well-being.
Paid company holidays for time to recharge.
Competitive salary that values your expertise and contributions.
To apply, skip the cover letter. Submit your resume and share a project you've worked on that shows your experience. You can email this to [email protected].
favorited is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Employees may be eligible for family and medical leave under the California Family Rights Act (CFRA) or Pregnancy Disability Leave (PDL).
In compliance with the California Equal Pay Act, the salary range for this position is provided above. Actual compensation may vary based on experience, qualifications, and location.