Job Details
Analyst-Developer of Search Quality Evaluation Systems
Analyst-Developer role supporting AI model training: collecting datasets, running benchmarks, and measuring data quality. Responsibilities include evaluating annotation quality, analyzing performer quality, and developing processes and tools. Requires Python, SQL, and statistics knowledge.
Training AI models requires collecting training datasets, running benchmarks, and measuring data quality. Our team distributes performers across projects and evaluates both their work and the quality of the resulting annotations. This is a new area, so you can significantly shape our processes and tools.

What tasks await you:

• Evaluating annotation quality
Measure the quality of final annotations and of AI trainers' work; build metrics, benchmarks, and evaluation systems that influence model training and search relevance.

• Performer quality analytics
Build performer evaluation and rating systems; analyze performers' strengths, risk areas, and quality dynamics. Develop more accurate approaches to task distribution to improve data quality and annotation efficiency.

• Developing processes and tools
Help develop the annotation system and the performer training funnel. Find opportunities for automation, make processes scalable, and create analytical tools for better data quality management.
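As a concrete illustration of the kind of annotation-quality metric the first task describes, here is a minimal sketch of Cohen's kappa, a standard chance-corrected agreement measure between two annotators. This is only an example of the technique class; the posting does not specify which metrics the team actually uses, and the labels below are invented.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators (Cohen's kappa)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical data: two annotators labeling the same 8 search results
# as relevant (1) or not relevant (0).
a = [1, 1, 0, 1, 0, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(a, b), 3))  # → 0.467
```

Raw percent agreement here is 0.75, but kappa discounts the agreement expected by chance, giving a more honest quality signal for rating performers.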
We expect you to:

• Work confidently with Python and SQL
• Know mathematical statistics and probability theory
• Be able to structure customer requirements and expectations
• Be ready to take on new tasks for which there is no ready-made solution
It's easy to grow with us. If you need to improve your language skills for work tasks, we will organize training and cover 50% of the cost. And that's not all the benefits; the full list is here: https://yandex.ru/jobs/pages/benefits?utm_campaign=ya_nanimaet