Human Interview for LLM Evaluation
3 days ago

Job summary
Participants will undergo a three-step procedure including informed consent review and a one-on-one text-based chat interview.
Job description
Similar jobs
LLM Evaluation
1 month ago
Project Overview: We are supporting a client building an AI Shopping Assistant that responds to natural-language user queries by generating product collections and refinement suggestions. · This role focuses on evaluating and improving LLM outputs by reviewing interaction traces, ...
LLM Evaluation Specialist
3 weeks ago
Evaluate LLM-generated responses on their ability to effectively answer user queries. Conduct fact-checking using trusted public sources and external tools. Generate high-quality human evaluation data by annotating response strengths, areas for improvement, and factual inaccuraci ...
LLM Evaluation Specialist
4 weeks ago
Mercor connects elite creative and technical talent with leading AI research labs. · ...
LLM Evaluation, Benchmarking
1 week ago
We're looking for an LLM Evaluation, Benchmarking & Experimentation Engineer to rigorously test our proprietary LLM API and build the infrastructure for systematic model improvement. · ...
LLM Evaluation Specialist
3 weeks ago
Evaluate LLM-generated responses for effectiveness in answering user queries. Conduct fact-checking using trusted public sources and external tools. · ...
LLM Evaluation and Benchmarking Mentor
1 month ago
I'm seeking a technical mentor to help deepen my understanding of LLM evaluation and benchmarking, with particular attention to high-stakes applications (e.g., mental health), while developing a generalizable framework for reasoning about model performance across domains. · ...
Finance LLM Reviewer/Evaluator prompt review
1 week ago
To audit, train, and improve Large Language Models (LLMs) specialized in finance, ensuring accuracy and mitigating risks. · Key Tasks: Evaluating LLM outputs for accuracy, logicality, and reliability in financial tasks (e.g., trading, reporting); Developing Ground Truth data for fine-tuning mo ...
We are seeking a talented creative writer with a strong statistical background to develop benchmarks for large language model output evaluation. · Develop benchmarks for LLM output evaluation · ...
We're building an internal system that helps B2B teams write non-generic outreach by using structured information pulled from public sources (company websites, competitor sites, LinkedIn posts, YouTube video transcripts, etc.). The system should generate actionable outreach suggestio ...
I'm hiring a full-time (40 hours/week) assistant to help me execute and scale finance-focused RLHF / LLM evaluation contracts (model grading, rubric design, golden responses, QA, and reviewer feedback). · This work blends finance (valuation, accounting, markets) with AI evaluation (co ...
AI Researcher
2 days ago
We are looking for a part-time Research Advisor or Consultant to help guide our methodology and provide high-level input as we scale our evaluation framework and data programs. · ...
We're building an LLM-powered assistant (agent) that can answer Korean healthcare-domain questions accurately and safely. · ...
Senior AI Engineer
2 weeks ago
We are building a serious AI product focused on transforming real-world business conversations into structured intelligence, insights, and automation. · AI pipelines that analyze recorded conversations (speech, text, structured insights) · LLM-based systems for summarization, classifi ...
Writer/Content creator for LLM Data
1 month ago
We are looking for a hands-on LLM data/evaluation practitioner to create accurate and credible marketing content. · We need a technical writer who truly understands how LLM training and evaluation projects work in practice. · This is not a generic prompt engineer role. We need so ...
LLM Prompt Engineering
3 weeks ago
We are building a GenAI-driven recommendation engine that generates structured recommendations by passing user context + prompts to LLMs and evaluating the output. · ...
We are seeking a PhD-level expert in Psychology or a related field, with strong knowledge of consciousness and Theory of Mind, who also possesses an understanding of Large Language Models (LLMs). · Evaluating theories related to consciousness and contributing insights on how LLMs can be ...
We need an experienced engineer who can build an LLM-driven classification system that reads incoming text and produces structured, consistent outputs for internal decision-making. · ...
We are seeking a PhD-level expert in Psychology, AI, Neuroscience, or a related field. · Must have knowledge of consciousness and Theory of Mind. · ...
We need an experienced PyTorch ML engineer to modify our LLM routing framework to add response-aware routing capabilities. · The system currently makes routing decisions before any LLM generates a response; it only evaluates the incoming prompt to predict difficulty. · ...
ELO Scoring and LLM Integration for ShinkaEvolve
1 month ago
Job summary · Implementation of an evaluation combining ELO scoring and an LLM-as-a-judge for ShinkaEvolve (an open-source implementation of AlphaEvolve). · The project requires strong proficiency in Python and solid experience in software design and architecture. · ...
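For context on the mechanism this posting names: an Elo-plus-judge evaluation typically has an LLM judge pick a winner between two candidate outputs, then feeds that verdict into a standard Elo rating update. A minimal sketch in Python; the function names and K-factor are illustrative assumptions, not details from the posting:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that candidate A beats candidate B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Update both ratings after one judged pairwise comparison."""
    e_a = expected_score(r_a, r_b)          # A's expected win probability
    s_a = 1.0 if a_won else 0.0             # actual outcome from the LLM judge
    return (r_a + k * (s_a - e_a),
            r_b + k * ((1.0 - s_a) - (1.0 - e_a)))

# Example: two candidates start at 1000; the judge prefers A.
ra, rb = update(1000.0, 1000.0, a_won=True)  # → (1016.0, 984.0)
```

In such a setup the LLM judge only supplies the win/loss signal (`a_won`); the Elo machinery turns many noisy pairwise verdicts into a stable ranking.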