Literary Critic for LLM Benchmarks
1 month ago

Job Summary
We seek a literary critic to develop benchmarks for LLM-generated content: assess coherence, creativity, and quality, and create criteria to measure these aspects.
Qualifications
- Content Writing
- Creative Writing
Similar jobs
We're looking for an LLM Evaluation, Benchmarking & Experimentation Engineer to rigorously test our proprietary LLM API and build the infrastructure for systematic model improvement. · ...
1 week ago
I'm seeking a technical mentor to help deepen my understanding of LLM evaluation and benchmarking, with particular attention to high-stakes applications (e.g., mental health), while developing a generalizable framework for reasoning about model performance across domains. · ...
1 month ago
Technical Creative Writing Benchmark Developer for LLMs
We are seeking a skilled Technical Creative Writing Benchmark Developer to help us benchmark large language models (LLMs) with 30 hours per week. · Mandatory skills: · Creative Writing · Content Writing · Search Engine Optimization · Writing ...
1 month ago
AI Engineer for LLM Benchmark Development and Creative Storytelling
We are seeking an AI Engineer with expertise in creating benchmarks for large language models (LLMs) who also has a passion for creative writing and story development. · Mandatory skills: Creative Writing, Writing, Content Writing, Adobe Illustrator ...
1 month ago
We are seeking a skilled professional to create a comprehensive Discovery and Architecture Document for LLM Robot Workers. · This project will include cost and quality benchmarking, · along with guidelines for secure deployment. · The ideal candidate will have experience in LLM t ...
2 weeks ago
Creative Writer with Statistical Expertise Needed for LLM Evaluation
We are seeking a talented creative writer who possesses a strong statistical background to develop benchmarks for large language model output evaluation. · Develop benchmarks for LLM output evaluation · ...
3 weeks ago
Expert Needed for Sorting with LLM Project Integration
We are seeking an expert to assist with our sorting with LLM project. The ideal candidate will have experience integrating benchmark datasets and connecting to open-source models using the existing codebase. · ...
1 month ago
We are seeking an expert in AI chatbot development with a strong understanding of language models. · ...
1 month ago
AI/ML Data Scientist — Compliance AI Evaluation
Only for registered members
We need a data scientist to prove our AI works — with data, not marketing. You'll design and run evaluations that compare ZeroDrift's compliance detection against raw LLMs (GPT-4, Claude). ...
1 week ago
Bond Studio makes software that lets users capture their space with their phone and see a visualization of how that room could look if it were remodeled. Users can browse products, select them and see them visualized in their space. Part of this experience is search where users c ...
2 weeks ago
We are building a serious AI product focused on transforming real-world business conversations into structured intelligence insights and automation. · AI pipelines that analyze recorded conversations: speech, text, structured insights · LLM-based systems for summarization, classifi ...
2 weeks ago
We are seeking an experienced Legal AI Evaluator to join our team at Mercor. The ideal candidate will have a strong background in law and experience working with large language models. · ...
1 month ago
Experienced Developer Needed to Optimize AI Coding Agent Engine
We are seeking an experienced developer to optimize our AI-powered coding agent engine. · Mandatory skills: Python, LLM Prompt Engineering, AI Agent Development · ...
1 month ago
We are seeking a highly skilled GenAI / AI Engineer to design build and deploy cutting-edge generative AI solutions that address real-world business challenges. · ...
2 weeks ago
GEO / LLM Visibility Expert (SaaS + EdTech) — AI Search Audit
We're looking for a GEO (Generative Engine Optimization) expert with proven experience in SaaS and/or Education platforms to conduct an audit of how a large international EdTech SaaS product is currently represented and recommended inside LLMs and AI-powered search experiences. · ...
1 month ago
This is a freelance opportunity to build an AI workflow for strategy reports. · The client has existing assessment logic and narrative frameworks in place. · ...
1 week ago
We are looking for an experienced Mechanical Engineering Consultant to join our team. · ...
1 month ago
Evaluate LLM-generated responses for effectiveness in answering user queries. · Evaluate whether model responses align with expected conversational behavior and system guidelines. ...
1 month ago
Mercor connects elite creative and technical talent with leading AI research labs. · Bachelor's degree · Native speaker or ILR 5 / primary fluency (C2 on the CEFR scale) in French · ...
3 weeks ago
Mercor connects elite creative and technical talent with leading AI research labs. We are looking for an experienced Conversational AI Evaluator to join our team. · Evaluate LLM-generated responses for effectiveness in answering user queries. Conduct fact-checking using tru ...
1 month ago