SaidGig

Software Engineering and Data Science Evaluator

$60–$100/hr

Remote · Contract · Technology · Updated Apr 16, 2026

Role Overview

This position offers the opportunity to work with leading AI teams to improve the quality, usefulness, and reliability of conversational AI systems. These systems are used in everyday and professional contexts alike, and their effectiveness hinges on responding clearly, accurately, and helpfully to user questions. The project focuses on evaluating and improving how models reason about code, generate solutions, and explain technical concepts across programming tasks of varying type and complexity.

Key Responsibilities
  • Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness.
  • Conduct fact-checking using trusted public sources and authoritative references.
  • Perform accuracy testing by executing code and validating outputs using appropriate tools (a brief sketch of this kind of check follows this list).
  • Annotate model responses by identifying strengths, areas for improvement, and factual or conceptual inaccuracies.
  • Assess code quality, readability, algorithmic soundness, and explanation quality.
  • Ensure model responses align with expected conversational behavior and system guidelines.
  • Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines.
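To give a concrete sense of that kind of accuracy testing, here is a minimal sketch in Python. The model_generated_sort function and its test cases are hypothetical stand-ins invented for illustration, not part of any actual evaluation harness used on this project.

    # Minimal sketch: execute model-generated code against known cases.
    # model_generated_sort stands in for code produced by the model under review.
    def model_generated_sort(items):
        return sorted(items)

    # Hand-picked cases, including the edge cases an evaluator would probe.
    test_cases = [
        ([3, 1, 2], [1, 2, 3]),   # typical input
        ([], []),                 # empty input
        ([5], [5]),               # single element
        ([2, 2, 1], [1, 2, 2]),   # duplicates preserved
    ]

    for given, expected in test_cases:
        actual = model_generated_sort(list(given))
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: input={given!r} expected={expected!r} got={actual!r}")

In practice, evaluation goes beyond output equality: runtime behavior, error handling, and the accuracy of any complexity or design claims in the model's explanation are checked as well.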
Qualifications
  • Bachelor's, Master's, or PhD in Computer Science or a closely related field.
  • Significant (3+ years) real-world experience in software engineering or related technical roles.
  • Expertise in at least two relevant programming languages (e.g., Python, Java, C++, C, JavaScript, Go, Rust, Ruby, SQL, PowerShell, Bash, Swift, Kotlin, R, TypeScript, HTML/CSS).
  • Ability to independently solve HackerRank or LeetCode Medium- and Hard-level problems.
  • Experience contributing to well-known open-source projects, including merged pull requests.
  • Significant experience using LLMs while coding, with a clear understanding of their strengths and failure modes.
  • Strong attention to detail and comfort evaluating complex technical reasoning and identifying subtle bugs or logical flaws.
Nice-to-Have Specialties
  • Prior experience with RLHF, model evaluation, or data annotation work.
  • Track record in competitive programming.
  • Experience reviewing code in production environments.
  • Familiarity with multiple programming paradigms or ecosystems.
  • Experience explaining complex technical concepts to non-expert audiences.
What Success Looks Like
  • Identifying incorrect logic, inefficiencies, missed edge cases, or misleading explanations in model-generated code, technical concepts, and system design discussions.
  • Delivering feedback that improves the correctness, robustness, and clarity of AI coding outputs.
  • Delivering reproducible evaluation artifacts that strengthen model performance.
  • Building customer trust that AI systems can assist reliably with real-world coding tasks.
Why Join

This remote role lets experienced software engineers directly influence how AI systems reason about and generate code, applying their technical expertise to high-impact AI development work on systems used by developers worldwide.
