Job details

Entry Level Annotation Judge (Data Quality Assessor)

Our Client: An American tech real estate marketplace company founded by former Microsoft executives
Duration: 10 months (potential to extend or convert to permanent)
Pay: $21/hr on a W2 (set rate)
Location: Fully remote
Hours: 40 hours per week

Our client's AI team is harnessing the power of generative AI to build services and product experiences for a seamless and convenient real estate transaction experience. The AI Platform team is building platform services that enable the use of LLMs and multimodal models across our client's group businesses.

Our client is looking for an entry level Annotation Judge to join the Human In The Loop Operations team and contribute to their growing set of Natural Language Processing (NLP) projects. The Annotation Judge will play a crucial role in fine-tuning and calibrating the AI/ML processes that handle verbal interactions between home buyers and real estate agents.

This is a great opportunity to collaborate with experienced Operations Managers and natural language experts. You will be taught everything you need to know! Once ramped up, you will be responsible for ensuring the quality of a range of project elements, from the accuracy and humanlike quality of automated chatbot answers, to the classification of automatically annotated conversation elements, to the review of search results based on understanding of complex user queries.

You will also play an essential role in defining clear guidelines that may be used by non-native English speakers. Attention to detail and a natural curiosity for language subtleties will be essential for the long-term scaling of solutions. The ideal candidate is a responsible and reliable self-starter who is comfortable with ambiguity and willing to learn new things.

Day In The Life:
• Review and/or evaluate annotated data to ensure it meets quality standards and project requirements.
• Collaborate with internal teams to optimize model performance based on annotated data.
• Help develop and maintain annotation guidelines and best practices.
• Annotate training data to establish ground truth data.
• Identify gaps in training content and suggest improvements.

Who You Are:
• Excellent attention to detail and analytical skills
• Enthusiasm for the AI space and willingness to learn
• Ability to work in ambiguous situations
• Ability to work effectively in a collaborative team environment

Bonus Qualifications:
• Experience with or affinity for linguistics
• Experience with copywriting, UX writing, or content development around voice and brand guidelines
• Bachelor's degree in English, Linguistics, or a related field
• Experience with machine-generated speech evaluation, prompt engineering, search results evaluation, or search query categorization
• Experience with tools and software used for NLP annotation

We partner with companies to get marketing, creative, and digital work done by providing the right talent, innovation, and insights. We drive meaningful impact by helping our clients navigate today's evolving environment.

EMPLOYMENT TYPE
Full-time, remote
DATE POSTED
September 12, 2024

