Are you ready to explore the vast landscape of language comprehension? This role involves developing challenging questions that assess the comprehension abilities of a Large Language Model (LLM) assistant. You will focus on two core tasks: rating content for localization relevance and generating complex, non-straightforward questions that test an LLM's understanding of the provided content.
Responsibilities:
Stage 1: Localization Rating
- Review content from various articles.
- Assign a localization rating (1-5) based on the relevance of information to specific regions or audiences.
- Apply the localization rating criteria accurately and consistently (a sketch of a hypothetical rubric follows this list).
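To make the 1-5 scale concrete, here is a minimal Python sketch of what a localization rating rubric could look like. The scale descriptions and names (LOCALIZATION_SCALE, validate_rating) are illustrative assumptions only; the actual criteria are defined by the project guidelines.

```python
# Hypothetical sketch only: the scale descriptions below are illustrative
# assumptions, not the project's actual localization criteria.
LOCALIZATION_SCALE = {
    1: "Globally relevant; no region-specific context needed",
    2: "Mostly global, with minor regional references",
    3: "Mixed; some sections assume regional knowledge",
    4: "Largely region-specific (e.g., local institutions, customs, or laws)",
    5: "Fully localized; meaningful mainly to a specific regional audience",
}

def validate_rating(rating: int) -> int:
    """Confirm that a localization rating falls within the 1-5 scale."""
    if rating not in LOCALIZATION_SCALE:
        raise ValueError(f"Localization rating must be 1-5, got {rating}")
    return rating
```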
Stage 2: Question Generation and Answering
- Create challenging questions based on the provided content that require synthesis and comprehension, not mere text extraction.
- Follow established guidelines to ensure question quality and relevance.
- Provide concise, clear answers to the questions you generate (an example record sketch follows this list).
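As a rough illustration of the Stage 2 deliverable, the sketch below shows one possible shape for a completed item, together with a naive check against purely extractive questions. The field names (article_id, localization_rating, question, answer) and the helper looks_extractive are hypothetical, not part of the project's tooling.

```python
from dataclasses import dataclass

@dataclass
class AnnotationItem:
    """Hypothetical shape for one completed item; field names are assumptions."""
    article_id: str
    localization_rating: int  # 1-5 rating from Stage 1
    question: str             # should require synthesis, not text extraction
    answer: str               # concise, clear reference answer

def looks_extractive(question: str, article_text: str) -> bool:
    """Naive heuristic: flag questions copied verbatim from the source text."""
    return question.strip().lower() in article_text.lower()
```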
Requirements:
- Must be a native speaker of Korean
- Must be located in the Republic of Korea (South Korea)
- Attention to Detail: Ability to accurately evaluate content and apply localization criteria.
- Critical Thinking: Strong analytical skills to create complex questions that challenge LLM comprehension.
- Language Proficiency: Excellent command of the target language used in your workflow.
- Consistency: Ability to maintain high consistency in applying guidelines and crafting questions.
Preferred Qualifications:
- Experience in Content Annotation: Prior experience with content evaluation or question creation.
- Familiarity with LLMs: Understanding of LLM functionality and common comprehension challenges.
Ready to turn your expertise into a paid opportunity? Apply now!