Research Scientist - Multimodal Language Models

Remote: Full Remote

Offer summary

Qualifications:

  • Significant experience with multimodal language models and AI algorithms.
  • Expertise in Python and PyTorch, with a full development pipeline background.
  • Hands-on experience with large-scale text and multimodal data.
  • Familiarity with LLMs, Vision Language Models, and generative video models.

Key responsibilities:

  • Design and implement novel AI algorithms for multimodal language models.
  • Build tools for evaluating and benchmarking these models.
  • Develop large-scale AI training and inference methods.
  • Collaborate with teams to transfer research into products and services.

Luma AI https://lumalabs.ai/dream-machine
11 - 50 Employees

Job description

Luma’s mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence: to go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision and audio. We are therefore training and scaling up multimodal foundation models for systems that can see, hear and understand, show and explain, and eventually interact with our world to effect change.


We are looking for researchers with significant experience solving hard problems in multimodal language models. You will work end-to-end on cutting-edge multimodal language models, with a strong emphasis on audio and visual data. Your contributions will be pivotal in shaping our research projects and product roadmaps.

Responsibilities

  • Design and implement novel AI algorithms and architectures for multimodal language models.

  • Build tools to evaluate and benchmark multimodal language models.

  • Develop large-scale AI training and inference methods.

  • Ensure efficient implementation of models & systems for data processing and training.

  • Build tools to analyze and process multimodal data.

  • Collaborate with research and engineering teams across Luma to transfer research to products and services.

  • Implement cutting-edge product prototypes based on multimodal generative AI.

Experience

  • Expertise in Python & PyTorch, including practical experience working with the full development pipeline, from data processing & data loading to training, inference, and optimization.

  • Experience working with large-scale text data, or (bonus) interleaved data spanning audio, video, image, and/or text.

  • Hands-on experience developing or benchmarking at least one of the following: LLMs, Vision Language Models, Audio Language Models, or generative video models.

Compensation

  • The pay range for this position in California is $200,000 - $300,000 per year; however, base pay offered may vary depending on job-related knowledge, skills, candidate location, and experience. We also offer competitive equity packages in the form of stock options and a comprehensive benefits plan.

Your application is reviewed by real people.

Required profile

Experience

Spoken language(s):
English

Other Skills

  • Training and Development
  • Collaboration
  • Problem Solving
