VLM Data Scientist

Ampcus, Inc.
Waukesha, Wisconsin, United States
May 06, 2025

Ampcus Inc. is a certified global provider of a broad range of technology and business consulting services. We are searching for a highly motivated candidate to join our talented team.

Job Title: VLM Data Scientist

Location(s): Waukesha, WI

Job Description:

We are seeking a highly skilled and detail-oriented Vision-Language Model (VLM) Data Scientist / Vision Data Analyst to join our team.



  • The ideal candidate will have a strong background in computer vision, natural language processing, data analysis, and machine learning.
  • This role involves developing and deploying multimodal AI solutions that integrate vision and language capabilities, analyzing visual data to extract meaningful insights, and collaborating with cross-functional teams to improve our products and services.


Key Responsibilities:


  • VLM Development & Deployment:

    • Design, train, and deploy efficient Vision-Language Models (e.g., VILA) for multimodal applications, working with related tooling such as NVIDIA Isaac Sim.
    • Explore cost-effective methods such as knowledge distillation, modal-adaptive pruning, and LoRA fine-tuning to optimize training and inference.


  • Multimodal AI Solutions:

    • Develop solutions that integrate vision and language capabilities for applications like image-text matching, visual question answering (VQA), and document data extraction.
    • Leverage interleaved image-text datasets and advanced techniques (e.g., cross-attention layers) to enhance model performance.


  • Healthcare Domain Expertise:

    • Apply VLMs to healthcare-specific use cases such as medical imaging analysis, position detection, motion detection, and measurements.
    • Ensure compliance with healthcare standards while handling sensitive data.


  • Efficiency Optimization:

    • Evaluate trade-offs between model size, performance, and cost using techniques like elastic visual encoders or lightweight architectures.
    • Benchmark different VLMs (e.g., GPT-4V, Claude 3.5) for accuracy, speed, and cost-effectiveness on specific tasks.


  • Data Analysis:

    • Analyze large sets of visual data to identify patterns, trends, and anomalies.


  • Algorithm Development:

    • Develop and implement computer vision algorithms to process and interpret visual data.


  • Machine Learning:

    • Apply machine learning techniques to improve the accuracy and efficiency of vision-based systems.


  • Reporting:

    • Create detailed reports and visualizations to communicate findings to both technical and non-technical audiences.




Qualifications:


  • Education:

    • Master's or Ph.D. in Computer Science, Data Science, Machine Learning, Electrical Engineering, or a related field.


  • Experience:

    • 5+ years of experience in machine learning or data science roles with a focus on vision-language models and computer vision.
    • Proven expertise in deploying production-grade multimodal AI solutions.


  • Technical Skills:

    • Proficiency in Python and ML frameworks (e.g., PyTorch, TensorFlow).
    • Hands-on experience with VLMs such as VILA and with related NVIDIA tooling such as Isaac Sim or VSS.
    • Strong understanding of image processing techniques and tools.


  • Analytical Skills:

    • Excellent problem-solving skills and the ability to analyze complex data sets.


  • Communication:

    • Strong written and verbal communication skills.
    • Ability to present complex information clearly and concisely.


  • Teamwork:

    • Ability to work effectively in a collaborative team environment.


  • Additional Skills:

    • Experience with cloud computing platforms such as AWS or Azure.
    • Familiarity with data visualization tools like Tableau or Power BI.
    • Knowledge of statistical analysis and data mining techniques.




Ampcus is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, protected veteran status, or disability.