
AI/ML Engineer – Generative AI Specialist

  • Chennai
  • Full-time

About the Role:

We are looking for a highly motivated AI/ML Engineer with 3–5 years of experience to join our growing team. This role focuses on building and optimizing solutions using Generative AI. You’ll work at the intersection of AI research and real-world application development, helping us deliver next-generation intelligent systems for enterprise use cases.

 

Responsibilities:

Design and build ML models with a focus on LLMs and Generative AI.

Conduct research that contributes to the state-of-the-art in LLM architectures.

Apply full and parameter-efficient fine-tuning (LoRA, QLoRA, Adapters, Prefix Tuning), instruction tuning, RLHF, and multi-task learning.

Apply model compression (AWQ, GPTQ, GPTQ-for-LLaMA) and optimized inference engines (vLLM, DeepSpeed, FP6-LLM).

Develop and maintain scalable APIs to serve model outputs in production environments.

Collaborate with cross-functional teams (product, data and engineering) to identify AI opportunities and deliver solutions.

Implement Retrieval-Augmented Generation (RAG) pipelines and integrate vector databases (e.g., FAISS, Pinecone).

Work with relational databases; experience with PostgreSQL is expected.

Research and apply the latest techniques in prompt engineering, model compression, and optimization.

Monitor, evaluate, and continuously improve model performance and reliability.

 

Required Skills & Experience:

Bachelor’s or Master’s degree in Computer Science, Data Science, AI, or a related field.

3–5 years of hands-on experience in AI/ML development, with expertise in agentic AI approaches.

Strong programming skills in Python and proficiency with ML frameworks such as TensorFlow, PyTorch, and Keras.

Experience in distributed training, NVIDIA AI platforms, and cloud/on-premise infrastructure.

Familiarity with integrating MCP-based architectures into broader system workflows, ensuring semantic consistency and interoperability.

Experience with performance tuning, caching strategies, and cost optimization, particularly in the context of production-grade LLM deployments.

Experience with LLMs, including fine-tuning, prompt engineering, and using frameworks like Hugging Face Transformers.

Familiarity with GenAI techniques and tools such as LangChain, LangGraph, LLMOps, LoRA, and PEFT.

Experience deploying models using FastAPI, Flask, Docker, or on cloud platforms (AWS/GCP/Azure).

Understanding of NLP concepts, deep learning architectures, and transformer models.

 

Preferred Qualifications:

B.Tech or M.Tech in AI, Data Science, Generative AI, or Computer Science; real-world Generative AI project/product development or LLM fine-tuning experience is an added advantage.

Knowledge of AI guardrails, compliance frameworks (e.g., Microsoft AI Guidance), and responsible AI practices.

Previous work on chatbots, copilots, or AI assistants.

 

Publications, contributions to open-source projects, or participation in AI competitions.