AI OPPORTUNITIES
Key Responsibilities:
• Lead the design and development of GenAI and Agentic AI-based applications using Python.
• Architect scalable and secure solutions integrating LLMs, vector databases, and orchestration frameworks.
• Collaborate with data scientists, ML engineers, and product teams to translate business needs into AI-driven solutions.
• Manage project timelines, resources, and deliverables across agile teams.
• Ensure adherence to best practices in AI development, MLOps, and cloud deployment.
• Stay updated with emerging trends in GenAI, multi-agent systems, and open-source AI tooling.
Required Skills & Technologies:
• Strong proficiency in Python and AI/ML libraries (e.g., LangChain, Hugging Face, OpenAI SDKs).
• Experience with Agentic AI frameworks and multi-agent orchestration.
• Hands-on experience with LLMs, RAG pipelines, and vector databases (e.g., FAISS, Pinecone, Weaviate).
• Familiarity with Databricks, PySpark, and SQL for data engineering.
• Cloud experience with AWS and containerization tools (e.g., Docker, Kubernetes).
• Knowledge of CI/CD, Git, and DevOps best practices.
• Exposure to Scaled Agile environments.
What You’ll Do
• Work with the latest AI and ML models (OpenAI, Anthropic, LLaMA, etc.)
• Integrate AI capabilities into our existing products and new features
• Use AI-assisted coding tools to build and optimize solutions
• Experiment with AI workflows to solve real-world problems for our customers
• Collaborate with the product and engineering team to deploy AI-driven features
What We’re Looking For
• Solid understanding of current AI/ML models and architectures
• Hands-on experience with AI APIs, frameworks, and libraries (Python, LangChain, TensorFlow, PyTorch, etc.)
• Ability to leverage AI tools for faster and smarter coding
• Strong problem-solving skills and a “let’s figure it out” mindset
• Bonus: Experience in SaaS products or B2B data tools
Key Responsibilities:
• Develop and integrate APIs, services, and front-end components for GenAI applications.
• Collaborate with data scientists and ML engineers to operationalize LLM and vector database solutions.
• Containerize and deploy services using Docker and cloud-native practices.
• Ensure robust monitoring, security, and CI/CD automation.
• Deliver scalable, secure, and high-performance GenAI systems in enterprise environments.
Key Skills:
• Python, Node.js
• FastAPI/Flask, Streamlit/Gradio/Dash
• RESTful API Development
• LangChain, Prompt Engineering, Vector Databases
• Azure/AWS Cloud, Docker, CI/CD
• Monitoring & Logging Tools
• Security & Privacy Compliance
• Model Deployment & Integration
• AI Software Engineering: Rock-solid backend development skills with expertise in Python and designing scalable APIs/services. Experience building and deploying systems on AWS or similar cloud platforms is required, including familiarity with cloud infrastructure and distributed computing. Strong system design abilities with a track record of designing robust, maintainable architectures are a must.
• LLM/AI Application Experience: Proven experience building applications that leverage large language models or generative AI. You have spent time prompting and integrating language models into real products (e.g., building chatbots, semantic search, AI assistants) and understand their behavior and failure modes. Demonstrable projects or work in LLM-powered application development – especially using techniques like RAG or building LLM-driven agents – will make you stand out.
• AI/ML Knowledge: Prioritize applied LLM product engineering over traditional ML pipelines. Strong chops in prompt design, function calling/structured outputs, tool use, context-window management, and the RAG levers that matter (document parsing/chunking, metadata, re-ranking, embedding/model selection). Make pragmatic model/provider choices (hosted vs. open) using latency, cost, context length, safety, and rate-limit trade-offs; know when simple prompting/config changes beat fine-tuning, and when lightweight adapters or fine-tuning are justified. Design evaluation that mirrors product outcomes: golden sets, automated prompt unit tests, offline checks, and online A/Bs for helpfulness/correctness/safety; track production proxies like retrieval recall and hallucination rate. Solid understanding of embeddings, tokenization, and vector search fundamentals, plus working literacy in transformers to reason about capabilities/limits. Familiarity with agent patterns (planning, tool orchestration, memory) and guardrail/safety techniques.
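The "golden sets" and "automated prompt unit tests" named above can be sketched minimally. In this sketch, `call_model` is a hypothetical placeholder for any LLM client call (it fakes a deterministic model so the harness is runnable), and the golden set is purely illustrative:

```python
# Minimal golden-set evaluation harness for prompt changes.
# `call_model` is a stand-in for a real LLM client; in practice it would
# call a hosted or local model.

def call_model(prompt: str) -> str:
    # Fake deterministic "model" so the harness runs without a provider.
    if "capital of France" in prompt:
        return "Paris"
    return "unknown"

GOLDEN_SET = [
    # (input question, predicate the answer must satisfy)
    ("What is the capital of France?", lambda out: "Paris" in out),
    ("What is the capital of Atlantis?", lambda out: "unknown" in out.lower()),
]

def evaluate(prompt_template: str) -> float:
    """Return the pass rate of a prompt template over the golden set."""
    passed = 0
    for question, check in GOLDEN_SET:
        answer = call_model(prompt_template.format(question=question))
        if check(answer):
            passed += 1
    return passed / len(GOLDEN_SET)

if __name__ == "__main__":
    print(f"pass rate: {evaluate('Answer concisely: {question}'.replace('{question}', '{question}')):.0%}")
```

Wired into CI, a regression in a prompt change shows up as a drop in the pass rate before it reaches production.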
• Tooling & Frameworks: Hands-on experience with the AI/LLM tech stack and libraries. This includes proficiency with LLM orchestration libraries such as LangChain, LlamaIndex, etc., for building prompt pipelines. Experience working with vector databases or semantic search (e.g., Pinecone, Chroma, Milvus) to enable retrieval-augmented generation is highly desired.
• Cloud & DevOps: Own the productionization of LLM/RAG-backed services as high-availability, low-latency backends. Expertise in AWS (e.g., ECS/EKS/Lambda, API Gateway/ALB, S3, DynamoDB/Postgres, OpenSearch, SQS/SNS/Step Functions, Secrets Manager/KMS, VPC) and infrastructure-as-code (Terraform/CDK). You’re comfortable shipping stateless APIs, event-driven pipelines, and retrieval infrastructure (vector stores, caches) with strong observability (p95/p99 latency, distributed tracing, retries/circuit breakers), security (PII handling, encryption, least-privilege IAM, private networking to model endpoints), and progressive delivery (blue/green, canary, feature flags). Build prompt/config rollout workflows, manage token/cost budgets, apply caching/batching/streaming strategies, and implement graceful fallbacks across multiple model providers.
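The "graceful fallbacks across multiple model providers" mentioned above might look like this sketch; the provider functions are illustrative placeholders, not real SDK calls, and the retry/backoff values are arbitrary:

```python
import time

# Sketch: try each model provider in order, retrying transient failures
# with exponential backoff before falling through to the next provider.

class ProviderError(Exception):
    """Stands in for a transient provider failure (rate limit, timeout)."""

def flaky_primary(prompt: str) -> str:
    # Simulates a primary provider that is currently failing.
    raise ProviderError("rate limited")

def stable_fallback(prompt: str) -> str:
    # Simulates a healthy secondary provider.
    return f"fallback answer to: {prompt}"

def complete_with_fallback(prompt, providers, retries=2, backoff=0.01):
    """Return the first successful completion across providers."""
    last_error = None
    for call in providers:
        for attempt in range(retries):
            try:
                return call(prompt)
            except ProviderError as err:
                last_error = err
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise RuntimeError(f"all providers failed: {last_error}")

if __name__ == "__main__":
    print(complete_with_fallback("hello", [flaky_primary, stable_fallback]))
```

In production the same shape extends naturally with circuit breakers, per-provider token budgets, and response caching.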
• Product and Domain Experience: Experience building enterprise (B2B SaaS) products is a strong plus. This means you understand considerations like user experience, scalability, security, and compliance. Past exposure to these types of products will help you design AI solutions that cater to a range of end-users.
Preferred AI Qualifications
• Full-Stack & Frontend Skills: While this is primarily a backend/AI role, having a deep full-stack background (especially modern frontend frameworks or Node.js) is beneficial. It will help you collaborate with frontend teams and build end-to-end solutions.
• Advanced AI Techniques: Familiarity with advanced techniques like fine-tuning LLMs (e.g., using LoRA adapters), reinforcement learning from human feedback (RLHF), or other emerging ML methodologies. Experience working with open-source LLMs (such as LLaMA, Mistral, etc.) and their tooling (e.g., model quantization, efficient inference libraries) is a plus.
• Multi-Modal and Agents: Experience developing complex agentic systems using LLMs (for example, multi-agent systems or integrating LLMs with tool networks) is a bonus. Similarly, knowledge of multi-modal AI (combining text with vision or other data) could be useful as we expand our product capabilities.
• Startup/Agile Environment: Prior experience in an early-stage startup or similarly fast-paced environment where you’ve worn multiple hats and adapted to rapid changes. This role will involve quick iteration and evolving requirements, so comfort with ambiguity and agility is valued.
• Community/Research Involvement: Active participation in the AI community (open-source contributions, research publications, or blogging about AI advancements) is appreciated. It demonstrates passion and keeps you at the cutting edge. If you have published research or a portfolio of AI side projects, we’d love to see them.
Key Responsibilities:
– Design & build GenAI/agentic systems such as chat copilots, workflow/graph agents, and tool-using agents
– Implement chunking, hybrid search, vector stores, re-ranking, feedback loops, and continuous data quality/evaluation
– Select, integrate, and fine-tune LLMs and multimodal models
– Apply prompt-engineering techniques to specific use cases and task types
– Work on solutions based on LLM, NLP, DL, ML, object detection/classification, etc.
– Have a clear understanding of CI/CD and of configuring guardrails and PII redaction
– Collaborate with clients/stakeholders from multiple geographies
– Stay informed about the latest advancements in Gen AI, machine learning, and AI technologies
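The chunking step in the responsibilities above can be illustrated with a minimal fixed-size chunker with overlap; the window and overlap sizes are arbitrary defaults, not recommendations:

```python
# Simple fixed-size chunking with overlap, one of the ingestion "levers"
# in a RAG pipeline. Overlap keeps context that straddles a boundary
# present in both neighboring chunks.

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # advance by the non-overlapping stride
    return chunks

if __name__ == "__main__":
    pieces = chunk_text("word " * 100)  # 500 characters
    print(len(pieces), "chunks")
    # the tail of each chunk reappears at the head of the next
    print(pieces[0][-50:] == pieces[1][:50])
```

Real pipelines typically chunk on token or sentence boundaries rather than raw characters, but the size/overlap trade-off is the same.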
Key Responsibilities
• Develop and train computer vision models for defect detection (cracks, corrosion, surface anomalies) using image and thermal datasets.
• Build AI inference APIs (FastAPI / Flask) to integrate models with the AeroAI Cloud Dashboard.
• Design predictive models for vibration/sensor data (FFT-based anomaly detection, signal processing).
• Preprocess and clean real inspection data (images, videos, and CSV logs) for training and testing.
• Integrate and optimize model pipelines for speed, accuracy, and low latency inference.
• Evaluate and fine-tune pre-trained models (e.g., YOLO, EfficientNet, ViT, MobileNet) for inspection use cases.
• Work closely with the Full Stack Developer to ensure seamless integration with the dashboard.
• Prepare technical documentation, datasets, and AI experiment logs.
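The FFT-based anomaly detection mentioned above can be sketched on a toy vibration signal. The operating frequency, bin tolerance, and energy threshold below are illustrative assumptions (not real sensor specs), and a naive DFT stands in for a real FFT library:

```python
import cmath
import math

# Toy sketch of spectral anomaly detection: a healthy signal concentrates
# energy at its expected operating frequency; a fault injects energy at
# other frequencies.

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum (fine for short illustrative signals)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def is_anomalous(signal, expected_bin, tolerance=1, ratio=0.5):
    """Flag a signal whose off-band spectral energy exceeds `ratio`."""
    mags = dft_magnitudes(signal)
    total = sum(m * m for m in mags) or 1.0
    in_band = sum(m * m for k, m in enumerate(mags)
                  if abs(k - expected_bin) <= tolerance)
    return (total - in_band) / total > ratio

if __name__ == "__main__":
    n = 64
    healthy = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
    faulty = [healthy[t] + 2 * math.sin(2 * math.pi * 13 * t / n)
              for t in range(n)]
    print("healthy:", is_anomalous(healthy, expected_bin=4))
    print("faulty:", is_anomalous(faulty, expected_bin=4))
```

A production pipeline would use `numpy.fft.rfft` over windowed sensor frames for speed, but the in-band vs. off-band energy comparison is the same idea.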
Role Description
• Experienced in MLOps and LLMOps. Experienced in evaluating models and continuously monitoring the performance of both ML/DL models and LLMs. Experienced in applying security measures to GenAI solutions and implementing guardrails.
• Performance Optimization – Continuously monitor, refine, and optimize AI/GenAI models for accuracy, efficiency, and speed, leveraging MLOps and LLMOps best practices.
• AI Research & Innovation – Stay updated with the latest AI/ML/GenAI advancements, exploring new technologies and methodologies to enhance solution effectiveness.
• Compliance & Security – Ensure AI implementations adhere to healthcare industry regulations, ethical AI principles, and data privacy standards.
• Automation & Workflow Enhancement – Identify opportunities to automate workflows and optimize business processes using AI-driven solutions.
• Front-End Development – Streamlit is a must; React, Angular, or Vue.js are good to have.
Required Skills:
• Python with Git
• LLMs / SLMs (Transformer architecture, fine-tuning)
• AI frameworks (PyTorch, TensorFlow, Hugging Face)
• RAG pipelines (vectors, embeddings)
• Agent fine-tuning and benchmarking
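The "RAG pipelines (vectors, embeddings)" item above reduces to embedding documents and ranking them by similarity to an embedded query. This toy sketch uses a bag-of-words vector in place of a learned embedding model, and the vocabulary and documents are made up for illustration:

```python
import math

# Minimal vector-retrieval sketch: "embed" documents and query, then
# rank documents by cosine similarity to the query.

def embed(text: str, vocab: list[str]) -> list[float]:
    """Toy bag-of-words embedding over a fixed vocabulary."""
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, vocab, k=1):
    """Return the top-k documents ranked by similarity to the query."""
    q = embed(query, vocab)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d, vocab)),
                    reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    vocab = ["engine", "crack", "invoice", "payment"]
    docs = ["engine crack detected on blade", "invoice payment overdue"]
    print(retrieve("crack in the engine", docs, vocab))
```

A real pipeline swaps the bag-of-words vector for model embeddings and the sorted list for an approximate nearest-neighbor index (FAISS, Pinecone, etc.), but the ranking logic is the same.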
About Us
We are on a mission to empower students with the skills they need to create real-world AI agents, master automation tools, and become experts in prompt engineering. Through our carefully designed Summer Camp, we provide hands-on learning, mentorship, and internship opportunities to help students transform their ideas into intelligent solutions.
Our program is built by industry professionals passionate about teaching the next generation of AI creators. By the end of the camp, every student will have real projects, a prestigious certification, and a 3-month internship letter to showcase their abilities to the world.
Our philosophy
High Demand: AI and automation are among the fastest-growing job sectors worldwide.
Smart Work, Not Hard Work: AI lets you automate boring tasks and focus on creative, meaningful work.
Stand Out from Others: If you know how to build AI agents or automate tasks, you immediately shine in interviews and job applications.
Why us?
We don’t just teach AI — we prepare you to build it, innovate with it, and lead with it.
