Description
Pre-training
Tokenization
Scaling
Conditioning
How to try out these models
What are text-to-image models?
How can we mitigate LLM limitations?
What is an LLM app?
What is LangChain?
Exploring key components of LangChain
What are chains?
What are agents?
What is memory?
What are tools?
How does LangChain work?
Comparing LangChain with other frameworks
Getting Started with LangChain
pip
Poetry
Conda
Docker
Exploring API model integrations
OpenAI
Hugging Face
Google Cloud Platform
Jina AI
Replicate
Others
Azure
Anthropic
Exploring local models
Hugging Face Transformers
llama.cpp
GPT4All
Building an application for customer service
Hallucinations
Prompt templates
Chain of density
Map-Reduce pipelines
Monitoring token usage
Extracting information from documents
Information retrieval with tools
Building a visual interface
Exploring reasoning strategies
Understanding retrieval and vectors
Embeddings
Vector storage
Vector indexing
Vector libraries
Vector databases
Loading and retrieving in LangChain
Document loaders
Retrievers in LangChain
kNN retriever
PubMed retriever
Custom retrievers
Implementing a chatbot
Document loader
Vector storage
Memory
Conversation buffers
Remembering conversation summaries
Storing knowledge graphs
Combining several memory mechanisms
Long-term persistence
Moderating responses
Developing Software with Generative AI
Software development and AI
Code LLMs
Writing code with LLMs
StarCoder
StarChat
Llama
Small local model
Automating software development
LLMs for Data Science
The impact of generative models on data science
Automated data science
Data collection
Visualization and EDA
Preprocessing and feature extraction
AutoML
Using agents to answer data science questions
Data exploration with LLMs
Customizing LLMs and Their Output
Conditioning LLMs
Methods for conditioning
Reinforcement learning with human feedback
Low-rank adaptation
Inference-time conditioning
Fine-tuning
Setup for fine-tuning
Open-source models
Commercial models
Prompt engineering
Prompt techniques
Zero-shot prompting
Few-shot learning
Chain-of-thought prompting
Self-consistency
Tree-of-thought
Generative AI in Production
How to get LLM apps ready for production
Terminology
How to evaluate LLM apps
Comparing two outputs
Comparing against criteria
String and semantic comparisons
Running evaluations against datasets
How to deploy LLM apps
FastAPI web server
Ray
How to observe LLM apps
Tracking responses
Observability tools
LangSmith
PromptWatch