🤖 AI

LLMs, RAG, embeddings, and agents. The landscape moves fast — focus on the fundamentals that stay stable.

Topics

LLMs & Prompting

How LLMs work, tokenization, and prompting techniques.

RAG

Retrieval Augmented Generation architecture and components.

Embeddings

Vector embeddings, similarity search, and vector databases.

AI Agents

Agentic patterns, tool use, and multi-agent systems.
The Stack
Most AI product interviews test the same progression: LLM API usage → embeddings → RAG → agents. Know how these layers stack together into a working system, not just each piece in isolation.
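As a minimal sketch of how the layers fit together, the snippet below wires embeddings → retrieval → prompt assembly end to end. The bag-of-words "embedding" is a toy stand-in (real systems call an embedding model and a vector database); the chunk texts and query are invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" so the flow runs without an API key.
    # In production this would be a call to an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Index: embed each document chunk (the "vector database").
chunks = [
    "RAG retrieves relevant chunks and adds them to the prompt",
    "Agents call tools in a loop until the task is done",
]
index = [(c, embed(c)) for c in chunks]

# 2. Retrieve: embed the query and rank chunks by similarity.
query = "how does RAG add chunks to the prompt"
q = embed(query)
best = max(index, key=lambda item: cosine(q, item[1]))[0]

# 3. Generate: stuff the retrieved context into the LLM prompt.
prompt = f"Context: {best}\n\nQuestion: {query}"
```

Each stage is where the trade-offs live: the embedding model sets retrieval quality, the index sets latency at scale, and the prompt assembly sets token cost.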
💡 Interview Tip
When discussing LLM-based systems, always name the latency, cost, and accuracy trade-offs. Picking a model, chunking strategy, or retrieval method without weighing these signals shallow understanding.