> chromadb
Assists with storing, searching, and managing vector embeddings using ChromaDB. Use when building RAG pipelines, semantic search engines, or recommendation systems. Trigger words: chromadb, chroma, vector database, embeddings, semantic search, similarity search, vector store, rag.
curl "https://skillshub.wtf/TerminalSkills/skills/chromadb?format=md"

ChromaDB
Overview
ChromaDB is an open-source vector database for storing, searching, and managing embeddings. It provides a simple API for document ingestion, semantic similarity search, and metadata filtering, supporting both Python and JavaScript/TypeScript clients with embedded, server, and cloud deployment options.
Instructions
- When initializing, use `get_or_create_collection` for idempotent collection setup; choose `PersistentClient` for development and `HttpClient` for production server connections.
- When adding documents, batch `add()` calls in chunks of 5,000 documents, always store source metadata (filename, URL, page number) for RAG citations, and use `upsert()` for incremental updates to avoid duplicates.
- When querying, use `collection.query(query_texts=..., n_results=...)` for text-based search, combine metadata `where` filters to narrow results before semantic search, and set `n_results` based on the LLM's context window (5-10 for most RAG pipelines).
- When choosing embeddings, use the default Sentence Transformers for local development without API keys, OpenAI or Cohere embedding functions for production, or pass pre-computed vectors directly.
- When filtering metadata, use operators like `$eq`, `$gt`, and `$in` with the `$and`/`$or` logical operators, and combine with `where_document` for content-based filtering alongside semantic similarity.
- When deploying, use the embedded `PersistentClient` for single-node applications, Docker for server mode, or Chroma Cloud for managed hosting with multi-tenancy support.
- When tuning performance, configure HNSW parameters (`hnsw:M`, `hnsw:construction_ef`, `hnsw:search_ef`) for the quality-speed tradeoff, and choose `cosine` distance for normalized embeddings (OpenAI, Cohere).
Examples
Example 1: Build a document Q&A pipeline
User request: "Set up a RAG pipeline with ChromaDB for answering questions about our docs"
Actions:
- Load documents and split into chunks with metadata (source, page)
- Create a collection with OpenAI embedding function
- Batch-add document chunks with `upsert()` for idempotent ingestion
- Query with `collection.query()` and pass retrieved chunks as context to the LLM
Output: A semantic search pipeline that retrieves relevant document chunks for LLM-powered Q&A.
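The first step of this pipeline, splitting documents into chunks tagged with citation metadata, might look like the following. The helper name, chunk sizes, and ID scheme are hypothetical, not part of the ChromaDB API; the returned dicts map directly onto the `ids`/`documents`/`metadatas` arguments of `upsert()`.

```python
def chunk_document(text: str, source: str, chunk_size: int = 500, overlap: int = 50):
    """Split text into overlapping chunks, each tagged with source metadata
    so retrieved chunks can be cited back to their origin."""
    chunks = []
    start = 0
    while start < len(text):
        piece = text[start : start + chunk_size]
        chunks.append({
            "id": f"{source}-{start}",            # stable ID makes upsert idempotent
            "document": piece,
            "metadata": {"source": source, "offset": start},
        })
        start += chunk_size - overlap             # step back by `overlap` characters
    return chunks

# Demo: 1120 characters of text yields three overlapping chunks.
demo = chunk_document("ChromaDB chunking demo text " * 40, source="demo.md")
print(len(demo), demo[0]["metadata"])
```

Deriving each chunk ID from the source and offset means re-ingesting the same file with `upsert()` overwrites the old chunks instead of duplicating them.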
Example 2: Add filtered semantic search to an application
User request: "Implement product search that combines text similarity with category filters"
Actions:
- Create a collection with product descriptions and category metadata
- Implement search combining `query_texts` with `where={"category": "electronics"}`
- Return results with distances for relevance ranking
- Add price range filtering with `$gte` and `$lte` operators
Output: A filtered semantic search that narrows by metadata before ranking by text similarity.
Guidelines
- Use `get_or_create_collection` for idempotent collection initialization; it is safe for restarts.
- Batch `add()` calls in chunks of 5,000 documents to manage memory usage.
- Always store source metadata (filename, URL, page number); it is essential for RAG citations.
- Use `upsert()` for incremental updates to avoid duplicate documents when re-ingesting.
- Set `n_results` based on the LLM's context window: 5-10 results for most RAG pipelines.
- Use metadata filtering to narrow results before semantic search to reduce noise.
- Choose `cosine` distance for normalized embeddings (OpenAI, Cohere) and `l2` for unnormalized.