Use when defining or implementing Go interfaces, designing abstractions, creating mockable boundaries for testing, or composing types through embedding. Also use when deciding whether to accept an interface or return a concrete type, or when using type assertions or type switches, even if the user doesn't explicitly mention interfaces. Does not cover generics-based polymorphism (see go-generics).
Zoom Virtual Agent Android integration via WebView. Use for Java/Kotlin bridge callbacks, native URL handling, support_handoff relay, and lifecycle-safe embedding.
Zoom Meeting SDK for Electron desktop applications. Use when embedding Zoom meetings in an Electron app with the Node addon wrapper, JWT auth, join/start flows, settings controllers, and raw data integration.
Zoom Meeting SDK for embedding the full Zoom meeting experience into your app. Use when integrating Zoom meetings on Web (JavaScript), React Native (iOS/Android), Electron desktop apps, Linux (C++ headless bots), or native Android, iOS, macOS, and Unreal applications.
Zoom Video SDK for macOS native desktop apps. Use when building custom macOS video sessions with native UI control, tokenized join, and desktop-oriented media/device workflows.
GitHub Copilot is an AI-powered pair programming tool that integrates directly into your editor and terminal. It provides real-time code suggestions, natural language chat for explaining and refactoring code, slash commands for common operations, and workspace-aware agents that can reason across your entire codebase. Copilot accelerates development by turning intent into working code, reducing boilerplate, and helping developers navigate unfamiliar APIs and patterns.
You are an expert in Bolt.new by StackBlitz, the AI-powered full-stack development environment that runs entirely in the browser. You help developers go from idea to deployed app in minutes using natural language prompts — Bolt generates complete applications with frontend, backend, database, and deployment, all running in a WebContainer without local setup.
You are an expert in Cloudflare Workers AI, the serverless AI inference platform running on Cloudflare's global network. You help developers run LLMs, embedding models, image generation, speech-to-text, and translation models at the edge with zero cold starts, pay-per-use pricing, and integration with Workers, Pages, and Vectorize — enabling AI features without managing GPU infrastructure.
Assists with storing, searching, and managing vector embeddings using ChromaDB. Use when building RAG pipelines, semantic search engines, or recommendation systems. Trigger words: chromadb, chroma, vector database, embeddings, semantic search, similarity search, vector store, rag.
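At its core, the similarity search a vector store like ChromaDB performs is nearest-neighbor ranking over embedding vectors. A minimal pure-Python sketch of that operation (the tiny hand-written embeddings and the `collection`/`query` names are illustrative, not ChromaDB's API — a real pipeline would produce embeddings with a model):

```python
from math import sqrt

def cosine_similarity(a, b):
    # Dot product divided by the product of vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "collection": document id -> embedding vector (hand-written for illustration).
collection = {
    "doc-cats": [0.9, 0.1, 0.0],
    "doc-dogs": [0.8, 0.2, 0.1],
    "doc-tax":  [0.0, 0.1, 0.9],
}

def query(embedding, n_results=2):
    # Rank stored documents by similarity to the query embedding, return the top n.
    ranked = sorted(collection,
                    key=lambda doc_id: cosine_similarity(embedding, collection[doc_id]),
                    reverse=True)
    return ranked[:n_results]

print(query([0.85, 0.15, 0.05]))  # the two animal-related docs rank first
```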
Metabase is an open-source business intelligence tool for creating dashboards and visualizations. Learn Docker deployment, database connections, creating questions and dashboards, embedding analytics, and API usage.
Integrate OpenAI APIs into applications. Use when a user asks to add GPT or ChatGPT to an app, generate text with OpenAI, build a chatbot, use GPT-4 or o1 models, generate embeddings, use function calling, stream chat completions, build AI features, moderate content, generate images with DALL-E, transcribe audio with Whisper API, or integrate any OpenAI model. Covers Chat Completions, Assistants API, function calling, embeddings, streaming, vision, DALL-E, Whisper, and moderation.
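The Chat Completions request shape, and the delta-accumulation loop a client runs when streaming, can be sketched without the network call. The `chunks` below are simplified stand-ins for the chunk objects the OpenAI SDK yields; the `messages` format itself (role/content dicts) is the real API shape:

```python
# Build a Chat Completions message list: a system prompt plus conversation turns.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize RAG in one sentence."},
]

# With stream=True the API returns the reply in small delta chunks;
# the client concatenates them as they arrive. Simulated chunks here.
chunks = [
    {"delta": "RAG retrieves relevant documents "},
    {"delta": "and feeds them to the model "},
    {"delta": "as grounding context."},
]

reply = ""
for chunk in chunks:
    reply += chunk["delta"]   # accumulate partial text
    # a real UI would re-render `reply` incrementally here

print(reply)
```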
Collect, categorize, and synthesize user feedback from multiple channels into actionable product insights. Use when tasks involve analyzing support tickets, app store reviews, NPS survey responses, social media mentions, user interviews, feature request prioritization, sentiment analysis, churn prediction from feedback patterns, or building voice-of-customer reports. Covers multi-channel feedback aggregation and data-driven product decisions.
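A common first pass in multi-channel aggregation is rough keyword-based categorization before any deeper sentiment or churn analysis. A toy sketch (the categories and keywords are invented for illustration; real taxonomies come from the product team):

```python
from collections import Counter

# Hypothetical category -> keyword mapping.
CATEGORIES = {
    "billing": ("invoice", "charge", "refund"),
    "performance": ("slow", "lag", "timeout"),
    "ux": ("confusing", "can't find", "unclear"),
}

def categorize(feedback_items):
    """Tally each item into the first category whose keyword it mentions."""
    counts = Counter()
    for text in feedback_items:
        lowered = text.lower()
        for category, keywords in CATEGORIES.items():
            if any(k in lowered for k in keywords):
                counts[category] += 1
                break  # one category per item in this rough pass
        else:
            counts["uncategorized"] += 1
    return counts

tickets = [
    "The app is really slow on startup",
    "I was charged twice, please refund",
    "Settings page is confusing",
]
print(categorize(tickets))
```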
Add persistent memory to AI coding agents — file-based, vector, and semantic search memory systems that survive between sessions. Use when a user asks to "remember this", "add memory to my agent", "persist context between sessions", "build a knowledge base for my agent", "set up agent memory", or "make my AI remember things". Covers file-based memory (MEMORY.md), SQLite with embeddings, vector databases (ChromaDB, Pinecone), semantic search, memory consolidation, and automatic context injection.
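The simplest of the memory systems listed above — an append-only MEMORY.md the agent re-reads at session start — fits in a few lines of stdlib Python (the file location and helper names here are illustrative):

```python
from pathlib import Path
import tempfile, datetime

# Append-only markdown file that survives between sessions.
memory_path = Path(tempfile.mkdtemp()) / "MEMORY.md"

def remember(fact: str) -> None:
    """Append a timestamped bullet so later sessions can re-read it."""
    stamp = datetime.date.today().isoformat()
    with memory_path.open("a") as f:
        f.write(f"- [{stamp}] {fact}\n")

def recall() -> str:
    """Load the whole memory file for injection into the agent's context."""
    return memory_path.read_text() if memory_path.exists() else ""

remember("User prefers TypeScript over JavaScript.")
remember("Project uses pnpm, not npm.")
print(recall())
```

Vector and SQLite-backed variants layer embedding search on top of the same remember/recall shape.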
You are an expert in Lovable (formerly GPT Engineer), the AI app builder that generates production-ready full-stack applications from natural language descriptions. You help developers and non-technical founders create React + Supabase applications with authentication, database, file storage, and deployment — going from idea to production URL in under an hour.
You are an expert in Stagehand by BrowserBase, the AI-powered browser automation framework that lets you control web pages using natural language instructions. You help developers build web automations that act, extract data, and observe pages using plain English commands instead of brittle CSS selectors — powered by GPT-4o or Claude for visual understanding of page layouts.
Work with Hugging Face's ecosystem for machine learning — transformers library, model hub, tokenizers, inference pipelines, and fine-tuning. Covers downloading pre-trained models, running inference, training custom models, and publishing to the Hub.
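Underneath every transformers pipeline is the same encode → model → decode flow. A toy word-level sketch of the encode/decode half (real tokenizers learn subword vocabularies like BPE or WordPiece; none of these names are transformers APIs):

```python
# Toy vocabulary; real tokenizers learn subword pieces from data.
vocab = {"[UNK]": 0, "hello": 1, "world": 2, "hugging": 3, "face": 4}
inverse = {i: tok for tok, i in vocab.items()}

def encode(text):
    """Map whitespace-split words to integer ids; unknown words to [UNK]."""
    return [vocab.get(word, vocab["[UNK]"]) for word in text.lower().split()]

def decode(ids):
    """Map ids back to tokens (lossy for unknown words, as in real tokenizers)."""
    return " ".join(inverse[i] for i in ids)

ids = encode("Hello Hugging Face world")
print(ids)          # [1, 3, 4, 2]
print(decode(ids))  # hello hugging face world
```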
PandasAI enables natural language queries on pandas DataFrames using LLMs. Learn to ask questions in plain English, generate charts, clean data, and integrate with OpenAI and local models for conversational data analysis.
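The loop PandasAI runs is: take a plain-English question, have an LLM emit DataFrame code, execute it, return the answer. A dependency-free caricature of that loop, with a hard-coded stand-in for the LLM and a list of dicts standing in for a DataFrame (everything here is illustrative, not PandasAI's API):

```python
# Toy dataset standing in for a DataFrame.
sales = [
    {"region": "north", "revenue": 120},
    {"region": "south", "revenue": 80},
    {"region": "north", "revenue": 100},
]

def fake_llm_plan(question: str):
    """Stand-in for the LLM: map a question pattern to an executable aggregation."""
    if "total revenue" in question.lower():
        return lambda rows: sum(r["revenue"] for r in rows)
    if "by region" in question.lower():
        def group(rows):
            out = {}
            for r in rows:
                out[r["region"]] = out.get(r["region"], 0) + r["revenue"]
            return out
        return group
    raise ValueError("question not understood")

def ask(question: str, rows):
    plan = fake_llm_plan(question)  # PandasAI generates real pandas code here...
    return plan(rows)               # ...and executes it in a sandbox

print(ask("What is the total revenue?", sales))  # 300
print(ask("Revenue by region", sales))
```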
Store and search vector embeddings in PostgreSQL with pgvector — no separate vector database needed. Use when someone asks to "vector search in Postgres", "store embeddings", "pgvector", "similarity search", "RAG with Postgres", "semantic search in existing database", or "add AI search to my app without a separate vector DB". Covers vector columns, indexing (IVFFlat, HNSW), similarity search, and integration with ORMs.
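A sketch of the pgvector setup and query SQL (shown as strings, runnable with any Postgres driver), plus a pure-Python rendering of what the `<=>` cosine-distance operator computes — the table and column names are illustrative:

```python
from math import sqrt

# pgvector DDL: enable the extension, add a vector column, build an HNSW index.
setup_sql = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE items (id serial PRIMARY KEY, embedding vector(3));
CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops);
"""
# Nearest-neighbor query: <=> is pgvector's cosine-distance operator.
query_sql = "SELECT id FROM items ORDER BY embedding <=> %s LIMIT 5;"

def cosine_distance(a, b):
    """What <=> computes: 1 minus cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return 1.0 - dot / norms

print(cosine_distance([1, 0, 0], [1, 0, 0]))  # 0.0: identical direction
print(cosine_distance([1, 0, 0], [0, 1, 0]))  # 1.0: orthogonal
```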
Pinecone is a managed vector database for AI and machine learning applications. Learn to create indexes, upsert embeddings, query by similarity, use namespaces and metadata filtering for semantic search and RAG pipelines.
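The data model behind Pinecone's upsert → query-with-filter pattern (ids, vectors, metadata, namespaces) can be mimicked in-memory; none of the function names below are the Pinecone SDK, they just make the shape of the operations concrete:

```python
from math import sqrt

index = {}  # namespace -> {id: (vector, metadata)}

def upsert(namespace, vec_id, vector, metadata):
    index.setdefault(namespace, {})[vec_id] = (vector, metadata)

def query(namespace, vector, top_k=2, metadata_filter=None):
    """Rank by cosine similarity, filtering on metadata first."""
    def sim(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))
    candidates = [
        (vid, sim(vector, vec))
        for vid, (vec, meta) in index.get(namespace, {}).items()
        if metadata_filter is None
        or all(meta.get(k) == v for k, v in metadata_filter.items())
    ]
    return [vid for vid, _ in
            sorted(candidates, key=lambda p: p[1], reverse=True)[:top_k]]

upsert("prod", "a", [1.0, 0.0], {"lang": "en"})
upsert("prod", "b", [0.9, 0.1], {"lang": "de"})
upsert("prod", "c", [0.0, 1.0], {"lang": "en"})
print(query("prod", [1.0, 0.0], top_k=1, metadata_filter={"lang": "en"}))  # ['a']
```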
Assists with building, training, and deploying neural networks using PyTorch. Use when designing architectures for computer vision, NLP, or tabular data, optimizing training with mixed precision and distributed strategies, or exporting models for production inference. Trigger words: pytorch, torch, neural network, deep learning, training loop, cuda.
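The training loop at the heart of every PyTorch model (forward, loss, backward, step) can be shown in miniature with hand-derived gradients for 1-D linear regression. In real PyTorch, autograd computes the gradients and an optimizer applies the step; this dependency-free sketch just makes the loop shape explicit:

```python
# Fit y = w*x + b to toy data with plain gradient descent.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # true relation: y = 2x + 1
w, b, lr = 0.0, 0.0, 0.1

def mse(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

for epoch in range(500):
    # forward pass + analytic gradients of MSE (autograd's job in real PyTorch)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w  # optimizer.step() equivalent
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges to 2.0 and 1.0
```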