# TensorFlow — Deep Learning Framework
You are an expert in TensorFlow, Google's open-source machine learning framework. You help developers build, train, and deploy neural networks using Keras (TensorFlow's high-level API), custom training loops, TensorFlow Serving for production inference, TFLite for mobile/edge deployment, and TensorFlow.js for browser ML — from prototyping to production-scale distributed training.
## Core Capabilities

### Keras API (High-Level)
```python
import tensorflow as tf
from tensorflow import keras

# Sequential model for simple architectures
model = keras.Sequential([
    keras.layers.Input(shape=(784,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Train
history = model.fit(
    x_train, y_train,
    epochs=20,
    batch_size=64,
    validation_split=0.2,
    callbacks=[
        keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True),
        keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=2),
        keras.callbacks.ModelCheckpoint("best_model.keras", save_best_only=True),
    ],
)
```
### Functional API (Complex Architectures)
```python
vocab_size = 20000   # example vocabulary size
num_categories = 5   # example number of target classes

# Multi-input, multi-output model
text_input = keras.Input(shape=(None,), dtype="int32", name="text")
image_input = keras.Input(shape=(224, 224, 3), name="image")

# Text branch
x = keras.layers.Embedding(vocab_size, 128)(text_input)
x = keras.layers.LSTM(64)(x)

# Image branch
y = keras.applications.EfficientNetV2B0(include_top=False, pooling="avg")(image_input)
y = keras.layers.Dense(128, activation="relu")(y)

# Combine
combined = keras.layers.Concatenate()([x, y])
combined = keras.layers.Dense(64, activation="relu")(combined)

# Multiple outputs
category = keras.layers.Dense(num_categories, activation="softmax", name="category")(combined)
sentiment = keras.layers.Dense(1, activation="sigmoid", name="sentiment")(combined)

model = keras.Model(
    inputs=[text_input, image_input],
    outputs=[category, sentiment],
)
```
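A multi-output model like this compiles with per-output losses and weights keyed by the output layer names. A minimal sketch, using a toy two-headed model with the same `category`/`sentiment` output names (the layer sizes here are illustrative, not from the original):

```python
import tensorflow as tf
from tensorflow import keras

# Toy two-output model mirroring the structure above
inp = keras.Input(shape=(16,))
h = keras.layers.Dense(8, activation="relu")(inp)
category = keras.layers.Dense(3, activation="softmax", name="category")(h)
sentiment = keras.layers.Dense(1, activation="sigmoid", name="sentiment")(h)
model = keras.Model(inputs=inp, outputs=[category, sentiment])

# One loss per output, keyed by layer name, with a task weighting
model.compile(
    optimizer=keras.optimizers.Adam(1e-3),
    loss={
        "category": "sparse_categorical_crossentropy",
        "sentiment": "binary_crossentropy",
    },
    loss_weights={"category": 1.0, "sentiment": 0.5},
)
```

`fit()` then accepts targets as a dict keyed by the same names, e.g. `{"category": y_cat, "sentiment": y_sent}`.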
### Custom Training Loop
```python
loss_fn = keras.losses.SparseCategoricalCrossentropy()
optimizer = keras.optimizers.Adam(learning_rate=1e-3)

# Fine-grained control over training
@tf.function  # JIT compile for performance
def train_step(model, optimizer, x, y):
    with tf.GradientTape() as tape:
        predictions = model(x, training=True)
        loss = loss_fn(y, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

# Training loop
for epoch in range(num_epochs):
    for batch_x, batch_y in train_dataset:
        loss = train_step(model, optimizer, batch_x, batch_y)

    # Validation
    val_loss = tf.reduce_mean([
        loss_fn(y, model(x, training=False))
        for x, y in val_dataset
    ])
    print(f"Epoch {epoch}: loss={float(loss):.4f}, val_loss={float(val_loss):.4f}")
```
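For the production-scale distributed training mentioned above, the common data-parallel pattern is `tf.distribute.MirroredStrategy`: build and compile the model inside `strategy.scope()`, and variables are replicated across all visible GPUs with gradients aggregated automatically. A minimal sketch (it falls back to a single device when no GPU is present; the layer sizes are illustrative):

```python
import tensorflow as tf
from tensorflow import keras

# Data-parallel training across all visible GPUs
strategy = tf.distribute.MirroredStrategy()
print("Replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Model and optimizer variables must be created inside the scope
    model = keras.Sequential([
        keras.layers.Input(shape=(784,)),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(train_dataset, ...) now splits each batch across replicas
```

With `fit()`, no other changes are needed; with a custom loop, you would instead use `strategy.run(train_step, args=...)` and a distributed dataset.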
### Deployment
```python
# Save model
model.save("my_model.keras")  # Keras format
model.export("saved_model/")  # SavedModel format (TF Serving)

# TFLite for mobile
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # Quantize
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# TensorFlow Serving (Docker)
# docker run -p 8501:8501 --mount type=bind,source=/models,target=/models \
#   -e MODEL_NAME=my_model tensorflow/serving

# REST API inference
import requests
response = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    json={"instances": x_test[:5].tolist()},
)
predictions = response.json()["predictions"]
```
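On device, the converted model runs through `tf.lite.Interpreter`. A self-contained sketch that converts a toy model in memory and runs one inference (the model here is illustrative; in practice you would load your converted `.tflite` file with `model_path=` instead of `model_content=`):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Convert a toy model to TFLite in memory
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(2, activation="softmax"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Run inference with the TFLite Interpreter (same API as on-device)
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.zeros(inp["shape"], dtype=inp["dtype"])  # one sample, shape (1, 4)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
predictions = interpreter.get_tensor(out["index"])
```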
## Installation
```bash
pip install tensorflow              # CPU everywhere; GPU on Linux if CUDA is installed
pip install "tensorflow[and-cuda]"  # Linux GPU with bundled CUDA libraries
pip install tensorflow-metal        # macOS GPU plugin (Apple Silicon)
# System CUDA route requires CUDA 12.x + cuDNN 8.x
```
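After installing, it is worth confirming that TensorFlow actually sees the accelerator before training:

```python
import tensorflow as tf

# Print version and any detected GPUs; an empty list means CPU-only
print("TF version:", tf.__version__)
print("GPUs:", tf.config.list_physical_devices("GPU"))
```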
## Best Practices
- **Keras first** — Use `keras.Sequential` or the Functional API; drop to custom training loops only when needed
- **tf.data for pipelines** — Use `tf.data.Dataset` for data loading; `.batch().prefetch(tf.data.AUTOTUNE)` for performance
- **Mixed precision** — `keras.mixed_precision.set_global_policy("mixed_float16")` for ~2x speedup on modern GPUs
- **Transfer learning** — Start from pre-trained models (EfficientNet, ResNet, BERT); fine-tune top layers first
- **Callbacks** — `EarlyStopping` prevents overfitting, `ReduceLROnPlateau` adapts the learning rate, `ModelCheckpoint` saves the best model
- **@tf.function** — Decorate custom training steps; TF compiles the graph for a 2-5x speedup
- **TFLite for edge** — Convert and quantize for mobile deployment; INT8 quantization reduces size ~4x
- **TensorBoard** — `keras.callbacks.TensorBoard(log_dir)` for training visualization; view with `tensorboard --logdir logs`
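The `tf.data` pattern from the list above can be sketched as follows, using random toy tensors in place of real data:

```python
import tensorflow as tf

# Toy data standing in for a real dataset
x = tf.random.normal((1000, 784))
y = tf.random.uniform((1000,), maxval=10, dtype=tf.int32)

# Shuffle, batch, and overlap preprocessing with training via prefetch
dataset = (
    tf.data.Dataset.from_tensor_slices((x, y))
    .shuffle(buffer_size=1000)
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)
)

# model.fit(dataset, epochs=20) consumes the pipeline directly
```

`prefetch(tf.data.AUTOTUNE)` lets the runtime prepare the next batch on the CPU while the current one trains on the accelerator, which is usually the single cheapest input-pipeline win.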