Optimize Adobe API performance with token caching, async job batching, connection pooling, and response caching for Firefly, PDF Services, and Photoshop API workflows. Trigger with phrases like "adobe performance", "optimize adobe", "adobe latency", "adobe caching", "adobe slow", "adobe batch".

Adobe Performance Tuning

Overview

Optimize Adobe API performance across Firefly Services, PDF Services, and Photoshop APIs. Key bottlenecks include IMS token generation, async job polling overhead, and cold-start latency on serverless platforms.

Prerequisites

  • Adobe SDK installed and functional
  • Understanding of which APIs your app uses most
  • Redis or in-memory cache available (optional)
  • Performance monitoring in place

Latency Benchmarks (Real-World)

| Operation | P50 | P95 | P99 |
|---|---|---|---|
| IMS token generation | 200 ms | 500 ms | 1 s |
| Firefly text-to-image (sync) | 5 s | 12 s | 20 s |
| Firefly text-to-image (async poll) | 8 s | 15 s | 25 s |
| PDF Extract (10-page doc) | 3 s | 8 s | 15 s |
| PDF create from HTML | 2 s | 5 s | 10 s |
| Photoshop remove background | 4 s | 10 s | 18 s |
| Lightroom auto tone | 3 s | 8 s | 15 s |
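
To compare your own deployment against these numbers, a minimal nearest-rank percentile helper can be wrapped around any Adobe call (`percentile` and `timed` are hypothetical helper names, plain TypeScript, not part of any Adobe SDK):

```typescript
// Nearest-rank percentile over a list of latency samples in ms.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Wrap any Adobe call, recording elapsed wall-clock ms per invocation.
const samples: number[] = [];
async function timed<T>(fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    samples.push(Date.now() - start);
  }
}

// After a batch run:
// console.log(percentile(samples, 50), percentile(samples, 95), percentile(samples, 99));
```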

Instructions

Optimization 1: Cache IMS Access Tokens (Biggest Win)

The IMS token endpoint returns tokens valid for 24 hours. Never re-generate per request:

// WRONG: generates new token every call (adds 200-500ms each time)
async function makeRequest() {
  const token = await getAccessToken(); // hits IMS every time
}

// RIGHT: cache token and only refresh when expiring
let tokenCache: { token: string; expiresAt: number } | null = null;

async function getCachedToken(): Promise<string> {
  // Refresh 5 minutes early so the token can't expire mid-request
  if (tokenCache && tokenCache.expiresAt > Date.now() + 300_000) {
    return tokenCache.token; // cache hit: costs nothing
  }
  const res = await fetch('https://ims-na1.adobelogin.com/ims/token/v3', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      client_id: process.env.ADOBE_CLIENT_ID!,
      client_secret: process.env.ADOBE_CLIENT_SECRET!,
      grant_type: 'client_credentials',
      scope: process.env.ADOBE_SCOPES!,
    }),
  });
  if (!res.ok) throw new Error(`IMS token request failed: ${res.status}`);
  const data = await res.json();
  tokenCache = { token: data.access_token, expiresAt: Date.now() + data.expires_in * 1000 };
  return tokenCache.token;
}
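
A cached token can still be revoked mid-lifetime, in which case the next request comes back 401. The fix from the error-handling table (catch 401, clear cache, retry once) can be sketched as a wrapper; `fetchWithTokenRetry` is a hypothetical helper, and `clearTokenCache` would simply set `tokenCache = null` with the cache above:

```typescript
// Hypothetical wrapper: on a 401, drop the cached token and retry exactly
// once with a freshly fetched one. Any second 401 is a real auth failure.
async function fetchWithTokenRetry(
  url: string,
  init: (token: string) => RequestInit,
  getToken: () => Promise<string>,
  clearTokenCache: () => void,
): Promise<Response> {
  let res = await fetch(url, init(await getToken()));
  if (res.status === 401) {
    clearTokenCache();                              // drop the stale token
    res = await fetch(url, init(await getToken())); // one retry, fresh token
  }
  return res;
}
```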

Optimization 2: Parallel Async Job Submission

Firefly and Photoshop APIs are async — submit all jobs first, then poll all:

// SLOW: sequential (total = sum of all job times)
for (const prompt of prompts) {
  const result = await generateImageSync(prompt); // 5-20s each
}

// FAST: parallel submit + parallel poll (total = max job time)
async function batchFireflyGenerate(prompts: string[]) {
  const token = await getCachedToken();

  // 1. Submit all jobs simultaneously
  const jobSubmissions = await Promise.all(
    prompts.map(prompt =>
      fetch('https://firefly-api.adobe.io/v3/images/generate-async', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${token}`,
          'x-api-key': process.env.ADOBE_CLIENT_ID!,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ prompt, n: 1, size: { width: 1024, height: 1024 } }),
      }).then(r => r.json())
    )
  );

  // 2. Poll all jobs in parallel (pollUntilDone can be the adaptivePoll
  // helper from Optimization 5)
  const results = await Promise.all(
    jobSubmissions.map(job => pollUntilDone(job.statusUrl, token))
  );

  return results;
}
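
Unbounded `Promise.all` over a large prompt list is how the rate-limiting failure in the error-handling table happens. The p-queue package is one fix; a dependency-free sketch of the same idea (`mapWithConcurrency` is a hypothetical helper):

```typescript
// Run `task` over `items` with at most `limit` in flight at once.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  task: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each worker repeatedly claims the next unclaimed index until none remain.
  const worker = async () => {
    while (next < items.length) {
      const i = next++;
      results[i] = await task(items[i]);
    }
  };
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}

// e.g. submit 50 Firefly jobs but never more than 5 concurrent requests:
// const jobs = await mapWithConcurrency(prompts, 5, p => submitFireflyJob(p));
```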

Optimization 3: Response Caching for Repeated Operations

import { LRUCache } from 'lru-cache';

// Cache PDF extraction results (same PDF = same output)
const extractionCache = new LRUCache<string, any>({
  max: 100,
  ttl: 3600_000, // 1 hour
});

async function cachedPdfExtract(pdfHash: string, pdfPath: string) {
  const cached = extractionCache.get(pdfHash);
  if (cached) {
    console.log('PDF extraction cache hit');
    return cached;
  }

  const result = await extractPdfContent(pdfPath);
  extractionCache.set(pdfHash, result);
  return result;
}
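
The cache above keys on a pdfHash the caller must supply. One way to derive it (an assumption, using Node's built-in crypto; `hashPdf` is a hypothetical helper) is to hash the file bytes, so identical documents always hit the same cache entry:

```typescript
import { createHash } from 'node:crypto';
import { readFile } from 'node:fs/promises';

// Content-addressed cache key: identical PDF bytes hash to the same key,
// so re-uploads of the same document reuse the cached extraction.
async function hashPdf(pdfPath: string): Promise<string> {
  const bytes = await readFile(pdfPath);
  return createHash('sha256').update(bytes).digest('hex');
}

// const key = await hashPdf('./report.pdf');
// const content = await cachedPdfExtract(key, './report.pdf');
```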

Optimization 4: Connection Keep-Alive

import { Agent } from 'https';

// Reuse TCP connections to Adobe endpoints
const adobeAgent = new Agent({
  keepAlive: true,
  maxSockets: 10,
  maxFreeSockets: 5,
  timeout: 60_000,
});

// node-fetch honors `agent`; Node's built-in fetch does not (it takes an
// undici `dispatcher` instead), so pin the HTTP client you pass this to
const response = await fetch(url, {
  // @ts-ignore -- `agent` is a node-fetch extension, not part of the Fetch spec
  agent: adobeAgent,
  headers: { ... },
});

Optimization 5: Smart Polling Intervals

// Adaptive polling: start fast, slow down over time
async function adaptivePoll(statusUrl: string, token: string, maxAttempts = 60) {
  const intervals = [1000, 2000, 3000, 5000, 5000, 10000]; // ms

  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(statusUrl, {
      headers: {
        'Authorization': `Bearer ${token}`,
        'x-api-key': process.env.ADOBE_CLIENT_ID!,
      },
    });
    const status = await res.json();

    if (status.status === 'succeeded') return status;
    if (status.status === 'failed') throw new Error(status.error?.message);

    // Wait per the ladder, then re-check; cap total attempts so a stuck
    // job can never poll forever
    const delay = intervals[Math.min(attempt, intervals.length - 1)];
    await new Promise(r => setTimeout(r, delay));
  }
  throw new Error(`Polling timed out after ${maxAttempts} attempts: ${statusUrl}`);
}
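
The fixed interval ladder can also be computed rather than tabulated. A jittered exponential backoff sketch (parameter values are assumptions, not from the Adobe docs; `backoffDelay` is a hypothetical helper):

```typescript
// Delay for poll attempt n: base * 2^n, capped, plus up to `jitterMs` of
// random jitter so many parallel pollers don't hit the status endpoint in sync.
function backoffDelay(
  attempt: number,
  baseMs = 1000,
  capMs = 10_000,
  jitterMs = 250,
): number {
  const exp = Math.min(baseMs * 2 ** attempt, capMs);
  return exp + Math.floor(Math.random() * jitterMs);
}

// Drop-in replacement for the interval table in adaptivePoll:
// const delay = backoffDelay(attempt);
```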

Output

  • IMS token cached for 24h (eliminates 200-500ms per request)
  • Parallel job submission for batch operations
  • LRU response caching for repeated extractions
  • Connection keep-alive reducing TLS handshake overhead
  • Adaptive polling reducing unnecessary API calls

Error Handling

| Issue | Cause | Solution |
|---|---|---|
| Stale cached token | Token revoked mid-lifecycle | Catch 401, clear cache, retry once |
| Parallel rate limiting | Too many concurrent jobs | Add a p-queue concurrency limit |
| Cache memory pressure | Too many cached results | Set an LRU max size |
| Connection pool exhaustion | Too many parallel requests | Limit maxSockets to 10-20 |

Next Steps

For cost optimization, see adobe-cost-tuning.
