
Cache Statistics

developer
GET/v1/analytics/cache

Returns cache performance statistics — overall hit rate, total hits and misses, per-endpoint cache effectiveness, and estimated cost savings from cached responses. Cache efficiency directly reduces both latency and billable request costs, making it one of the highest-leverage optimization vectors.

What It Does

Analyzes cache performance for the authenticated organization in the selected period. Computes: overall cache hit rate as a percentage, total cache hits and misses, per-endpoint cache hit rates sorted by opportunity (lowest hit rate first), and estimated cost savings from cached responses. Period options: 7d, 30d, or current (default). Cache is managed at the Cloudflare Workers edge with TTL varying by endpoint type.
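The headline hit rate can be reproduced from the raw hit and miss counts. A minimal sketch of that calculation (the function name is illustrative, not part of this API):

```python
def cache_hit_rate(hits: int, misses: int) -> float:
    """Overall cache hit rate as a percentage (0-100)."""
    total = hits + misses
    return 0.0 if total == 0 else hits / total * 100

# 7,500 hits against 2,500 misses -> 75.0
print(cache_hit_rate(7500, 2500))
```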

Why It's Useful

Caching is typically the single highest-impact optimization for API costs and latency. A 50% cache hit rate halves your billable requests and pulls median latency down sharply, since cached responses return far faster than origin lookups. The per-endpoint cache breakdown reveals which API calls have room for improvement — an endpoint you call frequently but that has only a 5% cache hit rate is a clear optimization target. Strategies include: consolidating duplicate queries, normalizing query parameters so identical lookups share a cache key, and implementing client-side caching layers.
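The cost arithmetic behind that claim, assuming only cache misses are billable (as the savings fields suggest; the function name is illustrative):

```python
def billable_requests(total_requests: int, hit_rate_pct: float) -> int:
    """Only cache misses reach the origin and count as billable."""
    return round(total_requests * (1 - hit_rate_pct / 100))

# 100,000 requests: moving from a 20% to a 60% hit rate
print(billable_requests(100_000, 20))  # 80000 billable
print(billable_requests(100_000, 60))  # 40000 billable -> half as many
```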

Use Cases

DevOps Engineer / Performance Engineer

Cost Reduction via Cache Optimization

Identify endpoints with high request volume but low cache hit rates. Investigate why cache misses occur — often caused by varying query parameters, unique request patterns, or short TTLs. Implement client-side caching for frequently-queried domains to reduce billable requests.

Reduce API costs by increasing cache efficiency — moving from 20% to 60% cache hit rate cuts billable requests by half.
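A minimal sketch of the client-side caching layer mentioned above, assuming a simple in-memory TTL store (the class and the fake fetch function are illustrative, not part of this API):

```python
import time

class TTLCache:
    """Minimal client-side TTL cache keyed by (method, path)."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_fetch(self, key, fetch):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry and entry[0] > now:
            return entry[1]            # cache hit: no billable request
        value = fetch()                # cache miss: one billable request
        self._store[key] = (now + self.ttl, value)
        return value

calls = 0
def fake_fetch():
    global calls
    calls += 1
    return {"records": []}

cache = TTLCache(ttl_seconds=60)
for _ in range(5):
    cache.get_or_fetch(("GET", "/v1/zones/example.com/records"), fake_fetch)
print(calls)  # five lookups, but only one billable request
```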

Performance Engineer / Developer

Performance Optimization

Compare response times for cache hits vs misses across endpoints. Identify endpoints where cache hits provide the greatest latency reduction. Prioritize caching optimizations for latency-sensitive code paths in your application.

Improve application response times by maximizing cache utilization for your most latency-sensitive API calls.
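A rough model for estimating that latency impact, with assumed per-request figures (5 ms for an edge cache hit, 120 ms for a miss; substitute your own measurements):

```python
def expected_latency_ms(hit_rate_pct: float, hit_ms: float, miss_ms: float) -> float:
    """Weighted average latency for a given cache hit rate."""
    p = hit_rate_pct / 100
    return p * hit_ms + (1 - p) * miss_ms

# At a 50% hit rate: 0.5 * 5 + 0.5 * 120
print(expected_latency_ms(50, 5, 120))  # 62.5 ms on average
```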

Solutions Architect

Application-Level Caching Decisions

Evaluate whether to add application-level caching (Redis, in-memory, local storage) for frequently-accessed API data. Use per-endpoint cache stats to calculate the potential benefit — endpoints with stable data and high request frequency benefit most from client-side caching.

Make informed caching architecture decisions backed by real cache performance data rather than assumptions.
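One way to rank endpoints by potential client-side caching benefit from the by_endpoint stats: endpoints with the most misses have the most requests a local cache could absorb. The sample figures below are illustrative:

```python
def caching_opportunity(by_endpoint):
    """Rank endpoints by misses: each miss is a request a local cache could absorb."""
    return sorted(by_endpoint, key=lambda e: e["misses"], reverse=True)

stats = [
    {"endpoint": "/v1/zones", "hits": 900, "misses": 100, "hit_rate": 90.0},
    {"endpoint": "/v1/analytics/usage", "hits": 50, "misses": 950, "hit_rate": 5.0},
]
top = caching_opportunity(stats)[0]
print(top["endpoint"])  # /v1/analytics/usage
```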

Parameters

Name     Type     Required   Description
period   string   Optional   Time range: 7d (last 7 days), 30d (last 30 days), or current (current billing period). Default: current. Example: 30d

Response Fields

Field         Type     Description
hit_rate      number   Overall cache hit rate as a percentage (0-100)
hits          number   Total cache hits in the period
misses        number   Total cache misses in the period
by_endpoint   array    Per-endpoint cache stats: endpoint, hits, misses, hit_rate — sorted by optimization opportunity (lowest hit rate first)
savings       object   Estimated savings: requests_saved, cost_saved_cents, latency_saved_ms

Code Examples

cURL
curl "https://api.edgedns.dev/v1/analytics/cache" \
  -H "Authorization: Bearer YOUR_API_KEY"
JavaScript
const response = await fetch(
  'https://api.edgedns.dev/v1/analytics/cache',
  {
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY'
    }
  }
);

const data = await response.json();
console.log(data);
Python
import requests

response = requests.get(
    'https://api.edgedns.dev/v1/analytics/cache',
    headers={'Authorization': 'Bearer YOUR_API_KEY'},
    params={'period': '30d'}  # optional: 7d, 30d, or current (default)
)

data = response.json()
print(data)
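Once the response is parsed, low-hit-rate endpoints can be flagged directly. The payload below is a hypothetical sample shaped like the Response Fields table above, not real output:

```python
data = {
    "hit_rate": 42.0,
    "hits": 4200,
    "misses": 5800,
    "by_endpoint": [
        {"endpoint": "/v1/zones", "hits": 400, "misses": 100, "hit_rate": 80.0},
        {"endpoint": "/v1/dns/resolve", "hits": 200, "misses": 3800, "hit_rate": 5.0},
    ],
}

# Flag endpoints under a 30% hit rate as optimization targets
targets = [e["endpoint"] for e in data["by_endpoint"] if e["hit_rate"] < 30]
print(targets)  # ['/v1/dns/resolve']
```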

