Cache Statistics
/v1/analytics/cache

Returns cache performance statistics: overall hit rate, total hits and misses, per-endpoint cache effectiveness, and estimated cost savings from cached responses. Cache efficiency directly reduces both latency and billable request costs, making it one of the highest-leverage optimization vectors.
What It Does
Analyzes cache performance for the authenticated organization in the selected period. Computes: overall cache hit rate as a percentage, total cache hits and misses, per-endpoint cache hit rates sorted by opportunity (lowest hit rate first), and estimated cost savings from cached responses. Period options: 7d, 30d, or current (default). Cache is managed at the Cloudflare Workers edge with TTL varying by endpoint type.
Why It's Useful
Caching is typically the single highest-impact optimization for API costs and latency. A 50% cache hit rate halves your billable requests and sharply reduces median latency, since cached responses are served from the edge. The per-endpoint cache breakdown reveals which API calls have room for improvement: a frequently called endpoint with a 5% cache hit rate is a clear optimization target. Strategies include consolidating duplicate queries, using consistent query parameters so identical requests share a cache key, and implementing a client-side caching layer.
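The last strategy above, a client-side caching layer, can be sketched as a simple TTL memoizer. This is an illustrative pattern, not part of the API; the `lookup` function and its return value are placeholders:

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds: float):
    """Memoize a function's results for ttl_seconds, so repeated identical
    queries are served locally instead of becoming billable API requests."""
    def decorator(fn):
        store = {}
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            if args in store:
                value, expires = store[args]
                if now < expires:
                    return value  # local cache hit: no API call
            value = fn(*args)  # cache miss: call through
            store[args] = (value, now + ttl_seconds)
            return value
        return wrapper
    return decorator

@ttl_cache(ttl_seconds=60)
def lookup(domain: str) -> str:
    # Placeholder for a real API call.
    return f"result-for-{domain}"

print(lookup("example.com"))  # first call goes through
print(lookup("example.com"))  # second call is served locally
```

A production version would also bound the store's size and handle keyword arguments, but the TTL expiry check is the core of the pattern.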
Use Cases
Cost Reduction via Cache Optimization
Identify endpoints with high request volume but low cache hit rates. Investigate why cache misses occur — often caused by varying query parameters, unique request patterns, or short TTLs. Implement client-side caching for frequently-queried domains to reduce billable requests.
Reduce API costs by increasing cache efficiency — moving from 20% to 60% cache hit rate cuts billable requests by half.
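The arithmetic behind that claim is easy to verify; the total request volume below is a made-up figure for illustration:

```python
def billable_requests(total_requests: int, hit_rate: float) -> int:
    """Only cache misses are billable: billable = total * (1 - hit_rate)."""
    return round(total_requests * (1 - hit_rate))

# 100,000 total requests at 20% vs 60% cache hit rate:
print(billable_requests(100_000, 0.20))  # 80000 billable
print(billable_requests(100_000, 0.60))  # 40000 billable: half as many
```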
Performance Optimization
Compare response times for cache hits vs misses across endpoints. Identify endpoints where cache hits provide the greatest latency reduction. Prioritize caching optimizations for latency-sensitive code paths in your application.
Improve application response times by maximizing cache utilization for your most latency-sensitive API calls.
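One way to prioritize is to rank endpoints by the latency you could recover if their misses became hits. Note the hedge: this endpoint reports per-endpoint hits and misses, but the hit/miss latency figures below would have to come from your own client-side timing; all names and numbers here are invented:

```python
# Hypothetical measurements: miss counts come from by_endpoint,
# p50 latencies from your own instrumentation.
endpoints = [
    {"endpoint": "/v1/query",   "misses": 6_000, "p50_hit_ms": 12, "p50_miss_ms": 85},
    {"endpoint": "/v1/records", "misses": 1_000, "p50_hit_ms": 10, "p50_miss_ms": 60},
]

def recoverable_ms(e):
    # Total latency recoverable if every current miss became a cache hit.
    return e["misses"] * (e["p50_miss_ms"] - e["p50_hit_ms"])

for e in sorted(endpoints, key=recoverable_ms, reverse=True):
    print(e["endpoint"], recoverable_ms(e))
```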
Application-Level Caching Decisions
Evaluate whether to add application-level caching (Redis, in-memory, local storage) for frequently-accessed API data. Use per-endpoint cache stats to calculate the potential benefit — endpoints with stable data and high request frequency benefit most from client-side caching.
Make informed caching architecture decisions backed by real cache performance data rather than assumptions.
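A minimal sketch of that decision rule, using the `endpoint`, `hits`, `misses`, and `hit_rate` fields documented below; the volume and hit-rate thresholds are assumptions, not part of the API:

```python
def caching_candidates(by_endpoint, min_requests=1_000, max_hit_rate=30.0):
    """High-volume, low-hit-rate endpoints benefit most from an
    application-level cache (Redis, in-memory, local storage).
    Thresholds here are illustrative defaults."""
    return [
        e["endpoint"] for e in by_endpoint
        if e["hits"] + e["misses"] >= min_requests and e["hit_rate"] < max_hit_rate
    ]

# Invented sample stats in the shape of the by_endpoint response field:
stats = [
    {"endpoint": "/v1/query", "hits": 500, "misses": 9_500, "hit_rate": 5.0},
    {"endpoint": "/v1/zones", "hits": 800, "misses": 200, "hit_rate": 80.0},
]
print(caching_candidates(stats))  # ['/v1/query']
```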
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| period | string | Optional | Time range: 7d (last 7 days), 30d (last 30 days), or current (current billing period). Default: current. Example: 30d |
Response Fields
| Field | Type | Description |
|---|---|---|
| hit_rate | number | Overall cache hit rate as a percentage (0-100) |
| hits | number | Total cache hits in the period |
| misses | number | Total cache misses in the period |
| by_endpoint | array | Per-endpoint cache stats: endpoint, hits, misses, hit_rate; sorted by optimization opportunity (lowest hit rate first) |
| savings | object | Estimated savings: requests_saved, cost_saved_cents, latency_saved_ms |
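An illustrative response matching the fields above, shown as a Python dict; every value is invented:

```python
# Hypothetical response body; field names follow the table above.
example = {
    "hit_rate": 62.5,  # percentage, 0-100
    "hits": 12_500,
    "misses": 7_500,
    "by_endpoint": [
        {"endpoint": "/v1/query", "hits": 500, "misses": 4_500, "hit_rate": 10.0},
    ],
    "savings": {
        "requests_saved": 12_500,
        "cost_saved_cents": 310,
        "latency_saved_ms": 900_000,
    },
}

# Sanity check: hit_rate is consistent with hits / (hits + misses).
assert example["hit_rate"] == 100 * example["hits"] / (example["hits"] + example["misses"])
print(example["hit_rate"])  # 62.5
```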
Code Examples
cURL

```shell
curl "https://api.edgedns.dev/v1/analytics/cache?period=30d" \
  -H "Authorization: Bearer YOUR_API_KEY"
```

JavaScript

```javascript
const response = await fetch(
  'https://api.edgedns.dev/v1/analytics/cache?period=30d',
  {
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY'
    }
  }
);
const data = await response.json();
console.log(data);
```

Python

```python
import requests

response = requests.get(
    'https://api.edgedns.dev/v1/analytics/cache',
    headers={'Authorization': 'Bearer YOUR_API_KEY'},
    params={'period': '30d'},
)
data = response.json()
print(data)
```