Linear Rate Limits

Overview

Understand and handle Linear API rate limits for reliable integrations.

Prerequisites

  • Linear SDK configured
  • Understanding of HTTP headers
  • Familiarity with async patterns

Linear Rate Limit Structure

Current Limits

| Tier | Requests/min | Complexity/min | Notes |
| --- | --- | --- | --- |
| Standard | 1,500 | 250,000 | Most integrations |
| Enterprise | Higher | Higher | Contact Linear |
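Both budgets apply at once, and whichever runs out first throttles you. As an illustration (the helper below is not part of any SDK): at an average complexity cost of 50 per request, the complexity budget would allow 5,000 requests/min, so the 1,500 request budget binds; at a cost of 500, the complexity budget binds first.

```typescript
// Illustrative arithmetic: which budget binds for a given average query cost?
// (Standard-tier defaults from the table above.)
function bindingLimit(
  avgComplexityCost: number,
  requestsPerMin = 1500,
  complexityPerMin = 250000
): number {
  return Math.min(requestsPerMin, Math.floor(complexityPerMin / avgComplexityCost));
}

console.log(bindingLimit(50));  // 1500 – the request budget binds
console.log(bindingLimit(500)); // 500 – the complexity budget binds
```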

Headers Returned

X-RateLimit-Limit: 1500
X-RateLimit-Remaining: 1499
X-RateLimit-Reset: 1640000000
X-Complexity-Limit: 250000
X-Complexity-Cost: 50
X-Complexity-Remaining: 249950
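These headers can be read off any fetch-style response; a minimal sketch (header names exactly as documented above, helper name illustrative):

```typescript
// Parse Linear's rate-limit headers from a fetch Response's Headers (sketch).
function readRateLimitHeaders(headers: Headers) {
  const num = (name: string) => Number(headers.get(name) ?? NaN);
  return {
    limit: num("x-ratelimit-limit"),
    remaining: num("x-ratelimit-remaining"),
    // X-RateLimit-Reset is a Unix timestamp in seconds.
    resetAt: new Date(num("x-ratelimit-reset") * 1000),
    complexityRemaining: num("x-complexity-remaining"),
  };
}

// With the example values above (header lookup is case-insensitive):
const info = readRateLimitHeaders(
  new Headers({
    "X-RateLimit-Limit": "1500",
    "X-RateLimit-Remaining": "1499",
    "X-RateLimit-Reset": "1640000000",
    "X-Complexity-Remaining": "249950",
  })
);
console.log(info.remaining); // 1499
```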

Instructions

Step 1: Basic Rate Limit Handler

typescript
// lib/rate-limiter.ts
interface RateLimitState {
  remaining: number;
  reset: Date;
  complexityRemaining: number;
}

class LinearRateLimiter {
  private state: RateLimitState = {
    remaining: 1500,
    reset: new Date(),
    complexityRemaining: 250000,
  };

  updateFromHeaders(headers: Headers): void {
    const remaining = headers.get("x-ratelimit-remaining");
    const reset = headers.get("x-ratelimit-reset");
    const complexityRemaining = headers.get("x-complexity-remaining");

    if (remaining) this.state.remaining = parseInt(remaining);
    if (reset) this.state.reset = new Date(parseInt(reset) * 1000);
    if (complexityRemaining) {
      this.state.complexityRemaining = parseInt(complexityRemaining);
    }
  }

  async waitIfNeeded(): Promise<void> {
    // If very low on requests, wait until reset
    if (this.state.remaining < 10) {
      const waitMs = this.state.reset.getTime() - Date.now();
      if (waitMs > 0) {
        console.log(`Rate limit low, waiting ${waitMs}ms...`);
        await new Promise(r => setTimeout(r, waitMs));
      }
    }
  }

  getState(): RateLimitState {
    return { ...this.state };
  }
}

export const rateLimiter = new LinearRateLimiter();

Step 2: Exponential Backoff

typescript
// lib/backoff.ts
interface BackoffOptions {
  maxRetries?: number;
  baseDelayMs?: number;
  maxDelayMs?: number;
  jitter?: boolean;
}

export async function withBackoff<T>(
  fn: () => Promise<T>,
  options: BackoffOptions = {}
): Promise<T> {
  const {
    maxRetries = 5,
    baseDelayMs = 1000,
    maxDelayMs = 30000,
    jitter = true,
  } = options;

  let lastError: Error | undefined;

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      lastError = error;

      // Only retry on rate limit errors
      const isRateLimited =
        error?.extensions?.code === "RATE_LIMITED" ||
        error?.response?.status === 429;

      if (!isRateLimited || attempt === maxRetries - 1) {
        throw error;
      }

      // Calculate delay with exponential backoff
      let delay = Math.min(baseDelayMs * Math.pow(2, attempt), maxDelayMs);

      // Add jitter to prevent thundering herd
      if (jitter) {
        delay += Math.random() * delay * 0.1;
      }

      // Check Retry-After header if available
      const retryAfter = error?.response?.headers?.get?.("retry-after");
      if (retryAfter) {
        delay = Math.max(delay, parseInt(retryAfter) * 1000);
      }

      console.log(
        `Rate limited, attempt ${attempt + 1}/${maxRetries}, ` +
        `retrying in ${Math.round(delay)}ms...`
      );

      await new Promise(r => setTimeout(r, delay));
    }
  }

  throw lastError;
}
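For intuition, the schedule withBackoff produces with its defaults (jitter off) can be computed directly; this helper exists only for illustration:

```typescript
// Delay schedule produced by exponential backoff (no jitter, defaults above).
function backoffDelays(
  maxRetries: number,
  baseDelayMs = 1000,
  maxDelayMs = 30000
): number[] {
  return Array.from({ length: maxRetries }, (_, attempt) =>
    Math.min(baseDelayMs * 2 ** attempt, maxDelayMs)
  );
}

console.log(backoffDelays(5)); // [1000, 2000, 4000, 8000, 16000]
console.log(backoffDelays(7)); // caps at 30000 from the sixth attempt on
```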

Step 3: Request Queue

typescript
// lib/queue.ts
type QueuedRequest<T> = {
  fn: () => Promise<T>;
  resolve: (value: T) => void;
  reject: (error: Error) => void;
};

class RequestQueue {
  private queue: QueuedRequest<any>[] = [];
  private processing = false;
  private requestsPerSecond = 20; // Conservative rate

  async add<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      this.process();
    });
  }

  private async process(): Promise<void> {
    if (this.processing) return;
    this.processing = true;

    while (this.queue.length > 0) {
      const request = this.queue.shift()!;

      try {
        const result = await request.fn();
        request.resolve(result);
      } catch (error) {
        request.reject(error as Error);
      }

      // Throttle requests
      await new Promise(r =>
        setTimeout(r, 1000 / this.requestsPerSecond)
      );
    }

    this.processing = false;
  }

  get pending(): number {
    return this.queue.length;
  }
}

export const requestQueue = new RequestQueue();
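The queue's 20 requests/second cap is deliberately conservative: it spaces requests 50ms apart and works out to 1,200 requests/minute, safely under the 1,500/minute Standard tier limit:

```typescript
// Sanity-check the queue's throttle against the Standard tier limit.
const requestsPerSecond = 20;
const spacingMs = 1000 / requestsPerSecond; // delay inserted between requests
const perMinute = requestsPerSecond * 60;   // sustained request rate

console.log(spacingMs); // 50
console.log(perMinute); // 1200, below the 1500/min Standard limit
```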

Step 4: Batch Operations

typescript
// lib/batch.ts
import { LinearClient } from "@linear/sdk";

interface BatchConfig {
  batchSize: number;
  delayBetweenBatches: number;
}

export async function batchProcess<T, R>(
  items: T[],
  processor: (item: T) => Promise<R>,
  config: BatchConfig = { batchSize: 10, delayBetweenBatches: 1000 }
): Promise<R[]> {
  const results: R[] = [];
  const batches: T[][] = [];

  // Split into batches
  for (let i = 0; i < items.length; i += config.batchSize) {
    batches.push(items.slice(i, i + config.batchSize));
  }

  for (let i = 0; i < batches.length; i++) {
    const batch = batches[i];
    console.log(`Processing batch ${i + 1}/${batches.length}...`);

    // Process batch in parallel
    const batchResults = await Promise.all(batch.map(processor));
    results.push(...batchResults);

    // Delay between batches (except last)
    if (i < batches.length - 1) {
      await new Promise(r => setTimeout(r, config.delayBetweenBatches));
    }
  }

  return results;
}

// Usage example
async function updateManyIssues(
  client: LinearClient,
  updates: { id: string; priority: number }[]
) {
  return batchProcess(
    updates,
    ({ id, priority }) => client.updateIssue(id, { priority }),
    { batchSize: 10, delayBetweenBatches: 2000 }
  );
}
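It helps to know how much wall-clock time the pacing alone adds. With batchSize 10 and delayBetweenBatches 2000 as in the usage example, 100 updates form 10 batches, so the inter-batch pauses add 18 seconds (an illustrative helper, not part of the module above):

```typescript
// Minimum delay added purely by inter-batch pauses (sketch).
function batchPacingMs(
  itemCount: number,
  batchSize: number,
  delayBetweenBatches: number
): number {
  const batches = Math.ceil(itemCount / batchSize);
  return Math.max(0, batches - 1) * delayBetweenBatches;
}

console.log(batchPacingMs(100, 10, 2000)); // 18000 – 18s of pacing for 100 updates
console.log(batchPacingMs(5, 10, 2000));   // 0 – a single batch has no pause
```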

Step 5: Query Optimization

typescript
import { LinearClient } from "@linear/sdk";

// Reduce complexity by limiting fields
const optimizedQuery = `
  query Issues($filter: IssueFilter) {
    issues(filter: $filter, first: 50) {
      nodes {
        id
        identifier
        title
        # Avoid nested connections in loops
      }
    }
  }
`;

// Use SDK efficiently
async function getIssuesOptimized(client: LinearClient, teamKey: string) {
  // Good: Single query with filter
  return client.issues({
    filter: { team: { key: { eq: teamKey } } },
    first: 50,
  });

  // Bad: N+1 queries
  // const teams = await client.teams();
  // for (const team of teams.nodes) {
  //   const issues = await team.issues(); // N queries!
  // }
}

Output

  • Rate limit monitoring
  • Automatic retry with backoff
  • Request queuing and throttling
  • Batch processing utilities
  • Optimized query patterns

Error Handling

| Error | Cause | Solution |
| --- | --- | --- |
| 429 Too Many Requests | Rate limit exceeded | Use backoff and queue |
| Complexity exceeded | Query too expensive | Simplify query structure |
| Timeout | Long-running query | Paginate or split queries |
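Only the first row of this table is safely retryable as-is. A small classifier mirroring the check used in withBackoff (the error shape is an assumption about the SDK's GraphQL errors, not a documented type):

```typescript
// Classify whether an API error is a rate-limit error worth retrying (sketch).
interface ApiErrorLike {
  extensions?: { code?: string };
  response?: { status?: number };
}

function isRateLimitError(error: ApiErrorLike): boolean {
  return (
    error.extensions?.code === "RATE_LIMITED" ||
    error.response?.status === 429
  );
}

console.log(isRateLimitError({ response: { status: 429 } }));            // true
console.log(isRateLimitError({ extensions: { code: "RATE_LIMITED" } })); // true
console.log(isRateLimitError({ response: { status: 500 } }));            // false
```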

Resources

Next Steps

Learn security best practices with linear-security-basics.