Complete guide for OpenAI's traditional/stateless APIs: Chat Completions (GPT-5, GPT-4o), Embeddings, Images (DALL-E 3), Audio (Whisper + TTS), and Moderation. Includes both Node.js SDK and fetch-based approaches for maximum compatibility. Use when: integrating OpenAI APIs, implementing chat completions with GPT-5/GPT-4o, generating text with streaming, using function calling/tools, creating structured outputs with JSON schemas, implementing embeddings for RAG, generating images with DALL-E 3, transcribing audio with Whisper, synthesizing speech with TTS, moderating content, deploying to Cloudflare Workers, or encountering errors like rate limits (429), invalid API keys (401), function calling failures, streaming parse errors, embeddings dimension mismatches, or token limit exceeded. Keywords: openai api, chat completions, gpt-5, gpt-5-mini, gpt-5-nano, gpt-4o, gpt-4-turbo, openai sdk, openai streaming, function calling, structured output, json schema, openai embeddings, text-embedding-3, dall-e-3, image generation, whisper api, openai tts, text-to-speech, moderation api, openai fetch, cloudflare workers openai, openai rate limit, openai 429, reasoning_effort, verbosity
## Installation

```bash
npx skill4agent add jackspace/claudeskillz openai-api
npm install openai@6.7.0
```

Set your API key as an environment variable:

```bash
export OPENAI_API_KEY="sk-..."
```

Or in a `.env` file:

```bash
OPENAI_API_KEY=sk-...
```

## Chat Completions

### Quick start (Node.js SDK)

```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const completion = await openai.chat.completions.create({
  model: 'gpt-5',
  messages: [
    { role: 'user', content: 'What are the three laws of robotics?' }
  ],
});

console.log(completion.choices[0].message.content);
```

### Quick start (fetch)

```typescript
const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-5',
    messages: [
      { role: 'user', content: 'What are the three laws of robotics?' }
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```

**Endpoint:** `POST /v1/chat/completions`

**Request body:**

```typescript
{
  model: string,              // Model to use (e.g., "gpt-5")
  messages: Message[],        // Conversation history
  reasoning_effort?: string,  // GPT-5 only: "minimal" | "low" | "medium" | "high"
  verbosity?: string,         // GPT-5 only: "low" | "medium" | "high"
  temperature?: number,       // NOT supported by GPT-5
  max_tokens?: number,        // Max tokens to generate
  stream?: boolean,           // Enable streaming
  tools?: Tool[],             // Function calling tools
}
```

**Response:**

```typescript
{
  id: string,                 // Unique completion ID
  object: "chat.completion",
  created: number,            // Unix timestamp
  model: string,              // Model used
  choices: [{
    index: number,
    message: {
      role: "assistant",
      content: string,        // Generated text
      tool_calls?: ToolCall[] // If function calling
    },
    finish_reason: string     // "stop" | "length" | "tool_calls"
  }],
  usage: {
    prompt_tokens: number,
    completion_tokens: number,
    total_tokens: number
  }
}
```
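The `usage` block in the response is the basis for cost tracking. A minimal sketch (the rate numbers passed in are illustrative placeholders, not real prices — look up your model's current pricing):

```typescript
// Estimate the dollar cost of one completion from its usage block.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

interface Rates {
  inputPerMillion: number;  // USD per 1M prompt tokens (placeholder)
  outputPerMillion: number; // USD per 1M completion tokens (placeholder)
}

function estimateCost(usage: Usage, rates: Rates): number {
  return (
    (usage.prompt_tokens / 1_000_000) * rates.inputPerMillion +
    (usage.completion_tokens / 1_000_000) * rates.outputPerMillion
  );
}
```

Call it with `completion.usage` after each request and accumulate the result per user or per feature.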
### Messages

Use a system message to set assistant behavior:

```typescript
const messages = [
  {
    role: 'system',
    content: 'You are a helpful assistant that explains complex topics simply.'
  },
  {
    role: 'user',
    content: 'Explain quantum computing to a 10-year-old.'
  }
];
```

Multi-turn conversations pass the full history on every request:

```typescript
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is TypeScript?' },
  { role: 'assistant', content: 'TypeScript is a superset of JavaScript...' },
  { role: 'user', content: 'How do I install it?' }
];

const completion = await openai.chat.completions.create({
  model: 'gpt-5',
  messages: messages,
});
```

For stateful, multi-turn agentic workflows, see the `openai-responses` skill.
### GPT-5-specific parameters

`reasoning_effort` controls how much reasoning GPT-5 performs before answering:

```typescript
const completion = await openai.chat.completions.create({
  model: 'gpt-5',
  messages: [{ role: 'user', content: 'Solve this complex math problem...' }],
  reasoning_effort: 'high', // Deep reasoning
});
```

`verbosity` controls how detailed the response is:

```typescript
const completion = await openai.chat.completions.create({
  model: 'gpt-5',
  messages: [{ role: 'user', content: 'Explain quantum mechanics' }],
  verbosity: 'high', // Detailed explanation
});
```

GPT-5 does not support `temperature`, `top_p`, or `logprobs`. For built-in tools and background tasks, see the `openai-responses` skill.

| Feature | GPT-5 | GPT-4o |
|---|---|---|
| Reasoning control | ✅ `reasoning_effort` | ❌ |
| Verbosity control | ✅ `verbosity` | ❌ |
| Temperature | ❌ | ✅ |
| Top-p | ❌ | ✅ |
| Vision | ❌ | ✅ |
| Function calling | ✅ | ✅ |
| Streaming | ✅ | ✅ |
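The table above can be enforced in code so the same call site works with either family. A minimal sketch, assuming only the constraints documented here (GPT-5 rejects `temperature`/`top_p`; GPT-4o ignores `reasoning_effort`/`verbosity`):

```typescript
// Strip parameters a given model does not accept, per the table above.
type ChatParams = Record<string, unknown>;

function paramsForModel(model: string, base: ChatParams): ChatParams {
  const params: ChatParams = { ...base, model };
  if (model.startsWith('gpt-5')) {
    // GPT-5 family: no sampling controls
    delete params.temperature;
    delete params.top_p;
  } else {
    // GPT-4o family: no reasoning/verbosity controls
    delete params.reasoning_effort;
    delete params.verbosity;
  }
  return params;
}
```

This keeps model selection a config concern rather than scattering `if (model === ...)` checks through request-building code.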
### Streaming

Set `stream: true` to receive tokens as they are generated:

```typescript
const stream = await openai.chat.completions.create({
  model: 'gpt-5',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});
```

Iterate the stream with the SDK:

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

const stream = await openai.chat.completions.create({
  model: 'gpt-5',
  messages: [{ role: 'user', content: 'Write a poem about coding' }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(content);
}
```
When using fetch, parse the SSE stream manually. Note that a network chunk can end mid-line, so buffer partial lines across reads:

```typescript
const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-5',
    messages: [{ role: 'user', content: 'Write a poem' }],
    stream: true,
  }),
});

const reader = response.body!.getReader();
const decoder = new TextDecoder();
let buffer = ''; // Carry partial lines across chunk boundaries

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop() || ''; // Last element may be an incomplete line
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const data = line.slice(6);
    if (data === '[DONE]') break;
    try {
      const json = JSON.parse(data);
      const content = json.choices[0]?.delta?.content || '';
      console.log(content);
    } catch (e) {
      // Skip invalid JSON
    }
  }
}
```

Each event is a `data:` line carrying a JSON chunk; the stream terminates with `data: [DONE]`:

```
data: {"id":"chatcmpl-xyz","choices":[{"delta":{"role":"assistant"}}]}
data: {"id":"chatcmpl-xyz","choices":[{"delta":{"content":"Hello"}}]}
data: {"id":"chatcmpl-xyz","choices":[{"delta":{"content":" world"}}]}
data: {"id":"chatcmpl-xyz","choices":[{"finish_reason":"stop"}]}
data: [DONE]
```

## Function Calling
Define tools with JSON Schema parameters:

```typescript
const tools = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get the current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'City name, e.g., San Francisco'
          },
          unit: {
            type: 'string',
            enum: ['celsius', 'fahrenheit'],
            description: 'Temperature unit'
          }
        },
        required: ['location']
      }
    }
  }
];
```
Pass the tools on the request:

```typescript
const completion = await openai.chat.completions.create({
  model: 'gpt-5',
  messages: [
    { role: 'user', content: 'What is the weather in San Francisco?' }
  ],
  tools: tools,
});
```

Handle the model's tool calls, execute them, and send the results back:

```typescript
const message = completion.choices[0].message;

if (message.tool_calls) {
  // Model wants to call a function
  for (const toolCall of message.tool_calls) {
    if (toolCall.function.name === 'get_weather') {
      const args = JSON.parse(toolCall.function.arguments);
      // Execute your function
      const weatherData = await getWeather(args.location, args.unit);
      // Send result back to model
      const followUp = await openai.chat.completions.create({
        model: 'gpt-5',
        messages: [
          ...messages,
          message, // Assistant's tool call
          {
            role: 'tool',
            tool_call_id: toolCall.id,
            content: JSON.stringify(weatherData)
          }
        ],
        tools: tools,
      });
    }
  }
}
```
### Agent loop

Loop until the model stops requesting tools:

```typescript
async function chatWithTools(userMessage: string) {
  let messages = [
    { role: 'user', content: userMessage }
  ];

  while (true) {
    const completion = await openai.chat.completions.create({
      model: 'gpt-5',
      messages: messages,
      tools: tools,
    });

    const message = completion.choices[0].message;
    messages.push(message);

    // If no tool calls, we're done
    if (!message.tool_calls) {
      return message.content;
    }

    // Execute all tool calls
    for (const toolCall of message.tool_calls) {
      const result = await executeFunction(toolCall.function.name, toolCall.function.arguments);
      messages.push({
        role: 'tool',
        tool_call_id: toolCall.id,
        content: JSON.stringify(result)
      });
    }
  }
}
```
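The loop above delegates to an `executeFunction` helper. One way to sketch it is a name-to-handler dispatch table; the handler bodies here are hypothetical stand-ins for your real implementations:

```typescript
// Dispatch a tool call by name. Handlers are illustrative stubs.
type Handler = (args: Record<string, unknown>) => unknown | Promise<unknown>;

const handlers: Record<string, Handler> = {
  get_weather: async (args) => ({ location: args.location, tempC: 21 }), // stub
  calculate: (args) => ({ result: Number(args.a) + Number(args.b) }),    // stub
};

async function executeFunction(name: string, rawArgs: string): Promise<unknown> {
  const handler = handlers[name];
  if (!handler) {
    // Return an error payload the model can read, rather than throwing
    return { error: `Unknown tool: ${name}` };
  }
  const args = JSON.parse(rawArgs) as Record<string, unknown>; // arguments arrive as a JSON string
  return handler(args);
}
```

Returning an error object for unknown tools (instead of throwing) lets the model recover and retry with a valid tool name.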
### Multiple tools

Register several tools in one request:

```typescript
const tools = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get weather for a location',
      parameters: { /* schema */ }
    }
  },
  {
    type: 'function',
    function: {
      name: 'search_web',
      description: 'Search the web',
      parameters: { /* schema */ }
    }
  },
  {
    type: 'function',
    function: {
      name: 'calculate',
      description: 'Perform calculations',
      parameters: { /* schema */ }
    }
  }
];
```
## Structured Outputs

Use `response_format` with a JSON schema to guarantee the output shape (best supported on GPT-4o):

```typescript
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: 'Generate a person profile' }
  ],
  response_format: {
    type: 'json_schema',
    json_schema: {
      name: 'person_profile',
      strict: true,
      schema: {
        type: 'object',
        properties: {
          name: { type: 'string' },
          age: { type: 'number' },
          skills: {
            type: 'array',
            items: { type: 'string' }
          }
        },
        required: ['name', 'age', 'skills'],
        additionalProperties: false
      }
    }
  }
});

const person = JSON.parse(completion.choices[0].message.content);
// { name: "Alice", age: 28, skills: ["TypeScript", "React"] }
```

### JSON mode

`response_format: { type: 'json_object' }` guarantees syntactically valid JSON without enforcing a specific schema:

```typescript
const completion = await openai.chat.completions.create({
  model: 'gpt-5',
  messages: [
    { role: 'user', content: 'List 3 programming languages as JSON' }
  ],
  response_format: { type: 'json_object' }
});

const data = JSON.parse(completion.choices[0].message.content);
```
## Vision (GPT-4o)

Pass images alongside text in the `content` array:

```typescript
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        {
          type: 'image_url',
          image_url: {
            url: 'https://example.com/image.jpg'
          }
        }
      ]
    }
  ]
});
```

Local images can be sent as base64 data URLs:

```typescript
import fs from 'fs';

const imageBuffer = fs.readFileSync('./image.jpg');
const base64Image = imageBuffer.toString('base64');

const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image in detail' },
        {
          type: 'image_url',
          image_url: {
            url: `data:image/jpeg;base64,${base64Image}`
          }
        }
      ]
    }
  ]
});
```

Multiple images in one request:

```typescript
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Compare these two images' },
        { type: 'image_url', image_url: { url: 'https://example.com/image1.jpg' } },
        { type: 'image_url', image_url: { url: 'https://example.com/image2.jpg' } }
      ]
    }
  ]
});
```
## Embeddings

**Endpoint:** `POST /v1/embeddings`

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

const embedding = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'The food was delicious and the waiter was friendly.',
});

console.log(embedding.data[0].embedding);
// [0.0023064255, -0.009327292, ..., -0.0028842222]
```

With fetch:

```typescript
const response = await fetch('https://api.openai.com/v1/embeddings', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'text-embedding-3-small',
    input: 'The food was delicious and the waiter was friendly.',
  }),
});

const data = await response.json();
const embedding = data.data[0].embedding;
```

**Response:**

```typescript
{
  object: "list",
  data: [
    {
      object: "embedding",
      embedding: [0.0023064255, -0.009327292, ...], // Array of floats
      index: 0
    }
  ],
  model: "text-embedding-3-small",
  usage: {
    prompt_tokens: 8,
    total_tokens: 8
  }
}
```

### Reducing dimensions

```typescript
const embedding = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'Sample text',
  dimensions: 256, // Reduced from 1536 default
});
```

Both `text-embedding-3-large` and `text-embedding-3-small` support the `dimensions` parameter.

### Batch embeddings

```typescript
const embeddings = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: [
    'First document text',
    'Second document text',
    'Third document text',
  ],
});

// Access individual embeddings
embeddings.data.forEach((item, index) => {
  console.log(`Embedding ${index}:`, item.embedding);
});
```

### Manual truncation

If you cannot use the `dimensions` parameter, truncate and re-normalize:

```typescript
// Get full embedding
const response = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'Testing 123',
});

// Truncate to desired dimensions
const fullEmbedding = response.data[0].embedding;
const truncated = fullEmbedding.slice(0, 256);

// Normalize (L2)
function normalizeL2(vector: number[]): number[] {
  const magnitude = Math.sqrt(vector.reduce((sum, val) => sum + val * val, 0));
  return vector.map(val => val / magnitude);
}

const normalized = normalizeL2(truncated);
```

### RAG pipeline

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

// 1. Generate embeddings for knowledge base
async function embedKnowledgeBase(documents: string[]) {
  const response = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: documents,
  });
  return response.data.map(item => item.embedding);
}

// 2. Embed user query
async function embedQuery(query: string) {
  const response = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: query,
  });
  return response.data[0].embedding;
}

// 3. Cosine similarity
function cosineSimilarity(a: number[], b: number[]): number {
  const dotProduct = a.reduce((sum, val, i) => sum + val * b[i], 0);
  const magnitudeA = Math.sqrt(a.reduce((sum, val) => sum + val * val, 0));
  const magnitudeB = Math.sqrt(b.reduce((sum, val) => sum + val * val, 0));
  return dotProduct / (magnitudeA * magnitudeB);
}

// 4. Find most similar documents
async function findSimilar(query: string, knowledgeBase: { text: string, embedding: number[] }[]) {
  const queryEmbedding = await embedQuery(query);
  const results = knowledgeBase.map(doc => ({
    text: doc.text,
    similarity: cosineSimilarity(queryEmbedding, doc.embedding),
  }));
  return results.sort((a, b) => b.similarity - a.similarity);
}

// 5. RAG: Retrieve + Generate
async function rag(query: string, knowledgeBase: { text: string, embedding: number[] }[]) {
  const similarDocs = await findSimilar(query, knowledgeBase);
  const context = similarDocs.slice(0, 3).map(d => d.text).join('\n\n');
  const completion = await openai.chat.completions.create({
    model: 'gpt-5',
    messages: [
      {
        role: 'system',
        content: `Answer questions using the following context:\n\n${context}`
      },
      {
        role: 'user',
        content: query
      }
    ],
  });
  return completion.choices[0].message.content;
}
```

Use `text-embedding-3-small` for most workloads; choose `text-embedding-3-large` when retrieval accuracy matters more than cost.
## Image Generation (DALL-E 3)

**Endpoint:** `POST /v1/images/generations`

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

const image = await openai.images.generate({
  model: 'dall-e-3',
  prompt: 'A white siamese cat with striking blue eyes',
  size: '1024x1024',
  quality: 'standard',
  style: 'vivid',
  n: 1,
});

console.log(image.data[0].url);
console.log(image.data[0].revised_prompt);
```

With fetch:

```typescript
const response = await fetch('https://api.openai.com/v1/images/generations', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'dall-e-3',
    prompt: 'A white siamese cat with striking blue eyes',
    size: '1024x1024',
    quality: 'standard',
    style: 'vivid',
  }),
});

const data = await response.json();
const imageUrl = data.data[0].url;
```

**Parameters:**

- `size`: `"1024x1024"`, `"1024x1536"`, `"1536x1024"`, `"1024x1792"`, `"1792x1024"`
- `quality`: `"standard"` | `"hd"`
- `style`: `"vivid"` | `"natural"`
- `response_format`: `"url"` | `"b64_json"`
- `n`: DALL-E 3 supports `n: 1` only (DALL-E 2 allowed `n: 1`-`10`)

**Response:**

```typescript
{
  created: 1700000000,
  data: [
    {
      url: "https://oaidalleapiprodscus.blob.core.windows.net/...",
      revised_prompt: "A pristine white Siamese cat with striking blue eyes, sitting elegantly..."
    }
  ]
}
```

DALL-E 3 automatically rewrites prompts; `revised_prompt` contains the prompt actually used.

### Quality

```typescript
// Standard quality (faster, cheaper)
const standardImage = await openai.images.generate({
  model: 'dall-e-3',
  prompt: 'A futuristic city at sunset',
  quality: 'standard',
});

// HD quality (finer details, costs more)
const hdImage = await openai.images.generate({
  model: 'dall-e-3',
  prompt: 'A futuristic city at sunset',
  quality: 'hd',
});
```

### Style

```typescript
// Vivid style (hyper-real, dramatic)
const vividImage = await openai.images.generate({
  model: 'dall-e-3',
  prompt: 'A mountain landscape',
  style: 'vivid',
});

// Natural style (more realistic, less dramatic)
const naturalImage = await openai.images.generate({
  model: 'dall-e-3',
  prompt: 'A mountain landscape',
  style: 'natural',
});
```

### Base64 output

```typescript
import fs from 'fs';

const image = await openai.images.generate({
  model: 'dall-e-3',
  prompt: 'A cyberpunk street scene',
  response_format: 'b64_json',
});

const base64Data = image.data[0].b64_json;

// Convert to buffer and save
const buffer = Buffer.from(base64Data, 'base64');
fs.writeFileSync('image.png', buffer);
```

### Image Edits (gpt-image-1)

**Endpoint:** `POST /v1/images/edits` (requires `multipart/form-data`)

```typescript
import fs from 'fs';
import FormData from 'form-data';

const formData = new FormData();
formData.append('model', 'gpt-image-1');
formData.append('image', fs.createReadStream('./woman.jpg'));
formData.append('image_2', fs.createReadStream('./logo.png'));
formData.append('prompt', 'Add the logo to the woman\'s top, as if stamped into the fabric.');
formData.append('input_fidelity', 'high');
formData.append('size', '1024x1024');
formData.append('quality', 'auto');

const response = await fetch('https://api.openai.com/v1/images/edits', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    ...formData.getHeaders(),
  },
  body: formData,
});

const data = await response.json();
const editedImageUrl = data.data[0].url;
```

**Edit parameters:** `model: "gpt-image-1"`; `quality`: `"low"` | `"medium"` | `"high"` | `"auto"`; `input_fidelity`: `"standard"` | `"high"`; `format`: `"png"` | `"jpeg"` | `"webp"`; `background`: `"transparent"` | `"white"` | `"black"`; `output_compression`: `0`-`100`.

### Background removal

```typescript
const formData = new FormData();
formData.append('model', 'gpt-image-1');
formData.append('image', fs.createReadStream('./product.jpg'));
formData.append('prompt', 'Remove the background, keeping only the product.');
formData.append('format', 'png');
formData.append('background', 'transparent');

const response = await fetch('https://api.openai.com/v1/images/edits', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    ...formData.getHeaders(),
  },
  body: formData,
});
```

Note: `revised_prompt` and the `"standard"` quality / `"natural"` / `"vivid"` style options apply to DALL-E 3 generations, not gpt-image-1 edits.
## Audio

### Transcription (Whisper)

**Endpoint:** `POST /v1/audio/transcriptions`

```typescript
import OpenAI from 'openai';
import fs from 'fs';

const openai = new OpenAI();

const transcription = await openai.audio.transcriptions.create({
  file: fs.createReadStream('./audio.mp3'),
  model: 'whisper-1',
});

console.log(transcription.text);
```

With fetch:

```typescript
import fs from 'fs';
import FormData from 'form-data';

const formData = new FormData();
formData.append('file', fs.createReadStream('./audio.mp3'));
formData.append('model', 'whisper-1');

const response = await fetch('https://api.openai.com/v1/audio/transcriptions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    ...formData.getHeaders(),
  },
  body: formData,
});

const data = await response.json();
console.log(data.text);
```

**Response:**

```typescript
{
  text: "Hello, this is a transcription of the audio file."
}
```

### Text-to-Speech

**Endpoint:** `POST /v1/audio/speech`

```typescript
import OpenAI from 'openai';
import fs from 'fs';

const openai = new OpenAI();

const mp3 = await openai.audio.speech.create({
  model: 'tts-1',
  voice: 'alloy',
  input: 'The quick brown fox jumped over the lazy dog.',
});

const buffer = Buffer.from(await mp3.arrayBuffer());
fs.writeFileSync('speech.mp3', buffer);
```

With fetch:

```typescript
const response = await fetch('https://api.openai.com/v1/audio/speech', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'tts-1',
    voice: 'alloy',
    input: 'The quick brown fox jumped over the lazy dog.',
  }),
});

const audioBuffer = await response.arrayBuffer();
// Save or stream the audio
```

### Voice instructions (gpt-4o-mini-tts)

```typescript
const speech = await openai.audio.speech.create({
  model: 'gpt-4o-mini-tts',
  voice: 'nova',
  input: 'Welcome to our customer support line.',
  instructions: 'Speak in a calm, professional, and friendly tone suitable for customer service.',
});
```

### Speed

```typescript
// Slow speech (0.5x speed)
const slowSpeech = await openai.audio.speech.create({
  model: 'tts-1',
  voice: 'alloy',
  input: 'This will be spoken slowly.',
  speed: 0.5,
});

// Fast speech (1.5x speed)
const fastSpeech = await openai.audio.speech.create({
  model: 'tts-1',
  voice: 'alloy',
  input: 'This will be spoken quickly.',
  speed: 1.5,
});
```

### Output formats

```typescript
// MP3 (most compatible, default)
const mp3 = await openai.audio.speech.create({
  model: 'tts-1',
  voice: 'alloy',
  input: 'Hello',
  response_format: 'mp3',
});

// Opus (best for web streaming)
const opus = await openai.audio.speech.create({
  model: 'tts-1',
  voice: 'alloy',
  input: 'Hello',
  response_format: 'opus',
});

// WAV (uncompressed, highest quality)
const wav = await openai.audio.speech.create({
  model: 'tts-1',
  voice: 'alloy',
  input: 'Hello',
  response_format: 'wav',
});
```

### Streaming TTS

```typescript
const response = await fetch('https://api.openai.com/v1/audio/speech', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4o-mini-tts',
    voice: 'nova',
    input: 'Long text to be streamed as audio chunks...',
    stream_format: 'sse', // Server-Sent Events
  }),
});

// Stream audio chunks
const reader = response.body!.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  processAudioChunk(value); // Your playback/buffering handler
}
```

`stream_format: "sse"` is supported by `gpt-4o-mini-tts`; `tts-1` and `tts-1-hd` do not support it.
## Moderation

**Endpoint:** `POST /v1/moderations`

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

const moderation = await openai.moderations.create({
  model: 'omni-moderation-latest',
  input: 'I want to hurt someone.',
});

console.log(moderation.results[0].flagged);
console.log(moderation.results[0].categories);
console.log(moderation.results[0].category_scores);
```

With fetch:

```typescript
const response = await fetch('https://api.openai.com/v1/moderations', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'omni-moderation-latest',
    input: 'I want to hurt someone.',
  }),
});

const data = await response.json();
const isFlagged = data.results[0].flagged;
```

**Response:**

```typescript
{
  id: "modr-ABC123",
  model: "omni-moderation-latest",
  results: [
    {
      flagged: true,
      categories: {
        sexual: false,
        hate: false,
        harassment: true,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": true,
        violence: true
      },
      category_scores: {
        sexual: 0.000011726,
        hate: 0.2270666,
        harassment: 0.5215635,
        "self-harm": 0.0000123,
        "sexual/minors": 0.0000001,
        "hate/threatening": 0.0123456,
        "violence/graphic": 0.0123456,
        "self-harm/intent": 0.0000123,
        "self-harm/instructions": 0.0000123,
        "harassment/threatening": 0.4123456,
        violence: 0.9971135
      }
    }
  ]
}
```

### Custom thresholds

```typescript
const thresholds = {
  sexual: 0.5,
  hate: 0.4,
  harassment: 0.5,
  'self-harm': 0.3,
  'sexual/minors': 0.1, // Lower threshold for child safety
  'hate/threatening': 0.3,
  'violence/graphic': 0.5,
  'self-harm/intent': 0.2,
  'self-harm/instructions': 0.2,
  'harassment/threatening': 0.3,
  violence: 0.5,
};

function isFlagged(result: ModerationResult): boolean {
  return Object.entries(result.category_scores).some(
    ([category, score]) => score > thresholds[category]
  );
}
```

### Batch moderation

```typescript
const moderation = await openai.moderations.create({
  model: 'omni-moderation-latest',
  input: [
    'First text to moderate',
    'Second text to moderate',
    'Third text to moderate',
  ],
});

moderation.results.forEach((result, index) => {
  console.log(`Input ${index}: ${result.flagged ? 'FLAGGED' : 'OK'}`);
  if (result.flagged) {
    console.log('Categories:', Object.keys(result.categories).filter(
      cat => result.categories[cat]
    ));
  }
});
```

### Category-specific handling

```typescript
async function moderateContent(text: string) {
  const moderation = await openai.moderations.create({
    model: 'omni-moderation-latest',
    input: text,
  });

  const result = moderation.results[0];

  // Check specific categories
  if (result.categories['sexual/minors']) {
    throw new Error('Content violates child safety policy');
  }
  if (result.categories.violence && result.category_scores.violence > 0.7) {
    throw new Error('Content contains high-confidence violence');
  }
  if (result.categories['self-harm/intent']) {
    // Flag for human review
    await flagForReview(text, 'self-harm-intent');
  }

  return result.flagged;
}
```

### Production pattern

```typescript
async function moderateUserContent(userInput: string) {
  try {
    const moderation = await openai.moderations.create({
      model: 'omni-moderation-latest',
      input: userInput,
    });

    const result = moderation.results[0];

    // Immediate block for severe categories
    const severeCategories = [
      'sexual/minors',
      'self-harm/intent',
      'hate/threatening',
      'harassment/threatening',
    ];
    for (const category of severeCategories) {
      if (result.categories[category]) {
        return {
          allowed: false,
          reason: `Content flagged for: ${category}`,
          severity: 'high',
        };
      }
    }

    // Custom threshold check
    if (result.category_scores.violence > 0.8) {
      return {
        allowed: false,
        reason: 'High-confidence violence detected',
        severity: 'medium',
      };
    }

    // Allow content
    return {
      allowed: true,
      scores: result.category_scores,
    };
  } catch (error) {
    console.error('Moderation error:', error);
    // Fail closed: block on error
    return {
      allowed: false,
      reason: 'Moderation service unavailable',
      severity: 'error',
    };
  }
}
```

Always hard-block `sexual/minors` regardless of score, and treat the top-level `flagged` field as a conservative baseline on top of your own thresholds.
## Error Handling

### Rate limits (429)

```typescript
try {
  const completion = await openai.chat.completions.create({ /* ... */ });
} catch (error) {
  if (error.status === 429) {
    // Rate limit exceeded - implement exponential backoff
    console.error('Rate limit exceeded. Retry after delay.');
  }
}
```

### Invalid API key (401)

```typescript
try {
  const completion = await openai.chat.completions.create({ /* ... */ });
} catch (error) {
  if (error.status === 401) {
    console.error('Invalid API key. Check OPENAI_API_KEY environment variable.');
  }
}
```

### Retry with exponential backoff

```typescript
async function completionWithRetry(params, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await openai.chat.completions.create(params);
    } catch (error) {
      if (error.status === 429 && i < maxRetries - 1) {
        const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
}
```

### Rate limit headers

```typescript
const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ /* ... */ }),
});

console.log(response.headers.get('x-ratelimit-limit-requests'));
console.log(response.headers.get('x-ratelimit-remaining-requests'));
console.log(response.headers.get('x-ratelimit-reset-requests'));
```
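The headers above can also drive the backoff delay instead of a blind exponential schedule. A sketch, assuming the reset header carries a duration like `"1s"` or `"250ms"` (verify the format your account actually receives):

```typescript
// Prefer the server's reset hint; fall back to exponential backoff.
function retryDelayMs(headers: Headers, attempt: number): number {
  const reset = headers.get('x-ratelimit-reset-requests');
  if (reset) {
    const match = reset.match(/^([\d.]+)(ms|s)?$/);
    if (match) {
      const value = parseFloat(match[1]);
      return match[2] === 'ms' ? value : value * 1000;
    }
  }
  return Math.pow(2, attempt) * 1000; // Fallback: 1s, 2s, 4s, ...
}
```

Plug this into the retry loop above in place of the fixed `Math.pow(2, i) * 1000` delay when the response object is available.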
## Security & Cost

### Never expose API keys client-side

```typescript
// ❌ Bad - API key in browser
const apiKey = 'sk-...'; // Visible to users!

// ✅ Good - Server-side proxy
// Client calls your backend, which calls OpenAI
```

Keep the key in a server-side environment variable (`export OPENAI_API_KEY="sk-..."`) and proxy requests:

```typescript
// Your backend endpoint
app.post('/api/chat', async (req, res) => {
  const completion = await openai.chat.completions.create({
    model: 'gpt-5',
    messages: req.body.messages,
  });
  res.json(completion);
});
```

### Cap output tokens

```typescript
{
  max_tokens: 500, // Don't generate more than needed
}
```

### Cache repeated prompts

```typescript
const cache = new Map();

async function getCachedCompletion(prompt) {
  if (cache.has(prompt)) {
    return cache.get(prompt);
  }
  const completion = await openai.chat.completions.create({
    model: 'gpt-5',
    messages: [{ role: 'user', content: prompt }],
  });
  cache.set(prompt, completion);
  return completion;
}
```

### Fail gracefully

```typescript
try {
  const completion = await openai.chat.completions.create({ /* ... */ });
} catch (error) {
  console.error('OpenAI API error:', error);
  // User-friendly message
  return {
    error: 'Sorry, I encountered an issue. Please try again.',
  };
}
```

## When to use openai-api vs openai-responses

| Use Case | Use openai-api | Use openai-responses |
|---|---|---|
| Simple chat | ✅ | ❌ |
| RAG/embeddings | ✅ | ❌ |
| Image generation | ✅ | ✅ |
| Audio processing | ✅ | ❌ |
| Agentic workflows | ❌ | ✅ |
| Multi-turn reasoning | ❌ | ✅ |
| Background tasks | ❌ | ✅ |
| Custom tools only | ✅ | ❌ |
| Built-in + custom tools | ❌ | ✅ |

## Reference

Install: `npm install openai@6.7.0`

```typescript
import OpenAI from 'openai';
import type { ChatCompletionMessage, ChatCompletionCreateParams } from 'openai/resources/chat';
```

Set `OPENAI_API_KEY=sk-...` in your environment.

Research log: `/planning/research-logs/openai-api.md`