Use Neo4j GenAI Plugin ai.text.* functions and procedures for in-Cypher embedding generation, text completion, structured output, chat, tokenization, and batch ingestion. Covers ai.text.embed(), ai.text.embedBatch(), ai.text.completion(), ai.text.structuredCompletion(), ai.text.aggregateCompletion(), ai.text.chat(), ai.text.tokenCount(), ai.text.chunkByTokenLimit(), and provider configuration for OpenAI, Azure OpenAI, VertexAI, and Amazon Bedrock. Requires CYPHER 25. Replaces deprecated genai.vector.encode(). Use when writing pure-Cypher GraphRAG, embedding nodes in-graph, generating structured maps from prompts, or calling LLMs inside Cypher queries. Does NOT handle neo4j-graphrag Python library pipelines — use neo4j-graphrag-skill. Does NOT handle vector index creation/search — use neo4j-vector-index-skill.
Install:

```shell
npx skill4agent add neo4j-contrib/neo4j-skills neo4j-genai-plugin-skill
```

Related skills: `neo4j-graphrag-skill`, `neo4j-vector-index-skill`, `neo4j-gds-skill`, `neo4j-cypher-skill`.

All `ai.*` functions require Cypher 25, enabled either way:

```cypher
// Per-query prefix (self-managed, no admin rights needed):
CYPHER 25 MATCH (n:Chunk) ...

// Per-database default (admin; applies to all sessions):
ALTER DATABASE neo4j SET DEFAULT LANGUAGE CYPHER 25
```

Load the GenAI plugin by placing its jar in `plugins/`, or in Docker via `--env NEO4J_PLUGINS='["genai"]'`.

Every `ai.text.*` call takes a provider string plus a `configuration :: MAP`:

| Provider string | Required keys | Notes |
|---|---|---|
| `openai` | `token`, `model` | |
| `azure-openai` | see `ai.text.embed.providers()` | |
| `vertexai` | see `ai.text.embed.providers()` | |
| `bedrock-titan` | see `ai.text.embed.providers()` | Embedding only |
| other Bedrock models | see `ai.text.embed.providers()` | Completion only |
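The provider/config split can be checked client-side before a query ever reaches the server. A minimal sketch (hypothetical helper; the required keys assume the `'openai'` examples used throughout this document):

```python
# Validate an ai.text.* configuration map client-side before sending the query.
# Hypothetical helper; required keys here reflect only the 'openai' examples.
REQUIRED_KEYS = {"openai": {"token", "model"}}

def validate_config(provider: str, config: dict) -> list[str]:
    """Return a list of problems; an empty list means the config looks usable."""
    problems = []
    if provider != provider.lower():
        # Provider strings are case-sensitive and lowercase.
        problems.append(f"provider strings are lowercase: use {provider.lower()!r}")
    missing = REQUIRED_KEYS.get(provider.lower(), set()) - config.keys()
    problems.extend(f"missing required key: {k}" for k in sorted(missing))
    return problems
```

For example, `validate_config("OpenAI", {"token": "..."})` flags both the casing and the missing `model`.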
Provider-specific options go in `vendorOptions :: MAP`, e.g. `{ dimensions: 1024 }`. Pass secrets as parameters (`$param`), never as literals.

Embed one property per node, batching writes:

```cypher
CYPHER 25
MATCH (c:Chunk)
WHERE c.embedding IS NULL
WITH c
CALL {
  WITH c
  SET c.embedding = ai.text.embed(c.text, 'openai', {
    token: $openaiKey,
    model: 'text-embedding-3-small'
  })
} IN TRANSACTIONS OF 500 ROWS
```

`ai.text.embed()` returns a `VECTOR`, ready for vector indexes.
Embed many texts in one provider request with the batch procedure:

```cypher
CYPHER 25
MATCH (c:Chunk) WHERE c.embedding IS NULL
WITH collect(c.text) AS texts
CALL ai.text.embedBatch(texts, 'openai', { token: $openaiKey, model: 'text-embedding-3-small' })
YIELD index, resource, vector
MATCH (c:Chunk {text: resource})
SET c.embedding = vector
```

Signature: `CALL ai.text.embedBatch(resource, provider, config) YIELD index, resource, vector` — one yielded row per input, with `index` giving the input's position.
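Because `embedBatch` yields `index` alongside each `resource`, a client can pair results back to the inputs positionally instead of re-matching on text, which can collide when two chunks share identical text. A sketch of that pairing, with fake vectors standing in for the provider call:

```python
def pair_by_index(texts, yielded_rows):
    """yielded_rows: (index, resource, vector) tuples, possibly out of order.

    Returns vectors reordered to align with the input texts.
    """
    paired = {}
    for index, resource, vector in yielded_rows:
        assert texts[index] == resource  # index is positional into the input list
        paired[index] = vector
    return [paired[i] for i in range(len(texts))]

texts = ["alpha", "beta", "alpha"]  # duplicate text: index keeps them distinct
rows = [(2, "alpha", [0.3]), (0, "alpha", [0.1]), (1, "beta", [0.2])]
vectors = pair_by_index(texts, rows)
```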
List available providers and their configuration requirements:

```cypher
CYPHER 25
CALL ai.text.embed.providers()
YIELD name, requiredConfigType, optionalConfigType, defaultConfig
RETURN name, requiredConfigType
```
Free-text completion returns a `STRING`:

```cypher
CYPHER 25
RETURN ai.text.completion(
  'Summarize: ' + $text,
  'openai',
  { token: $openaiKey, model: 'gpt-4o-mini' }
) AS summary
```
`ai.text.aggregateCompletion()` collects one value per row and sends them all to the model with a single instruction; each row's value is coerced with `toString()`:

```cypher
CYPHER 25
MATCH (c:Chunk)-[:PART_OF]->(a:Article {id: $articleId})
RETURN ai.text.aggregateCompletion(
  c.text,
  'Summarize the following article chunks in 3 sentences',
  'openai',
  { token: $openaiKey, model: 'gpt-4o-mini' }
) AS summary
```
A pure-Cypher GraphRAG query: embed the question, vector-search, expand through the graph, then complete:

```cypher
CYPHER 25
WITH ai.text.embed($question, 'openai', { token: $openaiKey, model: 'text-embedding-3-small' }) AS qEmbedding
CALL db.index.vector.queryNodes('chunk_embedding', 10, qEmbedding) YIELD node AS chunk, score
MATCH (chunk)<-[:HAS_CHUNK]-(article:Article)
OPTIONAL MATCH path = shortestPath((article)-[*..3]-(other:Article))
WITH chunk, article, collect(DISTINCT other.title) AS related, score
ORDER BY score DESC LIMIT 5
WITH collect(chunk.text + '\n[Source: ' + article.title + ']') AS context, $question AS question
RETURN ai.text.completion(
  'Answer based on context:\n' + reduce(s = '', c IN context | s + c + '\n') + '\nQuestion: ' + question,
  'openai',
  { token: $openaiKey, model: 'gpt-4o-mini' }
) AS answer
```
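The `reduce(...)` above is plain string concatenation over the collected context, so the same prompt can be assembled client-side when you retrieve with Cypher but call the LLM elsewhere. A sketch (prompt format copied from the query; the helper name is an assumption):

```python
def build_prompt(context_chunks: list[str], question: str) -> str:
    # Mirrors the Cypher expression:
    # 'Answer based on context:\n' + reduce(s='', c IN context | s + c + '\n')
    # + '\nQuestion: ' + question
    joined = "".join(c + "\n" for c in context_chunks)
    return "Answer based on context:\n" + joined + "\nQuestion: " + question

prompt = build_prompt(
    ["fact one\n[Source: A]", "fact two\n[Source: B]"],
    "What happened?",
)
```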
`ai.text.structuredCompletion()` takes a JSON-Schema map and returns a `MAP` you can use directly in `SET`:

```cypher
CYPHER 25
MATCH (p:Product {id: $productId})
WITH p,
  ai.text.structuredCompletion(
    'Extract key attributes from: ' + p.description,
    {
      type: 'object',
      properties: {
        category: { type: 'string' },
        tags: { type: 'array', items: { type: 'string' } },
        priceRange: { type: 'string', enum: ['budget', 'mid', 'premium'] }
      },
      required: ['category', 'tags', 'priceRange'],
      additionalProperties: false
    },
    'openai',
    { token: $openaiKey, model: 'gpt-4o-mini' }
  ) AS extracted
SET p.category = extracted.category,
    p.priceRange = extracted.priceRange
WITH p, extracted.tags AS tags
UNWIND tags AS tag
MERGE (t:Tag {name: tag})
MERGE (p)-[:TAGGED]->(t)
```
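Even with `additionalProperties: false` in the schema, it can be worth verifying the returned map client-side before writing it onto nodes. A hand-rolled sketch for the product schema above (not a full JSON-Schema validator; the helper name is an assumption):

```python
# Minimal check mirroring the 'required' and 'enum' constraints of the schema.
SCHEMA = {
    "required": ["category", "tags", "priceRange"],
    "enums": {"priceRange": ["budget", "mid", "premium"]},
}

def check_extraction(extracted: dict) -> list[str]:
    """Return a list of violations; empty means the map is safe to SET."""
    errors = [f"missing: {k}" for k in SCHEMA["required"] if k not in extracted]
    for field, allowed in SCHEMA["enums"].items():
        if field in extracted and extracted[field] not in allowed:
            errors.append(f"{field} not in {allowed}")
    return errors
```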
`ai.text.aggregateStructuredCompletion()` combines per-row values into one structured result:

```cypher
CYPHER 25
MATCH (:User {id: $userId})-[:ORDERED]->(o:Order)-[:CONTAINS]->(p:Product)
RETURN ai.text.aggregateStructuredCompletion(
  p.name + ': ' + p.category,
  'Build a shopping profile for this user',
  {
    type: 'object',
    properties: {
      preferredCategories: { type: 'array', items: { type: 'string' } },
      spendingTier: { type: 'string', enum: ['economy', 'standard', 'premium'] }
    },
    required: ['preferredCategories', 'spendingTier']
  },
  'openai',
  { token: $openaiKey, model: 'gpt-4o-mini' }
) AS profile
```
```cypher
// Start new conversation (chatId = null → new session)
CYPHER 25
WITH ai.text.chat(
  'Hello, who are you?',
  null,
  'openai',
  { token: $openaiKey, model: 'gpt-4o-mini' }
) AS result
RETURN result.message AS reply, result.chatId AS sessionId
```
```cypher
// Continue conversation (pass returned chatId)
CYPHER 25
WITH ai.text.chat(
  'What did I just ask you?',
  $chatId,
  'openai',
  { token: $openaiKey, model: 'gpt-4o-mini' }
) AS result
RETURN result.message AS reply, result.chatId AS sessionId
```

`ai.text.chat()` returns `MAP { message: STRING, chatId: STRING }`; keep the `chatId` to continue the session.
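From the client's point of view, the only conversational state is the returned `chatId`, so a caller has to thread it through its own session handling. A sketch with a stubbed chat function (in practice the stub would run the Cypher above through a driver; class and stub names are assumptions):

```python
class ChatSession:
    """Threads chatId through successive calls, as the two queries above do."""

    def __init__(self, chat_fn):
        self.chat_fn = chat_fn   # returns {"message": ..., "chatId": ...}
        self.chat_id = None      # null on first call → new session

    def ask(self, prompt: str) -> str:
        result = self.chat_fn(prompt, self.chat_id)
        self.chat_id = result["chatId"]  # persist for the next turn
        return result["message"]

# Stub provider: echoes the prompt and hands out a fixed session id.
def fake_chat(prompt, chat_id):
    return {"message": f"echo:{prompt}", "chatId": chat_id or "sess-1"}

session = ChatSession(fake_chat)
first = session.ask("Hello")
second = session.ask("Again")
```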
```cypher
// Count tokens before sending to LLM
CYPHER 25
RETURN ai.text.tokenCount($text, 'openai', { token: $openaiKey, model: 'gpt-4o-mini' }) AS tokenCount
```
```cypher
// Chunk text by token limit (no external dependencies)
CYPHER 25
UNWIND ai.text.chunkByTokenLimit($longText, 512, 'gpt-4', 50) AS chunk
MERGE (c:Chunk { text: chunk })
```

Signature: `ai.text.chunkByTokenLimit(input, limit, model='gpt-4', overlap=0)` — `model` selects the tokenizer, `overlap` is the number of tokens shared between consecutive chunks.

Writes are idempotent: re-running `SET node.embedding = ai.text.embed(...)` or a `SET` from `ai.text.structuredCompletion(...)` simply overwrites. Check remaining work with `MATCH (c:Chunk) WHERE c.embedding IS NULL RETURN count(c)`, and wrap bulk updates in `CALL { ... } IN TRANSACTIONS OF 500 ROWS`.

Migrating from the deprecated `genai.vector.*` functions:

| Old function | Replacement |
|---|---|
| `genai.vector.encode()` | `ai.text.embed()` |
| `genai.vector.encodeBatch()` | `ai.text.embedBatch()` |
| `genai.vector.listEncodingProviders()` | `ai.text.embed.providers()` |
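The `overlap` behavior of `chunkByTokenLimit` can be illustrated with a simplified client-side version that treats whitespace-separated words as tokens (the real function tokenizes with the named model's tokenizer, so counts will differ):

```python
def chunk_by_token_limit(text: str, limit: int, overlap: int = 0) -> list[str]:
    """Simplified sketch: words stand in for model tokens."""
    tokens = text.split()
    step = limit - overlap
    assert step > 0, "overlap must be smaller than limit"
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + limit]
        chunks.append(" ".join(window))
        if start + limit >= len(tokens):
            break  # last window already reaches the end of the input
    return chunks

# With limit=3 and overlap=1, each chunk repeats the previous chunk's last token.
parts = chunk_by_token_limit("a b c d e f g", limit=3, overlap=1)
```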
| Error | Cause | Fix |
|---|---|---|
| Unknown function `ai.text.*` | Missing CYPHER 25 prefix OR plugin not installed | Add the `CYPHER 25` prefix; install the GenAI plugin |
| Function unavailable after upgrade | Using a Neo4j version without `ai.text.*` | Upgrade Neo4j; ensure GenAI plugin loaded |
| Invalid configuration map | Provider config map incomplete | Check required keys for provider (see table above) |
| Provider request rejected | Wrong model name or provider auth failed | Test with a minimal single-row query first |
| Unknown provider | Provider string typo (case-sensitive, lowercase) | Use the exact lowercase string, e.g. `'openai'` |
| Chat call fails on provider | Chat only supported on openai/azure-openai | Switch to openai/azure-openai for chat |
Key reminders:

- Always prefix queries with `CYPHER 25`.
- Pass secrets as parameters (`$param`) and set an explicit `model`.
- Provider strings are lowercase: `'openai'`, `'vertexai'`, `'bedrock-titan'`.
- Batch bulk writes with `IN TRANSACTIONS OF 500 ROWS`.
- Replace deprecated `genai.vector.encode()` with `ai.text.embed()`.
- Keep the returned `chatId` to continue a chat session.
- Use `additionalProperties: false` for strict structured output.