# grepai-init

Initialize GrepAI in a project. Use this skill when setting up GrepAI for the first time in a codebase.

## Installation

```shell
npx skill4agent add yoanbernabeu/grepai-skills grepai-init
```

## Usage

Run `grepai init` from the project root:

```shell
cd /path/to/your/project
grepai init
```

`grepai init` creates a `.grepai/` directory:

```
.grepai/
├── config.yaml    # Configuration file
├── index.gob      # Vector index (created by watch)
└── symbols.gob    # Symbol index for trace (created by watch)
```

The generated `config.yaml` contains the defaults:

```yaml
version: 1
embedder:
  provider: ollama
  model: nomic-embed-text
  endpoint: http://localhost:11434
store:
  backend: gob
chunking:
  size: 512
  overlap: 50
watch:
  debounce_ms: 500
trace:
  mode: fast
  enabled_languages:
    - .go
    - .js
    - .ts
    - .jsx
    - .tsx
    - .py
    - .php
    - .c
    - .h
    - .cpp
    - .hpp
    - .cc
    - .cxx
    - .rs
    - .zig
    - .cs
    - .pas
    - .dpr
ignore:
  - .git
  - .grepai
  - node_modules
  - vendor
  - target
  - __pycache__
  - dist
  - build
```

## Configuration Reference

### Embedder

| Setting | Default | Purpose |
|---|---|---|
| `provider` | `ollama` | Local embedding generation |
| `model` | `nomic-embed-text` | 768-dimension model |
| `endpoint` | `http://localhost:11434` | Ollama API URL |

### Store

| Setting | Default | Purpose |
|---|---|---|
| `backend` | `gob` | Local file storage |

### Chunking

| Setting | Default | Purpose |
|---|---|---|
| `size` | `512` | Tokens per chunk |
| `overlap` | `50` | Overlap for context |

### Watch

| Setting | Default | Purpose |
|---|---|---|
| `debounce_ms` | `500` | Wait time before re-indexing |
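These values can be read back out of the generated file with ordinary shell tools. A minimal sketch using `awk` (plain shell, not a GrepAI command; the inline sample config here mirrors the defaults above purely for illustration):

```shell
# For illustration, write a sample config (mirroring the defaults above)
# to a temporary file, then read single values back with awk.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
version: 1
embedder:
  provider: ollama
  model: nomic-embed-text
  endpoint: http://localhost:11434
chunking:
  size: 512
  overlap: 50
EOF

# awk splits on whitespace, so $1 is the field name and $2 its value.
endpoint=$(awk '$1 == "endpoint:" {print $2}' "$cfg")
chunk_size=$(awk '$1 == "size:" {print $2}' "$cfg")
echo "endpoint=$endpoint chunk_size=$chunk_size"
rm -f "$cfg"
```

In a real project, point `awk` at `.grepai/config.yaml` directly instead of a temporary file.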
## Customization

By default, GrepAI skips `.git`, `.grepai`, `node_modules`, `vendor`, `target`, `dist`, `build`, and `__pycache__`. All settings live in `.grepai/config.yaml`; edit that file to change the defaults.

To use OpenAI embeddings instead of Ollama:

```yaml
embedder:
  provider: openai
  model: text-embedding-3-small
  api_key: ${OPENAI_API_KEY}
```

To store vectors in PostgreSQL instead of a local GOB file:

```yaml
store:
  backend: postgres
  postgres:
    dsn: postgres://user:pass@localhost:5432/grepai
```

To ignore additional paths and patterns:

```yaml
ignore:
  - .git
  - .grepai
  - node_modules
  - "*.min.js"
  - "*.bundle.js"
  - coverage/
  - .nyc_output/
```

## Monorepos

For a monorepo, initialize once at the repository root:

```shell
cd /path/to/monorepo
grepai init
```
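Before adding a glob pattern such as `*.min.js` to the ignore list, it can help to preview what it matches. A quick check with standard `find` (not a GrepAI command, and only an approximation of GrepAI's own matching):

```shell
# Preview which files a glob pattern like "*.min.js" would cover,
# skipping node_modules to keep the listing readable.
find . -name '*.min.js' -not -path './node_modules/*'
```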
Then group the projects into a shared workspace:

```shell
grepai workspace create my-workspace
grepai workspace add my-workspace /path/to/project1
grepai workspace add my-workspace /path/to/project2
```
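With many projects, the `workspace add` calls can be generated rather than typed. A small sketch that only prints the commands so they can be reviewed before running (`my-workspace` is the placeholder workspace name from the example above):

```shell
# Print one `grepai workspace add` command per top-level directory,
# so the list can be reviewed before it is executed.
list_workspace_adds() {
  for d in */ ; do
    [ -d "$d" ] || continue
    echo "grepai workspace add my-workspace $PWD/${d%/}"
  done
}
list_workspace_adds
```

Pipe the output to `sh` once the list looks right.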
## Re-initialization

To start over with a fresh configuration:

```shell
# Remove existing config
rm -rf .grepai

# Re-initialize
grepai init
```

Then rebuild the index:

```shell
grepai watch
```
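Since `rm -rf .grepai` also deletes any customizations, a small precaution (plain shell, not a GrepAI feature) is to keep a copy of the config first and port settings back by hand afterwards:

```shell
# Keep a copy of the current config before wiping .grepai,
# so custom settings can be restored by hand after `grepai init`.
if [ -f .grepai/config.yaml ]; then
  cp .grepai/config.yaml grepai-config.backup.yaml
fi
```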
## Verification

```shell
# Check config exists
cat .grepai/config.yaml

# Check status (will show no index yet)
grepai status
```

## Troubleshooting

- Broken configuration: reset it with `rm -rf .grepai && grepai init`.
- No search results: the index is only built once `grepai watch` is running.
- Embedding errors: make sure Ollama is up (`ollama serve`).

The `.grepai/` directory contains generated indexes and should not be committed; add it to `.gitignore`:

```
# GrepAI
.grepai/
```

## Output

A successful `grepai init` prints:

```
✅ GrepAI Initialized

Config: .grepai/config.yaml

Default settings:
  - Embedder: Ollama (nomic-embed-text)
  - Storage: GOB (local file)
  - Chunking: 512 tokens, 50 overlap

Next steps:
  1. Ensure Ollama is running: ollama serve
  2. Start indexing: grepai watch
```
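The `.gitignore` entry above can be added idempotently, so running the setup twice does not duplicate it (plain shell, not a GrepAI command):

```shell
# Append the .grepai/ entry to .gitignore only if it is not already present.
touch .gitignore
grep -qxF '.grepai/' .gitignore || printf '# GrepAI\n.grepai/\n' >> .gitignore
```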