ChatGPT-style deep research strategy with problem decomposition, multi-query generation (3-5 variations per sub-question), evidence synthesis with source ranking, numbered citations, and iterative refinement. Use for complex architecture decisions, multi-domain synthesis, strategic comparisons, technology selection. Keywords: architecture, integration, best practices, strategy, recommendations, comparison.
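As a minimal sketch of the multi-query phase described above (the function name, templates, and operator choices here are illustrative assumptions, not part of the skill itself):

```python
# Illustrative sketch: expand one sub-question into several query variations
# using search operators (site:, filetype:, after:). Names are assumptions.

def generate_query_variations(sub_question: str, vendor: str = "example") -> list[str]:
    """Return operator-based query variations for a single sub-question."""
    return [
        f'site:docs.{vendor}.com "{sub_question}"',        # official docs
        f'"{sub_question}" "best practices" after:2024',   # recent best practices
        f'"{sub_question}" "limitations" OR "drawbacks"',  # counter-evidence
        f'filetype:pdf "{sub_question}" guide',            # in-depth guides
    ]

queries = generate_query_variations("Salesforce REST API rate limits", vendor="salesforce")
print(len(queries))  # 4 variations for this sub-question
```

Repeating this over 3-5 sub-questions yields the 15-25 queries the strategy executes per research run.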
`npx skill4agent add thomasholknielsen/claude-code-config websearch-deep`

`deep-research-skill-was-executed.md`

# Deep Research Skill Execution Verification
**Skill**: websearch-deep
**Executed**: {timestamp}
**Research Question**: {the question you researched}
**Mode**: Deep (6-phase methodology)
This file was created by the deep-researcher agent to verify that the websearch-deep Skill was successfully loaded and executed.
**Phases Applied**:
1. ✓ Problem Decomposition (3-5 sub-questions)
2. ✓ Multi-Query Generation (3-5 variations per sub-question)
3. ✓ Evidence Synthesis with Source Ranking (credibility/freshness/relevance)
4. ✓ Citation Transparency (numbered [1][2][3])
5. ✓ Structured Output (token-efficient template)
6. ✓ Iterative Refinement (max 5 iterations)
**Sub-Questions Generated**: {count}
**Queries Executed**: {count}
**Sources Consulted**: {count} ({authoritative_count} authoritative, {recent_count} recent)
**Iterations Performed**: {count}
**Output Format**: Token-efficient template (Executive Summary + Research Overview + Findings + Synthesis + Recommendations + Sources with URLs)

Primary: "What's the best architecture for integrating Salesforce with SQL Server in 2025?"
Sub-Questions:
1. What are Salesforce's current integration capabilities and APIs (2025)?
2. What are SQL Server's integration patterns and best practices?
3. What middleware or integration platforms are commonly used?
4. What security and compliance considerations matter?
5. What scalability and performance factors should influence choice?

Search operators: `site:domain.com`, `filetype:pdf`, `intitle:"keyword"`, `inurl:keyword`, `after:2024`, `"exact phrase"`

Sub-Q1: Salesforce Integration Capabilities
- site:salesforce.com "API" "integration" "2025"
- "Salesforce REST API" "rate limits" after:2024
- "Salesforce Bulk API 2.0" "best practices"
- filetype:pdf "Salesforce integration guide" 2025
- "Salesforce API" "breaking changes" after:2024

Official Docs: site:docs.{vendor}.com "{topic}" "architecture patterns" OR "design patterns"
Best Practices: "{topic}" "best practices" "production" after:2024
Comparisons: "{topic}" vs "{alternative}" "comparison" "pros cons"
Limitations: "{topic}" "limitations" OR "drawbacks" OR "challenges"
Recent Updates: site:{vendor}.com "{topic}" "updates" OR "changes" after:2024

Official Docs: site:docs.{framework}.com "{feature}" "guide" OR "documentation"
Community: site:stackoverflow.com "{framework}" "{feature}" "how to"
Real-World: "{framework}" "{feature}" "production" OR "case study" after:2024
Performance: "{framework}" "performance" OR "benchmarks" OR "optimization"
Ecosystem: "{framework}" "ecosystem" OR "plugins" OR "extensions" 2025

Industry Analysis: "{topic}" "market analysis" OR "industry trends" 2024 2025
Vendor Comparison: "{vendor A}" vs "{vendor B}" "comparison" "review"
ROI/Benefits: "{solution}" "ROI" OR "benefits" OR "value proposition"
Implementation: "{solution}" "implementation guide" OR "getting started"
Case Studies: "{solution}" "case study" OR "customer success" after:2024

Fundamentals: "{topic}" "introduction" OR "beginner guide" OR "explained"
Advanced: "{topic}" "advanced" OR "deep dive" OR "internals"
Tutorials: "{topic}" "tutorial" OR "step by step" after:2024
Common Mistakes: "{topic}" "common mistakes" OR "anti-patterns" OR "pitfalls"
Resources: "{topic}" "learning resources" OR "courses" OR "books" 2025

Standards: "{topic}" "{standard}" "compliance" (e.g., "GDPR", "SOC2", "HIPAA")
Security: "{topic}" "security" "best practices" OR "vulnerabilities" after:2024
Official Guidance: site:{regulator}.gov "{topic}" "guidance" OR "requirements"
Audit: "{topic}" "audit" OR "checklist" OR "certification"
Tools: "{topic}" "{compliance}" "tools" OR "automation" 2025

Benchmarks: "{topic}" "benchmark" OR "performance" "comparison" after:2024
Bottlenecks: "{topic}" "bottleneck" OR "slow" OR "performance issues"
Optimization: "{topic}" "optimization" OR "tuning" OR "best practices"
Monitoring: "{topic}" "monitoring" OR "observability" OR "metrics"
Scaling: "{topic}" "scalability" OR "high traffic" OR "production scale"

Domain filters: `site:anthropic.com`, `site:docs.{vendor}.com`

```python
# Step 1: Generate all queries first
all_queries = []
for sub_question in sub_questions:
    queries = generate_query_variations(sub_question)  # 3-5 queries per sub-Q
    all_queries.extend(queries)
# Total: 15-25 queries across all sub-questions

# Step 2: Execute in parallel batches
batch_size = 5  # Adjust 5-10 based on query complexity
for i in range(0, len(all_queries), batch_size):
    batch = all_queries[i:i + batch_size]
    # Step 3: Execute ALL queries in the batch SIMULTANEOUSLY in a single message.
    # Example: if batch = [q1, q2, q3, q4, q5], call
    #   WebSearch(q1), WebSearch(q2), WebSearch(q3), WebSearch(q4), WebSearch(q5)
    # - all five in the SAME message as parallel tool uses.
    results = execute_parallel_batch(batch)
    process_batch_results(results)  # Collect sources immediately
```

Generated 25 queries across 5 sub-questions
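The batching step above chunks the generated queries into groups of five; as a quick standalone check with placeholder query strings (the real queries come from the variation templates):

```python
# Chunk 25 placeholder queries into parallel batches of 5.
all_queries = [f"query-{n}" for n in range(25)]
batch_size = 5
batches = [all_queries[i:i + batch_size] for i in range(0, len(all_queries), batch_size)]
print(len(batches), len(batches[0]))  # 5 batches of 5 queries each
```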
Batch 1 (5 queries - executed in parallel):
WebSearch("site:salesforce.com 'API' 'integration' '2025'")
WebSearch("'Salesforce REST API' 'rate limits' after:2024")
WebSearch("'Salesforce Bulk API 2.0' 'best practices'")
WebSearch("filetype:pdf 'Salesforce integration guide' 2025")
WebSearch("'Salesforce API' 'breaking changes' after:2024")
→ Batch completes in ~1s, 5 results returned
Batch 2 (5 queries - executed in parallel):
WebSearch("'SQL Server ETL' 'best practices' 'real-time'")
WebSearch("site:docs.microsoft.com 'SQL Server' 'integration'")
...
→ Batch completes in ~1s, 5 results returned
Total: 5 batches × 1s each = ~5s (vs 25s sequential)

```python
# Collect results from all batches
all_results = []
all_results.extend(batch1_results)  # 5 results from batch 1
all_results.extend(batch2_results)  # 5 results from batch 2
all_results.extend(batch3_results)  # 5 results from batch 3
all_results.extend(batch4_results)  # 5 results from batch 4
all_results.extend(batch5_results)  # 5 results from batch 5
# Total: ~25 results (before deduplication)

# Deduplicate by URL
unique_sources = deduplicate_by_url(all_results)
# After dedup: ~15-20 unique sources (duplicates removed)

# Rank all unique sources
ranked_sources = rank_sources(unique_sources)  # Apply scoring below
```

Text with claim from [OpenAI: GPT-4](https://url "GPT-4 Technical Report (OpenAI, 2023-03-14)") and [Anthropic: Claude](https://url2 "Introducing Claude (Anthropic, 2023-03-14)"). Multiple sources: [Google DeepMind: Gemini](https://url3 "Gemini Model (Google DeepMind, 2023-12-06)"), [Meta: LLaMA](https://url4 "LLaMA Paper (Meta AI, 2023-02-24)").

Full format: [Organization: Topic](full-URL "Full Title (Publisher, YYYY-MM-DD)")
Short form: [Org: Topic]
Examples: [OpenAI: GPT-4], [OpenAI: DALL-E], [OpenAI: Whisper], [Stack Overflow: OAuth Implementation], [Medium: React Patterns]

## References
### Official Documentation
- **OpenAI: GPT-4** (2023-03-14). "GPT-4 Technical Report". https://openai.com/research/gpt-4
- **Anthropic: Claude** (2023-03-14). "Introducing Claude". https://www.anthropic.com/claude
### Blog Posts & Articles
- **Google DeepMind: Gemini** (2023-12-06). "Gemini: A Family of Highly Capable Models". https://deepmind.google/technologies/gemini
- **Meta: LLaMA** (2023-02-24). "Introducing LLaMA". https://ai.meta.com/blog/llama
### Academic Papers
- **Attention Is All You Need** (2017-06-12). Vaswani et al. https://arxiv.org/abs/1706.03762
### Community Resources
- **Stack Overflow: OAuth Implementation** (2024-08-15). https://stackoverflow.com/questions/12345

Title format: "Full Title (Publisher, YYYY-MM-DD)"

Salesforce provides three primary API types according to [Salesforce: API Docs](https://developer.salesforce.com/docs/apis "Salesforce API Documentation (Salesforce, 2025-01-15)"): REST API for standard operations, [Salesforce: Bulk API 2.0](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/ "Bulk API 2.0 Guide (Salesforce, 2024-11-20)") for large data volumes (>10k records), and [Salesforce: Streaming API](https://developer.salesforce.com/docs/atlas.en-us.api_streaming.meta/api_streaming/ "Streaming API Guide (Salesforce, 2024-10-05)") for real-time updates. Recent 2025 updates introduced enhanced rate limiting (100k requests/24hrs for Enterprise) and improved error handling, as noted in [Salesforce Blog: API Updates](https://developer.salesforce.com/blogs/2025/01/api-updates "API Error Handling Improvements (Salesforce Blog, 2025-01-10)").
## References
### Official Documentation
- **Salesforce: API Docs** (2025-01-15). "Salesforce API Documentation". https://developer.salesforce.com/docs/apis
- **Salesforce: Bulk API 2.0** (2024-11-20). "Bulk API 2.0 Developer Guide". https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/
- **Salesforce: Streaming API** (2024-10-05). "Streaming API Developer Guide". https://developer.salesforce.com/docs/atlas.en-us.api_streaming.meta/api_streaming/
### Blog Posts & Articles
- **Salesforce Blog: API Updates** (2025-01-10). "API Error Handling Improvements". https://developer.salesforce.com/blogs/2025/01/api-updates

# Deep Research: {Question}
## Executive Summary
{2-3 paragraph synthesis covering:
- What was researched and why it matters
- Key findings with citations [Org: Topic]
- Strategic recommendation with rationale}
Example length: ~150-200 words total across 2-3 paragraphs.
## Research Overview
- **Sub-Questions Analyzed**: {count}
- **Queries Executed**: {count} queries
- **Sources**: {count} total ({authoritative_count} authoritative / {auth_pct}%, {recent_count} recent / {recent_pct}%)
- **Iterations**: {count}
## Findings
### 1. {Sub-Question 1}
{Opening paragraph: What this sub-question addresses and why it's important}
{2-4 paragraphs of synthesized narrative with inline citations [1][2][3]. Each paragraph covers a specific aspect or theme. Include:
- Core concepts and definitions with citations
- How different sources approach the topic
- Practical implications and examples
- Performance characteristics or trade-offs where relevant}
**Key Insights**:
- {Insight 1: Specific, actionable statement} - {Why it matters and implications} [Org: Topic], [Org: Topic]
- {Insight 2: Specific, actionable statement} - {Why it matters and implications} [Org: Topic]
- {Insight 3: Specific, actionable statement} - {Why it matters and implications} [Org: Topic]
{Optional: **Common Patterns** or **Best Practices** subsection if relevant with 2-3 bullet points}
### 2. {Sub-Question 2}
{Repeat the same structure: Opening paragraph + 2-4 narrative paragraphs + 3-5 Key Insights}
{...continue for all sub-questions...}
## Synthesis
{2-3 paragraphs integrating findings across sub-questions. Show how the pieces fit together and what the big picture reveals.}
**Consensus** (3+ sources agree):
- {Consensus point 1 with source count} [Org: Topic], [Org: Topic], [Org: Topic]
- {Consensus point 2 with source count} [Org: Topic], [Org: Topic], [Org: Topic], [Org: Topic]
**Contradictions** *(if present)*:
- **{Topic}**: {Source A perspective [Org: Topic]} vs {Source B perspective [Org: Topic]}. {Resolution or context explaining difference}
**Research Gaps** *(if any)*:
- {Gap 1}: {What wasn't found and why it matters}
## Recommendations
### Critical (Do First)
1. **{Action}** - {Detailed rationale explaining why this is critical, what happens if not done, and expected impact} [Org: Topic], [Org: Topic]
2. **{Action}** - {Detailed rationale} [Org: Topic]
3. **{Action}** - {Detailed rationale} [Org: Topic]
### Important (Do Next)
4. **{Action}** - {Rationale with evidence and expected benefit} [Org: Topic]
5. **{Action}** - {Rationale with evidence} [Org: Topic]
6. **{Action}** - {Rationale with evidence} [Org: Topic]
### Optional (Consider)
7. **{Action}** - {Rationale and when/why you might skip this} [Org: Topic]
8. **{Action}** - {Rationale} [Org: Topic]
## References
### Official Documentation
- **{Org: Topic}** ({YYYY-MM-DD}). "{Full Title}". {Full URL}
- **{Org: Topic}** ({YYYY-MM-DD}). "{Full Title}". {Full URL}
### Blog Posts & Articles
- **{Org: Topic}** ({YYYY-MM-DD}). "{Full Title}". {Full URL}
### Academic Papers
- **{Paper Title}** ({YYYY-MM-DD}). {Authors}. {Full URL}
### Community Resources
- **{Platform: Topic}** ({YYYY-MM-DD}). {Full URL}

```python
def validate_completeness():
    gaps = []
    completeness_score = 100

    # Check citation coverage
    for i, sub_q in enumerate(sub_questions, 1):
        citation_count = count_citations(sub_q)
        if citation_count < 3:
            gaps.append(f"Sub-Q{i}: Only {citation_count} citations (need 3+)")
            completeness_score -= 10

    # Check that contradictions were explored
    if contradictions_section_empty():
        gaps.append("No contradictions explored - search for '{topic} criticisms' OR '{topic} limitations'")
        completeness_score -= 10

    # Check authoritative source coverage
    auth_sources = count_authoritative_sources()  # credibility >= 8
    if auth_sources < total_sources * 0.5:
        gaps.append(f"Only {auth_sources} authoritative sources ({round(auth_sources/total_sources*100)}%) - need 50%+")
        completeness_score -= 10

    # Check recency
    recent_sources = count_recent_sources()  # within 6 months
    if recent_sources < total_sources * 0.3:
        gaps.append(f"Only {recent_sources} recent sources ({round(recent_sources/total_sources*100)}%) - need 30%+")
        completeness_score -= 5

    # Check recommendation depth
    if critical_recommendations < 3:
        gaps.append(f"Only {critical_recommendations} Critical recommendations (need 3)")
        completeness_score -= 10

    # Check for a Research Gaps section
    if research_gaps_section_missing():
        gaps.append("Research Gaps section missing - document what wasn't found")
        completeness_score -= 5

    return completeness_score, gaps
```

```python
iteration_count = 1
completeness_score, gaps = validate_completeness()

# 🔴 MANDATORY: Always perform a minimum of 2 iterations.
# Even if iteration 1 achieves 85%+, iteration 2 improves depth.
if iteration_count < 2 or (completeness_score < 85 and iteration_count <= 5):
    # Generate targeted re-queries for each gap
    requery_list = []
    for gap in gaps:
        if "citations" in gap:
            # Need more sources for a specific sub-question
            requery_list.append(f"'{sub_question_topic}' 'detailed guide' OR 'comprehensive overview'")
        elif "contradictions" in gap:
            # Need to explore downsides/criticisms
            requery_list.append(f"'{topic}' 'criticism' OR 'limitations' OR 'downsides'")
            requery_list.append(f"'{topic}' 'vs' 'alternative' 'when not to use'")
        elif "authoritative" in gap:
            # Need more official sources
            requery_list.append(f"site:docs.{vendor}.com '{topic}' 'official'")
            requery_list.append(f"site:{vendor}.com '{topic}' 'documentation'")
        elif "recent" in gap:
            # Need more recent sources
            requery_list.append(f"'{topic}' 'updates' OR 'changes' after:2024")
            requery_list.append(f"'{topic}' '2025' OR '2024' 'latest'")

    # Execute re-queries in a parallel batch (1-5 queries).
    # Use a smaller batch size for re-queries since they're targeted.
    requery_batch = requery_list[:5]  # Up to 5 re-queries
    # Execute ALL re-queries in the batch SIMULTANEOUSLY in a single message.
    # Example: if requery_batch = [rq1, rq2, rq3], call
    #   WebSearch(rq1), WebSearch(rq2), WebSearch(rq3)
    # - all three in the SAME message as parallel tool uses.
    execute_parallel_batch(requery_batch)
    iteration_count += 1

    # Update findings incrementally
    append_iteration_findings()
    completeness_score, gaps = validate_completeness()
else:
    # Either complete (>=85%) or max iterations reached
    if completeness_score < 85:
        note_limitations_in_research_gaps_section(gaps)
    finalize_output()
```

### 1. {Sub-Question}
{Original findings from iteration 1}
**Iteration 2 Additions**:
{New findings from re-queries, with citations [Org: Topic], [Org: Topic], [Org: Topic]}
**Key Insights**:
- {Original insight 1} [Org: Topic]
- {Original insight 2} [Org: Topic]
- {NEW insight from iteration 2} [Org: Topic], [Org: Topic]

## Research Gaps
Due to iteration limit, the following gaps remain:
- {Gap 1}: {What's missing and why it matters}
- {Gap 2}: {What's missing and suggested follow-up approach}

Sub-Q1: Salesforce integration capabilities (2025)?
Sub-Q2: SQL Server integration patterns?
Sub-Q3: Middleware options?
Sub-Q4: Security considerations?
Sub-Q5: Scalability factors?

Generated 25 queries across 5 sub-questions
Batch 1 (5 queries - executed in parallel):
WebSearch("site:salesforce.com 'API' 'integration' '2025'")
WebSearch("'Salesforce REST API' 'rate limits' after:2024")
WebSearch("'Salesforce Bulk API 2.0' 'best practices'")
WebSearch("filetype:pdf 'Salesforce integration guide' 2025")
WebSearch("'Salesforce API' 'breaking changes' after:2024")
→ Batch completes in ~1s, 5 results returned
Batch 2 (5 queries - executed in parallel):
WebSearch("'SQL Server ETL' 'best practices' 'real-time'")
WebSearch("site:docs.microsoft.com 'SQL Server' 'integration'")
WebSearch("'SQL Server Always On' 'high availability'")
WebSearch("'SQL Server CDC' 'change data capture'")
WebSearch("'SQL Server linked servers' 'performance'")
→ Batch completes in ~1s, 5 results returned
Batch 3-5 (15 more queries across 3 batches):
... (middleware, security, scalability queries)
→ Each batch completes in ~1s
Execution Time:
- 5 batches × ~1s each = ~5s total
- Sequential would be: 25 queries × 1s = 25s
- Speedup: 5x faster

18 sources identified
12 ranked as authoritative (credibility ≥ 8)
3 contradictions (real-time vs batch approaches)

[1] Salesforce API Guide (Cred: 10, Fresh: 10, Rel: 10, Overall: 10.0)
[2] MuleSoft Patterns (Cred: 9, Fresh: 8, Rel: 9, Overall: 8.9)

Executive Summary: 2 paragraphs
Findings: 5 sub-sections with 28 citations
Recommendations: 3 critical, 4 important, 2 enhancements

Iteration 1: Identified gap in disaster recovery
Iteration 2: Re-queried "Salesforce SQL backup strategies"
Iteration 3: Completeness 92% → finalized

1. Scalability characteristics for e-commerce?
2. Team size and DevOps implications?
3. Transaction patterns differences?
4. Deployment complexity trade-offs?
5. Real-world e-commerce case studies?

"microservices e-commerce" "scalability" after:2024
"monolith vs microservices" "team size" "best practices"
site:aws.amazon.com "e-commerce architecture" "patterns"

15 sources (10 authoritative)
Consensus: Team size <20 → monolith, >50 → microservices
Contradiction: Database approach (shared vs distributed).

`.agent/Session-{name}/context/research-web-analyst.md`
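The Cred/Fresh/Rel numbers in the source-ranking example above combine into a single Overall score. One possible weighting that reproduces those example values (the exact weights are an assumption; the skill does not fix them, and other weightings fit the same numbers):

```python
# Weighted source-ranking sketch. The 0.45/0.10/0.45 weights are an
# assumption chosen to match the Overall scores shown in the example.

def overall_score(credibility: float, freshness: float, relevance: float) -> float:
    return round(0.45 * credibility + 0.10 * freshness + 0.45 * relevance, 1)

sources = [
    {"name": "Salesforce API Guide", "cred": 10, "fresh": 10, "rel": 10},
    {"name": "MuleSoft Patterns", "cred": 9, "fresh": 8, "rel": 9},
]
ranked = sorted(
    sources,
    key=lambda s: overall_score(s["cred"], s["fresh"], s["rel"]),
    reverse=True,
)
for s in ranked:
    print(s["name"], overall_score(s["cred"], s["fresh"], s["rel"]))
```

Weighting credibility and relevance over freshness matches the skill's emphasis on authoritative, on-topic sources while still rewarding recent material.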