Prompt Details
Model
Chat - (gpt-5.2)
Token size
622
Example input
[PDFs / Docs / Notion / Website / APIs / Mixed]: PDFs (Compliance Manuals), Notion (Internal SOPs), Website (Marketing Pages), API (Pricing & Inventory), Support Docs
[Small / Medium / Large]: Large
[Static / Weekly / Real-Time]: Mixed (Website weekly, API real-time, PDFs static, Notion weekly)
[Simple / Multi-step / Analytical]: Multi-step & Analytical
[Real-time / Moderate / Async]: Moderate (2–4 seconds acceptable)
Example output
1️⃣ Source Classification Layer
A. Reliability Tiering
Tier 1 (Authoritative, Structured, High Trust)
API (Pricing & Inventory → Real-time)
Compliance PDFs (Legal-approved)
Tier 2 (Operational Knowledge)
Notion SOPs (Team-managed)
Tier 3 (Contextual / Marketing)
Website pages
Support docs
B. Freshness Index
Real-time: API
Weekly: Notion, Website
Static: PDFs
C. Namespace Design
compliance.*
operations.*
marketing.*
realtime.inventory.*
support.*
Each namespace enforces metadata tagging:
source_type
timestamp
version
authority_score
freshness_score
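The metadata scheme above can be sketched as a small record type. The field names follow the list above; the `ChunkMetadata` class, example values, and 0.0–1.0 score ranges are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ChunkMetadata:
    # Metadata attached to every indexed chunk, per the scheme above.
    namespace: str          # e.g. "compliance.policies" or "realtime.inventory.skus"
    source_type: str        # "pdf" | "notion" | "website" | "api" | "support"
    timestamp: float        # Unix epoch of last source update
    version: str            # document or API schema version
    authority_score: float  # 0.0-1.0, fixed per reliability tier
    freshness_score: float  # 0.0-1.0, decays with age

meta = ChunkMetadata(
    namespace="compliance.policies",
    source_type="pdf",
    timestamp=1700000000.0,
    version="2024-01",
    authority_score=0.90,
    freshness_score=1.0,
)
```

Tagging every chunk this way is what makes the routing, ranking, and conflict-resolution layers below possible without re-inspecting source systems at query time.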
---
2️⃣ Query Routing Logic
Step 1: Query Intent Detection
1. Pricing / availability → Route to realtime.inventory.*
2. Policy / regulation → Route to compliance.*
3. How-to / process → Route to operations.*
4. Promotional / feature description → Route to marketing.*
5. Multi-domain query → Trigger composite routing
Priority Rules
Real-time queries always call API first
Compliance overrides marketing if conflict detected
Operational SOP overrides support articles
Parallel vs Sequential Retrieval
Parallel:
When multi-step query spans multiple namespaces
Sequential:
When API confirmation is required before deeper reasoning
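The intent-detection and composite-routing steps above can be sketched as follows. Keyword matching stands in for a real trained intent classifier, and the keyword table and default namespace are illustrative assumptions:

```python
# Maps intent keywords to target namespaces, per the routing table above.
ROUTES = {
    "pricing": "realtime.inventory.*",
    "availability": "realtime.inventory.*",
    "policy": "compliance.*",
    "regulation": "compliance.*",
    "how-to": "operations.*",
    "process": "operations.*",
    "feature": "marketing.*",
    "promotion": "marketing.*",
}

def route(query: str) -> list[str]:
    """Return matched namespaces; more than one match = composite routing."""
    q = query.lower()
    hits = sorted({ns for kw, ns in ROUTES.items() if kw in q})
    return hits or ["operations.*"]  # assumed default namespace

print(route("What is the refund policy and current pricing?"))
# multi-domain query -> two namespaces -> composite (parallel) retrieval
```

When `route` returns multiple namespaces, the parallel/sequential rule above decides execution order: parallel unless one namespace (typically the API) must be confirmed first.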
---
3️⃣ Cross-Source Relevance Scoring
Hybrid Ranking Formula
Final Score =
(Semantic Similarity × 0.5) +
(Authority Score × 0.2) +
(Freshness Score × 0.2) +
(Context Overlap Score × 0.1)
Confidence Weighting
API: 0.95 base confidence
Compliance PDFs: 0.90
Notion SOPs: 0.80
Website: 0.65
Support Docs: 0.60
Context Overlap Detection
Cross-document entity matching
Temporal alignment check
Version consistency comparison
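The ranking formula and base confidences above combine into a single scoring function. A minimal sketch, assuming all inputs are normalized to [0, 1]:

```python
# Base confidence per source class, from the table above.
BASE_CONFIDENCE = {
    "api": 0.95, "compliance_pdf": 0.90, "notion_sop": 0.80,
    "website": 0.65, "support_doc": 0.60,
}

def final_score(semantic: float, authority: float,
                freshness: float, overlap: float) -> float:
    """Hybrid ranking formula; weights follow the formula above."""
    return 0.5 * semantic + 0.2 * authority + 0.2 * freshness + 0.1 * overlap

# Example: a fresh compliance chunk with a strong semantic match.
score = final_score(semantic=0.9,
                    authority=BASE_CONFIDENCE["compliance_pdf"],
                    freshness=1.0, overlap=0.5)
print(round(score, 3))  # 0.5*0.9 + 0.2*0.9 + 0.2*1.0 + 0.1*0.5 = 0.88
```

Because semantic similarity carries half the weight, a low-authority source can still win on a strongly matching chunk; the conflict-resolution layer below catches the cases where that is dangerous.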
---
4️⃣ Conflict Resolution Framework
Conflict Handling Logic
1. Detect semantic contradiction
2. Compare authority_score
3. Compare freshness_score
4. Apply tie-break rule:
Regulatory > Operational > Marketing
Escalation Triggers
Equal authority and conflicting timestamps
High business-criticality query + low confidence
Repeated conflict pattern within 24 hours
Escalation Path:
Flag for human validation
Log to conflict monitoring system
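Steps 1–4 above reduce to a small comparison cascade. In this sketch documents are plain dicts, semantic contradiction detection is assumed to happen upstream, and returning `None` stands for the human-escalation path:

```python
from typing import Optional

# Tie-break ranking: Regulatory > Operational > Marketing, per the rule above.
TIE_BREAK = {"regulatory": 3, "operational": 2, "marketing": 1}

def resolve(a: dict, b: dict) -> Optional[dict]:
    """Pick the winning document for a detected contradiction, or None to escalate."""
    if a["authority_score"] != b["authority_score"]:
        return a if a["authority_score"] > b["authority_score"] else b
    if a["freshness_score"] != b["freshness_score"]:
        return a if a["freshness_score"] > b["freshness_score"] else b
    ra, rb = TIE_BREAK[a["domain"]], TIE_BREAK[b["domain"]]
    if ra != rb:
        return a if ra > rb else b
    return None  # equal on every rule -> flag for human validation

doc_a = {"authority_score": 0.9, "freshness_score": 0.7, "domain": "regulatory"}
doc_b = {"authority_score": 0.9, "freshness_score": 0.7, "domain": "marketing"}
print(resolve(doc_a, doc_b)["domain"])  # regulatory
```

Every `None` result (and every resolution on a high-criticality query) should also be logged to the conflict monitoring system so repeated patterns trigger escalation.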
---
5️⃣ Latency Optimization Strategy
Source Batching
Pre-batch compliance + operations embeddings
API called only when intent detected
Cache Strategy
Cache:
Frequent operational queries (TTL 24h)
Compliance answers (TTL 7 days)
No cache for real-time inventory
Fallback Thresholds
If API latency > 1.5 s → Return cached last-known value with disclaimer
If secondary retrieval > 3 s → Return partial answer + async enrichment
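The API fallback threshold above can be sketched as follows. `fetch_fn`, the cache shape, and the disclaimer wording are illustrative assumptions; a production version would enforce the 1.5 s budget with an async timeout that actually cancels the call rather than catching an exception after the fact:

```python
def fetch_with_fallback(fetch_fn, cache: dict, key: str):
    """Try the live API; on timeout/error, serve last-known value + disclaimer."""
    try:
        value = fetch_fn(key)   # live API call (raises on timeout or error)
        cache[key] = value      # refresh the last-known value
        return value, None
    except Exception:
        return cache.get(key), "Showing last-known value; live data unavailable."

def flaky_api(key):
    # Stand-in for an API call that exceeded its 1.5 s budget.
    raise TimeoutError("API exceeded latency budget")

cache = {"sku-42": 17}
value, note = fetch_with_fallback(flaky_api, cache, "sku-42")
print(value, note)  # 17 Showing last-known value; live data unavailable.
```

The same pattern extends to the 3 s secondary-retrieval threshold: return the partial answer immediately and enqueue the slow retrieval for async enrichment.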
---
6️⃣ Freshness & Update Handling
Real-Time Handling
API always live call
Metadata stamped at response time
Incremental Indexing Workflow
1. Detect document change
2. Chunk-level diffing
3. Re-embed only changed segments
4. Update namespace index
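The chunk-level diffing step above can be sketched with content hashing: only chunks whose hash is absent from the previous index get re-embedded. The `diff_chunks` helper and the hash-keyed index layout are illustrative assumptions:

```python
import hashlib

def diff_chunks(old_index: dict[str, str], new_chunks: list[str]) -> dict[str, str]:
    """Return {chunk_hash: text} for chunks that are new or edited
    relative to the previous index run; only these need re-embedding."""
    changed = {}
    for text in new_chunks:
        h = hashlib.sha256(text.encode()).hexdigest()
        if h not in old_index:
            changed[h] = text
    return changed

# Previous run indexed one chunk; the new document edits a second one.
old_index = {hashlib.sha256(b"step one").hexdigest(): "step one"}
changed = diff_chunks(old_index, ["step one", "step two (edited)"])
print(list(changed.values()))  # only the edited chunk is re-embedded
```

This keeps weekly Notion and website refreshes cheap: unchanged chunks keep their existing embeddings and only the diff hits the embedding model.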
Stale Data Detection
Timestamp decay scoring
If age > threshold → Lower freshness_score dynamically
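Timestamp decay scoring can be modeled as exponential decay. The per-source half-lives here are illustrative assumptions chosen to match the freshness index above (real-time API, weekly Notion/website, effectively static PDFs):

```python
# Assumed half-life (days) per source class; tune per deployment.
HALF_LIFE_DAYS = {"api": 0.01, "notion": 7.0, "website": 7.0, "pdf": 365.0}

def freshness_score(age_days: float, source_type: str) -> float:
    """Exponential decay: score halves every half-life."""
    half_life = HALF_LIFE_DAYS[source_type]
    return 0.5 ** (age_days / half_life)

print(round(freshness_score(7.0, "notion"), 2))    # one half-life  -> 0.5
print(round(freshness_score(14.0, "website"), 2))  # two half-lives -> 0.25
```

The decayed score feeds directly into the 0.2 freshness term of the hybrid ranking formula, so stale chunks sink in the ranking without being deleted from the index.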
---
7️⃣ Monitoring & Evaluation Metrics
Retrieval Precision
Top-3 relevance match rate
Query-to-source accuracy %
Source Performance
API latency average
Namespace hit rate
Retrieval success rate
Conflict Monitoring
Conflict frequency %
Escalation rate
False conflict detection rate
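The top-3 relevance match rate above can be computed from retrieval logs. A minimal sketch; the log record shape and sample entries are illustrative assumptions:

```python
def top3_match_rate(logs: list[dict]) -> float:
    """Fraction of queries whose expected source appears in the top 3 results."""
    hits = sum(1 for rec in logs if rec["expected"] in rec["retrieved"][:3])
    return hits / len(logs)

logs = [
    {"expected": "compliance.policies",
     "retrieved": ["compliance.policies", "marketing.pages"]},
    {"expected": "operations.sop",
     "retrieved": ["support.articles", "marketing.pages", "operations.sop"]},
    {"expected": "realtime.inventory",
     "retrieved": ["marketing.pages", "support.articles",
                   "website.home", "realtime.inventory"]},
]
print(round(top3_match_rate(logs), 2))  # 2 of 3 queries hit -> 0.67
```

Query-to-source accuracy is the same computation with `retrieved[:1]`; tracking both per namespace exposes which routing rules need refinement.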
---
8️⃣ Risk & Failure Analysis
Over-Retrieval Risk
Too many low-authority docs dilute signal
Mitigation:
Minimum authority threshold
Top-k dynamic adjustment
Under-Context Risk
Missing operational nuance
Mitigation:
Context expansion step if confidence < 0.75
API Dependency Risk
Downtime
Rate limits
Mitigation:
Graceful degradation
Cached fallback layer
---
9️⃣ Scalability Roadmap
Phase 1: Controlled Multi-Source Setup
Manual routing rules
Static authority weights
Basic conflict resolution
Phase 2: Dynamic Routing Intelligence
Query classifier refinement
Adaptive authority weighting
Automated conflict detection clustering
Phase 3: Adaptive Self-Improving Orchestration
Feedback loop from user corrections
Reinforcement scoring adjustments
Predictive source pre-fetching
---
🔟 Executive Orchestration Summary
Core Routing Principle
Route based on intent → Rank by authority → Adjust by freshness → Validate with conflict logic.
Highest Risk Factor
Cross-source contradiction under high-criticality queries.
One Improvement for Retrieval Precision
Implement adaptive authority re-weighting based on historical correctness feedback.
---
System Outcome
This multi-source RAG system ensures:
Right source selection through intent-aware routing
Reliable answers via authority-weighted scoring
Controlled latency through smart batching
Conflict resilience through structured resolution
Designed to retrieve the right information from the right source at the right time.
Connecting multiple data sources without orchestration leads to irrelevant answers and retrieval conflicts.
This prompt designs a structured multi-source RAG system that intelligently routes queries across documents, databases, APIs, and live sources.
Buyer Benefits
- Cross-source routing logic
- Structured source prioritization
- Retrieval conflict resolution
- Latency-aware orchestration
- Reliability & fallback planning
Use this prompt before scaling RAG beyond a single data source.
