01 — Technical Documentation
Deep dive into the architecture, security model, and the secret sauce behind our limit parsing engine. Transparency builds trust.
02 — Architecture
A clean, modular architecture that processes your PRs in seconds. Here's how the data flows from your GitHub webhook to a deployment decision.
Step 1
Webhook triggers on PR
When a PR is opened or updated, GitHub fires a webhook to our One-Backend orchestration layer. The entire codebase is fetched and queued for analysis.
Step 2
Orchestrates the pipeline
The central brain that coordinates all Jataka services. It manages job queues, handles OAuth authentication, and routes tasks to the appropriate engines.
Step 3
Analyzes code & generates tests
Our proprietary LLM layer that understands Salesforce patterns. It analyzes Apex code, identifies potential limit breaches, and generates test cases for uncovered paths.
Step 4
Executes API/UI testing
The execution engine that spins up isolated Sandbox pods, runs actual transactions, parses Debug Logs, and returns real Governor Limit metrics.
Input
GitHub webhook fires on PR creation/update. Jataka receives the payload, authenticates with your Salesforce org via OAuth, and queues the analysis job in our distributed task queue.
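Before a payload ever reaches the task queue, it has to be authenticated as genuinely coming from GitHub. A minimal sketch of that first step, using GitHub's documented X-Hub-Signature-256 scheme (the function name and secret here are illustrative, not our actual handler):

```python
import hashlib
import hmac

def verify_github_signature(payload: bytes, secret: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header: an HMAC-SHA256 of the raw
    request body, keyed with the webhook secret, prefixed with 'sha256='."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, signature_header)
```

Only payloads that pass this check are enqueued; everything else is dropped before any Salesforce OAuth exchange happens.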
Output
Pass/Fail status posted to GitHub PR checks. Detailed limit report attached as PR comment with line-by-line attribution. Deployment blocked if thresholds exceeded.
03 — Security & Data Privacy
Built to pass US enterprise security review. Your code is your IP. Here's exactly how we protect it: no vague promises, just specifics.
We use Enterprise Zero-Retention APIs. Your code is processed but never stored or used for training. Once the analysis completes, all code artifacts are immediately purged from memory.
Unlike other AI tools that may cache your code for model improvement, Jataka's enterprise agreements with our LLM providers guarantee that your proprietary Salesforce code is processed in-memory only. No disk writes. No persistent storage. No training on your IP.
All OAuth tokens are AES-256 encrypted at rest. Your Salesforce credentials are never exposed in logs, dashboards, or debug output.
Every OAuth refresh token is encrypted with a unique key derived from your organization's master key. Even if our database were compromised, attackers would see only encrypted blobs. We rotate encryption keys quarterly.
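The per-org key derivation can be sketched with an HKDF-style construction (RFC 5869, SHA-256). The function name and info string below are illustrative, not our production code; in deployment the derived key would feed an AES-256-GCM cipher rather than be used directly:

```python
import hashlib
import hmac

def derive_org_key(master_key: bytes, org_id: str, length: int = 32) -> bytes:
    """Derive a per-org encryption key from the master key.
    Extract step: PRK = HMAC-SHA256(salt=org_id, master_key).
    Expand step (single block): OKM = HMAC-SHA256(PRK, info || 0x01)."""
    prk = hmac.new(org_id.encode(), master_key, hashlib.sha256).digest()
    okm = hmac.new(prk, b"token-encryption" + b"\x01", hashlib.sha256).digest()
    return okm[:length]  # 32 bytes -> suitable as an AES-256 key
```

Because the derivation is deterministic, tokens can always be re-decrypted, yet two orgs never share a key, and quarterly rotation of the master key re-keys every org at once.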
Jataka does not train AI models on your proprietary Salesforce code. Your IP stays yours. We don't learn from your codebase to improve our models.
This is a contractual guarantee, not just a technical implementation. Our enterprise agreements explicitly prohibit using customer code for any model training or improvement. Your competitive advantage remains yours alone.
Our infrastructure meets SOC 2 Type II standards for security, availability, and confidentiality. Annual audits verify our controls.
We undergo annual SOC 2 Type II audits by an independent CPA firm. The audit covers access controls, encryption practices, incident response, and change management. Reports available under NDA for enterprise prospects.
04 — How We Parse Limits
Transparency builds trust. Here's exactly how Jataka detects Governor Limit breaches with precision. No black boxes.
01
We query the Salesforce Tooling API to inspect Apex classes, triggers, and dependencies in real time. This gives us the symbol tables and metadata needed for static analysis.
The Tooling API provides access to the SymbolTable of every Apex class, revealing method signatures, variable types, and cross-references. We build a complete dependency graph before executing a single line of code.
GET /services/data/v58.0/tooling/query/?q=
    SELECT Id, Name, SymbolTable, Body
    FROM ApexClass
    WHERE NamespacePrefix = NULL
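Building the dependency graph from those query results can be sketched as below. The record shape follows the documented Tooling API SymbolTable structure (`externalReferences` lists every symbol a class resolves outside its own body); the function itself is a simplified illustration:

```python
from collections import defaultdict

def build_dependency_graph(records: list[dict]) -> dict[str, set[str]]:
    """Map each Apex class to the set of external types it references,
    using the SymbolTable field returned by the Tooling API query."""
    graph = defaultdict(set)
    for record in records:
        name = record["Name"]
        symbol_table = record.get("SymbolTable") or {}
        for ref in symbol_table.get("externalReferences", []):
            graph[name].add(ref["name"])
    return dict(graph)
```

The resulting adjacency map is what later stages (impact analysis, test generation) traverse before a single line of code is executed.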
02
Every API call returns limit consumption headers. We parse these to build your real-time limit profile during Sandbox execution.
Salesforce includes limit information in every API response header. We intercept these headers during Sandbox execution to track exactly how many SOQL queries, DML statements, and CPU milliseconds each transaction consumes.
Sforce-Limit-Info: api-usage=95/500
Sforce-Limit-Info: api-max=500
Sforce-Limit-Info: per-app-api-usage=42/100
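Parsing those headers is straightforward; a simplified sketch (real Salesforce responses may append a connected-app suffix such as `(appName=...)` to per-app entries, which this version strips):

```python
def parse_limit_info(header: str) -> dict:
    """Parse a Sforce-Limit-Info value like 'api-usage=95/500' into
    {metric: (used, cap)}. Entries without a '/cap' part get cap=None."""
    limits = {}
    for part in header.split(","):
        name, _, value = part.strip().partition("=")
        used_s, _, cap_s = value.partition("/")
        cap_s = cap_s.split("(")[0]  # drop any '(appName=...)' suffix
        limits[name] = (int(used_s), int(cap_s) if cap_s else None)
    return limits
```

Each intercepted response updates the running limit profile for the transaction, so consumption is tracked as it happens rather than reconstructed afterward.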
03
We execute your code in Sandbox and parse Debug Logs to extract actual runtime metrics. This is where we catch the real limit breaches.
After executing the transaction in an isolated Sandbox, we analyze the execution trace to extract exact runtime metrics. This gives us precise measurements: 97 SOQL queries, 48,000 query rows, 8,500 DML rows. Not estimates—measured facts.
> Parsing execution trace...
> SOQL queries detected: 97/100
> Query rows: 48,000/50,000
> DML statements: 45/150
> CPU time: 8,500ms/10,000ms
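The extraction itself amounts to scanning the debug log's limit-usage block, whose lines read e.g. `Number of SOQL queries: 97 out of 100`. A hedged sketch of that parse (simplified to a single namespace):

```python
import re

LIMIT_LINE = re.compile(r"^\s*(?P<metric>[A-Za-z ]+?):\s*(?P<used>\d+) out of (?P<cap>\d+)")

def parse_limit_usage(debug_log: str) -> dict:
    """Extract {metric: (used, cap)} pairs from an Apex debug log's
    limit-usage lines, e.g. 'Number of SOQL queries: 97 out of 100'."""
    usage = {}
    for line in debug_log.splitlines():
        match = LIMIT_LINE.match(line)
        if match:
            metric = match.group("metric").strip()
            usage[metric] = (int(match.group("used")), int(match.group("cap")))
    return usage
```

Because the numbers come from the runtime's own accounting, the resulting profile reflects what the transaction actually consumed, not a static estimate.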
04
Using Neo4j graph analysis, we map dependencies and predict impact of code changes before they're deployed.
Every Salesforce component is mapped in our dependency graph. When you change a trigger, we analyze the relationships to identify every downstream component that could be affected.
> Initiating Blast Radius Traversal...
> Finding 12 downstream dependencies
> Mapping impact across 3 layers
> Risk assessment: CRITICAL
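Conceptually, the traversal is a bounded breadth-first search over the reverse-dependency graph. The sketch below is an in-memory illustration (node labels, relationship names, and the depth cap are assumptions, not our Neo4j schema); the equivalent Cypher would be something like `MATCH (c:Component {name: $name})<-[:DEPENDS_ON*1..3]-(d) RETURN DISTINCT d.name`:

```python
from collections import deque

def blast_radius(reverse_deps: dict, changed: str, max_depth: int = 3) -> set:
    """BFS over a reverse-dependency map {component: [components that depend
    on it]}. Everything reachable within max_depth hops of the changed
    component is in its blast radius."""
    seen = set()
    frontier = deque([(changed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue  # stop expanding past the impact horizon
        for dependent in reverse_deps.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                frontier.append((dependent, depth + 1))
    return seen
```

The size and depth of this set drive the risk rating: a trigger with a dozen downstream dependents across several layers is flagged far more aggressively than a leaf utility class.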
We don't guess. We execute your code in a Sandbox and measure real metrics. If we say you're at 97/100 SOQL queries, that's not an estimate—it's a measured fact from the actual execution. Zero false positives.
Ready to Ship?
See the architecture in action. Book a demo and watch Jataka catch real issues in your Salesforce codebase.