Batching 2.0 Changes
Why the Changes?
Instant Batch Creation
Old: Async job took minutes, required manual refresh—and early steps didn’t batch.
New: Fully synchronous, completes in ~1 second for any batch size.
Scalable Step Operations
Run‑step check‑in: a 100‑run batch drops from ~20 sec to ~1 sec.
Redlining: previously timed out on large DAGs; now ~2 sec (same as single runs).
Batch‑Size‑Independent Performance
Old: Every write slowed linearly—10 runs = 10× slower.
New: Core operations execute in constant time, whether you batch 5 or 500.
Bottom Line
No more “wait & refresh”—you get instant feedback.
Predictable, fast performance at any scale.
Full DAG support unlocks complex workflows without timeouts.
What are the Changes?
🧩 1. Batch Matching Logic: From Visual Similarity to Structural Identity
Before:
Batchable steps were determined primarily by surface-level similarity:
Matching titles, fields, datagrid structures, and child steps.
Now:
Batchable steps are determined by system-defined identities:
Last redline is the same
Standard Step ID is identical (for standard steps)
Or, the origin step ID is the same (for procedure steps)
Step must be in TODO status
What this means:
Batching no longer depends on surface-level matching, which minimizes surprises and reduces hidden mismatches in critical workflows.
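As a rough sketch, the new identity check can be thought of as a predicate over a pair of steps. The model and field names below (`last_redline_id`, `standard_step_id`, `origin_step_id`) are illustrative assumptions, not the platform's actual API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical step model -- field names are illustrative assumptions.
@dataclass
class Step:
    status: str                      # e.g. "TODO", "IN_PROGRESS", "REDLINE"
    last_redline_id: Optional[str]   # identity of the most recent redline
    standard_step_id: Optional[str]  # set for standard steps
    origin_step_id: Optional[str]    # set for procedure steps

def is_batchable_pair(a: Step, b: Step) -> bool:
    """New structural-identity check, replacing title/field similarity."""
    # Both steps must be in TODO status.
    if a.status != "TODO" or b.status != "TODO":
        return False
    # The last redline must be the same.
    if a.last_redline_id != b.last_redline_id:
        return False
    # Identity comes from the standard step ID (standard steps)
    # or the origin step ID (procedure steps).
    if a.standard_step_id is not None:
        return a.standard_step_id == b.standard_step_id
    return a.origin_step_id is not None and a.origin_step_id == b.origin_step_id
```

Note that visually similar steps with different IDs no longer batch, and identical steps no longer need matching titles or fields to batch.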
🔄 2. Batching Impacts Nested Steps and Enforces Parent-Child Hierarchy Matching
Before:
Child steps needed to be identical, but the full parent-child hierarchy was not explicitly enforced.
Now:
The entire step hierarchy—including nested steps—must match for batching to occur.
Nested steps deeper than one level now also batch.
Impact:
This prevents partially batched workflows due to hidden structural differences and reinforces step-by-step alignment across runs.
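Conceptually, the hierarchy check is recursive: two steps match only if their identities match and every pair of corresponding children matches, at every depth. This is a minimal sketch with assumed names, not the platform's implementation:

```python
from dataclasses import dataclass, field
from typing import List

# Minimal illustrative model; names are assumptions, not the product's API.
@dataclass
class Step:
    identity: str              # structural identity (e.g. standard/origin step ID)
    status: str                # must be "TODO" to batch
    children: List["Step"] = field(default_factory=list)

def hierarchy_matches(a: Step, b: Step) -> bool:
    """The whole hierarchy, nested steps included, must line up."""
    if a.status != "TODO" or b.status != "TODO":
        return False
    if a.identity != b.identity:
        return False
    # Same number of children, and each child pair must also match.
    if len(a.children) != len(b.children):
        return False
    return all(hierarchy_matches(ca, cb)
               for ca, cb in zip(a.children, b.children))
```

A single mismatched nested step anywhere in the tree is enough to keep two runs from batching, which is what prevents partially batched workflows.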
📤 3. Expanded Scope of Data Shared Across Batches
Before:
Data shared included core operational fields like status, scheduled times, field entries, datagrid entries, attachments, etc.
Now:
New and more consistent batch-shared data:
Redlines to existing steps
Adding new steps
Dependencies
Check-in session data
Takeaway:
The system now treats batches more like a living procedural container—including structural changes (like added steps or redlines), not just step updates. This means changes ripple through the batch more comprehensively.
🚧 4. Stricter Rules on Step Status and Batching
New behavior:
You cannot un-batch a run if any of its steps are in a REDLINE status.
Only steps in TODO status are eligible for batching.
To apply batched changes to completed steps, reset those steps to TODO, batch, then reapply the changes.
Why this matters:
This protects data integrity by ensuring batches reflect only modifiable, aligned steps—and keeps post-execution changes deliberate and controlled.
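The two rules above boil down to a pair of simple checks. The step model below is an illustrative assumption, not the platform's data model:

```python
from dataclasses import dataclass
from typing import List

# Illustrative step model; field names are assumptions.
@dataclass
class Step:
    name: str
    status: str  # e.g. "TODO", "COMPLETE", "REDLINE"

def batchable_steps(steps: List[Step]) -> List[Step]:
    # Only steps in TODO status are eligible for batching.
    return [s for s in steps if s.status == "TODO"]

def can_unbatch(run_steps: List[Step]) -> bool:
    # A run cannot be un-batched while any of its steps is in REDLINE status.
    return all(s.status != "REDLINE" for s in run_steps)
```

In other words, a completed step is simply filtered out of batching until it is reset to TODO, and a pending redline blocks un-batching until it is resolved.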
🔗 5. Stronger Enforcement of Dependency Logic in Batches
Before:
Dependency rules were implicit.
Now:
Clear rule: A downstream step in any run can only begin when the equivalent upstream steps in ALL batched runs are complete.
Consequence:
For example, if you pull one run out of a batch to repair an issue, this rule ensures those repairs are complete before that run's downstream work rejoins the batch.
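The dependency rule amounts to a gate across all batched runs. Sketching each run as a mapping of step ID to status (an assumption for illustration, not the product's data model):

```python
from typing import Dict, List

# Each run is sketched as a mapping of step ID -> status.
Run = Dict[str, str]

def downstream_can_start(upstream_step_id: str, batched_runs: List[Run]) -> bool:
    """A downstream step may begin only once the equivalent upstream
    step is COMPLETE in every run in the batch."""
    return all(run.get(upstream_step_id) == "COMPLETE" for run in batched_runs)
```

So one run lagging on an upstream step holds the equivalent downstream step for the entire batch.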
The result:
Batches are much, much faster. You can now reliably batch 100 runs and see the same execution performance as running a single run. No more long waits to see changes appear across a batch, and no more data failing to be shared when you expect it to be.
Converting Old Batches
All batches created after Wednesday, May 14th, 2025 are already using the new architecture. However, if you want to convert batches created before that date, please do the following:
Find the batch and ensure all steps you want to batch are in the TODO status.
Unbatch all runs from the old batch.
Create a brand new batch and add the runs to the batch.
You should now see those runs are on the new batching architecture as described above!