Batching 2.0 Changes

🧩 1. Batch Matching Logic: From Visual Similarity to Structural Identity

Before:

Batchable steps were determined primarily by surface-level similarity:

  • Matching titles, fields, datagrid structures, and child steps.

Now:

Batchable steps are determined by system-defined identities:

  • Last redline is the same

  • Standard Step ID is identical (for standard steps)

  • Or, the origin step ID is the same (for procedure steps)

  • Step must be in TODO status

What this means:

Your customers no longer depend on surface-level matching, which minimizes surprises and reduces hidden mismatches in critical workflows.
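The identity rules above can be sketched as a simple predicate. This is a minimal sketch only: the field names (`last_redline_id`, `standard_step_id`, `origin_step_id`) and the status strings are illustrative assumptions, not the product's actual API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical step model; field names are illustrative, not the product schema.
@dataclass
class Step:
    status: str                      # e.g. "TODO", "IN_PROGRESS", "COMPLETE"
    last_redline_id: Optional[str]   # identifier of the last redline applied
    standard_step_id: Optional[str]  # set for standard steps
    origin_step_id: Optional[str]    # set for procedure steps

def steps_are_batchable(a: Step, b: Step) -> bool:
    """Sketch of the structural-identity rules described above."""
    # Both steps must be in TODO status to be eligible.
    if a.status != "TODO" or b.status != "TODO":
        return False
    # The last redline must be the same.
    if a.last_redline_id != b.last_redline_id:
        return False
    # Identity: standard step ID (standard steps),
    # or origin step ID (procedure steps).
    if a.standard_step_id and b.standard_step_id:
        return a.standard_step_id == b.standard_step_id
    return a.origin_step_id is not None and a.origin_step_id == b.origin_step_id
```

Note that visual attributes (titles, fields, datagrid structure) play no part in the check; only system-defined identity does.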


🔄 2. Batching Impacts Nested Steps and Enforces Parent-Child Hierarchy Matching

Before:

  • Child steps needed to be identical, but no explicit callout was made about the full parent-child hierarchy.

Now:

  • The entire step hierarchy, including nested steps, must match for batching to occur.

  • Nested steps deeper than one level will now also batch.

Impact:

This prevents partially batched workflows due to hidden structural differences and reinforces step-by-step alignment across runs.
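One way to picture the full-hierarchy requirement is a recursive comparison over the step tree. The `StepNode` type and `identity` field below are hypothetical, used only to illustrate that every level of nesting, beyond depth 1, must line up.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative tree node; names are assumptions, not the product schema.
@dataclass
class StepNode:
    identity: str                      # structural identity (e.g. origin step ID)
    children: List["StepNode"] = field(default_factory=list)

def hierarchies_match(a: StepNode, b: StepNode) -> bool:
    """The entire parent-child hierarchy must line up for batching to occur."""
    if a.identity != b.identity:
        return False
    if len(a.children) != len(b.children):
        return False
    # Recurse into nested steps, so nesting deeper than one level is covered too.
    return all(hierarchies_match(x, y) for x, y in zip(a.children, b.children))
```

A single extra or missing child anywhere in the tree is enough to make two runs non-batchable under this rule.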


📤 3. Expanded Scope of Data Shared Across Batches

Before:

Shared data included core operational fields such as status, scheduled times, field entries, datagrid entries, and attachments.

Now:

New and more consistent batch-shared data:

  • Redlines to existing steps

  • Newly added steps

  • Dependencies

  • Check-in session data

Takeaway:

The system now treats batches more like a living procedural container, one that carries structural changes (like added steps or redlines), not just step updates. This means changes ripple through the batch more comprehensively.
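As a rough mental model, batch-shared change types fan out to every run in the batch, while anything outside that set stays local. The change-type names and the dictionary-based run model below are assumptions for illustration, not the product's internals.

```python
# Hypothetical set of batch-shared change types, per the list above.
BATCH_SHARED = {"redlines", "new_steps", "dependencies", "check_in_sessions"}

def apply_to_batch(change_type: str, change: str, runs: list) -> list:
    """Apply a change to every run in the batch if its type is batch-shared."""
    if change_type in BATCH_SHARED:
        for run in runs:
            # The change ripples to each batched run, not just the one edited.
            run.setdefault(change_type, []).append(change)
    return runs
```

Under this model, adding a redline or a new step in one run shows up in every other run of the batch.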


🚧 4. Stricter Rules on Step Status and Batching

New behavior:

  • You cannot un-batch a run if any of its steps are in a REDLINE status.

  • Only steps in TODO status are eligible for batching.

  • To apply batched changes to completed steps, users must reset them to TODO status, batch, and then reapply the changes.

Why this matters:

This protects data integrity by ensuring batches reflect only modifiable, aligned steps, and keeps post-execution changes deliberate and controlled.
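These status rules reduce to two small guards. The function names and status strings here are illustrative assumptions, not taken from the product.

```python
def can_unbatch(step_statuses: list) -> bool:
    """A run cannot be un-batched while any of its steps is in REDLINE status."""
    return "REDLINE" not in step_statuses

def eligible_for_batching(status: str) -> bool:
    """Only steps in TODO status may join a batch."""
    return status == "TODO"
```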


🔗 5. Stronger Enforcement of Dependency Logic in Batches

Before:

Dependency rules were implicit.

Now:

Clear rule: A downstream step in any run can only begin when the equivalent upstream steps in ALL batched runs are complete.

Consequence:

For example, if one run had an issue and needed repair before rejoining a batch, this rule ensures those repairs are completed before that run rejoins and the batch proceeds.
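The rule amounts to a single check over the equivalent upstream step in every batched run. The run names and status strings below are illustrative.

```python
def downstream_can_start(upstream_status_by_run: dict) -> bool:
    """A downstream step may begin only when the equivalent upstream step
    is complete in EVERY batched run."""
    return all(status == "COMPLETE" for status in upstream_status_by_run.values())
```

So a single lagging run, for instance one pulled aside for repair, holds the downstream step for the whole batch until it catches up.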


⚡ 6. Major Performance Improvements

Now:

  • Batching is dramatically faster. You can now reliably batch 100 runs and see the same execution performance as a single run, with no long waits for changes to appear in a batch and no cases where data you expect to be shared is not.
