5 min read · Centrali Team

Bulk vs Batch: Choosing the Right Multi-Record Path

Centrali now offers two symmetric multi-record processing paths — bulk and batch — available from both the HTTP API and compute functions. Here's how to choose the right one.

Feature · Product · Compute

When you're working with hundreds or thousands of records at once, the question isn't just "how do I create them?" — it's "what should happen downstream?"

Should your trigger function run once with all 500 record IDs? Or should it run 500 times, once per record?

The answer depends on what your function does. And as of this release, Centrali gives you full control over that decision from any interface.

The Two Paths

Centrali now supports two distinct multi-record processing paths, each available from both the HTTP API and the compute function API:

| | HTTP API | Compute Function API | What Happens |
|---|---|---|---|
| Bulk | `POST /records/bulk` | `api.bulkCreateRecords()` | 1 aggregate event fires with all record IDs |
| Batch | `POST /records/batch` | `api.batchCreateRecords()` | N per-record events fire, one per record |

This applies to create, update, and delete operations across both interfaces.
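To make the symmetry concrete, here is a minimal sketch of how the two HTTP paths differ only in their endpoint. The base URL and the `{ records: [...] }` payload shape are illustrative assumptions, not taken from the Centrali docs:

```javascript
// Hypothetical helper: the two paths share one payload shape and
// differ only in the endpoint they target.
function buildCreateRequest(mode, records) {
  const path = mode === "bulk" ? "/records/bulk" : "/records/batch";
  return {
    method: "POST",
    url: `https://api.example.com${path}`,
    body: JSON.stringify({ records }),
  };
}

const bulk = buildCreateRequest("bulk", [{ name: "a" }, { name: "b" }]);
console.log(bulk.url); // https://api.example.com/records/bulk
```

Either way the write itself is identical; the choice only changes what fires downstream.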

When to Use Bulk

Bulk operations fire a single aggregate event — records_bulk_created, records_bulk_updated, or records_bulk_deleted — containing all affected record IDs in one payload. Your trigger function runs exactly once.

Use bulk when your function processes all records together:

  • Search indexing — index all 500 records in a single call
  • Batch notifications — send one summary email ("500 records imported") instead of 500 individual emails
  • Data exports — generate one CSV file containing all new records
  • Analytics — update dashboard counters with a single increment
```javascript
// Trigger on: records_bulk_created
async function run() {
  const { recordIds, count, recordSlug } = executionParams;

  // One notification for the entire import
  await api.httpPost("https://slack.com/webhook", {
    text: `${count} ${recordSlug} records imported successfully`
  });
}
```

When to Use Batch

Batch operations fire individual per-record events — record_created, record_updated, or record_deleted — one for each record. Your trigger function runs N times, once per record.

Use batch when your function processes each record individually:

  • Data validation — validate each record's fields against external rules
  • Thumbnail generation — generate an image for each product record
  • Individual notifications — send a welcome email per new user
  • Record enrichment — call an external API to fill in missing fields per record
```javascript
// Trigger on: record_created
async function run() {
  const { recordId, data } = executionParams;

  // Enrich each record with external data
  const enriched = await api.httpGet(
    `https://api.clearbit.com/v1/companies/find?domain=${data.data.domain}`
  );
  await api.updateRecord(recordId, { companyInfo: enriched.data });
}
```

What Changed

Previously, the two paths were split across interfaces:

  • The HTTP API only had bulk operations (aggregate events)
  • Compute functions only had batch operations (per-record events)

This meant you were forced to choose your API based on the event behavior you wanted, not the interface that fit your use case. An external integration sending records via HTTP couldn't get per-record triggers. A compute function couldn't fire aggregate events.

Now both paths are available everywhere. The behavior is the same whether you call from an external webhook or from inside a compute function.

New Event Types

We also wired up two events that were already being published internally but never reached triggers:

  • record_restored — fires when a soft-deleted record is restored
  • record_expired — fires when a record's TTL expires

Plus two new aggregate events:

  • records_bulk_updated — fires after a bulk update operation
  • records_bulk_deleted — fires after a bulk delete operation

All 8 event types are now available in the trigger creation UI.
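Since `record_restored` and `record_expired` are per-record events, a trigger on them looks just like the `record_created` example above. The sketch below stubs the `executionParams` and `api` globals that the platform normally provides, so it runs standalone; the payload field names are assumptions mirroring the per-record examples:

```javascript
// Stand-ins for the platform-provided globals, for illustration only.
const executionParams = { recordId: "rec_123", recordSlug: "orders" };
const api = {
  httpPost: async (url, body) => ({ url, body }),
};

// Trigger on: record_restored
async function run() {
  const { recordId, recordSlug } = executionParams;

  // Audit each restored record individually
  return api.httpPost("https://example.com/audit", {
    text: `${recordSlug} record ${recordId} was restored`,
  });
}

run().then((res) => console.log(res.body.text));
// prints "orders record rec_123 was restored"
```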

The Complete Event Matrix

| Event | Type | Fired By |
|---|---|---|
| `record_created` | Per-record | Single create, batch create |
| `record_updated` | Per-record | Single update, batch update |
| `record_deleted` | Per-record | Single delete, batch delete |
| `record_restored` | Per-record | Record restore |
| `record_expired` | Per-record | TTL expiration |
| `records_bulk_created` | Aggregate | Bulk create |
| `records_bulk_updated` | Aggregate | Bulk update |
| `records_bulk_deleted` | Aggregate | Bulk delete |
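The naming convention in the matrix is regular enough to branch on: every aggregate event carries the `records_bulk_` prefix. As a sketch (the helper name is ours, not part of the API):

```javascript
// Illustrative classifier: aggregate events all use the records_bulk_
// prefix; the other five events in the matrix are per-record.
function eventKind(event) {
  return event.startsWith("records_bulk_") ? "aggregate" : "per-record";
}

console.log(eventKind("records_bulk_created")); // aggregate
console.log(eventKind("record_restored"));      // per-record
```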

Loop Prevention

All new events and endpoints respect the existing recursive trigger loop prevention. Bulk operations are naturally rate-limit-safe — one aggregate event means one trigger execution, regardless of how many records are affected. Batch operations are subject to the same per-record rate limits as single-record operations.
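The rate-limit difference follows directly from the event counts. A back-of-envelope sketch (the function name is illustrative):

```javascript
// Bulk fires one aggregate event regardless of record count;
// batch fires one per-record event per record.
function triggerExecutions(mode, recordCount) {
  return mode === "bulk" ? 1 : recordCount;
}

console.log(triggerExecutions("bulk", 500));  // 1
console.log(triggerExecutions("batch", 500)); // 500
```

So a 10,000-record import via batch means 10,000 trigger executions metered against the per-record limits, while the same import via bulk counts as one.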

Getting Started

Update your triggers in the console UI — the event type dropdown now shows all 8 options. In your compute functions, the new bulk methods are available immediately:

```javascript
// Aggregate events — function runs once
await api.bulkCreateRecords('orders', records);
await api.bulkUpdateRecords('orders', ids, { status: 'shipped' });
await api.bulkDeleteRecords('orders', ids);

// Per-record events — function runs per record
await api.batchCreateRecords('orders', records);
await api.batchUpdateRecords('orders', updates);
await api.batchDeleteRecords('orders', ids);
```

Check out the Triggers documentation and Event Payloads reference for full details.

Building something with Centrali and want to share feedback about this feature?

Email feedback@centrali.io