When you're working with hundreds or thousands of records at once, the question isn't just "how do I create them?" — it's "what should happen downstream?"
Should your trigger function run once with all 500 record IDs? Or should it run 500 times, once per record?
The answer depends on what your function does. And as of this release, Centrali gives you full control over that decision from any interface.
## The Two Paths
Centrali now supports two distinct multi-record processing paths, each available from both the HTTP API and the compute function API:
| | HTTP API | Compute Function API | What Happens |
|---|---|---|---|
| **Bulk** | `POST /records/bulk` | `api.bulkCreateRecords()` | 1 aggregate event fires with all record IDs |
| **Batch** | `POST /records/batch` | `api.batchCreateRecords()` | N per-record events fire, one per record |
This applies to create, update, and delete operations across both interfaces.
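The practical difference is fan-out. Here is a runnable sketch, not Centrali's implementation, just a model of the documented behavior (event names come from this post; the payload field shapes are assumptions): the same three records produce one aggregate event on the bulk path and three per-record events on the batch path.

```javascript
// Model of how each path fans out into trigger events.
// Event names match the docs; payload fields are illustrative assumptions.
function emitEvents(path, records) {
  if (path === "bulk") {
    // One aggregate event carrying every record ID: the trigger runs once.
    return [{
      type: "records_bulk_created",
      recordIds: records.map((r) => r.id),
      count: records.length,
    }];
  }
  // One event per record: the trigger runs once for each record.
  return records.map((r) => ({ type: "record_created", recordId: r.id, data: r }));
}

const records = [{ id: "a" }, { id: "b" }, { id: "c" }];
console.log(emitEvents("bulk", records).length);  // 1 trigger execution
console.log(emitEvents("batch", records).length); // 3 trigger executions
```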
## When to Use Bulk
Bulk operations fire a single aggregate event — records_bulk_created, records_bulk_updated, or records_bulk_deleted — containing all affected record IDs in one payload. Your trigger function runs exactly once.
Use bulk when your function processes all records together:
- Search indexing — index all 500 records in a single call
- Batch notifications — send one summary email ("500 records imported") instead of 500 individual emails
- Data exports — generate one CSV file containing all new records
- Analytics — update dashboard counters with a single increment
```javascript
// Trigger on: records_bulk_created
async function run() {
  const { recordIds, count, recordSlug } = executionParams;

  // One notification for the entire import
  await api.httpPost("https://slack.com/webhook", {
    text: `${count} ${recordSlug} records imported successfully`
  });
}
```

## When to Use Batch
Batch operations fire individual per-record events — record_created, record_updated, or record_deleted — one for each record. Your trigger function runs N times, once per record.
Use batch when your function processes each record individually:
- Data validation — validate each record's fields against external rules
- Thumbnail generation — generate an image for each product record
- Individual notifications — send a welcome email per new user
- Record enrichment — call an external API to fill in missing fields per record
```javascript
// Trigger on: record_created
async function run() {
  const { recordId, data } = executionParams;

  // Enrich each record with external data
  const enriched = await api.httpGet(
    `https://api.clearbit.com/v1/companies/find?domain=${data.data.domain}`
  );

  await api.updateRecord(recordId, {
    companyInfo: enriched.data
  });
}
```

## What Changed
Previously, the two paths were split across interfaces:
- The HTTP API only had bulk operations (aggregate events)
- Compute functions only had batch operations (per-record events)
This meant you were forced to choose your API based on the event behavior you wanted, not the interface that fit your use case. An external integration sending records via HTTP couldn't get per-record triggers. A compute function couldn't fire aggregate events.
Now both paths are available everywhere. The behavior is the same whether you call from an external webhook or from inside a compute function.
## New Event Types
We also wired up two events that were already being published internally but never reached triggers:
- `record_restored` — fires when a soft-deleted record is restored
- `record_expired` — fires when a record's TTL expires
Plus two new aggregate events:
- `records_bulk_updated` — fires after a bulk update operation
- `records_bulk_deleted` — fires after a bulk delete operation
All 8 event types are now available in the trigger creation UI.
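A handler for the new per-record events looks like any other per-record trigger. This is a hedged sketch: in a real compute function the runtime supplies `executionParams` and `api` as globals (as in the examples above), but passing them in as parameters makes the function easy to unit-test; the audit endpoint URL and payload fields are assumptions, not part of the Centrali API.

```javascript
// Trigger on: record_expired (sketch; payload field names are assumptions)
// executionParams and api are injected here for testability; the Centrali
// runtime normally provides them as globals.
async function run(executionParams, api) {
  const { recordId, recordSlug } = executionParams;

  // Log the expiration to an external audit endpoint (example URL)
  await api.httpPost("https://example.com/audit-log", {
    text: `${recordSlug} record ${recordId} expired and was removed`
  });
}
```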
## The Complete Event Matrix
| Event | Type | Fired By |
|---|---|---|
| `record_created` | Per-record | Single create, batch create |
| `record_updated` | Per-record | Single update, batch update |
| `record_deleted` | Per-record | Single delete, batch delete |
| `record_restored` | Per-record | Record restore |
| `record_expired` | Per-record | TTL expiration |
| `records_bulk_created` | Aggregate | Bulk create |
| `records_bulk_updated` | Aggregate | Bulk update |
| `records_bulk_deleted` | Aggregate | Bulk delete |
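If you route events in your own code, the matrix above reduces to a simple operation-to-event lookup. The event names match the table; the operation keys are illustrative, not a Centrali API:

```javascript
// Operation -> trigger event type, per the matrix above.
// Keys are illustrative names; values are the real event types.
const EVENT_FOR_OPERATION = {
  create: "record_created",            // single or batch create
  update: "record_updated",            // single or batch update
  delete: "record_deleted",            // single or batch delete
  restore: "record_restored",          // soft-delete restore
  expire: "record_expired",            // TTL expiration
  bulk_create: "records_bulk_created", // bulk create
  bulk_update: "records_bulk_updated", // bulk update
  bulk_delete: "records_bulk_deleted", // bulk delete
};

console.log(Object.keys(EVENT_FOR_OPERATION).length); // 8 event types
```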
## Loop Prevention
All new events and endpoints respect the existing recursive trigger loop prevention. Bulk operations are naturally rate-limit-safe — one aggregate event means one trigger execution, regardless of how many records are affected. Batch operations are subject to the same per-record rate limits as single-record operations.
## Getting Started
Update your triggers in the console UI — the event type dropdown now shows all 8 options. In your compute functions, the new bulk methods are available immediately:
```javascript
// Aggregate events — function runs once
await api.bulkCreateRecords('orders', records);
await api.bulkUpdateRecords('orders', ids, { status: 'shipped' });
await api.bulkDeleteRecords('orders', ids);

// Per-record events — function runs per record
await api.batchCreateRecords('orders', records);
await api.batchUpdateRecords('orders', updates);
await api.batchDeleteRecords('orders', ids);
```

Check out the Triggers documentation and Event Payloads reference for full details.