I wonder how Salesforce handles batching records in relation to the execution context.
For example, if I have a static variable that tracks the records being processed by a trigger:
trigger MyTrigger on MyObject__c (after insert) {
    Tracker.recordIds.addAll(Trigger.newMap.keySet());
}

public class Tracker {
    public static List<Id> recordIds = new List<Id>();
}
and my time-based workflow (TBW) updates 10,000 records, will my Tracker.recordIds
list hold all 10,000 Ids at the end of the updates?
Answers:
Method 1
Salesforce limits the number of time triggers an organization can process per hour. Please find the note below from the Salesforce documentation:
"Salesforce limits the number of time triggers an organization can execute per hour. If an organization exceeds the limits for its Edition, Salesforce defers the execution of the additional time triggers to the next hour. For example, if an Unlimited Edition organization has 1,200 time triggers scheduled to execute between 4:00 PM and 5:00 PM, Salesforce processes 1,000 time triggers between 4:00 PM and 5:00 PM and the remaining 200 time triggers between 5:00 PM and 6:00 PM."
More details can be found in the Salesforce documentation.
We had a similar implementation that updated records via TBW and then triggered a callout to an external system after the updates, but once we discovered this limitation we preferred to write our own custom logic.
So I think your static variable can hold at most 1,000 Ids, not more. If you want to keep track of the count, I would recommend creating a Custom Setting record and updating it every time a batch of updates finishes executing.
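A minimal sketch of that Custom Setting approach, assuming a hypothetical hierarchy custom setting named ProcessTracker__c with a Number field Processed_Count__c (both names are illustrative, not from the original answer):

public with sharing class ProcessTrackerUtil {
    // Increment a persistent running total after each batch of updates.
    // Unlike a static variable, a custom setting record survives across
    // execution contexts.
    public static void addProcessed(Integer count) {
        ProcessTracker__c tracker = ProcessTracker__c.getOrgDefaults();
        if (tracker.Processed_Count__c == null) {
            tracker.Processed_Count__c = 0;
        }
        tracker.Processed_Count__c += count;
        upsert tracker;
    }
}

You would then call something like ProcessTrackerUtil.addProcessed(Trigger.new.size()) from the trigger. Note that updating a shared record on every trigger invocation can cause row-lock contention under heavy load, so treat this as a sketch of the idea rather than a production pattern.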
Method 2
Just think about it: Salesforce executes triggers in batches (for bulk processing), and each batch represents an execution cycle. Static variables only maintain state within an execution cycle.
A trigger batch can never exceed 200 records.
Even though Salesforce performs those 10k updates in a single operation, trigger execution breaks them down into batches of 200 records, so the static variable will hold at most 200 values per execution.
If you need state to persist across batches, that behaviour can be achieved by implementing the Database.Stateful interface in a batch Apex class.
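For comparison, here is a sketch of how a batch Apex class carries instance state across its execution batches via Database.Stateful (the class name, query, and object are illustrative assumptions):

public with sharing class RecordIdCollector implements Database.Batchable<SObject>, Database.Stateful {
    // Because the class implements Database.Stateful, this instance
    // variable survives across execute() invocations.
    public List<Id> recordIds = new List<Id>();

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id FROM MyObject__c');
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        for (SObject record : scope) {
            recordIds.add(record.Id);
        }
    }

    public void finish(Database.BatchableContext bc) {
        // recordIds now holds every Id processed across all batches.
        System.debug(recordIds.size());
    }
}

Without Database.Stateful, batch Apex resets instance state between execute() calls, which is the behaviour this answer is contrasting with.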
Method 3
After seeing some discussion around static persistence across execution batches, I decided I should just test this out myself.
It turns out the Ids ARE tracked through each trigger batch.
Here’s the test I ran:
Static Class
public with sharing class Tracker {
    public static Integer runTime = 0;
    public static List<Id> recordIds = new List<Id>();
}
Trigger
trigger StaticTestTrigger on Foo__c (after insert) {
    Tracker.runTime++;
    Tracker.recordIds.addAll(Trigger.newMap.keySet());
    System.debug(Tracker.runTime);
    System.debug(Tracker.recordIds.size());
}
Anonymous Apex:
// Ten separate insert statements, all within one execution context.
List<Foo__c> foos = new List<Foo__c>();
for (Integer i = 0; i < 2000; i++) {
    foos.add(new Foo__c(Name = 'test' + i));
    if (foos.size() == 200) {
        insert foos;
        foos = new List<Foo__c>();
    }
}
This obviously doesn't really answer the question I asked, but I think it should clear up the confusion around statics across trigger batches.
Further testing:
I ran a batch of 460 records using the Salesforce Data Import Wizard. Again, it ran in a single execution context, and the statics were persisted across the trigger batches.
All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.