Handling multiple records in bulk is an integral part of development on the Salesforce platform. Salesforce provides extensive logging capabilities for developers to log events in a transaction, but when processing multiple records in a transaction, or when processing record data across multiple transactions, it becomes very challenging to trace the flow that happened on a particular record.
With the Record Logger app, we provide a framework for developers to log against the record. This framework can be used to give users visibility into the functional logic path the system applies to a record, which makes the app handy for field debugging.
The Record Logger application provides a framework for developers to log from Apex code. It helps admins and business users check whether the system is working as expected with the click of a button. This document walks through the features of the application and the steps an admin must follow to get the most out of it.
The Record Logger application revolves around the following concepts.
- Log Transaction
The Log Transaction object represents a transaction in execution. Every transaction that involves logging has one Log Transaction record.
- Log
The Log object holds the log for a record. Internally, one Log record per Salesforce record holds multiple log messages.
The package comes with a Logger class to ease adding record logs to a flow. To log, follow these three easy steps:
- Get Logger Instance
- Add Log Messages
- Flush at the end
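The three steps above can be sketched in Apex as follows (a minimal sketch; the Case records and the message text are illustrative):

```apex
// Step 1: Get the Logger instance from the SharLog managed package.
SharLog.Logger currLogger = SharLog.Logger.getInstance();

// Step 2: Add log messages as the flow progresses.
List<Case> cases = [SELECT Id FROM Case LIMIT 10];
for (Case c : cases) {
    currLogger.log(c, 'Priority recalculation started');
}

// Step 3: Flush at the end of the flow.
currLogger.flush();
```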
A static getInstance method is provided on Logger to get the logger instance.
SharLog.Logger getInstance()
e.g., SharLog.Logger currLogger = SharLog.Logger.getInstance();
The following methods are provided on the logger instance for logging:
- log(Id recId, String message)
- log(SObject obj, String message)
- log(Id recId, String tag, String message)
- log(SObject obj, String tag, String message)
- log(List&lt;Id&gt; recIds, String message)
- log(List&lt;Id&gt; recIds, String tag, String message)
- log(List&lt;SObject&gt; objs, String message)
- log(List&lt;SObject&gt; objs, String tag, String message)
- log(Set&lt;Id&gt; recIds, String message)
- log(Set&lt;Id&gt; recIds, String tag, String message)
- log(Set&lt;SObject&gt; objs, String message)
- log(Set&lt;SObject&gt; objs, String tag, String message)
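A few of these overloads in use (a minimal sketch; the Case records and the 'EscalationFlow' tag are illustrative):

```apex
SharLog.Logger currLogger = SharLog.Logger.getInstance();
List<Case> cases = [SELECT Id FROM Case LIMIT 10];

// Single record by Id, default tag ("General").
currLogger.log(cases[0].Id, 'Validation passed');

// Single record as an SObject, with an explicit tag.
currLogger.log(cases[0], 'EscalationFlow', 'Escalation criteria met');

// Multiple records in one call.
currLogger.log(cases, 'EscalationFlow', 'Escalation processing start');

currLogger.flush();
```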
All the log methods expect a record identifier, an optional tag, and the message.
e.g., currLogger.log(Trigger.new, 'EscalationFlow', 'Escalation processing start');
The record identifier is used to associate the message with the record.
Note: In scenarios where there is no record identifier, the system automatically uses the user Id to associate the message with the user record (e.g., logging during before insert trigger processing, when records do not yet have Ids).
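For example, in a before insert trigger the records in Trigger.new have no Ids yet, so the engine associates these messages with the running user's record (a sketch; the trigger name and message are illustrative):

```apex
trigger AccountLogDemo on Account (before insert) {
    SharLog.Logger currLogger = SharLog.Logger.getInstance();
    // Trigger.new records have no Ids in before insert, so these
    // messages are associated with the current user's record.
    currLogger.log(Trigger.new, 'AccountFlow', 'Before insert defaulting start');
    currLogger.flush();
}
```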
The tag is of type String and is an optional parameter. When the Record Logger framework is used to capture multiple flows, developers can use tags to differentiate the flows. When the log API is called without the tag argument, the system automatically assigns the default tag "General".
The message is of type String.
Note: If a message exceeds 100,000 bytes, it is automatically truncated.
The flush method signals to the engine the end of logging for that logger.
Note: Flush may or may not result in a DML operation. Please check the "Under the hood" section for more details.
The log engine internally caches messages, both for efficiency and to stay well within the Salesforce governor limits. The internal cache can grow up to 0.5 MB. Transaction memory is a shared resource between the managed packages and the default package code executed within the transaction. For a transaction that is memory intensive and also uses Record Logger, call the forceFlush method on the logger before the memory-intensive operations. forceFlush brings the log engine's memory usage down to almost zero by flushing the cached log messages to the underlying Salesforce objects and clearing the cache.
Note: Use forceFlush if a heap size limit exception is encountered.
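A sketch of using forceFlush ahead of a memory-intensive step (the lowercase forceFlush casing, the Opportunity records, and the 'RecalcFlow' tag are assumptions for illustration):

```apex
SharLog.Logger currLogger = SharLog.Logger.getInstance();
List<Opportunity> opps = [SELECT Id FROM Opportunity LIMIT 200];
currLogger.log(opps, 'RecalcFlow', 'Starting bulk recalculation');

// Flush cached messages to the underlying Salesforce objects and
// clear the cache, freeing heap before the memory-intensive step.
currLogger.forceFlush();

// ... memory-intensive processing goes here ...

currLogger.flush();
```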
During a log method call, the engine internally stores the message in the cache and returns. If storing a message causes the internal memory limit (0.5 MB) or the internal message count limit (4,000 messages, configurable) to be exceeded, the engine automatically writes the cached messages to the internal Salesforce objects.
In Salesforce, callouts aren't allowed after DML operations have been performed. For transactions that involve callouts, use the disableAutoStore method to ensure the log engine won't automatically write messages to the internal Salesforce objects.
Use the enableAutoStore method to re-enable the log engine's automatic checks after the callouts.
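A sketch of bracketing a callout with these two methods (the named credential, Account record, and 'SyncFlow' tag are illustrative assumptions):

```apex
SharLog.Logger currLogger = SharLog.Logger.getInstance();
Account acct = [SELECT Id FROM Account LIMIT 1];

// Prevent the engine from issuing DML before the callout.
currLogger.disableAutoStore();
currLogger.log(acct, 'SyncFlow', 'Calling external pricing service');

HttpRequest req = new HttpRequest();
req.setEndpoint('callout:Pricing_Service'); // illustrative named credential
req.setMethod('GET');
HttpResponse res = new Http().send(req);

// Callout done; re-enable automatic cache checks and finish.
currLogger.enableAutoStore();
currLogger.log(acct, 'SyncFlow', 'Pricing response: ' + res.getStatusCode());
currLogger.flush();
```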
Under the hood
This section describes the additional logic in the managed package that ensures safe operation without hitting any Salesforce-enforced governor limits (heap memory limit, number of DML statements limit, records processed by DML limit).
One transaction can involve multiple loggers. For example, consider this flow:
- Account trigger processing updates the Contact that causes Contact trigger to execute
- Contact trigger processing updates a custom object that causes custom object trigger
- Custom object trigger processing
Depending on the code arrangement, multiple loggers may be involved. Each logger can generate multiple log requests to the engine, and each request can be for a single record or for multiple records. When the cache is updated with a message, the engine automatically checks the message count and message size thresholds and, if needed, pushes the data to the underlying Log object in the database. An automatic flush therefore only happens when a considerable number of log messages has been generated in the flow.
Also, since multiple loggers are involved in the flow, multiple flush requests can come from the loggers as well. It is not possible to flush the cache to the underlying Log objects on every flush request, as this can result in governor limit exceptions. The system automatically tracks the first logger instance and honors the flush request only from that first logger, thereby avoiding limit exceptions.
There can be processing-intensive flows that generate thousands of log messages. If each message resulted in a separate database record, it would drive up storage requirements and could also cause a records-processed-by-DML limit exception. Before writing messages to the database, the system consolidates them by record Id, up to 100 KB per Log record, thereby minimizing storage requirements and staying well within limits. So even when a developer introduces 100 log statements in the code, it is possible that all of them are consolidated into a single Log object record.