
Audit  

Logging

For logging, the slf4j facade is used together with the logback framework. Two appenders are configured:

  • a console appender
  • an asynchronous appender that persists the log records in the database - it shares the database connection configured for the application.

Logback allows a wide range of appenders to be configured: http://logback.qos.ch/manual/appenders.html.
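
For illustration, a class obtains its logger through the slf4j facade; logback, as the bound implementation, then routes the records to both configured appenders (console and database). A minimal sketch - the class and method names are illustrative only, not taken from the application:

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class AuditedService { // illustrative name

      // Logger obtained through the slf4j facade; logback sends the records
      // to both configured appenders - the console and the database.
      private static final Logger LOG = LoggerFactory.getLogger(AuditedService.class);

      public void process(String input) {
          LOG.debug("Processing input [{}]", input);
          LOG.info("Processing of input [{}] finished", input);
      }
  }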

Conventions for logging in the application:

  • Choosing the right level of logging:
    • error: application errors (NPE, database unavailable, …) - in the future these will be extracted from the log during monitoring and passed to the helpdesk
    • warning: wrong input from the user and important warnings (e.g. “Application running in development mode” or “Administration console is not secured with a password”) - in the future these will also be extracted from the log during monitoring, but they will not go to the helpdesk
    • info: informational messages - the lowest level logged in production. "Important business process has finished. In an ideal world, the administrator or an advanced user should be able to understand INFO messages and find out quickly what the application is doing. For example, if an application is all about booking airplane tickets, there should be only one INFO statement per each ticket saying “[Who] booked the ticket from [Where] to [Where]”. Another definition of INFO message: each action that changes the state of the application significantly (database update, external system request)." (source)
    • debug: information important for developers - following the flow of the application
    • trace: logging at the algorithm level - only for the finest-grained debugging of the application
  • Use parameterized messages: log.debug("Found [{}] records matching filter [{}]", records, filter);. Enclose the parameters in square brackets to make whitespace in the values visible in the log. With parameterized messages there is no need to call isDebugEnabled - both the string concatenation and the toString calls on the parameters take place only after the level in question has been checked (see the sketch after this list).
  • Be careful about NPEs in debug records and don't write whole collections to the log (log only their ids or the size of the collection), …
  • A log record should contain both a description and the data - it should be clear what is happening and with which values.
  • "Log, or wrap and throw back (whichever is preferable), never both, otherwise your logs will be confusing."
  • Don't repeat the class name in the log message - the logger already records it.
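
A short sketch of these conventions in practice; the class, methods and values below are illustrative only and are not part of the application:

  import java.util.Collections;
  import java.util.List;

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class RecordService { // illustrative name

      private static final Logger LOG = LoggerFactory.getLogger(RecordService.class);

      public List<String> findRecords(String filter) {
          List<String> records = Collections.emptyList(); // placeholder for a real lookup
          // Parameterized message: no isDebugEnabled() check is needed, because the
          // concatenation and the toString calls happen only when DEBUG is enabled.
          // Square brackets make whitespace in the values visible; only the size of
          // the collection is logged, never the whole collection.
          LOG.debug("Found [{}] records matching filter [{}]", records.size(), filter);
          return records;
      }

      public void saveRecord(String record) {
          try {
              if (record == null) { // placeholder for real persistence
                  throw new IllegalArgumentException("record is null");
              }
          } catch (IllegalArgumentException ex) {
              // Wrap and throw back - do not log here as well, otherwise the same error
              // would appear in the log twice and the logs would become confusing.
              throw new IllegalStateException("Saving the record failed", ex);
          }
      }
  }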
In the future, we are considering the use of logstash + Kibana. We would also like to have a transaction log of the user's movement through the application, etc.

An agenda over the logging_event and logging_event_exception tables was created for better control of the logs. Both the frontend and backend agendas have read-only permission for these tables. In the FE agenda you can filter by almost all attributes of the IdmLoggingEvent entity. In the FE table some attributes are hidden, but they can be unhidden in the future to allow better filtering.

On the FE detail of a logging event it is possible to show a detail with information from IdmLoggingEventException. The detail of the error log is shown in a separate table, where each line of the error is shown as an independent row. Classes from eu.bcvsolutions.idm are highlighted with the warning class.