Python logging handlers to send logs to Microsoft Azure Storage

michiya · Last update: Jul 16, 2022

azure-storage-logging


azure-storage-logging provides functionality to send output from the standard Python logging APIs to Microsoft Azure Storage.

Dependencies

  • azure-storage 0.33 or newer

Installation

Install the package via pip:

pip install azure-storage-logging

Usage

The module azure_storage_logging.handlers in the package contains the following logging handler classes. Each of them uses a different type of Microsoft Azure Storage to send its output to. They are all subclasses of the standard Python logging handler classes, so you can make use of them in the standard ways of Python logging configuration.

In addition to the standard formats for logging, the special format %(hostname)s is also available in your message formatter for the handlers. The format is provided to make it easy to identify the source of log messages when they come from many computers and go to the same storage.
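For instance, here is a minimal sketch of attaching a formatter that uses %(hostname)s; the account credentials are placeholders:

    import logging
    from azure_storage_logging.handlers import QueueStorageHandler

    # %(hostname)s is resolved by the handler and can appear in the
    # formatter alongside the standard logging formats
    handler = QueueStorageHandler(account_name='mystorageaccountname',
                                  account_key='mystorageaccountkey')
    handler.setFormatter(
        logging.Formatter('%(hostname)s %(levelname)s %(message)s'))
    logging.getLogger('example').addHandler(handler)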

TableStorageHandler

The TableStorageHandler class is a subclass of the logging.Handler class. It sends log messages to Azure table storage and stores them as entities in the specified table.

The handler puts a formatted log message from applications in the message property of a table entity, along with some system-defined properties (PartitionKey, RowKey, and Timestamp), like this:

PartitionKey  RowKey     Timestamp       message
------------  ---------  --------------  -----------
XXXXX         XXXXXXXXX  YYYY-MM-DD ...  log message
XXXXX         XXXXXXXXX  YYYY-MM-DD ...  log message
XXXXX         XXXXXXXXX  YYYY-MM-DD ...  log message
  • class azure_storage_logging.handlers.TableStorageHandler(account_name=None, account_key=None, protocol='https', table='logs', batch_size=0, extra_properties=None, partition_key_formatter=None, row_key_formatter=None, is_emulated=False)

    Returns a new instance of the TableStorageHandler class. The instance is initialized with the name and the key of your Azure Storage account and some optional parameters.

    The table specifies the name of the table that stores log messages. A new table will be created if it doesn't exist. The table name must conform to the naming convention for Azure Storage tables; see the naming convention for tables for more details.

    The protocol specifies the protocol used to transfer data between Azure Storage and your application; http and https are supported.

    You can specify the batch_size as an integer if you want to use batch transactions when creating new log entities. If the batch_size is greater than 1, all new log entities are transferred to the table at once when the number of new log messages reaches the batch_size. Otherwise, a new log entity is transferred to the table every time a log record is emitted. The batch_size must be 100 or less (the maximum number of entities in a batch transaction for an Azure Storage table).
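    For example, a minimal sketch of enabling batch transactions; the batch size and credentials below are illustrative:

    import logging
    from azure_storage_logging.handlers import TableStorageHandler

    # entities are sent in batches of 50 instead of one per log record
    handler = TableStorageHandler(account_name='mystorageaccountname',
                                  account_key='mystorageaccountkey',
                                  batch_size=50)
    logging.getLogger('example').addHandler(handler)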

    The extra_properties accepts a sequence of the formats for logging. The handler-specific %(hostname)s is also acceptable. The handler assigns an entity property for every format specified in extra_properties. Here is an example of using extra properties:

    import logging
    from azure_storage_logging.handlers import TableStorageHandler

    # configure the handler and add it to the logger
    logger = logging.getLogger('example')
    handler = TableStorageHandler(account_name='mystorageaccountname',
                                  account_key='mystorageaccountkey',
                                  extra_properties=('%(hostname)s',
                                                    '%(levelname)s'))
    logger.addHandler(handler)

    # output log messages
    logger.info('info message')
    logger.warning('warning message')
    logger.error('error message')

    And it will create log entities that have the extra properties in addition to the regular message property in the table, like this:

    PartitionKey  RowKey     Timestamp       hostname  levelname  message
    ------------  ---------  --------------  --------  ---------  ---------------
    XXXXX         XXXXXXXXX  YYYY-MM-DD ...  myhost    INFO       info message
    XXXXX         XXXXXXXXX  YYYY-MM-DD ...  myhost    WARNING    warning message
    XXXXX         XXXXXXXXX  YYYY-MM-DD ...  myhost    ERROR      error message

    You can specify an instance of your own logging.Formatter as the partition_key_formatter or the row_key_formatter if you want to implement your own keys for the table (see the sketch after the setter methods below). The default formatters are used for partition keys and row keys if no custom formatter is given to the handler. The default values for partition keys are provided by the format %(asctime)s and the date format %Y%m%d%H%M (which provides a unique value per minute). The default values for row keys are provided by the format %(asctime)s%(msecs)03d-%(hostname)s-%(process)d-%(rowno)02d and the date format %Y%m%d%H%M%S.

    Note that the format %(rowno)d is a handler-specific one only available for row keys. It is formatted to a sequential and unique number in a batch that starts from 0. The format is introduced to avoid collisions of row keys generated in a batch, and it is always formatted to 0 if you don't use batch transactions for logging to the table.

  • setPartitionKeyFormatter(fmt)

    Sets the handler's formatter for partition keys to fmt.

  • setRowKeyFormatter(fmt)

    Sets the handler's formatter for row keys to fmt.
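    As an illustration, here is a sketch of installing custom key formatters with these setters; the format strings below are hypothetical examples, not the defaults:

    import logging
    from azure_storage_logging.handlers import TableStorageHandler

    handler = TableStorageHandler(account_name='mystorageaccountname',
                                  account_key='mystorageaccountkey')
    # partition per day instead of the default per-minute partitioning
    handler.setPartitionKeyFormatter(
        logging.Formatter('%(asctime)s', '%Y%m%d'))
    # row key without the milliseconds part of the default format
    handler.setRowKeyFormatter(
        logging.Formatter('%(asctime)s-%(hostname)s-%(process)d-%(rowno)02d',
                          '%Y%m%d%H%M%S'))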

QueueStorageHandler

The QueueStorageHandler class is a subclass of the logging.Handler class. It pushes log messages to the specified Azure storage queue.

You can pop log messages from the queue in other applications using the Azure Storage client libraries.

  • class azure_storage_logging.handlers.QueueStorageHandler(account_name=None, account_key=None, protocol='https', queue='logs', message_ttl=None, visibility_timeout=None, base64_encoding=False, is_emulated=False)

    Returns a new instance of the QueueStorageHandler class. The instance is initialized with the name and the key of your Azure Storage account and some optional parameters.

    The queue specifies the name of the queue to which log messages are added. A new queue will be created if it doesn't exist. The queue name must conform to the naming convention for Azure Storage queues; see the naming convention for queues for more details.

    The protocol specifies the protocol used to transfer data between Azure Storage and your application; http and https are supported.

    The message_ttl specifies the time-to-live interval for the message, in seconds. The maximum time-to-live allowed is 7 days. If this parameter is omitted, the default time-to-live is 7 days.

    The visibility_timeout specifies the visibility timeout value, in seconds, relative to server time. If not specified, the default value is 0 (which makes the message visible immediately). The value must be 0 or larger and cannot be larger than 7 days. The visibility_timeout must also be smaller than the message_ttl.

    The base64_encoding specifies whether to encode log text in Base64. If you set this to True, Unicode log text in a message is encoded in UTF-8 first and then in Base64. Some Azure Storage client libraries and tools assume that text messages in an Azure Storage queue are encoded in Base64, so you can set this to True to receive log messages correctly with those libraries or tools.
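    Putting these parameters together, here is a minimal sketch of pushing critical messages to a queue; the credentials are placeholders and the one-day TTL is illustrative:

    import logging
    from azure_storage_logging.handlers import QueueStorageHandler

    # keep messages for one day (86400 s) and Base64-encode them for
    # compatibility with tools that expect Base64 queue messages
    handler = QueueStorageHandler(account_name='mystorageaccountname',
                                  account_key='mystorageaccountkey',
                                  queue='logs',
                                  message_ttl=86400,
                                  base64_encoding=True)
    handler.setLevel(logging.CRITICAL)
    logger = logging.getLogger('example')
    logger.addHandler(handler)
    logger.critical('critical message')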

BlobStorageRotatingFileHandler

The BlobStorageRotatingFileHandler class is a subclass of the logging.handlers.RotatingFileHandler class. It performs log file rotation and stores the outdated file in an Azure blob storage container when the current file reaches a certain size.

  • class azure_storage_logging.handlers.BlobStorageRotatingFileHandler(filename, mode='a', maxBytes=0, encoding=None, delay=False, account_name=None, account_key=None, protocol='https', container='logs', zip_compression=False, max_connections=1, max_retries=5, retry_wait=1.0, is_emulated=False)

    Returns a new instance of the BlobStorageRotatingFileHandler class. The instance is initialized with the name and the key of your Azure Storage account and some optional parameters.

    See RotatingFileHandler for its basic usage. The handler keeps the latest log file in the local file system. Meanwhile, the handler sends each outdated log file to the blob container immediately and then removes it from the local file system.

    The container specifies the name of the blob container that stores outdated log files. A new container will be created if it doesn't exist. The container name must conform to the naming convention for Azure Storage blob containers; see the naming convention for blob containers for more details.

    The protocol specifies the protocol used to transfer data between Azure Storage and your application; http and https are supported.

    The zip_compression specifies whether to compress every outdated log file in zip format before putting it in the container.

    The max_connections specifies the maximum number of parallel connections to use when the blob size exceeds 64 MB. Set it to 1 to upload the blob chunks sequentially. Set it to 2 or more to upload the blob chunks in parallel; this uses more system resources but uploads faster.

    The max_retries specifies the number of times to retry uploading a blob chunk if an error occurs.

    The retry_wait specifies the sleep time in seconds between retries.

    Only the two formatters %(hostname)s and %(process)d are acceptable as a part of the filename or the container. You can save log files in a blob container dedicated to each host or process by naming containers with these formatters, and you can also store log files from multiple hosts or processes in one blob container by naming the log files with them.

    Be careful when using the %(process)d formatter in the filename: a different PID is assigned to your application every time it starts, and the PID becomes a part of the name of the log files the handler searches for rotation. You should use this formatter in the filename only when the log file is generated by a long-running application process.

    Note that the handler class doesn't take the backupCount parameter, unlike RotatingFileHandler. The number of outdated log files that the handler stores in the container is unlimited, and the files are saved with an extension that indicates the time in UTC when they were replaced with a new one. If you want to keep the number of outdated log files in the container below a certain limit, you will need to do that with the Azure management portal or other tools.
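    For example, here is a minimal sketch of size-based rotation with zip compression; the size limit and credentials are illustrative:

    import logging
    from azure_storage_logging.handlers import BlobStorageRotatingFileHandler

    # rotate at about 1 MB; each outdated file is zipped, uploaded to
    # the container, and then removed from the local file system
    handler = BlobStorageRotatingFileHandler('example.log',
                                             maxBytes=1024 * 1024,
                                             account_name='mystorageaccountname',
                                             account_key='mystorageaccountkey',
                                             container='logs',
                                             zip_compression=True)
    logger = logging.getLogger('example')
    logger.addHandler(handler)
    logger.error('error message')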

BlobStorageTimedRotatingFileHandler

The BlobStorageTimedRotatingFileHandler class is a subclass of the logging.handlers.TimedRotatingFileHandler class. It performs log file rotation and stores the outdated file in an Azure blob storage container at certain timed intervals.

  • class azure_storage_logging.handlers.BlobStorageTimedRotatingFileHandler(filename, when='h', interval=1, encoding=None, delay=False, utc=False, account_name=None, account_key=None, protocol='https', container='logs', zip_compression=False, max_connections=1, max_retries=5, retry_wait=1.0, is_emulated=False)

    Returns a new instance of the BlobStorageTimedRotatingFileHandler class. The instance is initialized with the name and the key of your Azure Storage account and some optional parameters.

    See TimedRotatingFileHandler for its basic usage. The handler keeps the latest log file in the local file system. Meanwhile, the handler sends each outdated log file to the blob container immediately and then removes it from the local file system.

    The container specifies the name of the blob container that stores outdated log files. A new container will be created if it doesn't exist. The container name must conform to the naming convention for Azure Storage blob containers; see the naming convention for blob containers for more details.

    The protocol specifies the protocol used to transfer data between Azure Storage and your application; http and https are supported.

    The zip_compression specifies whether to compress every outdated log file in zip format before putting it in the container.

    The max_connections specifies the maximum number of parallel connections to use when the blob size exceeds 64 MB. Set it to 1 to upload the blob chunks sequentially. Set it to 2 or more to upload the blob chunks in parallel; this uses more system resources but uploads faster.

    The max_retries specifies the number of times to retry uploading a blob chunk if an error occurs.

    The retry_wait specifies the sleep time in seconds between retries.

    Only the two formatters %(hostname)s and %(process)d are acceptable as a part of the filename or the container. You can save log files in a blob container dedicated to each host or process by naming containers with these formatters, and you can also store log files from multiple hosts or processes in one blob container by naming the log files with them.

    Be careful when using the %(process)d formatter in the filename: a different PID is assigned to your application every time it starts, and the PID becomes a part of the name of the log files the handler searches for rotation. You should use this formatter in the filename only when the log file is generated by a long-running application process.

    Note that the handler class doesn't take the backupCount parameter, unlike TimedRotatingFileHandler. The number of outdated log files that the handler stores in the container is unlimited. If you want to keep the number of outdated log files in the container below a certain limit, you will need to do that with the Azure management portal or other tools.
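    For example, here is a minimal sketch of daily rotation into a per-host container; the credentials are placeholders:

    import logging
    from azure_storage_logging.handlers import BlobStorageTimedRotatingFileHandler

    # roll over once a day; outdated files go to a container named
    # after the host, e.g. 'logs-myhost'
    handler = BlobStorageTimedRotatingFileHandler('example.log',
                                                  when='D',
                                                  interval=1,
                                                  account_name='mystorageaccountname',
                                                  account_key='mystorageaccountkey',
                                                  container='logs-%(hostname)s')
    logging.getLogger('example').addHandler(handler)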

Example

Here is an example of a configuration and logging code that uses three different types of storage from a single logger:

LOGGING = {
    'version': 1,
    'formatters': {
        'simple': {
            'format': '%(asctime)s %(message)s',
        },
        'verbose': {
            'format': '%(asctime)s %(levelname)s %(hostname)s %(process)d %(message)s',
        },
        # this is the same as the default, so you can skip configuring it
        'partition_key': {
            'format': '%(asctime)s',
            'datefmt': '%Y%m%d%H%M',
        },
        # this is the same as the default, so you can skip configuring it
        'row_key': {
            'format': '%(asctime)s%(msecs)03d-%(hostname)s-%(process)d-%(rowno)02d',
            'datefmt': '%Y%m%d%H%M%S',
        },
    },
    'handlers': {
        'file': {
            'account_name': 'mystorageaccountname',
            'account_key': 'mystorageaccountkey',
            'protocol': 'https',
            'level': 'DEBUG',
            'class': 'azure_storage_logging.handlers.BlobStorageTimedRotatingFileHandler',
            'formatter': 'verbose',
            'filename': 'example.log',
            'when': 'D',
            'interval': 1,
            'container': 'logs-%(hostname)s',
            'zip_compression': False,
        },
        'queue': {
            'account_name': 'mystorageaccountname',
            'account_key': 'mystorageaccountkey',
            'protocol': 'https',
            'queue': 'logs',
            'level': 'CRITICAL',
            'class': 'azure_storage_logging.handlers.QueueStorageHandler',
            'formatter': 'verbose',
        },
        'table': {
            'account_name': 'mystorageaccountname',
            'account_key': 'mystorageaccountkey',
            'protocol': 'https',
            'table': 'logs',
            'level': 'INFO',
            'class': 'azure_storage_logging.handlers.TableStorageHandler',
            'formatter': 'simple',
            'batch_size': 20,
            'extra_properties': ['%(hostname)s', '%(levelname)s'],
            'partition_key_formatter': 'cfg://formatters.partition_key',
            'row_key_formatter': 'cfg://formatters.row_key',
        },
    },
    'loggers': {
        'example': {
            'handlers': ['file', 'queue', 'table'],
            'level': 'DEBUG',
        },
    }
}

import logging
from logging.config import dictConfig

dictConfig(LOGGING)

logger = logging.getLogger('example')
logger.debug('debug message')
logger.info('info message')
logger.warning('warning message')
logger.error('error message')
logger.critical('critical message')

Notice

  • Set is_emulated to True at initialization of the logging handlers if you want to use this package with the Azure storage emulator.
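    For instance, a minimal sketch against the local emulator, assuming the handler falls back to the emulator's well-known development account when no credentials are given:

    from azure_storage_logging.handlers import TableStorageHandler

    # use the local Azure storage emulator instead of a real account
    handler = TableStorageHandler(is_emulated=True)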

License

Apache License 2.0
