Why can't I configure Azure diagnostics to use Azure Table Storage via the new Azure Portal?
I am developing a web API that will be hosted in Azure. I would like to use Azure diagnostics to log errors to Azure Table Storage. In the Classic portal, I can configure the logs to go to Azure Table Storage.

[Screenshot: Classic Portal Diagnostic Settings]

However, in the new Azure portal, the only storage option I have is Blob storage:

[Screenshot: New Azure Portal Settings]

It seems that if I were to use a web role, I could configure the data store for diagnostics, but as I am developing a web API, I don't want to create a separate web role for every API just so that I can log to an Azure table.

Is there a way to programmatically configure Azure diagnostics to propagate log messages to a specific data store without using a web role? Is there any reason why the new Azure portal only has diagnostic settings for blob storage and not table storage?

I can currently work around the problem by using the classic portal, but I am worried that table storage for diagnostics will eventually be deprecated, since it hasn't been included in the diagnostic settings for the new portal.

Westbound answered 15/3, 2016 at 9:4 Comment(1)
You may want to check out custom services to do that, e.g. www.trypour.com – Galba

(I'll do some necromancy on this question, as it was the most relevant StackOverflow question I found while searching for a solution, and it is no longer possible to do this through the classic portal.)

Disclaimer: Microsoft has seemingly removed support for logging to Table Storage in the Azure Portal, so I don't know whether this is deprecated or will soon be deprecated, but I have a solution that works now (31.03.2017):

There are specific settings that determine logging. I first found information on this in an issue on the Azure PowerShell GitHub: https://github.com/Azure/azure-powershell/issues/317

The specific settings we need are (from GitHub):

AzureTableTraceEnabled = True, & AppSettings has: DIAGNOSTICS_AZURETABLESASURL

Using the excellent Resource Explorer (https://resources.azure.com), navigate (in the GUI) to:

/subscriptions/{subscriptionName}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{siteName}/config/logs

I was able to find the setting AzureTableTraceEnabled in the properties.

The property AzureTableTraceEnabled has Level and sasUrl. In my experience, updating these two values (Level="Verbose", sasUrl="someSASurl") will work, as updating the sasUrl also sets DIAGNOSTICS_AZURETABLESASURL in the app settings.
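
For reference, the logs config at that Resource Explorer path looks roughly like this. This is a minimal sketch of the 2015-08-01 shape; the sasUrl value is a placeholder, and sibling sections (file system logging, blob storage logging, HTTP logs) are omitted:

    {
        "properties": {
            "applicationLogs": {
                "azureTableStorage": {
                    "level": "Verbose",
                    "sasUrl": "https://<account>.table.core.windows.net/<table>?sv=..."
                }
            }
        }
    }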

How do we change this? I did it in PowerShell. I first tried the cmdlet Get-AzureRmWebApp, but could not find what I wanted. The old Get-AzureWebSite does display AzureTableTraceEnabled, but I could not get it to update (perhaps someone with more PowerShell/Azure experience can offer input on how to use the ASM cmdlets to do this).
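
For simply inspecting the current value, something like this should work. A minimal sketch, assuming $siteName holds your site's name and relying on Get-AzureWebSite exposing AzureTableTraceEnabled as noted above:

# Read-only: display the table-logging flag via the classic (ASM) cmdlet
Get-AzureWebSite -Name $siteName | Select-Object AzureTableTraceEnabled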

The solution that worked for me was setting the property through the Set-AzureRmResource command, with the following settings:

Set-AzureRmResource -PropertyObject $PropertiesObject -ResourceGroupName "$ResourceGroupName" -ResourceType Microsoft.Web/sites/config -ResourceName "$ResourceName/logs" -ApiVersion 2015-08-01 -Force

Where the $PropertiesObject looks like this:

$PropertiesObject = @{applicationLogs=@{azureTableStorage=@{level="$Level";sasUrl="$SASUrl"}}}

Level can be one of "Error", "Warning", "Information", "Verbose" or "Off".
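
To confirm the update took effect, you can read the same resource back. A minimal sketch, using the same $ResourceGroupName and $ResourceName variables as in the command above:

# Read the logs config back and show the table-storage logging section
$logsConfig = Get-AzureRmResource -ResourceGroupName "$ResourceGroupName" -ResourceType Microsoft.Web/sites/config -ResourceName "$ResourceName/logs" -ApiVersion 2015-08-01
$logsConfig.Properties.applicationLogs.azureTableStorage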

It is also possible to do this in the ARM template (the important bits are in properties on the logs resource in the site; note that azureTableStorage sits under applicationLogs, matching the PowerShell object above):

        {
            "apiVersion": "2015-08-01",
            "name": "[variables('webSiteName')]",
            "type": "Microsoft.Web/sites",
            "location": "[resourceGroup().location]",
            "tags": {
                "displayName": "WebApp"
            },
            "dependsOn": [
                "[resourceId('Microsoft.Web/serverfarms/', variables('hostingPlanName'))]"
            ],
            "properties": {
                "name": "[variables('webSiteName')]",
                "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
            },
            "resources": [
            {
                "name": "logs",
                "type": "config",
                "apiVersion": "2015-08-01",
                "dependsOn": [
                    "[resourceId('Microsoft.Web/sites/', variables('webSiteName'))]"
                ],
                "tags": {
                    "displayName": "LogSettings"
                },
                "properties": {
                    "azureTableStorage": {
                        "level": "Verbose",
                        "sasUrl": "SASURL"
                    }
                }
            }
        }

The issue with doing it in ARM is that I've yet to find a way to generate the correct SAS. It is possible to fetch Azure Storage account keys in a template (from: ARM - How can I get the access key from a storage account to use in AppSettings later in the template?):

"properties": {
    "type": "AzureStorage",
        "typeProperties": {
            "connectionString": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value)]"
    }
}

There are also some clever ways of generating SAS tokens using linked templates (from: http://wp.sjkp.dk/service-bus-arm-templates/).

The current solution I went for (due to time constraints) was a custom PowerShell script that looks something like this:

...
# Generate a SAS token granting access to the log table (New-AzureStorageTableSASToken is from the Azure.Storage module)
$SASUrl = New-AzureStorageTableSASToken -Name $LogTable -Permission $Permissions -Context $StorageContext -StartTime $StartTime -ExpiryTime $ExpiryTime -FullUri
# Build the logs config payload and push it to the web app's logs config resource
$PropertiesObject = @{applicationLogs=@{azureTableStorage=@{level="$Level";sasUrl="$SASUrl"}}}
Set-AzureRmResource -PropertyObject $PropertiesObject -ResourceGroupName "$ResourceGroupName" -ResourceType Microsoft.Web/sites/config -ResourceName "$ResourceName/logs" -ApiVersion 2015-08-01 -Force
...

This is quite an ugly solution, as it is something extra you need to maintain in addition to the ARM template - but it is easy and fast, and it works while we wait for updates to the ARM templates (or for someone cleverer than I to come and enlighten us).

Radian answered 31/3, 2017 at 9:58 Comment(0)

We don't typically recommend using Tables for log data: it can result in the append-only pattern, which at scale doesn't work effectively for Table Storage. See the log-data anti-pattern in the Table Design Guide. Often we see that even though people think of log data as structured, the way they typically query it makes Blobs more efficient.

Excerpt from the Design Guide:

Log data anti-pattern

Typically, you should use the Blob service instead of the Table service to store log data.

Context and problem

A common use case for log data is to retrieve a selection of log entries for a specific date/time range: for example, you want to find all the error and critical messages that your application logged between 15:04 and 15:06 on a specific date. You do not want to use the date and time of the log message to determine the partition you save log entities to: that results in a hot partition because at any given time, all the log entities will share the same PartitionKey value (see the section Prepend/append anti-pattern). ...

Solution

The previous section highlighted the problem of trying to use the Table service to store log entries and suggested two unsatisfactory designs. One solution led to a hot partition with the risk of poor performance writing log messages; the other solution resulted in poor query performance because of the requirement to scan every partition in the table to retrieve log messages for a specific time span. Blob storage offers a better solution for this type of scenario, and this is how Azure Storage Analytics stores the log data it collects.

This section outlines how Storage Analytics stores log data in blob storage as an illustration of this approach to storing data that you typically query by range.

Storage Analytics stores log messages in a delimited format in multiple blobs. The delimited format makes it easy for a client application to parse the data in the log message.

Storage Analytics uses a naming convention for blobs that enables you to locate the blob (or blobs) that contain the log messages for which you are searching. For example, a blob named "queue/2014/07/31/1800/000001.log" contains log messages that relate to the queue service for the hour starting at 18:00 on 31 July 2014. The "000001" indicates that this is the first log file for this period. Storage Analytics also records the timestamps of the first and last log messages stored in the file as part of the blob's metadata. The API for blob storage enables you to locate blobs in a container based on a name prefix: to locate all the blobs that contain queue log data for the hour starting at 18:00, you can use the prefix "queue/2014/07/31/1800."
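
As an illustration of that prefix lookup, here is a minimal sketch using the classic Azure.Storage cmdlets ($AccountName and $AccountKey are placeholders; Storage Analytics writes its logs to the special $logs container):

# List the blobs holding queue log data for the hour starting at 18:00 on 31 July 2014
$ctx = New-AzureStorageContext -StorageAccountName $AccountName -StorageAccountKey $AccountKey
Get-AzureStorageBlob -Container '$logs' -Prefix 'queue/2014/07/31/1800' -Context $ctx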

Storage Analytics buffers log messages internally and then periodically updates the appropriate blob or creates a new one with the latest batch of log entries. This reduces the number of writes it must perform to the blob service.

If you are implementing a similar solution in your own application, you must consider how to manage the trade-off between reliability (writing every log entry to blob storage as it happens) and cost and scalability (buffering updates in your application and writing them to blob storage in batches).

Issues and considerations

Consider the following points when deciding how to store log data:

  • If you create a table design that avoids potential hot partitions, you may find that you cannot access your log data efficiently.

  • To process log data, a client often needs to load many records.

  • Although log data is often structured, blob storage may be a better solution.

Theadora answered 15/3, 2016 at 18:42 Comment(6)
Thanks for the reply - are there any blob storage viewers you would recommend that can be queried? At the moment it seems that I have to download a CSV file to view the logs when I try to access the data from Cloud Explorer. – Westbound
I liked the way I could query the Log table. – Stoicism
Is there a NuGet library to help 'query' the application logs that are stored in Azure blob storage in this file structure? – Ingrained
I created a utility class to query the logs stored in blob storage - in case anyone else is interested: gist.github.com/CameronWills/0a7e4a8fc8ee9ede65a2cbaac9c0693c – Ingrained
Is there a way to get this option back? Our full logging and analyzing logic is built on this. – Forebode
While this is good info, it doesn't answer the OP's question. – Peltier
