Export and Integration¶
Azure Monitor exports let you route selected tables to downstream archival, streaming, and analytics systems without giving every consumer direct workspace access. This runbook covers continuous export operations and API-driven integration checks.
```mermaid
flowchart TD
    Workspace[Log Analytics workspace] --> ExportRule[Data export rule]
    ExportRule --> Storage[Storage account]
    ExportRule --> EventHubs[Event Hubs]
    Workspace --> QueryAPI[Logs query API or CLI query]
    QueryAPI --> External[External tools and reports]
```

Prerequisites¶
- Azure CLI authenticated with `az login`.
- A Log Analytics workspace already collecting data.
- Destination Storage account or Event Hubs namespace already provisioned.
- Tables selected for export are supported by Azure Monitor Logs export.
- Permissions:
    - `Log Analytics Contributor` on the workspace.
    - Write permissions on the destination resource.
- Variables used below:

```shell
RG="rg-monitoring-prod"
WORKSPACE_NAME="law-ops-central"
# The Logs query commands below expect the workspace customer ID (a GUID),
# not the ARM resource ID, so resolve it from the workspace:
WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group $RG --workspace-name $WORKSPACE_NAME \
  --query customerId --output tsv)
STORAGE_ACCOUNT_ID="/subscriptions/<subscription-id>/resourceGroups/rg-storage/providers/Microsoft.Storage/storageAccounts/stmonitoringarchive"
EVENT_HUBS_ID="/subscriptions/<subscription-id>/resourceGroups/rg-integration/providers/Microsoft.EventHub/namespaces/eh-monitoring"
EXPORT_RULE_NAME="export-security-logs"
```
When to Use¶
- Security or compliance teams need a copy of selected tables.
- Another analytics platform consumes logs from Storage or Event Hubs.
- API-driven reports need validated workspace access without portal use.
- Export rules must be reviewed after table growth or schema changes.
- A downstream integration broke and you need to confirm whether Azure Monitor is still exporting data.
- Teams want to reduce direct workspace access by moving consumers to curated exports.
Procedure¶
Step 1: Inspect current export rules and target tables¶
Start with the workspace inventory so you do not create overlapping rules accidentally.
```shell
az monitor log-analytics workspace data-export list \
  --resource-group $RG \
  --workspace-name $WORKSPACE_NAME \
  --query "[].{name:name,destination:destination.resourceId,tables:tableNames,enabled:enable}" \
  --output table
```

Expected output:

```
Name                  Destination                                                                                                                Tables                          Enabled
--------------------  -------------------------------------------------------------------------------------------------------------------------  ------------------------------  ---------
export-security-logs  /subscriptions/<subscription-id>/resourceGroups/rg-storage/providers/Microsoft.Storage/storageAccounts/stmonitoringarchive  ['SecurityEvent', 'Heartbeat']  True
```
Then check which tables drive ingestion volume, so the export scope stays deliberate:

```shell
az monitor log-analytics query \
  --workspace $WORKSPACE_ID \
  --analytics-query "Usage | where TimeGenerated > ago(1d) | summarize TotalGB=sum(Quantity)/1024 by DataType | top 10 by TotalGB desc" \
  --output table
```
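When the inventory runs in automation, it helps to flag disabled rules programmatically. A minimal sketch, using a simulated `RULES_JSON` payload in place of the real `data-export list --output json` call (field names mirror the output shown above):

```shell
# Simulated payload; in practice this would come from:
#   az monitor log-analytics workspace data-export list \
#     --resource-group "$RG" --workspace-name "$WORKSPACE_NAME" --output json
RULES_JSON='[{"name": "export-security-logs", "enable": true},
{"name": "export-heartbeat-eventhubs", "enable": false}]'

# Count rules that are defined but not enabled.
disabled=$(printf '%s\n' "$RULES_JSON" | grep -c '"enable": false')
echo "disabled rules: $disabled"
```

A non-zero count here is worth investigating before adding new rules on top.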
Step 2: Create a continuous export rule to Storage¶
Use Storage export when downstream systems need durable files instead of near-real-time streaming.
```shell
az monitor log-analytics workspace data-export create \
  --resource-group $RG \
  --workspace-name $WORKSPACE_NAME \
  --name $EXPORT_RULE_NAME \
  --destination $STORAGE_ACCOUNT_ID \
  --tables SecurityEvent Heartbeat \
  --enable true \
  --output json
```
Expected output:

```json
{
  "destination": {
    "resourceId": "/subscriptions/<subscription-id>/resourceGroups/rg-storage/providers/Microsoft.Storage/storageAccounts/stmonitoringarchive"
  },
  "enable": true,
  "name": "export-security-logs",
  "tableNames": [
    "SecurityEvent",
    "Heartbeat"
  ]
}
```
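Re-running `data-export create` against an existing rule name can fail or silently overwrite the definition. A hedged sketch of an idempotent wrapper, with the `az show` lookup stubbed out so the sketch is self-contained:

```shell
# Stub for:
#   az monitor log-analytics workspace data-export show \
#     --resource-group "$RG" --workspace-name "$WORKSPACE_NAME" \
#     --name "$EXPORT_RULE_NAME" --query name --output tsv 2>/dev/null
rule_exists() {
  printf ''   # simulate "rule not found" (empty result)
}

if [ -z "$(rule_exists)" ]; then
  action="create"   # safe to run data-export create
else
  action="skip"     # rule already exists; review before overwriting
fi
echo "planned action: $action"
```

In a real pipeline, replace the stub with the actual `az show` call and branch on its output.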
Step 3: Create or switch an export rule for Event Hubs streaming¶
Use Event Hubs when a SIEM or external stream processor needs near-real-time delivery.
```shell
az monitor log-analytics workspace data-export create \
  --resource-group $RG \
  --workspace-name $WORKSPACE_NAME \
  --name "export-heartbeat-eventhubs" \
  --destination $EVENT_HUBS_ID \
  --tables Heartbeat \
  --enable true \
  --output json
```
Expected output:

```json
{
  "destination": {
    "resourceId": "/subscriptions/<subscription-id>/resourceGroups/rg-integration/providers/Microsoft.EventHub/namespaces/eh-monitoring"
  },
  "enable": true,
  "name": "export-heartbeat-eventhubs",
  "tableNames": [
    "Heartbeat"
  ]
}
```
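Scripts that manage both destination types can branch on the resource provider segment of the destination ID. A small self-contained sketch (the ID mirrors the variable defined in Prerequisites):

```shell
EVENT_HUBS_ID="/subscriptions/<subscription-id>/resourceGroups/rg-integration/providers/Microsoft.EventHub/namespaces/eh-monitoring"

# Classify the destination by its Azure resource provider.
case "$EVENT_HUBS_ID" in
  */Microsoft.Storage/*)  dest_kind="storage" ;;
  */Microsoft.EventHub/*) dest_kind="eventhubs" ;;
  *)                      dest_kind="unknown" ;;
esac
echo "destination kind: $dest_kind"
```

This keeps durable-archive logic (Storage) and streaming logic (Event Hubs) cleanly separated in automation.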
Step 4: Validate the export rule definitions¶
Read back each rule after creation so the destination and table set are confirmed from Azure rather than assumed from local commands.
```shell
az monitor log-analytics workspace data-export show \
  --resource-group $RG \
  --workspace-name $WORKSPACE_NAME \
  --name $EXPORT_RULE_NAME \
  --query "{name:name,destination:destination.resourceId,tables:tableNames,enabled:enable}" \
  --output json
```
Expected output:

```json
{
  "destination": "/subscriptions/<subscription-id>/resourceGroups/rg-storage/providers/Microsoft.Storage/storageAccounts/stmonitoringarchive",
  "enabled": true,
  "name": "export-security-logs",
  "tables": [
    "SecurityEvent",
    "Heartbeat"
  ]
}
```
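Validation can go one step further and diff the rule's table set against the intended contract. A sketch with the `az show` result simulated; names are sorted first so ordering differences do not register as drift:

```shell
EXPECTED="Heartbeat SecurityEvent"
# In practice: az ... data-export show ... --query "tableNames" --output tsv
ACTUAL="SecurityEvent Heartbeat"

# Normalize: one name per line, sorted, rejoined.
norm() { printf '%s\n' $1 | sort | tr '\n' ' '; }

if [ "$(norm "$EXPECTED")" = "$(norm "$ACTUAL")" ]; then
  result="tables match"
else
  result="tables drifted"
fi
echo "$result"
```

A "tables drifted" result means someone changed the rule outside the tracked definition.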
Step 5: Validate workspace integration queries for external consumers¶
Even when exports are enabled, many teams still rely on direct query integration for reports, automation, and operational checks.
```shell
az monitor log-analytics query \
  --workspace $WORKSPACE_ID \
  --analytics-query "Heartbeat | where TimeGenerated > ago(30m) | summarize LastSeen=max(TimeGenerated), Agents=dcount(Computer)" \
  --output table
```
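In automation, the heartbeat result can gate a deployment or raise an alert. A sketch assuming GNU `date`, with a simulated `LastSeen` value in place of the query output (live, it might come from the command above with `--output json` and a `LastSeen` projection):

```shell
LAST_SEEN="2024-05-01T10:00:00Z"   # simulated query result
NOW="2024-05-01T10:20:00Z"         # fixed for the sketch; use $(date -u +%FT%TZ) live

# Age of the last heartbeat in seconds (GNU date).
age=$(( $(date -u -d "$NOW" +%s) - $(date -u -d "$LAST_SEEN" +%s) ))

if [ "$age" -le 1800 ]; then
  echo "heartbeat fresh (${age}s old)"
else
  echo "heartbeat stale (${age}s old)"
fi
```

The 30-minute threshold mirrors the `ago(30m)` window in the query; tune both together.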
Verification¶
Verify all export rules on the workspace:
```shell
az monitor log-analytics workspace data-export list \
  --resource-group $RG \
  --workspace-name $WORKSPACE_NAME \
  --query "[].{name:name,destination:destination.resourceId,enabled:enable}" \
  --output table
```

Expected output:

```
Name                        Destination                                                                                                                Enabled
--------------------------  -------------------------------------------------------------------------------------------------------------------------  ---------
export-security-logs        /subscriptions/<subscription-id>/resourceGroups/rg-storage/providers/Microsoft.Storage/storageAccounts/stmonitoringarchive  True
export-heartbeat-eventhubs  /subscriptions/<subscription-id>/resourceGroups/rg-integration/providers/Microsoft.EventHub/namespaces/eh-monitoring        True
```
Confirm an exported table is still receiving new data:

```shell
az monitor log-analytics query \
  --workspace $WORKSPACE_ID \
  --analytics-query "SecurityEvent | where TimeGenerated > ago(1h) | count" \
  --output table
```
Re-read the Event Hubs rule to confirm its table set:

```shell
az monitor log-analytics workspace data-export show \
  --resource-group $RG \
  --workspace-name $WORKSPACE_NAME \
  --name "export-heartbeat-eventhubs" \
  --query "{name:name,destination:destination.resourceId,tables:tableNames}" \
  --output json
```

Expected output:

```json
{
  "destination": "/subscriptions/<subscription-id>/resourceGroups/rg-integration/providers/Microsoft.EventHub/namespaces/eh-monitoring",
  "name": "export-heartbeat-eventhubs",
  "tables": [
    "Heartbeat"
  ]
}
```
Rollback / Troubleshooting¶
Disable or delete an export rule that is sending the wrong data:
```shell
az monitor log-analytics workspace data-export delete \
  --resource-group $RG \
  --workspace-name $WORKSPACE_NAME \
  --name $EXPORT_RULE_NAME \
  --yes
```
Common problems:

- Export rule created but downstream system sees nothing: validate permissions and connectivity on the destination service.
- Rule creation fails: confirm the selected table supports export and the destination resource ID is valid.
- Exported volume is too high: reduce the table list or use DCR filtering before ingestion.
- Query integration fails: check workspace RBAC and whether the external identity has query rights.
- Storage destination receives data too slowly for the use case: move that consumer to Event Hubs or direct query integration instead of archive-style export.
- Event-driven parser breaks after schema changes: restrict the table set and coordinate schema validation with the consumer team before widening coverage.
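Deleting a rule discards its definition; when the rule may come back, disabling is often the safer rollback. The `data-export update --enable false` form is an assumption here, so verify it with `az monitor log-analytics workspace data-export update --help` on your CLI version. A sketch of the decision:

```shell
KEEP_RULE_DEFINITION=true   # set to false only when the rule is permanently wrong

if [ "$KEEP_RULE_DEFINITION" = "true" ]; then
  # Assumed subcommand; confirm with --help before relying on it.
  cmd="az monitor log-analytics workspace data-export update --enable false"
else
  cmd="az monitor log-analytics workspace data-export delete --yes"
fi
echo "would run: $cmd"
```

A disabled rule keeps its destination and table set, so re-enabling restores the exact prior behavior.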
Automation¶
Export rules should be tracked like any other integration contract.
```shell
az monitor log-analytics workspace data-export list \
  --resource-group $RG \
  --workspace-name $WORKSPACE_NAME \
  --query "[].{name:name,resourceGroup:resourceGroup,destination:destination.resourceId}" \
  --output json
```
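The list above can feed a drift check that compares live rule names against a list tracked in source control. A hedged, self-contained sketch with both sides simulated; it counts names present on only one side:

```shell
# Tracked contract (would normally be a file in source control).
expected='export-heartbeat-eventhubs
export-security-logs'

# Live state (would come from the list command above with
#   --query "[].name" --output tsv).
live='export-security-logs
export-heartbeat-eventhubs'

# Symmetric difference: lines appearing exactly once across both lists.
sym_diff=$(printf '%s\n%s\n' "$expected" "$live" | sort | uniq -u | wc -l)
echo "rules out of sync: $sym_diff"
```

A non-zero count means a rule was added, renamed, or deleted outside the tracked definition and the contract file or the workspace needs updating.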