Export Azure Cost Data To Storage Accounts For Reporting: A Practical Azure FinOps Guide

Why a Storage Account Is Your Cost Data Lake

Azure Cost Management generates detailed billing data every day, but accessing it live through the portal has limits: 13 months of history, no custom querying, and no ability to join cost data with operational telemetry. The moment you export that data to an Azure Storage account, those limitations disappear. A storage account becomes a persistent, queryable archive of every dollar spent across every resource — stretching back years, accessible by any tool, and owned entirely by your organization.

Exporting cost data to storage is the bridge between interactive portal analysis and production-grade cost analytics. The exported files feed Power BI dashboards, Azure Data Explorer clusters, Microsoft Fabric workspaces, and custom applications. This guide focuses specifically on the storage side of that equation: choosing the right storage configuration, managing the exported files, controlling costs on the storage itself, and connecting downstream consumers.

Storage Account Configuration for Cost Exports

Not every storage account is suitable for receiving cost exports. The account must meet specific requirements, and the configuration choices you make affect both cost and usability.

Account Type and Tier

Use a StorageV2 (general-purpose v2) account with Standard performance and LRS (locally redundant storage) replication. Cost export data is easily regenerated from Azure Cost Management, so paying for GRS or ZRS replication on the storage account is an unnecessary expense. If the storage account serves additional purposes beyond cost exports, choose the redundancy level that those other workloads require.

For the blob access tier, Hot is the right default for active analysis. If your exports are retained but rarely accessed after a few months, configure a lifecycle management policy that moves older files to the Cool tier automatically.

Container Organization

Create dedicated containers for cost exports rather than mixing them with other data:

# Recommended container structure
cost-exports/
  ├── actual-cost/       # ActualCost exports
  ├── amortized-cost/    # AmortizedCost exports  
  ├── focus/            # FOCUS format (combines both)
  └── backfill/         # Historical backfill data

Alternatively, if using FOCUS format (which combines actual and amortized costs), a single container with descriptive root folder paths is sufficient. Each export creates its own subfolder structure automatically (ExportName/YYYYMMDD-YYYYMMDD/RunID/), so there is no risk of data collision between exports targeting the same container.
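As a quick illustration of that folder convention, a small Python sketch can pull the metadata out of an exported blob path. The path and function below are hypothetical examples, not part of any Azure SDK:

```python
# Parse the components of an exported blob path:
#   {RootFolder}/{ExportName}/{YYYYMMDD-YYYYMMDD}/{RunID}/{file}
# The sample path below is a made-up example for illustration.
def parse_export_path(blob_path: str) -> dict:
    root, export_name, date_range, run_id, file_name = blob_path.strip("/").split("/")[-5:]
    start, end = date_range.split("-")
    return {
        "root": root,
        "export": export_name,
        "period_start": start,   # YYYYMMDD
        "period_end": end,       # YYYYMMDD
        "run_id": run_id,
        "file": file_name,
    }

info = parse_export_path(
    "focus/daily-focus-export/20260301-20260331/a1b2c3d4/part-00000.snappy.parquet"
)
print(info["export"], info["period_start"])  # daily-focus-export 20260301
```

Because each run is isolated by its date range and run ID, downstream jobs can route files by these components without ever colliding across exports.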

Network Security

For production environments, configure the storage account firewall to deny public access and rely on private endpoints or service endpoints for connectivity. Cost Management exports support writing to firewall-protected storage accounts through a system-assigned managed identity. The export service creates this identity automatically and assigns the Storage Blob Data Contributor role scoped to the destination container.

Enable “Allow trusted Azure services access” on the storage account firewall so the export’s managed identity can authenticate. The user creating the export needs the Microsoft.Authorization/roleAssignments/write permission to set up the role assignment during initial configuration.

Configuring Exports Through the Portal

Navigate to Cost Management → Exports and click Add to create a new export. The wizard walks through these decisions:

  1. Scope — The billing scope whose costs are exported. Choose subscription for team-level data, or billing account for comprehensive enterprise data.
  2. Export type — Select Cost and usage details (FOCUS) for the most versatile format, or ActualCost/AmortizedCost for specific perspectives.
  3. Storage account — Select the target account and container. The root folder path defines where within the container the files land.
  4. Format — Choose Parquet with Snappy compression for analytical workloads, or CSV with Gzip compression for compatibility with spreadsheet tools.
  5. Schedule — Daily (month-to-date) for operational reporting, Monthly (last month) for finalized records. Run both for comprehensive coverage.

After creation, the first export runs within 24 hours. You can also trigger an immediate run by clicking Run now on the export.

Understanding the Exported File Structure

Each export run creates a folder structure in your storage container:

{RootFolder}/{ExportName}/{YYYYMMDD-YYYYMMDD}/{RunID}/
  ├── manifest.json
  ├── part-00000.snappy.parquet
  ├── part-00001.snappy.parquet
  └── ...

The Manifest File

The manifest.json file is critical for programmatic consumers. It contains:

{
  "manifestVersion": "2024-08-01",
  "byteCount": 45823017,
  "blobCount": 3,
  "dataRowCount": 284592,
  "exportConfig": {
    "exportName": "daily-focus-export",
    "resourceId": "/subscriptions/...",
    "dataVersion": "2023-12-01",
    "type": "FocusCost",
    "timeFrame": "MonthToDate"
  },
  "blobs": [
    {
      "blobName": "part-00000.snappy.parquet",
      "byteCount": 15274339,
      "dataRowCount": 94864
    }
  ]
}

Always read the manifest first when processing exports programmatically. It tells you exactly how many files exist, how many rows to expect, and the schema version — preventing partial reads and ensuring your processing validates completeness.
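That manifest-first pattern can be sketched in a few lines of Python. The field names come from the manifest above; the file-enumeration step is a stand-in for however you list the run folder:

```python
import json

# Read the manifest first and confirm every listed blob is present,
# and that per-blob row counts sum to the manifest total, before
# processing any Parquet file. `available_files` is a hypothetical
# stand-in for your own listing of the run folder.
def validate_run(manifest_text: str, available_files: set) -> int:
    manifest = json.loads(manifest_text)
    expected = {b["blobName"] for b in manifest["blobs"]}
    missing = expected - available_files
    if missing:
        raise RuntimeError(f"Incomplete export run, missing: {sorted(missing)}")
    total = sum(b["dataRowCount"] for b in manifest["blobs"])
    if total != manifest["dataRowCount"]:
        raise RuntimeError("Row-count mismatch between manifest and blobs")
    return total
```

Failing fast here is cheaper than discovering a partial month of cost data after a dashboard has already refreshed.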

File Partitioning

Files are automatically partitioned to stay under 1 GB uncompressed. For subscriptions with thousands of resources, a single daily export might produce three or four partition files. The partitioning is by row count, not by any data dimension — part-00000 and part-00001 contain different rows from the same dataset.

Managing Export Lifecycle

Lifecycle Management Policy

Cost export data accumulates over time. A storage lifecycle policy manages the growth automatically:

{
  "rules": [
    {
      "name": "move-old-exports-to-cool",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": ["blockBlob"],
          "prefixMatch": ["cost-exports/"]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": {
              "daysAfterModificationGreaterThan": 90
            },
            "tierToArchive": {
              "daysAfterModificationGreaterThan": 365
            },
            "delete": {
              "daysAfterModificationGreaterThan": 2555
            }
          }
        }
      }
    }
  ]
}

This policy moves exports to Cool storage after 90 days, to Archive after one year, and deletes them after seven years (2,555 days). Adjust the retention period based on your compliance and analysis requirements. Many organizations keep at least three years for trend analysis and seven years for audit compliance.
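The tier any given blob occupies under this policy is a simple function of its age; a sketch with the thresholds copied from the policy above:

```python
from datetime import date, timedelta

# Mirror the lifecycle policy above: Hot through day 90, Cool through
# day 365, Archive through day 2555 (~7 years), then deleted.
def tier_for(last_modified: date, today: date) -> str:
    age = (today - last_modified).days
    if age > 2555:
        return "deleted"
    if age > 365:
        return "Archive"
    if age > 90:
        return "Cool"
    return "Hot"

today = date(2026, 3, 1)
print(tier_for(today - timedelta(days=100), today))  # Cool
```

A helper like this is handy for predicting retrieval latency and cost before a query: Archive blobs must be rehydrated before any of the query tools below can read them.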

Monitoring Storage Costs

Tag the storage account with a FinOps or cost-reporting identifier so its own costs are tracked separately. A storage account receiving daily Parquet exports from a subscription with moderate activity typically costs between $3 and $10 per month including storage and transaction charges. Billing account-level exports covering hundreds of subscriptions may cost more due to larger file sizes.

Querying Cost Data Directly from Storage

Several tools can query Parquet files directly from Azure Blob Storage without ingesting the data into a separate database.

Azure Synapse Serverless SQL

Synapse serverless SQL pools query Parquet files in-place using the OPENROWSET function:

SELECT 
    ServiceCategory,
    SUM(BilledCost) AS TotalCost
FROM OPENROWSET(
    BULK 'https://stfinopsexports.blob.core.windows.net/cost-exports/focus/**/*.parquet',
    FORMAT = 'PARQUET'
) AS costs
WHERE ChargePeriodStart >= '2026-03-01'
GROUP BY ServiceCategory
ORDER BY TotalCost DESC

This query runs against the Parquet files directly without loading them into a table first. The **/*.parquet wildcard pattern reads all partition files across all export runs. You pay only for the data scanned, making it cost-effective for periodic analysis.

Azure Data Explorer External Tables

ADX can define an external table that references the blob storage path:

.create external table CostExportsExternal (
    BilledCost: real,
    EffectiveCost: real, 
    ChargePeriodStart: datetime,
    ServiceCategory: string,
    ServiceName: string,
    ResourceId: string,
    ResourceName: string
)
kind=storage
dataformat=parquet
(
    h@'https://stfinopsexports.blob.core.windows.net/cost-exports/focus/;managed_identity=system'
)
with (folder="ExternalCosts")

Once defined, query the external table like any ADX table. This approach avoids ingestion costs while still providing KQL query capabilities.

Power BI Direct Import

Power BI Desktop can import Parquet files from Azure Blob Storage directly. Use Get Data → Azure → Azure Blob Storage, navigate to the container, and select the Parquet files. Power BI reads the schema from the Parquet metadata and imports the data into the model. For large datasets, use Synapse or ADX as an intermediary and connect Power BI via DirectQuery.

Backfilling Historical Data

Scheduled exports only capture data going forward. To populate your storage account with historical cost data, use the Exports Execute API to trigger exports for specific past periods:

# Backfill 12 months of historical cost data
$scope = "/subscriptions/your-subscription-id"
$exportName = "focus-multicloud-daily"
$token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com").Token
$headers = @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" }

# Normalize to the first of the month so each run covers one full calendar month
$monthStart = (Get-Date -Day 1).Date.AddMonths(-12)
$currentMonthStart = (Get-Date -Day 1).Date
while ($monthStart -lt $currentMonthStart) {
    $monthEnd = $monthStart.AddMonths(1).AddSeconds(-1)   # last second of the month
    $body = @{
        timePeriod = @{
            from = $monthStart.ToString("yyyy-MM-ddTHH:mm:ssZ")
            to   = $monthEnd.ToString("yyyy-MM-ddTHH:mm:ssZ")
        }
    } | ConvertTo-Json

    $uri = "https://management.azure.com$scope/providers/Microsoft.CostManagement/exports/${exportName}/run?api-version=2025-03-01"
    Invoke-RestMethod -Uri $uri -Method POST -Headers $headers -Body $body
    Write-Host "Queued: $($monthStart.ToString('yyyy-MM'))"
    $monthStart = $monthStart.AddMonths(1)
}

The portal supports backfilling up to 13 months via “Export selected dates.” The REST API can reach up to seven years for cost and usage data on EA and MCA accounts. Each backfilled month creates its own dated folder in the storage structure, identical in format to scheduled exports.
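The same month-by-month windowing can be sketched in Python for teams driving the run API from other tooling. This only computes the timePeriod payloads; the actual REST call is the same one the script above makes:

```python
from datetime import date, timedelta

# Build (from, to) timestamp pairs for the last n full calendar months,
# matching the timePeriod bodies used by the backfill script above.
def month_windows(today: date, months: int) -> list:
    start = today.replace(day=1)
    for _ in range(months):                       # step back n months
        start = (start - timedelta(days=1)).replace(day=1)
    windows = []
    for _ in range(months):
        # jump safely into the next month, then snap to its first day
        next_month = (start.replace(day=28) + timedelta(days=4)).replace(day=1)
        end = next_month - timedelta(days=1)      # last day of this month
        windows.append((f"{start:%Y-%m-%d}T00:00:00Z", f"{end:%Y-%m-%d}T23:59:59Z"))
        start = next_month
    return windows

print(month_windows(date(2026, 3, 15), 2))
```

Queuing one run per window keeps each backfilled month in its own dated folder, so downstream readers need no special handling for historical data.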

Access Control on Exported Data

Exported cost data may contain sensitive information — resource names that reveal business strategies, cost amounts that indicate budget sizes, and tag values that expose organizational structure. Control access through Azure RBAC on the storage account:

  • Storage Blob Data Reader — For teams that need to query the exported data (Power BI users, analysts, Data Explorer connections)
  • Storage Blob Data Contributor — For the FinOps team that manages export configuration and storage lifecycle
  • Storage Blob Data Owner — For administrators who manage access to the container

For granular access control, use container-level RBAC assignments. Place different scopes’ exports in separate containers and assign access per container. Teams see only the cost data for their own subscriptions while the central FinOps team has access to all containers.

Keeping Storage Costs Under Control

Paying too much to store your cost-reporting data would be an ironic outcome worth avoiding. Several practices keep the storage footprint minimal:

Use Parquet with Snappy compression — it is typically 70 to 80 percent smaller than equivalent CSV data. Enable the lifecycle management policy described earlier to automatically tier older data to cheaper storage. With daily overwrite enabled on month-to-date exports, only the latest run’s files persist for the current month, preventing duplicate data accumulation.

If you retain monthly snapshots for auditing, the storage requirement is roughly proportional to your Azure spend diversity (number of unique resources and meters). A subscription with 200 resources generates roughly 5 to 20 MB per month in compressed Parquet FOCUS data. A billing account with 500 subscriptions and thousands of resources might produce 1 to 5 GB per month.
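A back-of-the-envelope estimate makes the point concrete. The per-GB price below is a placeholder assumption, not a quoted rate; check current Azure Blob Storage pricing for your region:

```python
# Rough monthly storage cost for retained monthly snapshots.
# Assumes ~10 MB of compressed Parquet per month (midpoint of the
# 5-20 MB range above for a 200-resource subscription) and a
# PLACEHOLDER Hot-tier price of $0.018 per GB-month.
MB_PER_MONTH = 10
HOT_PRICE_PER_GB = 0.018

def archive_cost(months_retained: int) -> float:
    total_gb = months_retained * MB_PER_MONTH / 1024
    return total_gb * HOT_PRICE_PER_GB

# Three years of snapshots: ~0.35 GB, well under a cent per month
print(round(archive_cost(36), 4))  # 0.0063
```

Even before lifecycle tiering, the archive itself is effectively free at subscription scale; transaction charges from frequent querying usually dominate.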

Exporting cost data to storage is rarely the destination — it is the enabler. The storage account is the foundation that supports every advanced cost analytics capability: multi-year trend analysis, cross-cloud reporting, machine learning anomaly detection, and custom dashboards. The investment is measured in single-digit dollars per month for storage, and the return is measured in the quality of every cost decision your organization makes from that data.

For more details, refer to the official documentation: What is Microsoft Cost Management.
