
Observability's Dirty Secret: The Pricing Scam

Your observability bill is intentionally confusing, and it's costing you a fortune. We expose the hidden metrics and reveal the simple fix that's changing the industry.

Stork.AI

Your Bill Shouldn't Require a PhD

Observability, the essential practice for understanding complex distributed systems, carries a pervasive, unacknowledged burden: its billing. Engineering leaders routinely grapple with invoices that are needlessly complex and opaque, transforming a critical operational expense into a constant source of anxiety. This systemic lack of transparency forces technical teams into frustrating, time-consuming accounting roles, diverting their focus from core innovation and development.

"Customers don't need a PhD to understand what it means," declared a Better Stack expert on the CodeRED podcast, perfectly encapsulating the industry's widespread exasperation. This isn't mere frustration; it represents a fundamental breakdown in how vendors communicate value. Each provider invents its own obscure, often incomparable metric for consumption, creating a Tower of Babel in pricing that leaves customers bewildered.

Consider the bewildering landscape of billing models: Datadog bills for "custom metrics," Grafana for "active series," while SigNoz charges based on "million samples." Better Stack itself previously used "active data points" before acknowledging the confusion and shifting to a more understandable gigabyte-based model. This dizzying array of incompatible units makes it virtually impossible for organizations to accurately compare costs or predict future expenditures across platforms, even for identical workloads.

This inherent unpredictability cripples effective budget planning and financial forecasting. Engineering teams, tasked with scaling vital services to meet growing user demand or new feature rollouts, frequently hesitate. Their caution stems not from technical limitations, but from the paralyzing fear of an unforeseen, massive bill spike that could derail an entire quarter's budget. This chilling effect on innovation and operational agility directly impacts project timelines and market responsiveness.

Ultimately, this isn't a technical challenge concerning data processing or system architecture. This is a profound business problem that strikes directly at an organization's financial health. It impacts the bottom line through unpredictable, often escalating costs and indirectly through diminished developer productivity, as highly paid engineers waste valuable cycles deciphering convoluted invoices instead of building and deploying critical features. This systemic lack of transparency erodes trust and stifles the very growth observability is meant to enable.

The Metrics Maze: What Are You Actually Paying For?

Illustration: The Metrics Maze: What Are You Actually Paying For?

Observability platforms present a baffling kaleidoscope of billing metrics, forcing engineering teams into a perpetual state of confusion. No universal standard governs how vendors quantify usage, transforming the simple act of cost comparison into a complex, specialized discipline. This lack of transparency directly contradicts the principle of understandable and predictable billing, a core tenet for any effective service.

Businesses grapple with vastly different units from leading providers. Grafana quantifies usage by active series, tracking unique metric combinations over a defined period. Datadog opts for custom metrics, often referring to user-defined data points ingested beyond standard system metrics. SigNoz, meanwhile, bills based on million samples, counting the raw volume of data points ingested, while Dash0 employs a 'data points metric,' a similar but distinct measure.

Each metric attempts to capture a facet of system activity, but their fundamental definitions diverge wildly. An "active series" might represent a single metric across many instances, whereas a "custom metric" could be a single value from one service. A "million samples" count aggregates raw data points, which might correlate to numerous active series or custom metrics depending on sampling rates and data cardinality. These variations make apples-to-apples comparisons virtually impossible.
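To make the divergence concrete, here is a back-of-envelope sketch of how one vendor's unit maps (only very roughly) onto another's. The 15-second scrape interval and ~2 bytes per compressed sample are illustrative assumptions, not figures from any vendor's documentation:

```python
# Hypothetical conversion between billing units. Assumptions (not from
# any vendor's docs): metrics scraped every 15 seconds, ~2 bytes per
# sample after compression.

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

def monthly_samples(active_series: int, scrape_interval_s: float = 15.0) -> float:
    """One 'active series' scraped every 15s yields ~172,800 samples/month."""
    return active_series * SECONDS_PER_MONTH / scrape_interval_s

def monthly_gigabytes(samples: float, bytes_per_sample: float = 2.0) -> float:
    """Convert a raw sample count to stored volume at an assumed compression ratio."""
    return samples * bytes_per_sample / 1e9

series = 10_000
samples = monthly_samples(series)
print(f"{series:,} active series ≈ {samples / 1e6:.0f}M samples/month "
      f"≈ {monthly_gigabytes(samples):.1f} GB/month")
```

Even this toy model shows the problem: the "exchange rate" between active series, samples, and gigabytes depends entirely on scrape interval, cardinality, and compression, none of which appear on an invoice.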

Customers cannot easily convert their Datadog "custom metrics" into an equivalent "million samples" on SigNoz, or estimate Grafana's "active series" cost based on Dash0's "data points metric." Each vendor's unique billing calculus requires a deep, often proprietary, understanding of their specific aggregation methods and data models. This opaque system prevents teams from accurately forecasting expenses or making informed decisions about vendor migration.

This complex landscape diverts critical engineering talent from innovation to invoice deciphering. As experts on the CodeRED podcast have highlighted, such convoluted pricing models are "stupid" because they mandate customers become billing specialists rather than focusing on their own production systems. The industry’s fragmented approach creates an unnecessary burden, ensuring that understanding your observability bill remains a task for the few, not the many.

The Datadog Comparison Nightmare

Observability billing models consistently obscure true costs, but Datadog's approach often epitomizes this complexity. "I'm looking at my Datadog invoice, and I'm like, how much is this going to cost on Better Stack? I have no clue. And that's just stupid," lamented a speaker on the CodeRED podcast episode, perfectly capturing the industry's frustration. This sentiment highlights the profound difficulty engineering teams face when attempting to benchmark or migrate services.

Imagine a team considering a move from Datadog to another provider. Their Datadog bill itemizes usage based on custom metrics, host units, serverless invocations, and more. Attempting to translate "10 million custom metrics" into another platform's units becomes an exercise in futility, as each vendor employs a proprietary language for its billing metrics:

- Grafana charges for "active series."
- SigNoz charges for "million samples."
- Dash0 charges for "data points metric."

This disparity renders direct cost comparisons virtually impossible.

This deliberate ambiguity fosters a powerful form of vendor lock-in. Engineering leaders, already grappling with complex systems, face an insurmountable task: calculating the actual cost of an alternative. The sheer time investment required for such an evaluation, coupled with the inherent risk of miscalculation, often deters teams from even exploring other options. This strategic opacity ensures customers remain tethered, despite potential cost savings or feature advantages elsewhere.

Datadog's public pricing page further illustrates this labyrinthine structure. It presents a modular system of dozens of individual SKUs and add-ons for infrastructure, APM, logs, security, and more. Each service features its own distinct metric—from gigabytes of ingested logs to specific host counts and tracing volumes. Understanding the true cumulative cost, let alone projecting it onto a new platform, demands an internal audit of data consumption far beyond what most teams can realistically undertake. For a stark contrast in transparency, teams can review the straightforward pricing at Pricing - Better Stack.

Why Confusion Is a Business Model

The bewildering complexity of observability billing is not an accidental byproduct of a young industry; it is a deliberate, highly profitable business strategy. Vendors engineer this opacity, transforming what should be a straightforward transaction into an intricate puzzle, and it serves a clear purpose: to maximize revenue by obscuring true costs and hindering competitive comparison.

This pricing psychology discourages engineering leaders from shopping around. When comparing "custom metrics" on Datadog against "active series" on Grafana, "million samples" on SigNoz, or Better Stack’s "gigabytes," the sheer effort required to translate and project costs becomes prohibitive. This complexity fosters significant vendor lock-in, making it far easier to renew an existing, albeit expensive, contract than to undertake a rigorous, time-consuming evaluation of alternatives. It also enables seamless upselling, as the true cost implications of increased data ingestion or new service adoption remain hidden until the next invoice arrives.

A particularly insidious aspect of this model is metric creep. A minor change in application code, perhaps adding a new internal counter or logging additional attributes, can trigger a massive spike in billable custom metrics or active series. These seemingly innocuous adjustments lead to disproportionately inflated bills, often without clear, real-time feedback on the financial impact. The vendor benefits immensely from this hidden cost multiplier, turning incremental data collection into exponential revenue.

Ultimately, this convoluted pricing structure is an anti-customer practice that stifles innovation and fair competition. Engineering teams become hesitant to instrument new features or collect comprehensive telemetry, fearing astronomical and unpredictable costs. This reluctance to fully observe their systems impedes performance optimization and debugging efforts. The lack of transparent, comparable pricing also creates an uneven playing field, making it incredibly difficult for innovative competitors with simpler, more predictable models to demonstrate their value effectively.

The Gigabyte Revolution: A Simple Way Forward

Illustration: The Gigabyte Revolution: A Simple Way Forward

The industry's convoluted pricing schemes, from Datadog’s "custom metrics" and Grafana’s "active series" to SigNoz’s "million samples" and Dash0’s "data points metric," actively obscure actual costs. Engineering teams face an impossible task comparing invoices when every vendor invents a new, proprietary unit of measurement. This deliberate obfuscation leaves leaders guessing about future expenses and hinders effective budget planning, often resulting in unexpected bill shocks that derail projects.

A powerful antidote emerges in the form of gigabyte-based billing for metrics. This straightforward approach cuts through the complexity, offering a universally understood unit that transcends vendor-specific jargon. As Better Stack articulated on their CodeRED podcast, "Everyone can imagine one gigabyte," making pricing instantly graspable, inherently comparable, and truly transparent. This shift empowers customers to understand their consumption without requiring a specialized degree.

Engineers already operate daily with gigabytes as a fundamental unit across other critical infrastructure services, fostering an inherent understanding of its value. Consider the established predictability of cloud storage costs, where platforms like AWS S3 charge directly per gigabyte stored or transferred. Network egress fees similarly adhere to this intuitive model, providing clear cost projections based on actual data volume. This widespread familiarity with data volume as a billing unit builds trust and eliminates the need for a "PhD to understand" an invoice, unlike the opaque "active series" or "custom metrics" models.

Better Stack recently championed this shift, moving from their prior "active data points" model to billing for metrics in gigabytes. This strategic pivot exemplifies a vendor actively listening to widespread customer frustration with opaque pricing, as discussed during their "Good Observability Pricing..." segment. Their decision provides a crucial real-world case study, demonstrating that simplifying observability costs is not only possible but also deeply beneficial for engineering teams seeking predictable spending. It establishes a new benchmark for transparency in an industry long notorious for its pricing games and complex calculations.

Is Simpler Always Cheaper? Unpacking TCO

Will a gigabyte-based billing model truly slash your observability costs? Many engineering leaders naturally focus on the raw per-unit price, but that narrow view misses the crucial picture. The real question revolves around Total Cost of Ownership (TCO), extending far beyond the initial sticker price.

Complex, opaque billing models, like those charging for "active series" or "custom metrics," introduce significant hidden expenditures. These aren't line items on an invoice, but they drain resources. Consider the dozens of engineering hours spent each month just to decipher a bill, or the finance team's struggle to forecast next quarter’s spend with any accuracy.

This engineering overhead and financial ambiguity are direct costs. Teams delay crucial scaling decisions, fearing an unpredictable bill spike. They spend time optimizing data ingress to game a convoluted pricing structure instead of focusing on product innovation or actual system reliability. This inefficiency directly impacts a company's bottom line.

A simple, gigabyte-based model, as advocated by CodeRED guests and implemented by some providers, radically simplifies this. You know precisely what you pay per GB, eliminating the guesswork. This clarity fosters unparalleled predictability, allowing engineering teams to scale confidently and allocate resources without fear of surprise charges.

Imagine the difference: instead of agonizing over whether a new service will double your "custom metrics" bill, you simply estimate its data volume. This empowers proactive resource planning and confident budget allocation. While platforms like Datadog detail their various metrics and tiers [Pricing - Datadog], the complexity often obscures true comparative costs against a straightforward GB model.
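That estimate really can be this simple. A minimal sketch of the projection, using an entirely illustrative $0.25/GB rate and service volumes (not any vendor's actual pricing):

```python
# Sketch: projecting the observability cost of a new service under a
# flat per-GB rate. The rate and data volume below are illustrative
# assumptions, not real vendor pricing.

def projected_monthly_cost(gb_per_day: float, rate_per_gb: float = 0.25) -> float:
    """Monthly cost = daily telemetry volume * 30 days * flat per-GB rate."""
    return gb_per_day * 30 * rate_per_gb

# A new service expected to emit ~4 GB of metric data per day:
cost = projected_monthly_cost(4.0)
print(f"Estimated cost: ${cost:.2f}/month")  # 4 * 30 * 0.25 = $30.00
```

One multiplication replaces the cardinality analysis, sampling-rate audits, and SKU spelunking that proprietary units demand.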

Ultimately, simplicity isn't just about ease of understanding; it’s a powerful cost-saving feature. It frees up high-value engineering talent from billing forensics, redirects financial planning towards growth, and removes a major impediment to innovation and scaling. The most affordable observability solution is often the one you can actually comprehend and predict.

The Incumbent's Dilemma: Why Giants Won't Change

Incumbent observability giants like Datadog face immense structural disincentives to simplify their deeply entrenched pricing models. Their current complex structures, often based on obscure units like custom metrics, active series, or millions of data points, are not accidental; they are meticulously integrated into their multi-billion dollar business operations. A fundamental shift to a transparent, gigabyte-based model would necessitate a complete re-evaluation of their entire financial architecture, go-to-market strategy, and competitive positioning.

These intricate billing metrics form the bedrock of their lucrative enterprise sales contracts, which often span multiple years. Multi-year agreements with Global 2000 companies feature highly tailored terms, meticulously negotiated around existing opaque units that benefit the vendor. Revenue projections and investor expectations, critical for public companies like Datadog, hinge on the predictable, if convoluted, income streams generated by these established pricing schemes. Disrupting this financial stability would send shockwaves through their quarterly reporting, potentially impacting stock valuations and shareholder confidence.

Organizational inertia further entrenches the status quo. Overhauling a core billing system for a company the size of Datadog represents a monumental internal undertaking, a multi-year project with significant risk. This transformation would demand extensive re-engineering across diverse departments—from core engineering to sales and finance—redefining data pipelines, contract structures, and revenue forecasting, incurring astronomical costs and high risk of disruption.

For these market leaders, complex pricing functions as a strategic feature, not a bug. It creates substantial barriers for customers to accurately compare true costs with agile challengers like Better Stack or SigNoz, fostering powerful vendor lock-in. This deliberate opacity reduces churn, inhibits competitive switching, and enables "land and expand" strategies where initial perceived costs can appear deceptively low before scaling rapidly. Ultimately, the intricate billing system, while a persistent source of frustration for engineering leaders, meticulously serves the incumbents' financial objectives and secures their market dominance.

Your SRE Team's New Superpower: Predictable Budgets

Illustration: Your SRE Team's New Superpower: Predictable Budgets

For Site Reliability Engineers and DevOps professionals, unpredictable observability costs represent a constant, low-level dread. Every new feature, every performance experiment, every scaling event carries the unspoken risk of blowing the budget, forcing a difficult conversation with finance. This insidious uncertainty stifles innovation and transforms necessary technical work into a financial minefield.

A shift to simple, predictable budgets fundamentally changes this dynamic. When billing for metrics by the gigabyte, as Better Stack now does, SRE teams gain clarity. They no longer fear deploying a new service or running a crucial A/B test, knowing that their increased data ingestion translates directly into an easily quantifiable, proportionate cost. This predictability empowers engineers to focus on reliability and innovation, not invoice deciphering.

This straightforward cost model also provides unparalleled cost transparency. Teams can immediately correlate infrastructure changes with their financial impact. Scaling up a database, optimizing a logging pipeline, or refactoring an application’s telemetry all have a clear, measurable effect on the observability bill. This direct feedback loop enables proactive cost management and informed decision-making, transforming SREs from cost centers into strategic financial partners.

Ultimately, predictable observability billing fosters healthier engineering-finance alignment. Finance departments gain clear forecasts, easing budget allocation and reducing unexpected expenditures. Engineering teams, in turn, demonstrate fiscal responsibility without sacrificing agility. This mutual understanding and trust replace the usual friction, allowing both sides to collaborate effectively towards organizational goals, rather than battling over opaque invoices.

Your 3-Step Plan to Audit Your Observability Bill

Take control of your observability spending. Stop accepting opaque invoices as an unavoidable cost of doing business. This three-step plan empowers your engineering leadership to scrutinize vendor bills, identify hidden costs, and demand the transparency you deserve.

First, isolate the primary metric driving your current observability invoice. Datadog, for instance, often bills heavily on "custom metrics" or "hosts," while Grafana Cloud might charge by "active series." Force your team to pinpoint the single biggest cost contributor. Understanding this core driver is the first step toward regaining control.

Next, work with your SRE and DevOps teams to estimate your actual metric data footprint. This means approximating how many gigabytes of metric data your systems generate and send each month. While providers like SigNoz might bill by "million samples," converting this to a simple GB figure provides a universal baseline. This exercise gives you a concrete number to compare against simpler, gigabyte-based models.
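This second step can start as a simple spreadsheet exercise. A minimal sketch, with hypothetical service names and per-service volumes standing in for your own measurements:

```python
# Minimal sketch of step two: roll per-service daily telemetry estimates
# up to one monthly GB figure. Service names and MB/day values are
# hypothetical placeholders for your own measured numbers.

daily_mb_per_service = {
    "api-gateway": 900,    # high-cardinality request metrics
    "checkout": 350,
    "worker-fleet": 1200,  # many hosts, standard system metrics
}

# 30 days per month, 1024 MB per GB
monthly_gb = sum(daily_mb_per_service.values()) * 30 / 1024
print(f"Estimated metric footprint: {monthly_gb:.0f} GB/month")
```

However rough, this single number becomes the baseline you carry into every vendor conversation.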

This GB estimate offers a direct comparison point, cutting through the abstract "active series" or "data points metric" counts. For context on how other vendors present their offerings, explore resources like Grafana Cloud Pricing | Free, Pro, Enterprise, which details various tiers and their associated limits. This clarity is precisely what current complex models intentionally obscure.

Finally, challenge your vendor. Armed with your estimated data footprint and identified primary metric, schedule a meeting with your sales representative. Demand a clear, data-based explanation of your current costs in terms of gigabytes, not proprietary units. Ask them directly: "How many gigabytes of metric data are you charging me for, and at what rate?"

Observe their response carefully. If they struggle to provide a straightforward answer or deflect with complex explanations, you've exposed their lack of transparency. Their inability to simplify their own billing is a telling sign, indicating whether their business model prioritizes your clarity or their obfuscation. This direct confrontation is your most powerful tool.

The Future is Transparent: Demand Better Billing

Observability stands at a critical juncture. For too long, vendors have obscured true costs behind arcane metrics like "active series," "custom metrics," or "million samples." This complexity, as explored in the CodeRED episode, is not an accident; it serves a specific business model designed to maximize vendor revenue and minimize customer predictability.

A clearer path emerges: the gigabyte revolution. Charging for metrics by the gigabyte offers a universally understandable and predictable model. This straightforward approach allows engineering teams to accurately forecast expenses, directly linking data ingestion to tangible cost, unlike the opaque systems currently in place at market leaders.

Engineering leaders and practitioners must seize this moment. Demand genuine transparency from observability providers. Stop accepting billing statements that require a dedicated analyst to decipher. Your teams deserve predictability and clarity to manage budgets effectively, freeing up valuable SRE time from cost reconciliation.

Customers hold the ultimate power to reshape this market. Every procurement decision is a vote. By prioritizing vendors offering transparent, gigabyte-based billing, you signal a clear preference for simplicity and predictability over deliberate obfuscation. This collective action forces incumbents to adapt or risk losing significant market share.

Understandable pricing is not merely a nicety; it is an essential evolution for the entire tech industry. It fosters trust, enables better financial planning, and empowers development and operations teams to focus on innovation, not invoice decryption. The future of observability is transparent, predictable, and unequivocally customer-centric.

This transition will unlock significant value, allowing organizations to scale their observability practices without fear of unexpected budget overruns. Ultimately, demanding better billing means advocating for a healthier, more efficient ecosystem where engineering excellence thrives on clarity, not confusion.

Frequently Asked Questions

Why is observability pricing so complicated?

Many vendors use proprietary and non-standard metrics like 'active series' or 'custom metrics' instead of universal units. This makes direct comparison between platforms difficult, obscures the total cost, and can lead to vendor lock-in.

What are some examples of complex pricing metrics?

Examples include Grafana charging for 'active series,' Datadog for 'custom metrics,' and SigNoz for 'million samples.' Each requires deep platform-specific knowledge to estimate costs accurately.

How does gigabyte-based pricing simplify observability costs?

It uses a universally understood unit of data measurement (GB). This makes costs predictable and directly proportional to the data you send, similar to familiar cloud services like AWS S3, eliminating the need to understand abstract metrics.

Which companies are moving towards simpler pricing?

The article highlights Better Stack as a key example, which recently switched its metrics pricing to a straightforward gigabyte-based model to improve clarity and predictability for customers.


Topics Covered

#observability #pricing #devops #sre #cost-optimization