
Optimizing Your Data Operations: Why It Matters and 2 Real-Life Applications

Learn how an IoT platform that bills based on data operations (value-add actions executed by you and your product) makes it easier to build a profitable connected product.


Most Internet of Things platforms charge you based on data consumed. Depending on your use case, this can drastically cut into the profitability of your connected product.

Particle differs from traditional IoT solutions by charging based on data operations rather than data consumption. In other words, we charge you only when you complete a value-add action, and we do it on a per-action basis rather than by the byte. If your product sends large data payloads, the data operations model can keep you out of the red and on a path to sustainable profitability.

In this article, we’ll discuss what that means and why it might be a game changer for both your product and your business.

Before we go on, we need to do a quick shout-out. Earlier this year, global enterprise account executive Arjun Varma and solutions architect Manuel Orduño shared their data ops knowledge at Spectra, our annual conference for IoT leaders who want to unlock new ways to design, build, and launch connected products.

You can watch their full session here.

What Are Data Operations?

This question is a great place to start, and Arjun answered it best:

“Data operations are value-adding device-to-cloud and cloud-to-device actions that are executed by customers on the Particle platform.”

This GIF should clear up a few things:

[Animation: Particle IoT Platform-as-a-Service]

Believe it or not, this near-constant, customer-guided interaction between devices and the cloud is a capability we built at Particle to support our scaling customers. Strong data operations put customers in the driver’s seat, giving them the power to choose how they gather data from the physical world for their specific applications.

As customers scale their fleets from tens to hundreds or even thousands of devices, data operations become critical. With seamless ops, teams can set clear parameters around how they engage with their data, and they can forecast their cloud costs far more accurately than under the standard model of being charged for megabytes consumed. For example, a 500-device fleet that publishes once every 15 minutes uses a predictable 500 × 96 = 48,000 data operations per day, a figure you can work out before a single device ships.

This can be the difference between a profitable hardware-as-a-service business and one that never gains traction.

Speaking of billing on data operations versus data consumption, we have some additional thoughts before we dive into a few examples of data ops done right.

Why Billing on Data Operations vs. Data Consumption Just Makes Sense

At Particle, watching our customers (and subsequently their customers) scale and grow with IoT connectivity is one of the best parts of the job.

What’s even better is knowing that our customers don’t have to choose between servicing a large enterprise and a startup. With billing based on data operations, both are possible.

Here’s the problem we’re solving: devices send data payloads of different sizes all the time. Because traditional data models charge for data used, even if it’s dropped or never reaches the device or the cloud, it’s easy to pay for things you never benefit from, and no startup will tolerate wasted expenses. As you can imagine, costs are harder to predict and control under this model, and there’s a risk that the unit economics of your product become unfavorable as a result.

We developed Particle’s IoT Platform-as-a-Service model to change all that.

Paying for data operations instead of data consumption empowers our customers to:

  • Pay one flat rate for calling a device or sending data to the cloud
  • Be billed on actions, not bytes

With this level of customization, flexibility, and control, our customers can offer their customers more. Gone are the days of charging customers for every little handshake, ping, or keepalive, as we’ve built these into our platform and all of our IoT devices. Simply put, the future of IoT is not about data consumed, but the number of data operations processed. Simplification and transparency are key.

How Data Operations Help You Forecast and Scale Your IoT Deployment

We’ve touched briefly on the benefits that data operations bring to forecasting and scaling IoT, but there’s more to the story. Here are two additional perks of leveraging data operations solutions versus consumption-based solutions.

You have more control over your fleet.

When it comes to scaling a fleet, data operations make it easier to forecast and manage operating costs, and taking advantage of processing power at the edge rather than relying entirely on the cloud is a big part of why.

We’ve watched customer after customer maintain low operating costs even as their data volumes rise with a growing fleet because with Particle, they’ve got more control over that bi-directional communication between cloud and device. Ultimately, this control and level of detail equip them to package solutions, sell better products, and profitably scale their IoT solutions.

You get to decide which pings and keepalives send data and which don’t.

Particle offers the ability to decide which data from your devices will be published to the cloud and when. Let’s say a product designer tells you, “Hey, I need my device to publish information once every 15 minutes. No more, no less.” That's completely under your control.

What's not under your control is how many handshakes, pings, and keepalives your devices need in order to fulfill that requirement of publishing data once every 15 minutes, or how many over-the-air firmware updates you'll have to push to your fleet.

Luckily, those overheads aren’t a concern with Particle: handshakes, pings, and keepalives are built into the platform, and firmware rollouts are handled by Particle’s OTA services.
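To make that requirement concrete, here’s a minimal firmware sketch of a fixed publish cadence. The event name and the readSensorJson() helper are hypothetical stand-ins for your own payload code; the point is that the cadence, and therefore the data operations count, is entirely in your hands.

```cpp
#include "Particle.h"

// Hypothetical payload helper; stands in for real sensor sampling.
String readSensorJson() {
    return String::format("{\"temp\":%.1f}", 21.5);
}

const unsigned long PUBLISH_INTERVAL_MS = 15UL * 60UL * 1000UL;
unsigned long lastPublish = 0;

void setup() {
    // Sensor and peripheral setup would go here.
}

void loop() {
    // One publish every 15 minutes: a predictable 96 data operations per
    // device per day, no matter how many handshakes, pings, or keepalives
    // the connection itself needs along the way.
    if (millis() - lastPublish >= PUBLISH_INTERVAL_MS) {
        lastPublish = millis();
        Particle.publish("telemetry", readSensorJson(), PRIVATE);
    }
}
```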

2 Ways Particle Helps You Optimize Your Data Operations

As we started working with businesses that had high volumes of data operations, we realized we needed to figure out how to optimize data operations in a cost-effective, performance-enhancing way.

Projects with high data consumption can sound “bad,” but here’s the deal: end-user requirements should drive your product development, so if an end user really needs that level of data operations usage, building it is worth the engineering effort.

But there's no point in optimizing data operations if you’re in the prototyping phase or still figuring out how to scale a fresh product. Your goal in the beginning should be to kickstart your project, deploy it to the field, and start gathering feedback. Once you get a sense of your data costs, you can identify your break-even point and determine where it makes sense to double down on your data ops.

A different example: let’s say your project requires gathering historical trends for a machine learning algorithm in tandem with accelerometer data. Vibration-pattern data calls for very high-frequency sampling, and the readings need to be stored and aggregated on the device before you can send them to the cloud. (A rough sketch of that pattern follows.)
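Here’s what that store-and-aggregate pattern can look like. The sample rate, window size, and helper name below are assumptions for illustration, not a customer’s actual setup: instead of shipping every raw reading, the device keeps a running reduction and publishes one summary per window.

```cpp
#include "Particle.h"
#include <math.h>

// Assumed figures: 1 kHz accelerometer sampling, one-minute windows.
const uint32_t SAMPLES_PER_WINDOW = 60000;
double sumSquares = 0;
uint32_t sampleCount = 0;

// Call this from your sampling loop or timer with each raw reading.
void recordSample(float accelMagnitude) {
    // Running sum of squares: no 240 KB raw buffer needed on the MCU.
    sumSquares += (double)accelMagnitude * accelMagnitude;
    if (++sampleCount < SAMPLES_PER_WINDOW) return;

    // 60,000 raw readings collapse into a single RMS value...
    float rms = sqrtf((float)(sumSquares / SAMPLES_PER_WINDOW));
    sumSquares = 0;
    sampleCount = 0;

    // ...and one publish per minute instead of a flood of raw points.
    Particle.publish("vibration/rms", String::format("%.3f", rms), PRIVATE);
}
```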

It’d be easy to send all that data to the cloud and deal with it later, but that’s not cost-effective without good data operations in place. To manage this kind of project more effectively, there are several techniques and solutions that you could employ with Particle’s support. Let’s look at two ways Particle makes it easier to optimize data operations.

Edge Computing: Geofencing

This first example spotlights a popular Particle feature: Edge Geofencing. It’s sold as part of our Asset Tracking platform and is available within our Tracker family of products.

In case you need a quick refresher: geofencing enables users in the asset tracking industry to create geographic boundaries, then evaluate whenever a tracked asset enters or exits that zone, and sometimes even how it enters or exits. For some concrete geofencing examples, think of theft-detection sensors hooked onto items at the mall or e-bikes that throttle their speed in school zones.

There are two ways to build geofences, and while both work, one is definitely better. (Spoiler: It’s the second.)

1. Apply a geofence at the cloud level. Picture a moving bike whose location is constantly uploaded to the cloud so that its current position can be evaluated against the designated virtual zone. As you can imagine, this burns through data, to say nothing of added latency and the reception problems caused by cloud cover, tall buildings, tunnels, and the like.

Long story short, unless there's stellar connectivity all the time, it'll take a while for the cloud to determine that a bike is exiting its designated zone, which could lead to theft and other problems.

2. Implement geofencing right at the edge. Here comes the smarter solution! To reduce the data operations rate and decrease dependency on perfect cloud connectivity, we built our Tracker platform to implement geofencing at the edge. Instead of constantly uploading each bike's location to the cloud, a bike company can define a zone as a radius around a specific latitude/longitude point.

This way, the device on each bike can determine on its own whether it's entering or exiting the geofence, and the bikes only send data when necessary, that is, when they cross the fence. There’s no need to ask the cloud for something the device can work out itself.
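Here’s a simplified sketch of that edge-side check. This isn’t Particle’s actual Tracker Edge geofencing code; the coordinates, radius, and event names are invented for illustration, and the distance test is the standard haversine formula.

```cpp
#include "Particle.h"
#include <math.h>

// Illustrative fence: 100 m radius around a made-up dock location.
const double FENCE_LAT = 40.7128;
const double FENCE_LON = -74.0060;
const double FENCE_RADIUS_M = 100.0;

// Great-circle distance in meters between two lat/lon points (haversine).
double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371000.0;          // mean Earth radius, meters
    const double toRad = M_PI / 180.0;
    double dLat = (lat2 - lat1) * toRad;
    double dLon = (lon2 - lon1) * toRad;
    double a = sin(dLat / 2) * sin(dLat / 2) +
               cos(lat1 * toRad) * cos(lat2 * toRad) *
               sin(dLon / 2) * sin(dLon / 2);
    return 2.0 * R * atan2(sqrt(a), sqrt(1.0 - a));
}

bool wasInside = true;  // assume the bike starts inside the fence

// Call this on every GNSS fix. The device evaluates the fence locally and
// publishes only on an enter/exit transition, so routine location updates
// cost zero data operations.
void onLocationFix(double lat, double lon) {
    bool inside =
        distanceMeters(lat, lon, FENCE_LAT, FENCE_LON) <= FENCE_RADIUS_M;
    if (inside != wasInside) {
        Particle.publish(inside ? "geofence/enter" : "geofence/exit", PRIVATE);
        wasInside = inside;
    }
}
```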

Data Operations and Protocol Buffers

Our second example, protocol buffers, features a real customer story. A customer of ours that relies on time series data aggregation had an application generating 320 bytes of data per minute per device. This customer's data did not need to be sent in real time, only in 10-minute batches. Given our data publishing structure, that worked out to around 30 data operations every 10 minutes.

As our customer grew its user base, its needs changed: per-device data volume climbed, which pushed the company onto a higher commercial tier and increased its data operations rate roughly tenfold, making operational costs unreasonable.

Our customer tackled the problem in three steps:

  1. Instead of publishing raw JSON (JavaScript Object Notation), the messages were serialized into binary using Google's protocol buffers. Rather than human-readable text, each message became a compact string of ones and zeros.
  2. Since our platform does not support publishing raw binary data, the customer then encoded the binary stream as text using Base85. The encoding gives back a little efficiency, but the result is still far smaller than the equivalent raw JSON (see the sketch after this list).
  3. Devices sent that encoded data to the cloud, where the messages were decoded to recover the real sensor data. By employing this technique, customers can expect data ops usage reductions of as much as 60%.
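To make the encoding step concrete, here’s a standalone sketch of a Base85 encoder. Base85 comes in several variants, and we haven’t shown the customer’s exact pipeline, so this one uses Z85 (the ZeroMQ flavor) and stubs out the protobuf step with a fixed byte buffer.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Z85 character set: 85 printable characters, safe to publish as text.
static const char Z85_CHARS[] =
    "0123456789abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ.-:+=^!/*?&<>()[]{}@%$#";

// Encode binary data (length must be a multiple of 4) as Z85 text:
// every 4 input bytes become 5 output characters.
std::string z85Encode(const uint8_t* data, size_t len) {
    std::string out;
    for (size_t i = 0; i < len; i += 4) {
        // Pack 4 bytes into one big-endian 32-bit value.
        uint32_t value = (uint32_t(data[i]) << 24) |
                         (uint32_t(data[i + 1]) << 16) |
                         (uint32_t(data[i + 2]) << 8) |
                          uint32_t(data[i + 3]);
        char block[5];
        for (int j = 4; j >= 0; --j) {   // most significant digit first
            block[j] = Z85_CHARS[value % 85];
            value /= 85;
        }
        out.append(block, 5);
    }
    return out;
}

int main() {
    // In the real pipeline this buffer would be a protobuf-serialized batch
    // of ten minutes of sensor readings; here it's just placeholder bytes.
    uint8_t serialized[8] = {0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0};
    std::string text = z85Encode(serialized, sizeof(serialized));
    std::printf("publishable payload: %s\n", text.c_str());  // 10 chars for 8 bytes
    return 0;
}
```

The arithmetic is the appeal: every 4 bytes of binary become 5 printable characters, a 25% overhead, versus the several-fold blowup of spelling the same values out as JSON text.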

Note: We just gave you a brief overview of something highly technical. Check out our primer on IoT protocols, and if you have questions about how this can work in your context, set up a call with us and we’d be happy to share more.

For Us, It’s Data Operations or Bust

Optimizing your data operations is no small feat, but think of the alternative: continued billing based on data consumption leads to wasted money, an inability to scale, and a weaker connected product overall.

If you’re struggling to find a path to profitable scalability with your current IoT platform or you’re seeking a platform that won’t crush you with data consumption costs, why not check out Particle’s IoT Platform-as-a-Service? Book a consultation and work with our team to build a business case for your connected product.

Don't Let Runaway Data Costs Torpedo Your Connected Product's Profitability

Browse our newly available cellular IoT devices and request a dev kit so you can go from prototype to production faster. Or, talk to an expert on our sales team to get help building a business case for your product.