Calculating and Compressing OLAP Cubes

Over the weekend, I've been hard at work writing an implementation of the Dwarf algorithm for compressing and calculating (aggregating) OLAP cubes. The initial results look good, and I've learned a great deal.

To begin, a little background. OLAP cubes typically perform numerous calculations and aggregations on a dimensional model in order to speed up query performance. Data warehouses are all about storing extremely large sets of data and presenting them to the user for analysis. Users being users, they don't want to wait while you perform a SQL GROUP BY and SUM over 10 million rows. So an effective OLAP cube calculates all of the GROUP BY combinations ahead of time, dramatically speeding up users' queries against the cube.
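For illustration, here's what that naive aggregation looks like as a toy sketch in plain Ruby (the row layout and values are made up): scan every row and sum on the fly, which is what a SQL GROUP BY with SUM does. A cube precomputes results like this so a query becomes a lookup instead of a scan.

```ruby
# Rows as they might come out of the warehouse: [store, product, sales_amount].
rows = [
  ['Store A', 'Widget', 10.0],
  ['Store A', 'Gadget',  5.0],
  ['Store B', 'Widget',  7.5]
]

# The naive approach: scan every row and sum on the fly, which is what
# "GROUP BY store, product" with SUM(sales_amount) does in SQL.
totals = Hash.new(0)
rows.each do |store, product, amount|
  totals[[store, product]] += amount
end

totals.each { |(store, product), sum| puts "#{store} / #{product}: #{sum}" }
```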

The trouble is that the number of GROUP BY combinations increases exponentially as the number of dimensions (and dimension attributes) increases. Plus, data warehouses are meant to store data over multiple years, and at a very fine grain. Therefore, a typical data warehouse can easily store hundreds of millions of rows.
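To see why the number of combinations blows up, here's a small sketch (dimension names are just examples) that enumerates every possible grouping over a set of dimension attributes. With n attributes there are 2**n groupings to precompute.

```ruby
# Every subset of the dimension attributes is a potential GROUP BY, so n
# attributes yield 2**n groupings (including the grand total).
dimensions = %w[date store product customer promotion]

groupings = (0..dimensions.size).flat_map do |k|
  dimensions.combination(k).to_a
end

puts groupings.size   # => 32 for 5 attributes; 10 attributes would give 1024
groupings.each { |g| puts g.empty? ? '(grand total)' : g.join(', ') }
```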

So you can see where this gets a bit tricky. How do you effectively calculate all of your GROUP BY combinations across such large data sets? How can you do it before the universe ends? How can you update your calculated cube with new rows?

This is where the Dwarf algorithm comes in. It promises to provide a way of calculating and compressing your cube with only one pass through your data. Sounds good to me! Let's try it.

First off, my implementation, which I've named BigDwarf, will become part of ActiveWarehouse. ActiveWarehouse is a Rails plugin that brings proven data warehouse techniques and conventions to Ruby on Rails, following the Rails way of "convention over configuration". BigDwarf is one of ActiveWarehouse's aggregation strategies.

I've named the implementation BigDwarf because I've implemented the Dwarf algorithm in spirit only. Dwarf works so well because it does both prefix and suffix coalescing, or compression. I've implemented only prefix compression so far. Suffix coalescing, which apparently provides the most dramatic space reductions for a sparse cube, is on the TODO list.
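To give a feel for what prefix compression buys you, here's a minimal sketch (not the actual BigDwarf code) of storing rows in a trie of nested hashes: rows that share leading dimension values share a single path, and only the fact is accumulated at the leaf.

```ruby
# A toy prefix-compressed store: each level of nesting is one dimension
# attribute, so rows that share leading values share those hash nodes.
class PrefixCube
  attr_reader :root

  def initialize
    @root = {}
  end

  # dims is an ordered array of dimension values; fact is the number to sum.
  def add(dims, fact)
    node = @root
    dims[0..-2].each { |value| node = (node[value] ||= {}) }
    node[dims.last] = (node[dims.last] || 0) + fact
  end
end

cube = PrefixCube.new
cube.add(['2007', 'Store A', 'Widget'], 10)
cube.add(['2007', 'Store A', 'Gadget'], 5)   # shares the 2007 / Store A prefix
cube.add(['2007', 'Store B', 'Widget'], 7)

p cube.root
# {"2007"=>{"Store A"=>{"Widget"=>10, "Gadget"=>5}, "Store B"=>{"Widget"=>7}}}
```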

The current implementation does not require the data set to be ordered, which traditional Dwarf does. This is good and bad. It's good because we don't need to sort everything before loading, which can be a costly step. (If we did need sorted input, we'd probably just use the UNIX sort command rather than sorting in the database, on the assumption that it's faster, and that's a big assumption.) Loading the data in arbitrary order gives us a lot of flexibility.

However, it's bad because there's a great potential speed optimization we can use if we order all the dimension attributes in the file. It makes the algorithm a bit more complex, but I think it also reduces the amount of recursion. This is on the TODO list for further research and implementation.

Because BigDwarf is for ActiveWarehouse, this is all written in Ruby. Turns out Ruby is slow. Repeat after me: Ruby is slow. It's a beautiful language, but it's just not a data crunching language. I've managed to optimize BigDwarf enough so that the bottleneck now is the + operator on Fixnum. Here are some things to avoid if you want to write fast Ruby code:

* == - equality comparison
* Array#[] - array element access
* Hash#[] - hash lookup, which has to compute the key's hash

Pretty much everything involving accessing your data in a collection will slow you down.
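If you're curious, a quick run with the standard library's Benchmark module makes the relative cost of these inner-loop operations visible. This is a rough sketch; absolute numbers depend heavily on your Ruby version and machine.

```ruby
require 'benchmark'

N = 5_000_000
array = [1, 2, 3, 4, 5]
hash  = { :key => 1 }

Benchmark.bm(12) do |bm|
  bm.report('Fixnum#+')  { x = 0; N.times { x + 1 } }
  bm.report('==')        { N.times { 1 == 1 } }
  bm.report('Array#[]')  { N.times { array[3] } }
  bm.report('Hash#[]')   { N.times { hash[:key] } }
end
```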

Once I'd optimized BigDwarf enough that I could keep working on it, I ran some tests. Here's what we have so far. My test data set is a 10,037,355-line file extracted from our SQL Server 2005 database. The file includes five dimension attributes and one fact (the number we want to sum across all of the dimensions). It's a tab-delimited text file, one line per row.
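For reference, the load loop for a file in that shape is about as simple as it gets. This sketch assumes a made-up filename and a placeholder cube.add call rather than BigDwarf's real API.

```ruby
# Each line: five tab-separated dimension values followed by one fact.
# cube.add is a stand-in for whatever insertion method the cube exposes.
running_total = 0.0

File.foreach('fact_extract.txt') do |line|
  fields = line.chomp.split("\t")
  dims   = fields[0, 5]       # the five dimension attributes
  fact   = fields[5].to_f     # the measure to be summed
  running_total += fact
  # cube.add(dims, fact)      # hand the row to the aggregation engine
end

puts running_total
```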

BigDwarf processes this file at 4,348 lines per second. It stores the fully calculated cube in 3,301,132 bytes, down from the original file's 337,022,624 bytes. That's a very dramatic compression rate of approximately 99%. YMMV, of course, as dimension cardinality and the size of the values in your data dictate much of that compression: the lower the cardinality, the higher the compression you'll see.

BigDwarf also supports querying, with basic filtering support. I haven't yet done any work to optimize query performance, or really gotten a sense of how fast it is. That's on my TODO as well.
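To make "querying with basic filtering" concrete, here's a toy illustration over a flattened cube. The layout and the roll-up code are made up for this example; they're not how BigDwarf stores or queries anything.

```ruby
# A stand-in "cube": keys are [year, store, product] tuples, values are sums.
cube = {
  ['2007', 'Store A', 'Widget'] => 10.0,
  ['2007', 'Store A', 'Gadget'] =>  5.0,
  ['2006', 'Store B', 'Widget'] =>  7.5
}

# Filter on year, then roll the surviving cells up by store.
by_store = Hash.new(0)
cube.each do |(year, store, _product), sum|
  next unless year == '2007'
  by_store[store] += sum
end

p by_store   # {"Store A"=>15.0}
```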

All in all, BigDwarf is working really well so far. There's work to do for further compression through suffix coalescing and further optimization through smarter cube building.
