
Friday, August 31, 2012

Facebook makes big data look... big!

Oh I love these things: http://techcrunch.com/2012/08/22/how-big-is-facebooks-data-2-5-billion-pieces-of-content-and-500-terabytes-ingested-every-day/

Every day there are 2.5B content items shared and 2.7B "Like"s. I care less about the GiGo content itself, but the metadata, connections and relations are kept transactionally in a relational database. These 2 use-cases alone generate 5.2B transactions on the database, and since there are only 86,400 seconds in a day, that's over 60,000 write transactions per second, before we even count all the other use-cases: new profiles, emails, queries...

And what's the size of the new data, on top of all the existing data that cannot be deleted so easily (remember why? Get a hint here: http://database-scalability.blogspot.com/2012/08/twitter-and-new-big-data-lifecycle.html)? A total of 500+TB is added every day. I'll exaggerate and assume 98% of it is pictures and other GiGo content, which still leaves us with a fuzzy 10TB of new data daily. There were times when Oracle called a database of over 1TB a VLDB, and here we have 10TB, every day.
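Here's a back-of-the-envelope sketch of the arithmetic (the daily totals are the figures quoted above; the 98% content/metadata split is my own rough assumption):

```python
# Back-of-the-envelope math for the figures quoted above.
SECONDS_PER_DAY = 86_400

content_shares_per_day = 2.5e9   # content items shared per day
likes_per_day          = 2.7e9   # "Like"s per day

writes_per_day = content_shares_per_day + likes_per_day
writes_per_sec = writes_per_day / SECONDS_PER_DAY
print(f"~{writes_per_sec:,.0f} write transactions/sec")          # ~60,185/sec

daily_ingest_tb    = 500         # total new data per day, in TB
non_metadata_share = 0.98        # rough assumption: pictures and other GiGo content
metadata_tb = daily_ingest_tb * (1 - non_metadata_share)
print(f"~{metadata_tb:.0f} TB of new relational data per day")   # ~10 TB
```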

So how does FB handle all this? They have a scaled-out grid of many 10,000s of MySQL servers.

The size alone is not the entire problem. Enough juice, memory, MPP, columnar - these will do the trick.
But if we push this throughput of 100Ks of transactions per second at a single database, it'll rip the guts out of any database engine. Remember that every write operation translates into at least 4 internal operations (table, index(es), undo, log), and the engine also has to do buffer management, locking, thread locks/semaphores, and recovery tasks. Can't happen.

The only way to handle such data size and such load is scale-out: divide the big problem into 20,000 smaller problems, and this is what FB is doing with their cluster of 10,000s of MySQLs.

You're probably thinking "naaa it's not my problem", "hey how many Facebooks are out there?". Take a look and try to put yourself, your organization, on the chart below:

Where are you today? Where will you be in 1 year? In 5 years? Things today go wild, and they go wild faster than ever. Big data is everywhere. New social apps aren't afraid of "what if no one shows up to my party?"; rather they're afraid of "what if EVERYBODY shows up?"

You don't need to be Facebook to need a good solution for scaling out your database.

Monday, May 21, 2012

Scaling OLTP is nothing like scaling Analytics

We're in the big data business. OLTP applications and Analytics.

Scaling OLTP applications is nothing like scaling Analytics, as I posted here: http://database-scalability.blogspot.com/2012/05/oltp-vs-analytics.html. OLTP is a mixture of reads and writes, heavy session concurrency, and also growing amounts of data.

In my previous post, http://database-scalability.blogspot.com/2012/05/scale-differences-between-oltp-and.html, I mentioned that Analytics can be scaled using: columnar storage, RAM and query parallelism.

Columnar storage cannot be used for OLTP: while it makes read scans better, it hurts writes, especially INSERTs. Same goes for RAM; the approach of “let’s put everything in memory” is also problematic for writes that should be Durable (the D in ACID). There are databases that achieve Durability by writing to the memory of at least 2 machines, and I'll get to that in a later post, but in the simpler view, RAM is great for reads (Analytics) and very limited for writes (OLTP).
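To see why single-row INSERTs and columnar layouts don't get along, here's a toy sketch (illustration only, not any real engine's storage format): a row store appends one record per INSERT, while a column store has to touch every column's array for the same logical row.

```python
# Toy illustration only -- not how a real engine stores data.
from collections import defaultdict

class RowStore:
    def __init__(self):
        self.rows = []                      # one contiguous record per row

    def insert(self, row: dict):
        self.rows.append(row)               # a single append per INSERT

class ColumnStore:
    def __init__(self):
        self.columns = defaultdict(list)    # one array per column

    def insert(self, row: dict):
        # the same logical row now means one write per column,
        # typically into separately stored (and compressed) segments
        for col, value in row.items():
            self.columns[col].append(value)

order = {"id": 1, "user_id": 42, "amount": 9.99, "status": "NEW"}
RowStore().insert(order)      # 1 physical append
ColumnStore().insert(order)   # 4 physical appends, one per column
```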

Query parallelism, which worked for Analytics, is limited for OLTP, mostly because of high concurrency and writes. OLTP is a mixture of reads and writes, and ratios today reach 50%-50% and beyond. Every write operation eventually becomes at least 5 operations for the database, including table, index(es), rollback segment, transaction log, row-level locking and more. Now multiply that by 1000 concurrent transactions and 1TB of data. The database engine itself becomes the bottleneck! It puts so many resources into buffer management, locking, thread locks/semaphores, and recovery tasks that no resources are left for handling the query data itself!

3 bullets why naïve parallelism is not a magic bullet for OLTP:
  1. Parallel query within the same database server will just turn the hard-to-manage 1000 concurrent transactions into an impossible 1,000,000 concurrent sub-transactions… Good luck with that… 
  2. Parallelizing a query across several database servers is a step in the right direction. However it can’t scale: if I have 10 servers and each of my 1000 concurrent transactions needs to gather data from all servers in parallel, how many concurrent transactions will I have on each server? That’s right, 1000. What did I solve? Can I scale to 2000 concurrent transactions? All my servers will die together. And what if I scale to 20 servers instead of 10? Then I’ll have 20 servers with 2000 concurrent transactions each… that will all die together. The sketch after this list puts numbers on it. 
  3. OLTP operations are not good candidates for parallelism:
    1. Scans - full table/index scans and range scans - are parallelized all the time in Analytics, but are seldom used in OLTP. In OLTP most accesses are short, pinpointed, index-based small-range and unique scans. Oracle’s optimizer mode FIRST_ROWS (OLTP) will almost always prefer index access, while ALL_ROWS (Analytics) will have a hard time giving up its favorite full table scan. So what exactly do we parallelize in OLTP? An index rebuild once a day (a scan...)?
    2. 1000 concurrent 1-row INSERT commands per second is a valid OLTP scenario. What exactly do I parallelize there?
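Here's the back-of-the-envelope sketch promised in bullet 2 (the numbers are the ones used above; the helper function is just for illustration):

```python
# Naive cross-server parallelism: every transaction fans out to every server,
# so adding servers doesn't reduce the per-server session count at all.
def sessions_per_server(concurrent_txns: int, servers: int, fan_out: bool) -> int:
    if fan_out:
        # each transaction opens a sub-transaction on every server
        return concurrent_txns
    # single-server routing: each transaction lands on exactly one server
    return concurrent_txns // servers

print(sessions_per_server(1000, 10, fan_out=True))    # 1000 - nothing gained
print(sessions_per_server(2000, 20, fan_out=True))    # 2000 - worse, on more hardware
print(sessions_per_server(1000, 10, fan_out=False))   # 100  - each server sees 1/10th
```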
Parallelism cannot be the one and only complete solution. It serves a minor role in the solution, whose key factor is: distribution.

OLTP databases can scale only by smart distribution of both the data and the concurrent sessions among numerous database servers. It’s all in the distribution of the data: if the data is distributed in a smart way, the concurrent sessions will also be distributed across the servers.

Go from 1 big fat database server dealing with 1TB of data and 1000 concurrent transactions, to 10 databases, each dealing with an easy 100GB and 100 concurrent transactions. I'll hit the jackpot if I manage to keep the databases isolated and shared-nothing, processing-wise, not only cables-wise. Best are transactions that start and finish on a single database.
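A minimal sketch of that kind of distribution (assuming a user_id sharding key, plain modulo placement and made-up host names - illustrative choices, not Facebook's actual scheme):

```python
# Route each transaction to a single shard by its sharding key,
# so the transaction starts and finishes on one database (shared-nothing).
import hashlib

SHARDS = [f"mysql-{i:02d}.example.internal" for i in range(10)]   # hypothetical hosts

def shard_for(user_id: int) -> str:
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# All of this user's rows - and therefore this whole transaction - live on one server:
print(shard_for(123456))
```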

And if I’m lucky and my business is booming, I can scale:
  1. Data grew from 1TB to 1.5TB? Add more database servers.
  2. Concurrent sessions grew from 1000 to 1500? Add more database servers.
  3. Parallel query/update? Sure! If a session does need to scan data from all servers, or needs to perform an index rebuild, it can run in parallel on all servers and will take a fraction of the time.
Ask Facebook (FB, as of today...). Each of their 10,000s of databases handles a fraction of the data, in a way that only a fraction of all sessions access it at any point in time.

Each of these databases is still doing hard work on every update/insert/delete and on buffer management, locking, thread locks/semaphores, and recovery tasks. What can we do? It's OLTP, it's read/write, it's ACID... It's heavy! I trust every one of my DBs to do what it does best; I just give it the optimal data size and session concurrency to do that.


Let's summarize here: Analytics scales with columnar storage, RAM and parallelism; OLTP scales by smart distribution of the data and the concurrent sessions across many database servers, each handling a data size and a load it can comfortably do well with.

In my next post I'll dive more into implementation caveats (shared disk, shared memory, sharding), pitfalls, do's and don'ts... 

Stay tuned, join those who have subscribed to get automatic updates, and get involved!

Tuesday, May 15, 2012

Scale differences between OLTP and Analytics


In my previous post, http://database-scalability.blogspot.com/2012/05/oltp-vs-analytics.html, I reviewed the differences between OLTP and Analytics databases.

Scale challenges are different between those 2 worlds of databases.



Scale challenges in the Analytics world come with the growing amounts of data. Most solutions have been leveraging 3 main aspects: columnar storage, RAM and parallelism.
Columnar storage makes scans and data filtering more precise and focused. After that, it all comes down to the I/O: the faster the I/O, the faster the query will finish and bring results. Faster disks and SSDs can play a good role, but above all: RAM! Specialized Analytics databases (such as Oracle Exadata and Netezza) have TBs of RAM. Then, in order to bring results for queries, data needs to be scanned and filtered - a great fit for parallelism. A big data range is divided into many smaller ranges handed to parallel worker threads, each of which performs its task in parallel, so the entire scan finishes in a fraction of the time.
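A sketch of the idea in plain Python (a real engine does this natively inside the storage layer; the data and the filter here are made up): split one big range into chunks and hand each chunk to a parallel worker.

```python
# Divide-and-conquer scan: split one big range into chunks,
# scan/filter each chunk in a separate worker, then merge the partial results.
from concurrent.futures import ProcessPoolExecutor

def scan_chunk(chunk):
    # the "filter" step of an analytic query, applied to one slice of the data
    return [x for x in chunk if x % 1000 == 0]

def parallel_scan(data, workers=8):
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(scan_chunk, chunks)
    return [row for part in partials for row in part]

if __name__ == "__main__":
    print(len(parallel_scan(list(range(1_000_000)))))   # 1000 matching rows
```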

In the OLTP world, scale challenges come from growing transaction concurrency and throughput and… growing amounts of data. Again? Didn't we just say growing data is the problem of Analytics? Well, today’s OLTP apps are required to hold more data in order to provide a larger span of online functionality. In the last couple of years OLTP data archiving has changed dramatically; OLTP data now covers years, not just days or weeks. Facebook recently launched its “timeline” feature (http://www.facebook.com/about/timeline) - can you imagine your timeline ending after 1 week? Facebook’s database, probably the world’s largest OLTP database, holds data of a billion users going years back. Today all data is required anywhere, anytime, right here, right now, online. Many of today’s OLTP databases go well beyond the 1TB line. And what about transaction concurrency and throughput? Applications today are bombarded by millions of users shooting transactions from browsers, smartphones, tablets… I personally checked my bank account 3 times today. Why? Because I can…

What can be done to solve OLTP scale challenges?

In my next post let's start answering this question by understanding why the solutions proposed for Analytics are limited in OLTP, and start reviewing the relevant approaches.

Stay tuned, subscribe, get involved!