
Tuesday, August 28, 2012

Scale Up, Partitioning, Scale Out

On August 16 I conducted a webinar titled "Scale Up vs. Scale Out" (http://www.slideshare.net/ScaleBase/scalebase-webinar-816-scaleup-vs-scaleout).

The webinar was successful; we had many attendees and great participation in questions and answers throughout the session and at the end. Only after the webinar did it occur to me that one specific graphic was missing from the deck. It struck me after answering several audience questions along the lines of "what is the difference between partitioning and sharding?" and "why doesn't partitioning qualify as scale-out?".

If I were giving the webinar today, I would definitely include the following picture, describing the core difference between Scale Up, Partitioning, and Scale Out:

In the above (admittedly poor) graphic, the black box is the database server machine, the good old cylinder is the disk or storage device, and the colorful square thingy stands for the database engine. Believe it or not, that square is a real, complete architecture chart of the Oracle 10gR2 SGA, miniaturized to a small scale. Yes, all databases, Oracle and MySQL included, are complex beasts; a lot goes on inside the database engine for every single command.

If my DB looks like the "starting point", then either I'm really small, or I'm in really bad shape by now.
Partitioning works wonders as data grows towards "big data". It optimizes data placement on separate files or disks, and it keeps every partition optimized, "thin", and less fragmented than you would expect from a gigantic, busy, monolithic table. Still, although we've split the data across files, we're still "stuck" with one busy monolithic database engine that relies on a single box's computing power.

While we distributed the data, we didn't distribute the compute.
When there is a heavy join operation, there is one busy monolithic database engine to collect data from all partitions and process the join.
When there are 10,000 concurrent transactions to handle right here and now, there is one busy monolithic database engine to do all the database-engine work: buffer management, locking, thread locks/semaphores, and recovery tasks. The buffers, lock queues, transaction queues... are still shared across all partitions.

This is where scale-out differs from partitioning. It distributes and parallelizes not only the data but also the all-important compute: it brings the compute closer to the data and lets several database engines process different sets of data, each handling a share of the overall session concurrency.

You can think of it as one step beyond partitioning, and it comes with great results. It's not a simple step, though: an abstraction layer is required to present the database grid to the application as the single database it is used to.
In future posts I'll go deeper into this "scale-out abstraction layer", and into ScaleBase, a provider of such a layer.
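To make this concrete, here is a minimal sketch, in Python, of the core of such an abstraction layer (the host names and the shard-key hashing are hypothetical simplifications; a real layer must also handle cross-shard queries, distributed transactions, and result merging). It maps each statement's shard key to one database in the grid, so the application keeps talking to what looks like a single database:

```python
import hashlib

# A hypothetical grid of shared-nothing database nodes (host names are made up).
SHARDS = ["db-node-1:3306", "db-node-2:3306", "db-node-3:3306", "db-node-4:3306"]

def shard_for(key: str) -> str:
    """Map a shard key (e.g. a customer_id) to one database in the grid."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

def route(statement: str, shard_key: str) -> str:
    """The 'one database' illusion: the app calls route(), the layer picks the node."""
    node = shard_for(shard_key)
    # A real layer would open (or reuse) a connection to `node` and execute there.
    return f"-- executed on {node}: {statement}"

print(route("SELECT * FROM orders WHERE customer_id = 42", shard_key="42"))
```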

Monday, August 6, 2012

Twitter and the new big data lifecycle

Recently I came across this fine article in The New York Times: "Twitter Is Working on a Way to Retrieve Your Old Tweets". Dick Costolo, Twitter’s chief executive, said:
"It’s a different way of architecting search, going through all tweets of all time. You can’t just put three engineers on it."
Mr. Costolo is right, and he pointed the spotlight at a very important change we're experiencing today, in these interesting times. The key word is expectations, and they are changing fast!

Not so long ago, Big Data was a synonym for Analytics, Data Warehouse, Business Intelligence. Traditionally, operational (OLTP) apps held limited amounts of data, only the "current" data relevant for ongoing operations. A cashier in a supermarket would hold only recent transactions, to enable lookup of a charge made 10 minutes ago, in case I need to return an item or dispute the charge while still at the register. When I come back to the store the day after, I won't go to the cashier; I'll go to "customer service", which, with a different application and a different database, will handle my returned items or disputes. A dispute after several months will not be handled by the store's customer service, but by "the chain's dispute department", using a third app with a third, cumulative, aggregated DB. And on and on it goes.

In this simplified example, the organization invested many resources in 3 different DBs and apps, aggregating different levels of data and enabling similar, marginally different functionality. Why? Data volume and concurrency.

At the cashiers, the only place where new data is really generated, concurrency is also at its highest. Globally, many thousands of items are "beeped" and sold through the cashiers every minute; the data is kept small, generated and extracted out shortly afterwards. The customer service reps handle tens of customers a minute over larger data, and "the chain's dispute department" overlooks the biggest data but handles 1 or 2 cases an hour, and might also run more analytic-style queries to determine the nature of a dispute...

This was, in a nutshell, the "lifecycle of the data" in the old world. But today everything has changed - it's all online, right here, right now!

Enormous amounts of (big) data are generated, searched, and analyzed at the same time. Everything is online, here and now. Every tweet (millions a day) is reported instantly to hundreds of followers, participates in saved searches, and is analyzed by numerous robots and engines throughout the web, as well as by Twitter itself. The same goes for every search I run or e-mail I send in Google, and for every status or "like" in Facebook, which is reported to my hundreds of friends and analyzed at the same time, here and now. Hey, it's their way to make money: push the right ads at the right time.

And now we learn that users expect to see online data that is "old" in the terminology of the old days. I want to see statuses, likes, and tweets from 2 or 4 months ago, in the same interface and with the same experience I'm used to; don't send me to the "customer service department"!

The bottom line: it requires scale. Scale your online database to handle online data volumes and throughput, as well as older data, on the same grid, without interference, with the same applications. This is what scale-out is all about. Think outside the (one database server) box. If you have 10 databases for the current data, you can have 10 more with older data, and 100 more with even older data, and so on. Giving a transparent, unified view of (or virtualizing) this database grid is the solution that occupies most of my time, and it's the missing link to making database scale-out a commodity.
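As a minimal sketch of such age-based distribution (the tier sizes and retention windows here are made-up assumptions): the newest, hottest data lives on a small set of databases, older and colder data lives on progressively larger sets, and one routing function hides it all:

```python
from datetime import date, timedelta

# Hypothetical tiers of the grid: hot (current), warm (older), cold (oldest).
HOT  = [f"hot-db-{i}"  for i in range(10)]   # last 30 days
WARM = [f"warm-db-{i}" for i in range(10)]   # 30 days to 1 year
COLD = [f"cold-db-{i}" for i in range(100)]  # everything older

def tier_for(event_date: date) -> list:
    """Pick a tier by data age."""
    age = date.today() - event_date
    if age <= timedelta(days=30):
        return HOT
    if age <= timedelta(days=365):
        return WARM
    return COLD

def node_for(user_id: int, event_date: date) -> str:
    """Within the tier, hash the user to a specific node."""
    tier = tier_for(event_date)
    return tier[user_id % len(tier)]

# Same interface, same experience - whether the tweet is from today or last year.
print(node_for(42, date.today()))
print(node_for(42, date.today() - timedelta(days=200)))
```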

Thursday, June 28, 2012

ARM based data center. Inspiring.

In a previous post I wrote about ARM-based servers. Since then, thanks to all the comments and responses I got, I've looked more into this ARM thing, and it's absolutely fascinating...

Look at this beauty (taken from the site of Calxeda, the manufacturer):

What is it? A chip? A server? No, it's a cluster of 4 servers...

And this:

is the HP Redstone Server: 288 chips, 1,152 cores (Calxeda quad-core SoCs) in a 4U server, “dramatically reducing the cost and complexity of cabling and switching”. Calxeda talks about “Cut energy and space by 90%” and “10x the performance at the same power, the same space”, and it's just the beginning...

And this is from just the last couple of days, from ISC'12 (the International Supercomputing Conference): "ARM in Servers – Taming Big Data with Calxeda":

  • In the case of data intensive computing, re-balancing or ‘right-sizing’ the solution to eliminate bottlenecks can significantly improve overall efficiency
  • By combining a quad-core ARM® Cortex™-A series processor with topology agnostic integrated fabric interconnect (providing up to 50Gbits of bandwidth at latencies less than 200ns per hop), they can eliminate network bottlenecks and increase scalability


You still can't go to the store and buy a 4U ARM-based database server that performs 10x and uses 1/10 of the power (combine the two and it's an order of magnitude of 100x...). It's not here now, maybe not tomorrow, but it's not sci-fi. And technologies will have to adapt to this world of "multiple machines, shared nothing, commodity hardware". I think databases will be the hardest technology to adapt; the only way is to distribute the data wisely and then distribute the processing, sometimes parallelizing processing and access, to harness those thousands of cores.

Wednesday, June 20, 2012

The catch-22 of read/write splitting

In my previous post I covered the shared-disk paradigm's pros and cons, and the conclusion was that it cannot really qualify as a scale-out solution when it comes to massive OLTP, big data, big session counts, and a mixture of reads and writes.

Read/write splitting is achieved when numerous replicated database servers are used for reads. This way the system can scale to cope with an increase in concurrent load. This solution qualifies as a scale-out solution, as it allows expansion beyond the boundaries of one DB: the DB machines are shared-nothing, and a new one can be added as a slave to the replication "group" when required.
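For illustration, here is a minimal sketch of the mechanism in Python (the host names are hypothetical, and a real implementation works at the connection/driver level rather than by inspecting SQL strings): writes go to the single master, reads are spread round-robin across the replicas:

```python
import itertools

MASTER = "master-db:3306"
REPLICAS = itertools.cycle(["replica-1:3306", "replica-2:3306", "replica-3:3306"])

def route(statement: str) -> str:
    """Writes to the master; reads round-robin over the replication slaves."""
    is_read = statement.lstrip().upper().startswith("SELECT")
    node = next(REPLICAS) if is_read else MASTER
    return f"-- executed on {node}: {statement}"

print(route("SELECT name FROM users WHERE id = 7"))      # goes to a replica
print(route("UPDATE users SET name='Bob' WHERE id = 7")) # goes to the master
```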


And, as a matter of fact, read/write splitting is very popular and widely used by lots of high-traffic applications such as popular web sites, blogs, mobile apps, online games, and social applications.

However, today's extreme challenges of big data, increased load, and advanced requirements expose vulnerabilities and flaws in this solution. Let's summarize them here:

  • All writes go to the master node = bottleneck: While reading sessions are distributed across several database servers (the replication slaves), writing sessions all go to the same primary/master server, which remains a bottleneck; together they consume all of that DB's resources for our well-known buffer management, locking, thread locks/semaphores, and recovery tasks.
  • Scaled session load, not big data: I can take my X reading sessions and spread them over my 5 replication slaves, giving each only X/5 sessions to handle, but my giant DB must still be replicated as a whole to every server. Prepare lots of disks...
  • Scale? Yes. Query performance? No: Queries on each read replica still have to cope with the entire data of the database. No parallelism, no smaller data sets to handle.
  • Replication lag: Asynchronous replication will always introduce lag. Be prepared for a gap between the writes and the reads.
  • Reads right after a write may show missing data: until the transaction is committed, written to the log, propagated to the slave machine, and applied at the slave DB, a read from a replica will not see it (see the sketch after this list).
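A common mitigation for the last two points, sketched here with hypothetical names and a naively assumed lag bound, is "read your own writes" routing: after a session writes, its reads are pinned to the master for a while before being sent back to the replicas:

```python
import time
import itertools

MASTER = "master-db:3306"
REPLICAS = itertools.cycle(["replica-1:3306", "replica-2:3306"])
PIN_SECONDS = 5.0   # assumed upper bound on replication lag
last_write = {}     # session_id -> timestamp of that session's last write

def route(session_id: str, statement: str) -> str:
    """Pin a session's reads to the master right after it writes."""
    is_read = statement.lstrip().upper().startswith("SELECT")
    if not is_read:
        last_write[session_id] = time.time()
        return MASTER
    if time.time() - last_write.get(session_id, 0) < PIN_SECONDS:
        return MASTER   # recent writer: avoid reading stale replica data
    return next(REPLICAS)
```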
Above all, databases suffer from writes made by many concurrent sessions. The database engines themselves become the bottleneck because of their *buffer management, locking, thread locks/semaphores, and recovery tasks*. Reads are a secondary target. By the way, read performance and scale can very well be gained by good, smart caching, such as using a NoSQL store like Memcached in the app, in front of the RDBMS. In modern applications we see more and more reads avoided by caching, while the writes, which cannot be avoided or cached, keep storming the DB.

R/W splitting is usually implemented today inside the application code; it's easy to start, then it becomes hard... I recommend using a specialized COTS product that does it 100 times better and may eliminate some or all of the limitations above (ScaleBase is one solution that provides that, among other things).


This is read/write splitting's catch-22. It's an OK scale-out solution and relatively easy to implement, but the improvement of caching systems, the changing requirements of online applications, and big data and big concurrency are rapidly driving it towards its fate: becoming less and less relevant, and playing only a partial role in a complete scale-out plan.

In a complete scale-out solution, where data is distributed (not replicated) throughout a grid of shared-nothing databases, read/write splitting will still play a part, but only a minor one. We'll get to that in upcoming posts.

Thursday, June 7, 2012

Why shared-storage DB clusters don't scale

Yesterday a customer asked me why he had failed to achieve scale with a state-of-the-art "shared-storage" cluster. "It's a scale-out to 4 servers, but with a shared disk. And after tons of work and effort I got 130% throughput, not even close to the expected 400%," he said.

Well, scale-out cannot be achieved with shared storage; the word "shared" is the key. Scale-out is done with absolutely nothing shared, a "shared-nothing" architecture. This is what makes it linear and unlimited. Any shared resource creates a tremendous burden on each and every database server in the cluster.

In a previous post, I identified database engine activities such as buffer management, locking, thread locks/semaphores, and recovery tasks as the main bottleneck in an OLTP database handling a mixture of reads and writes. No matter which database engine, they all have all of the above: Oracle, MySQL, all of them. "The database engine itself becomes the bottleneck!" I wrote.

With a shared disk, there is a single shared copy of my big data, and each database engine still has to maintain its buffer management, locking, thread locks/semaphores, and recovery tasks. But now, with a twist: all of the above must be coordinated "globally" among all participating servers, through network adapters and cables, introducing latency. Every database server in the cluster needs to update all other nodes for every buffer-management, locking, or recovery action it performs on a block of data. See the number of "conversation paths" between the 4 nodes:
Blue dashed lines are data access; red lines are communications between nodes. Every node must notify, and be notified by, all other nodes. It's a complete graph: with 4 nodes there are 6 edges, and with 10 nodes you'll find 45 red lines (n(n-1)/2, a reminder from Computer Science courses...). Imagine the noise, the latency, for every buffer-management, locking, or recovery action. Shared-storage becomes shared-everything. Node A makes an update to block X, and 10 machines need to acknowledge it. I wanted to scale, but instead I multiplied the initial problem.
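A quick back-of-the-envelope sketch drives the point home: the number of coordination paths grows quadratically with the node count, so every node you add makes every existing node chattier:

```python
def coordination_paths(n: int) -> int:
    """Edges in a complete graph of n nodes: every node talks to every other."""
    return n * (n - 1) // 2

for n in (2, 4, 10, 20):
    print(f"{n} nodes -> {coordination_paths(n)} inter-node paths")
# 2 -> 1, 4 -> 6, 10 -> 45, 20 -> 190: adding nodes multiplies the chatter.
```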

And I didn't even mention the fact that the shared disk might become a SPOF and a bottleneck.
And I didn't even mention the limitations when you want to take this to the cloud or to virtualization.
And I didn't mention the tons of money this toy costs. I prefer buying 2 of these, one for me and one for a good friend, and hit the road together...

Shared-disk solutions give a very limited answer to OLTP scale. If the data is not distributed, the computing resources required to handle that data are not distributed and reduced either; on the contrary, they are multiplied and consume all available resources from all machines.

Real scale-out is achieved by distributing the data across shared-nothing nodes: every database node is independent with its own data, with no duplication and no notifications, ownership negotiations, or acknowledgements over any network. If the data is distributed correctly, concurrent sessions will also be distributed across servers, and each node runs extremely fast on its small data and small load, with its own small buffer management, locking, thread locks/semaphores, and recovery tasks.

My customer responded "Makes perfect sense! Tell me more about Scale-Out and distribution...". 

Wednesday, May 30, 2012

Scale-out your DB on ARM-based servers

Today, I think we witnessed a small sign for a big revolution...

http://www.pcworld.com/businesscenter/article/256383/dell_reaches_for_the_cloud_with_new_prototype_arm_server.html
"Dell announced a prototype low-power server with ARM processors, following a growing demand by Web companies for custom-built servers that can scale performance while reducing financial overhead on data centers"
In short, ARM (see the Wikipedia definition here) is an architecture standard for processors. ARM processors are slower than the good old x86 processors from Intel and AMD, but have power-efficiency, density, and price attributes that intrigue customers, especially in these days of green data centers, where carbon emissions are carefully measured, and, of course, of cost-saving economics.

Take iPhones and iPads for example: those amazing machines do fast real-time calculations with their relatively powerful ARM processors (Apple A4, A5, A5X), yet are extremely power-efficient and stay relatively cool. See the picture (credit to Wikipedia) of the newest Apple A5X chip, used in the new iPad:


Today, when the truly big web and cloud players build their data centers, the question is not "how big are your servers?" but rather "how many servers do you carry?". Ask Facebook, Google, Netflix, and more to come... For those guys no single server can be big enough anyway, so they build from the ground up to scale out to numerous servers performing small tasks concurrently. Familiar with Google's MapReduce and Hadoop? So why not parallelize on ARM-based servers? Can you imagine 20 iPads occupying the same space as a 1U rack-mounted "pizza" server, but with 5x the parallel computing power, 20x the electrical power efficiency, and 100x the cooling cost efficiency?

So what all this has to do with database scalability you ask?

There is one quote from the article I don't agree with: "But ARM still cannot match up chips from Intel and AMD for resource-heavy tasks such as databases."

Oh, I remember those days when, in every data center of every organization I consulted for, I saw the same picture... All machines were nice, neatly organized, tagged, blades, standard racks, mostly virtualized... But the DB? No... Those servers were non-standard: the biggest, the ugliest, the most expensive, in capex and opex. Why? It's the DB! It's special! It has needs!! Specialized HW, specialized storage, $$$. Those days are over, as organizations' need to save has arrived at the shores of the sacred database. Today harder questions are being asked, more is being done in commoditization, more databases are virtualized, and the cloud...

Big Data is everywhere: on the web, in the cloud, in the enterprise. Databases must scale out or else explode; it's a hard fact. Databases can be scaled with smart distribution and parallelism, and can then use commodity hardware, be easily virtualized, and be cloud-ified. If distribution is done correctly, the sum is greater than its parts, and the parts in this case can be low-end... the lowest of the low... Database machines can definitely be ARM-based servers, each holding a portion of the data, attracting a portion of the concurrent sessions, and contributing to the overall processing.

A database on an iPad? Nah, I'd rather break another record in Fruit Ninja.
A database on 20 ARM-based servers? If it's 5x faster and costs 60x less in electricity and cooling, then yes, definitely.

Monday, May 21, 2012

Scaling OLTP is nothing like scaling Analytics

We're in the big data business: OLTP applications and Analytics.

Scaling OLTP applications is nothing like scaling Analytics, as I posted here: http://database-scalability.blogspot.com/2012/05/oltp-vs-analytics.html. OLTP is a mixture of reads and writes, heavy session concurrency, and ever-growing amounts of data.

In my previous post, http://database-scalability.blogspot.com/2012/05/scale-differences-between-oltp-and.html, I mentioned that Analytics can be scaled using columnar storage, RAM, and query parallelism.

Columnar storage cannot be used for OLTP: while it makes read scans better, it hurts writes, especially INSERTs. The same goes for RAM; the "let's put everything in memory" approach is also problematic for writes that should be durable (the D in ACID). There are databases that achieve durability by writing to the memory of at least 2 machines, and I'll get to that in a later post, but in the simple view, RAM is great for reads (Analytics) and very limited for writes (OLTP).

Query parallelism, which worked for Analytics, is limited for OLTP, mostly because of high concurrency and writes. OLTP is a mixture of reads and writes; ratios today reach 50%-50% and beyond. Every write operation eventually becomes at least 5 operations for the database, spanning the table, index(es), rollback segment, transaction log, row-level locking, and more. Now multiply that by 1,000 concurrent transactions and 1TB of data. The database engine itself becomes the bottleneck! It puts so many resources into buffer management, locking, thread locks/semaphores, and recovery tasks that no resources are left for actually handling query data!

3 reasons why naïve parallelism is not a magic bullet for OLTP:
  1. Parallel query within the same database server will just turn the hard-to-manage 1,000 concurrent transactions into an impossible 1,000,000 concurrent sub-transactions… Good luck with that… 
  2. Parallelizing a query across several database servers is a step in the right direction. However, it can't scale: if I have 10 servers and each of my 1,000 concurrent transactions needs to gather data from all servers in parallel, how many concurrent transactions will I have on each server? That's right: 1,000. What did I solve? Can I scale to 2,000 concurrent transactions? All my servers will die together. And if I scale to 20 servers instead of 10? Then I'll have 20 servers with 2,000 concurrent transactions each… and they will all still die together (see the sketch after this list). 
  3. OLTP operations are not good candidates for parallelism:
    1. Scans (full table/index scans and range scans) are parallelized all the time in Analytics, but seldom in OLTP. In OLTP, most accesses are short, pinpointed, index-based small-range and unique scans. Oracle’s optimizer mode FIRST_ROWS (OLTP) will almost always prefer index access, while ALL_ROWS (Analytics) has a hard time giving up its favorite full table scan. So what exactly would we parallelize in OLTP? An index rebuild once a day (a scan…)?
    2. 1,000 concurrent 1-row INSERT commands a second is a valid OLTP scenario. What exactly do I parallelize there?
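To spell out the arithmetic behind point 2, here is a minimal sketch using the numbers from the example above: with fan-out parallelism every server sees every transaction, while with routed distribution each server sees only its share:

```python
def per_server_load(transactions: int, servers: int, fan_out: bool) -> int:
    """Concurrent transactions each server must handle."""
    if fan_out:
        return transactions          # every transaction touches every server
    return transactions // servers   # each transaction is routed to one server

print(per_server_load(1000, 10, fan_out=True))   # 1000 - nothing gained
print(per_server_load(2000, 20, fan_out=True))   # 2000 - even worse
print(per_server_load(1000, 10, fan_out=False))  # 100  - real relief
```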
Parallelism cannot be the one and only complete solution. It serves a minor role in a solution whose key factor is distribution.

OLTP databases can scale only by smart distribution of the data, and with it the concurrent sessions, among numerous database servers. It's all in the distribution of the data: if the data is distributed in a smart way, the concurrent sessions will also be distributed across servers.

Go from 1 big fat database server dealing with 1TB of data and 1,000 concurrent transactions to 10 databases, each dealing with an easy 100GB and 100 concurrent transactions. I hit the jackpot if I manage to keep the databases isolated and shared-nothing, processing-wise and not only cables-wise. Best are transactions that start and finish on a single database.

And if I’m lucky and my business is booming, I can scale:
  1. Data grew from 1TB to 1.5TB? Add more database servers.
  2. Concurrent sessions grew from 1,000 to 1,500? Add more database servers.
  3. Parallel query/update? Sure! If a session does need to scan data from all servers, or needs to perform an index rebuild, it can run in parallel on all servers and will take a fraction of the time.
Ask Facebook (FB, as of today...). Each of their tens of thousands of databases handles a fraction of the data, in a way that ensures only a fraction of all sessions access it at any point in time.

Each of the databases still works hard on every update/insert/delete, and on buffer management, locking, thread locks/semaphores, and recovery tasks. What can we do? It's OLTP, it's read/write, it's ACID... It's heavy! I trust each of my DBs to do what it does best; I just give it the optimal data size and session concurrency to do it with.


Let's summarize here:

In my next post I'll dive more into implementation caveats (shared disk, shared memory, sharding), pitfalls, and do's and don'ts...

Stay tuned, join those who have already subscribed to get automatic updates, and get involved!

Tuesday, May 15, 2012

Scale differences between OLTP and Analytics


In my previous post, http://database-scalability.blogspot.com/2012/05/oltp-vs-analytics.html, I reviewed the differences between OLTP and Analytics databases.

Scale challenges are different between those 2 worlds of databases.



Scale challenges in the Analytics world come with the growing amounts of data. Most solutions leverage 3 main ingredients: columnar storage, RAM, and parallelism.
Columnar storage makes scans and data filtering more precise and focused. After that, it all comes down to the I/O: the faster the I/O, the faster the query finishes and brings results. Faster disks and SSDs can play a good role, but above all: RAM! Specialized Analytics databases (such as Oracle Exadata and Netezza) have TBs of RAM. Then, to bring back query results, data needs to be scanned and filtered, a great fit for parallelism: a big data range is divided into many smaller ranges, which are given to parallel worker threads, each performing its task in parallel, so the entire scan finishes in a fraction of the time.
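Here is a minimal sketch of that divide-and-scan pattern (the data range and the filter predicate are made up): one big range is split into sub-ranges, and a pool of parallel workers filters each one:

```python
from concurrent.futures import ProcessPoolExecutor

TOTAL = 10_000_000   # stand-in for the row range of one big table
WORKERS = 8

def scan(bounds):
    """Scan and filter one sub-range - the unit of work of one parallel worker."""
    start, end = bounds
    return sum(1 for v in range(start, end) if v % 97 == 0)  # the 'filter'

if __name__ == "__main__":
    step = TOTAL // WORKERS
    ranges = [(i * step, (i + 1) * step) for i in range(WORKERS)]
    with ProcessPoolExecutor(max_workers=WORKERS) as pool:
        counts = pool.map(scan, ranges)
    print(sum(counts))  # same result as one serial scan, in a fraction of the time
```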

In OLTP, the scale challenges are the growing transaction concurrency and throughput and... growing amounts of data. Again? Didn't we just say growing data is the Analytics problem? Well, today's OLTP apps are required to hold more data, to provide a larger span of online functionality. In the last couple of years, OLTP data archiving has changed dramatically: OLTP data now covers years, not just days or weeks. Facebook recently launched its "timeline" feature (http://www.facebook.com/about/timeline); can you imagine your timeline ending after 1 week? Facebook's, probably the world's largest OLTP database, holds data of a billion users going years back. Today all data is required anywhere, anytime, right here, right now, online. Many of today's OLTP databases go well beyond the 1TB line. And what about transaction concurrency and throughput? Applications today are bombarded by millions of users shooting transactions from browsers, smartphones, tablets... I personally checked my bank account 3 times today. Why? Because I can...

What can be done to solve OLTP scale challenges?

In my next post, let's start answering this question by understanding why the solutions proposed for Analytics are limited in OLTP, and begin reviewing relevant approaches.

Stay tuned, subscribe, get involved!