Showing posts with label big data.

Wednesday, April 3, 2013

MySQL thread pool and scalability examples

A nice article about the SimCity outage and ways to defend databases: http://www.mysqlperformanceblog.com/2013/03/16/simcity-outages-traffic-control-and-thread-pool-for-mysql/

The graphs showing throughput with and without the thread pool come from a benchmark performed by Oracle, available here:
http://www.mysql.com/products/enterprise/scalability.html

The main takeaway is this graph (all rights reserved to Oracle, picture original URL):
20x Better Scalability: Read/Write
Scalability is when throughput can keep growing as demand grows. I need to get more from the database, and the question is: "can it scale to give it to me?" Scalability means the response time remains "acceptable" while throughput grows and grows.

Every database has a "knee point".
  1. In the best-case scenario, at this knee point throughput flattens into a plateau, and at the same point, by the way, response time starts climbing past the acceptable limit.
  2. In the worse-case scenario, at this knee point throughput, instead of flattening into a plateau, takes a plunge, while response time climbs fast, through the roof.
Even the red, best-case scenario is actually pretty bad... There's NO scalability there: throughput hits a hard limit at around 6,500 transactions per second. I need to do more on my DB, there are additional connections - but the DB is not giving even one inch of additional throughput. It doesn't scale.

The thread pool feature is no more than a defense mechanism. It doesn't break the scalability limit of a single machine; rather, its job is to defend the database from death.
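To make the "defense mechanism" idea concrete, here is a minimal sketch in plain Python - my own illustration, not MySQL's actual thread pool implementation - where a fixed pool caps how many statements execute concurrently, so a flood of incoming connections queues up instead of overwhelming the engine:

```python
# Minimal sketch of the thread-pool idea: cap concurrent statement execution
# so excess load waits in a queue instead of overwhelming the database engine.
# Names and numbers are illustrative, not MySQL's real implementation.
import time
from concurrent.futures import ThreadPoolExecutor

MAX_ACTIVE_STATEMENTS = 16          # "thread pool size": only this many run at once

def run_query(query_id):
    # Stand-in for sending a statement to the database.
    time.sleep(0.01)                # pretend each statement takes ~10 ms
    return query_id

with ThreadPoolExecutor(max_workers=MAX_ACTIVE_STATEMENTS) as pool:
    # 1,000 "connections" arrive at once; only 16 statements execute concurrently,
    # the rest simply wait their turn instead of piling onto the engine.
    results = list(pool.map(run_query, range(1000)))

print(f"executed {len(results)} statements, never more than "
      f"{MAX_ACTIVE_STATEMENTS} at a time")
```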

Real scalability is when the throughput graph is neither dropping nor flattening - it goes up and up and up, with stable response time. This can be achieved only by scale-out. Get 7,500 TPS from 1 database with 32 connections, then add another database and the straight line going up reaches, say, 14,000. A system with 3 databases can support 96 connections and 21,000 TPS... and on and on it goes...

Data needs to be distributed across those databases, so the load can be distributed as well. Maintaining this distributed data on the scaled-out database is the key... I'll touch on that in future posts.
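As a taste of what that distribution could look like, here is a minimal, hypothetical sketch of hash-based routing - the DSNs and the modulo scheme are made up for illustration - where each customer ID maps deterministically to one of the databases, so data and load spread together:

```python
# Hypothetical sketch: route each customer to one of several databases by hashing
# its key, so both the data and the load are spread across the scaled-out grid.
import hashlib

SHARDS = [
    "mysql://db0.example.com/app",   # illustrative DSNs, not real servers
    "mysql://db1.example.com/app",
    "mysql://db2.example.com/app",
]

def shard_for(customer_id: int) -> str:
    digest = hashlib.md5(str(customer_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

for cid in (17, 42, 1001):
    print(cid, "->", shard_for(cid))
```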

Tuesday, March 26, 2013

They say: "Relational Databases Aren't Dead"

This is a good read, claiming: "Relational Databases Aren't Dead. Heck, They're Not Even Sleeping", http://readwrite.com/2013/03/26/relational-databases-far-from-dead. A key quote:
"While not comprehensive, the uses for NoSQL databases center around the acquisition of fast-growing data or data that does not easily fit within uniform structures."

There are two parts to that statement about NoSQL's uses. I'll start with the latter:


"data that does not easily fit within uniform structures" - NoSQL is probably the right choice, hmm although I always encourage thinking and architecting in advance. And also online structure changes do exist in the RDBMS world and recently in MySQL: http://dev.mysql.com/doc/refman/5.6/en/innodb-online-ddl.html...
I would definitely warn about the caveats of NoSQL when it comes to actually using and querying the data that is so easily stored there...

"acquisition of fast-growing data" - is no longer a no-go for RDBMS and MySQL database. Distributed RDBMS solutions do exist today and they can exploit performance and scalability from the good old MySQL itself

What do you think?

Friday, January 4, 2013

Partial partitioning and sharding

I came across this: http://stackoverflow.com/questions/14136633/difference-between-partial-replication-and-sharding
I was wondering if sharding is an alternate name for partial replication or not. What I have figured out is that --
  • Partial Repl. – each data item has only copies at some but not all of the nodes (‘Sharding’?)
  • Pure Partial Repl. – has only copies of a subset of the data item but no node contains a full copy of the database
  • Hybrid Partial Repl. – a set of nodes are full replicas and another set of nodes are partial replicas

I thought it was a good topic, and I wrote a really nice answer, but there was a problem when I pressed "post this answer" - probably some error or mistake on their side. Anyway, this is what I have to say about partial replication and sharding:

Partial replication is an interesting approach: you distribute the data with replication from a master to slaves, each holding only a portion of the data. Eventually you get an array of smaller, read-only DBs, and reads can very well be distributed and parallelized.

But what about the writes? 

Those are still clogged in 1 big, fat, lazy master database. Tasks such as buffer management, locking, thread locks/semaphores, and recovery are the real bottleneck of OLTP; they make writes impossible to scale... As I wrote in many previous posts, for example here: http://database-scalability.blogspot.com/2012/08/scale-up-partitioning-scale-out.html.

Sharding is where every piece of data lives in only one place within an array of DBs. Each database is the complete owner of its data: data is read from there, data is written there. This way reads and writes are distributed and parallelized, and real scale-out can be achieved.
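To make the contrast concrete, here is a tiny, hypothetical routing sketch (server names are made up): under partial replication reads spread across the replicas but every write still lands on the single master, while under sharding both the read and the write go to whichever database owns the key.

```python
# Hypothetical routing sketch: partial replication vs. sharding.
READ_REPLICAS = ["replica0", "replica1", "replica2"]   # each holds part of the data
MASTER = "master"                                      # the single write bottleneck
SHARDS = ["shard0", "shard1", "shard2"]                # each owns part of the data

def route_partial_replication(op: str, key: int) -> str:
    # Reads can spread across replicas, but every write hits the one master.
    if op == "read":
        return READ_REPLICAS[key % len(READ_REPLICAS)]
    return MASTER

def route_sharding(op: str, key: int) -> str:
    # Reads *and* writes go to the shard that owns the key.
    return SHARDS[key % len(SHARDS)]

for key in (7, 8, 9):
    print(key,
          "repl-write ->", route_partial_replication("write", key),
          "| shard-write ->", route_sharding("write", key))
```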

The idea behind sharding is great - the ultimate scaling solution (ask Facebook, Google, Twitter and all the other big guys) - but it's a mess to handle and maintain. It's hard as hell if done by yourself; ScaleBase provides an automatic, transparent scale-out machine that does all that, so you won't have to...



Friday, December 21, 2012

The battle on the OS...

I came across this piece: http://www.zdnet.com/windows-has-fallen-behind-apple-ios-and-google-android-7000008699/, and I'm all nostalgic now...

I had an Atari, my first computer was an Apple IIc, and a bunch of my friends had a Commodore 64 (I envied them, it was much cooler than the Apple!) and an Amiga (wow!!).

Then came almost two lost decades where people knew* that Wintel was the only thing out there, and couldn't believe there was, or ever would be, anything else. Just like in the quote from MIB below...

Thank you smartphone, and thank you RIM, Apple and then Google. You made the world a better place.
I know you're not doing anything for us, you're doing it for your own good, to generate value and yield for your stockholders, while pumping ridiculous paychecks to the executives...

Still, you managed to make the world a better, more interesting place to live in, with more challenges to cope with... Big Data, OLTP velocity, app and data sprawl. More challenges for people like me to seek solutions for! You made scalability a big challenge! So, thank you.

BTW, as it turns out, I have a Wintel laptop and a Nokia Windows (splendid!) phone, but I'm doing it out of open-mindedness, and to celebrate plurality!


* - The unforgettable quote from "Men In Black":

"...Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you'll know tomorrow"

Monday, December 17, 2012

Database Performance, a Ferrari and a truck

In recent days I got several questions, from colleagues and customers, about something I thought was a given, well known - but I found out differently: "What is database performance?" Is it speed? Is it throughput? What are the metrics, and how do you measure them?

I tried to point to an existing link, but ended up writing and describing it myself. The nearest thing to describing what I think "database performance" really is, is this; it's not bad, yet I was able to make it even simpler for my esteemed colleagues and even more esteemed customers.

Database performance, in essence, derives from 2 major metrics:
Latency: the time we wait for an operation to finish, measured in milliseconds (ms) or any other time unit.
Throughput: the number of transactions/commands per time unit, usually per second or per minute.
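A quick way to see the two metrics side by side - a sketch in plain Python, with a sleep standing in for a database call and purely illustrative numbers:

```python
# Measure latency and throughput of a stand-in "transaction".
import time

def transaction():
    time.sleep(0.002)          # pretend the database call takes ~2 ms

start = time.perf_counter()
latencies = []
for _ in range(500):
    t0 = time.perf_counter()
    transaction()
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"average latency : {1000 * sum(latencies) / len(latencies):.1f} ms")
print(f"throughput      : {len(latencies) / elapsed:.0f} transactions/second")
```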

In the classic world of Data Warehouse and Analytics, throughput is usually a non-issue and latency is king. As the database grows larger and larger, complex analytic queries take longer and longer to finish, and the demand is "I need speed!".

In the world of OLTP, throughput is the important measure. TPC-C benchmarks, for example, measure only throughput (New Order transactions per minute). Oracle managed to hit 30,249,688 New Order transactions per minute - nice job - but we as readers of the results have no way to know whether a single transaction took 1 ms and they managed to squeeze thousands of those in parallel into 1 minute to reach this number, or whether the scenario transaction took exactly 1 minute and Oracle managed to perform 30,249,688 such transactions in parallel. The truth is somewhere in the middle, between 1 millisecond and 1 minute...

In OLTP, latency should be bearable (for some it's 50ms, for some it's 500ms) and stable, while throughput must grow and grow as the number of users/sites/devices/accounts/profiles grows and grows.

Another key word is predictability. In my OLTP I need predictable, good-enough, bearable, constant latency. I can't afford a 50ms transaction taking 1 minute every once in a while. I need transaction latency to be some X I can live with, constant and predictable - while throughput is growing.

Not a popular comparison, but a very relevant one: a Ferrari and a truck. Both have 500 horsepower.
A Ferrari will take you to 200 miles per hour! A truck will do a good, legal 70 - and it will do the same 70 miles per hour with 100 pounds, 1 ton or 20 tons. Constant, stable, predictable. Yes, I'd like a Ferrari for my spare time, or to ace a benchmark, but when it comes to back-end server infrastructure, what I need is more like a truck... and trucks deliver...

Life's not fair sometimes... at least one of these has definitely got the looks:


Friday, August 31, 2012

Facebook makes big data look... big!

Oh I love these things: http://techcrunch.com/2012/08/22/how-big-is-facebooks-data-2-5-billion-pieces-of-content-and-500-terabytes-ingested-every-day/

Every day there are 2.5B content item shares and 2.7B "Like"s. I care less about the GiGo content itself; the metadata, connections and relations are kept transactionally in a relational database. The above 2 use-cases generate 5.2B transactions a day on the database, and since there are only 86,400 seconds in a day, we get over 60,000 write transactions per second on the database - from these 2 use-cases alone, not to mention all the other use-cases, such as new profiles, emails, queries...

And what about the size of the new data, on top of all the existing data that cannot be deleted so easily (remember why? Get a hint here: http://database-scalability.blogspot.com/2012/08/twitter-and-new-big-data-lifecycle.html)? A total of 500+TB is added every day. I'll exaggerate and assume 98% of it is pictures and other GiGo content, which still leaves us with a fuzzy 10TB of new data daily. There were times when Oracle called a DB of over 1TB a VLDB, and here we have 10TB, every day.
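The back-of-the-envelope arithmetic behind those numbers, spelled out (using the 98% pictures assumption from the paragraph above):

```python
# Back-of-the-envelope math for the numbers quoted above.
content_shares = 2.5e9           # content items shared per day
likes          = 2.7e9           # "Like"s per day
seconds_per_day = 86_400

writes_per_second = (content_shares + likes) / seconds_per_day
print(f"~{writes_per_second:,.0f} write transactions/second from these two use-cases")
# -> roughly 60,000

new_data_tb = 500                # total new data per day, in TB
non_gigo_fraction = 0.02         # assume ~98% is pictures and other GiGo content
print(f"~{new_data_tb * non_gigo_fraction:.0f} TB of new 'relational' data per day")
# -> roughly 10 TB
```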

So how does FB handle all this? They have a scaled-out grid of several tens of thousands of MySQL servers.

The size alone is not the entire problem. Enough juice, memory, MPP, columnar - will do the trick.
But if we add a throughput of 100Ks of transactions per second, it'll rip the guts out of any database engine. Remember that every write operation translates into at least 4 internal operations (table, index(es), undo, log), and the engine also needs to do "buffer management, locking, thread locks/semaphores, and recovery tasks". Can't happen.

The only way to handle such data size and such load is scale-out: divide the big problem into 20,000 smaller problems, and this is what FB is doing with their cluster of tens of thousands of MySQLs.

You're probably thinking "naaa it's not my problem", "hey how many Facebooks are out there?". Take a look and try to put yourself, your organization, on the chart below:

Where are you today? Where will you be in 1 year? In 5 years? Things today go wild, and they go wild faster than ever. Big data is everywhere. New social apps aren't afraid of "what if no one shows up to my party?" - rather, they're afraid of "what if EVERYBODY shows up?"

You don't need to be Facebook to need a good solution for scaling out your database.

Tuesday, August 28, 2012

Scale Up, Partitioning, Scale Out

On 8/16 I conducted a webinar titled "Scale Up vs. Scale Out" (http://www.slideshare.net/ScaleBase/scalebase-webinar-816-scaleup-vs-scaleout):

The webinar was successful: we had many attendees and great participation in questions and answers throughout the session and at the end. Only after the webinar did it occur to me that one specific graphic was missing from the webinar deck. It occurred to me after answering several audience questions about "the difference between partitioning and sharding" or "why partitioning doesn't qualify as scale-out".

If I were giving the webinar today, I would definitely include the following picture, describing the core difference between Scale Up, Partitioning, and Scale Out:

In the above (poor) graphics, I used the black server box for the database server machine, the good old cylinder for the disk or storage device, and the colorful square thingy for the database engine. Believe it or not, this is a real, complete architecture chart of the Oracle 10gR2 SGA, miniaturized to a small scale. Yes, all databases, including Oracle and also MySQL, are complex beasts; a lot of stuff is going on inside the database engine for every command.

If my DB is like the "starting point", then I'm either really small, or I'm in really bad shape by now.
Partitioning works wonders as data grows towards being "big data". It optimizes data placement across separate files or disks, and it makes every partition optimized, "thin" and less fragmented than you would expect from a gigantic, busy, monolithic table. Still, although we split the data across files, we're still "stuck" with a busy monolithic database engine that relies on a single box of "compute" or "computing power".

While we distributed the data, we didn't distribute the "compute".
When there is a heavy join operation, there is one busy monolithic database engine that collects data from all partitions and processes the join.
When there are 10,000 concurrent transactions to handle right here and now, there is one busy monolithic database engine doing all the database-engine activities such as buffer management, locking, thread locks/semaphores, and recovery tasks. Buffers, locking queues, transaction queues... are still the same for all partitions.

This is where scale-out is different from partitioning. It enables distribution and parallelism of the data as well as of the all-important compute: it brings the compute closer to the data and lets several database engines process different sets of data, each handling a different share of the overall session concurrency.
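A toy illustration of that difference, with plain Python lists standing in for partitions and nodes (purely conceptual, not any particular product): with partitioning one engine still chews through every partition itself, while with scale-out each node aggregates its own slice and only the small partial results are merged.

```python
# Toy illustration: partitioning vs. scale-out for a simple aggregation.
partitions = [list(range(i, 1_000_000, 4)) for i in range(4)]   # 4 slices of the data

# Partitioning: data is split, but ONE engine still does all the compute.
total_partitioned = sum(value for part in partitions for value in part)

# Scale-out: each node computes over its own slice ("compute near the data"),
# and only the tiny partial results travel to be merged.
partial_sums = [sum(part) for part in partitions]   # imagine each runs on its own server
total_scaled_out = sum(partial_sums)

assert total_partitioned == total_scaled_out
print("same answer, but the second form distributes the compute:", total_scaled_out)
```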

You can think of it as one step beyond partitioning, and it comes with great, great results. It's not a simple step, though: an abstraction layer is required to present the database grid to the application as one database, the same as what it's used to.
In further posts I'll go into more detail on this "scale-out abstraction layer", and also on ScaleBase, which is a provider of such a layer.

Monday, August 6, 2012

Twitter and the new big data lifecycle

Recently I came across this fine article in The New York Times: "Twitter Is Working on a Way to Retrieve Your Old Tweets". Dick Costolo, Twitter’s chief executive, said:
"It’s a different way of architecting search, going through all tweets of all time. You can’t just put three engineers on it."
Mr. Costolo is right, and he pointed the spotlight at a very important change we're experiencing today, in these interesting times. The word is expectations, and those are changing fast!

Not so long ago, Big Data was a synonym for Analytics, Data Warehouse, Business Intelligence. Traditionally, operational (OLTP) apps held limited amounts of data - only the "current" data relevant for the ongoing operations. A cashier in a supermarket would hold only recent transactions, to enable lookup of a charge made 10 minutes ago, if I need to return an item or dispute the charge while still at the cashier. When I come back to the store the day after, I don't go to the cashier; I go to "customer service", where, with a different application and a different database, I get service for my returned items or disputes. A dispute after several months will not be handled by the customer service desk in the store, but by "the chain's dispute department", using a different, 3rd app with a 3rd cumulative, aggregated DB. And on and on it goes.

In this simplified example, the organization invested many resources in 3 different DBs and apps, aggregating different levels of data and enabling similar, only marginally additional functionality. Why? Data volume and concurrency.

At the cashiers - the only place where new data is really generated - there is also the highest concurrency. Looking globally, many thousands of items are "beeped" and sold through the cashiers every minute; the data is kept small, generated and extracted out shortly after. The customer service reps handle tens of customers a minute over larger data, and "the chain's dispute department" overlooks the biggest data but handles 1 or 2 cases an hour, and might also execute more "analytic-style" queries to determine the nature of a dispute...

This was, in a nutshell, the "lifecycle of the data" in the old world. But today, everything changes - it's all online, right here, right now!

Enormous amounts of (big) data are generated, and also searched and analyzed, at the same time. Everything is online, here and now. Every tweet (millions a day) is reported instantly to hundreds of followers, participates in saved searches, and is analyzed by numerous robots and engines throughout the web, and also by Twitter itself. The same goes for every search or e-mail I send in Google, and for every status or "like" in Facebook that is reported to my hundreds of friends and also analyzed at the same time, here and now. Hey, it's their way to make money, to push the right ads at the right time.

And now we learn that users expect to see online the data that is "old" in the terminology of the old days. I want to see statuses, likes and tweets from 2 and 4 months ago, in the same interface and with the same experience I'm used to - don't send me to the "customer service department"!

The bottom line: it requires scale. Scale your online database to handle online data volumes and throughput, as well as older data, on the same grid, without interference, with the same applications. This is what scale-out is all about. Think outside the (one database server) box. If you have 10 databases for the current data, you can have 10 more with older data, and 100 more with even older data, and so on. Giving a transparent, unified view to (or virtualizing) this database grid is the solution that occupies most of my time, and it's the missing link in making database scale-out a commodity.
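One hypothetical way to picture "current data here, older data there, same grid" is routing by the age of the record - the cut-off, the shard names and the counts below are made up for illustration:

```python
# Hypothetical age-based routing: hot recent data and cold history live on
# different databases in the same grid, behind one unified view.
from datetime import datetime, timedelta, timezone

NOW = datetime.now(timezone.utc)
HOT_SHARDS  = ["hot-db-%d" % i for i in range(10)]     # current data
COLD_SHARDS = ["cold-db-%d" % i for i in range(10)]    # older data

def route(tweet_id: int, created_at: datetime) -> str:
    tier = HOT_SHARDS if NOW - created_at < timedelta(days=30) else COLD_SHARDS
    return tier[tweet_id % len(tier)]

print(route(12345, NOW - timedelta(days=2)))     # -> a hot shard
print(route(12345, NOW - timedelta(days=120)))   # -> a cold shard
```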

Tuesday, July 10, 2012

So now Hadoop's days are numbered?

Earlier this week we all read GigaOM's article with this title:
"Why the days are numbered for Hadoop as we know it"
I know GigaOM likes to provoke a scandal sometimes - we all remember some other unforgettable piece - but there is something behind it...

Hadoop today (after SOA not so long ago) is one of the worst cases of an abused buzzword ever known to man. It's everything, everywhere; it can cure illnesses and do "big data" at the same time! Wow! Actually, Hadoop is a software framework that supports data-intensive distributed applications, derived from Google's MapReduce and Google File System (GFS) papers.

My take from the article is this: Hadoop is a foundation, a low-level platform. I used the word "platform" only for lack of a better word. Wait - there is a great word that captures it all!


This word is Assembler


When computers began, 70 years ago or so, Assembly was the mother of all programming languages, and the Assembler made it work on real-world computers, silicon and copper. In the world of Big Data, map-reduce - massive distribution and parallelism - is the mother of all living things (the Assembly). And Hadoop enables it to actually run in the real world (the Assembler)...


Like Assembler, the Hadoop core is far from being really usable. Doing something real, good, working and repeatable with it requires skills that only a few people really master (like good Assembler programmers, back in the 1960's).
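For readers who never touched this "Assembler" layer: the model underneath Hadoop is just map and reduce. Here is a word count in plain Python showing the conceptual model only - Hadoop's real API, plus the distributed file system, scheduling and fault tolerance, is of course a different beast:

```python
# The MapReduce model in miniature: plain Python, conceptual only.
from collections import defaultdict

documents = ["big data is big", "data is everywhere"]

# Map: emit (key, value) pairs from each input record.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group values by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce: combine the values for each key.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)    # {'big': 2, 'data': 2, 'is': 2, 'everywhere': 1}
```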




While I consider myself lucky to have had the chance to actually punch cards with brilliant(?) Assembler code, many of today's brightest minds in the Silicon Valleys around the world never wrote one opcode. They're all using PHP, Ruby, Java and node.js, which are great "wrappers" around good old Assembly that bring programming, innovation and disruptiveness to the masses and make the whole world a better place. It's how it should be.


Hadoop will die only if data and big data die. Nonsense. Data is by far the most important asset organizations have. Facebook, as well as Bank of America, would be worth a fraction of their value in minutes if they lost the same fraction of their data. Neither will be able to compete if they can't be intelligent and analyze their data, which multiplies every (low number of) days/weeks/months. The data makes a business intelligent, and Hadoop helps exactly there.


Hadoop is the Assembler of all analytical big data processing, ETL and queries. The potential around it and its ecosystem is practically unlimited; tons of innovation and disruptiveness are poured in by startups and communities all over - Splunk, HBase, Cloudera, Hive, Hadapt, and many many more. And we're just in the "FORTRAN" phase...

Thursday, June 28, 2012

ARM based data center. Inspiring.

In a previous post I wrote about ARM-based servers. Since then, and thanks to all the comments and responses I got, I have looked more into this ARM thing, and it's absolutely fascinating...

Look at this beauty (taken from the site of Calxeda, the manufacturer):

What is it? A chip? A server? No, it's a cluster of 4 servers...

And this:

is the HP Redstone Server: 288 chips, 1,152 cores (Calxeda quad-core SoCs) in a 4U server, "dramatically reducing the cost and complexity of cabling and switching". Calxeda talks about "cut energy and space by 90%" and "10x the performance at the same power, the same space" - and it's just the beginning...

And this is from the last couple of days... From ISC'12 (International Supercomputing Conference): "ARM in Servers – Taming Big Data with Calxeda":

  • In the case of data intensive computing, re-balancing or ‘right-sizing’ the solution to eliminate bottlenecks can significantly improve overall efficiency
  • By combining a quad-core ARM® Cortex™-A series processor with topology agnostic integrated fabric interconnect (providing up to 50Gbits of bandwidth at latencies less than 200ns per hop), they can eliminate network bottlenecks and increase scalability


You still can't go to the store and buy a 4U ARM-based database server that performs 10x and uses 1/10 of the power (combine them and it's an order of magnitude of 100x...). It's not here today, maybe not tomorrow, but it's not sci-fi. And technologies will have to adapt to this world of "multiple machines, shared nothing, commodity hardware". I think databases will be the hardest tech to adapt; the only way is to distribute the data wisely and then distribute the processing, sometimes parallelizing processing and access, to harness those thousands of cores.

Wednesday, June 20, 2012

The catch-22 of read/write splitting

In my previous post I covered the shared-disk paradigm's pros and cons, and the conclusion is that it cannot really qualify as a scale-out solution when it comes to massive OLTP, big data, big session counts and a mixture of reads and writes.

Read/write splitting is achieved when numerous replicated database servers are used for reads. This way the system can scale to cope with an increase in concurrent load. This solution qualifies as a scale-out solution as it allows expansion beyond the boundaries of one DB; the DB machines are shared-nothing and can be added as slaves to the replication "group" when required.
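In its simplest form the split is just "writes to the master, reads to any slave". A minimal, hypothetical sketch (illustrative endpoints; a real deployment would put this in the driver, a proxy or a dedicated product rather than in application code):

```python
# Minimal read/write splitting sketch: writes go to the master,
# reads are spread round-robin across replication slaves.
import itertools

MASTER = "mysql://master.example.com/app"            # illustrative endpoints
SLAVES = itertools.cycle([
    "mysql://slave1.example.com/app",
    "mysql://slave2.example.com/app",
    "mysql://slave3.example.com/app",
])

def route(sql: str) -> str:
    if sql.lstrip().lower().startswith("select"):
        return next(SLAVES)          # reads scale out across the slaves
    return MASTER                    # every write still funnels into one master

print(route("SELECT * FROM orders WHERE id = 42"))
print(route("INSERT INTO orders VALUES (1, 'abc')"))
```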


And, as a fact, read/write splitting is very popular and widely used by lots of high-traffic applications such as popular web sites, blogs, mobile apps, online games and social applications. 

However, today's extreme challenges of big data, increased load and advanced requirements expose vulnerabilities and flaws in this solution. Let's summarize them here:

  • All writes go to the master node = bottleneck: While reading sessions are distributed across several database servers (replication slaves), writing sessions all go to the same primary/master server - still a bottleneck - and all of them consume the DB's resources for our well-known "buffer management, locking, thread locks/semaphores, and recovery tasks".
  • Scaled session load, not big data: I can take my X reading sessions and spread them over my 5 replication slaves, giving each only X/5 sessions to handle; however, my giant DB has to be replicated as a whole to all servers. Prepare lots of disks...
  • Scale? Yes. Query performance? No: Queries on each read replica need to cope with the entire data of the database. No parallelism, no smaller data sets to handle.
  • Replication lag: Async replication will always introduce lag. Be prepared for a lag between the reads and the writes.
  • Reads after a write will show missing data: the transaction is not yet committed, so it's not written to the log, not propagated to the slave machine, not applied at the slave DB.
Above all, databases suffer from writes made by many concurrent sessions. The database engines themselves become the bottleneck because of their *buffer management, locking, thread locks/semaphores, and recovery tasks*. Reads are a secondary target. BTW - read performance and scale can very well be gained by good, smart caching, using a NoSQL store such as Memcached in the app, in front of the RDBMS. In modern applications we see more and more reads avoided by caching, while the writes, which cannot be avoided or cached, keep storming the DB.

R/W splitting is usually implemented today inside the application code; it's easy to start, then it becomes hard... I recommend using a specialized COTS product that does it 100 times better and may eliminate some or all of the limitations above (ScaleBase is one solution that provides that, among other things).


This is read/write splitting's catch-22. It's an OK scale-out solution and relatively easy to implement, but the improvement of caching systems, the changing requirements of online applications, and big data and big concurrency are rapidly driving it towards its fate: becoming less and less relevant, and playing only a partial role in a complete scale-out plan.

In a complete scale-out solution, where data is distributed (not replicated) throughout a grid of shared-nothing databases, read/write splitting will play its part, but only a minor one. We'll get to that in future posts.

Thursday, June 7, 2012

Why shared-storage DB clusters don't scale

Yesterday I was asked by a customer why he had failed to achieve scale with a state-of-the-art "shared-storage" cluster. "It's a scale-out to 4 servers, but with a shared disk. And after tons of work and effort, I got 130% throughput, not even close to the expected 400%," he said.

Well, scale-out cannot be achieved with shared storage, and the word "shared" is the key. Scale-out is done with absolutely nothing shared - a "shared-nothing" architecture. This is what makes it linear and unlimited. Any shared resource creates a tremendous burden on each and every database server in the cluster.

In a previous post, I identified database engine activities such as buffer management, locking, thread locks/semaphores, and recovery tasks as the main bottleneck in an OLTP database handling a mixture of reads and writes. No matter which database engine - Oracle, MySQL, all of them - they all have all of the above. "The database engine itself becomes the bottleneck!" I wrote.

With a shared disk there is a single shared copy of my big data on the shared disk, and the database engine still has to maintain "buffer management, locking, thread locks/semaphores, and recovery tasks". But now - with a twist! Now all of the above need to be done "globally" among all participating servers, through network adapters and cables, introducing latency. Every database server in the cluster needs to update all other nodes about every "buffer management, locking, thread locks/semaphores, and recovery" operation it performs on a block of data. See here the number of "conversation paths" between the 4 nodes:
Blue dashed lines are data access; red lines are communications between nodes. Every node must notify and be notified by all other nodes. It's a complete graph: with 4 nodes there are 6 edges, and with 10 you'll find 45 red lines here (n(n - 1)/2, a reminder from Computer Science courses...). Imagine the noise, the latency, for every "buffer management, locking, thread locks/semaphores, and recovery" task. Shared-storage becomes shared-everything. Node A makes an update to block X - 10 machines need to acknowledge. I wanted to scale, but instead I multiplied the initial problem.
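The edge-count math for a few cluster sizes, as a quick sketch:

```python
# Number of node-to-node "conversation paths" in a shared-storage cluster:
# a complete graph over n nodes has n * (n - 1) / 2 edges.
for n in (2, 4, 10, 20):
    print(f"{n:>2} nodes -> {n * (n - 1) // 2:>3} coordination paths")
# 4 nodes -> 6 paths, 10 nodes -> 45 paths, 20 nodes -> 190 paths
```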

And I didn't even mention the fact that the shared disk might become a SPOF and a bottleneck.
And I didn't even mention the limitations when you wanna take this to the cloud or to virtualization.
And I didn't mention the tons of money this toy costs. I'd prefer buying 2 of these, one for me and one for a good friend, and hitting the road together...

Shared-disk solutions give a very limited answer to OLTP scale. If the data is not distributed, the computing resources required to handle this data will not be distributed and reduced; on the contrary, they will be multiplied and will consume all available resources from all machines.

Real scale-out is achieved by distributing the data, shared-nothing: every database node is independent with its own data - no duplications, no notifications, no ownership or acknowledgements over any network. If data is distributed correctly, concurrent sessions will also be distributed across servers, and each node runs extremely fast on its small data and small load, with its own small "buffer management, locking, thread locks/semaphores, and recovery tasks".

My customer responded "Makes perfect sense! Tell me more about Scale-Out and distribution...". 

Wednesday, May 30, 2012

Scale-out your DB on ARM-based servers

Today, I think we witnessed a small sign for a big revolution...

http://www.pcworld.com/businesscenter/article/256383/dell_reaches_for_the_cloud_with_new_prototype_arm_server.html
"Dell announced a prototype low-power server with ARM processors, following a growing demand by Web companies for custom-built servers that can scale performance while reducing financial overhead on data centers"
In short, ARM (see the Wikipedia definition here) is an architecture standard for processors. ARM processors are slower compared to the good old x86 processors from Intel and AMD, but have power-efficiency, density and price attributes that intrigue customers, especially in these days of green data centers where carbon emission is carefully measured - and, of course, cost-saving economics.

Take iPhones and iPads, for example: those amazing machines do fast real-time calculations with their relatively powerful ARM processors (Apple A4, A5, A5X), yet are extremely power-efficient and stay relatively cool. See the picture (credit to Wikipedia) of the newest Apple A5X chip, used in the new iPad:


Today, when the truly big web and cloud players build their data centers, the question is not "how big are your servers?" but rather "how many servers do you carry?". Ask Facebook, Google, Netflix, and more to come... For those guys no single server can be big enough anyway, so they're built from the ground up for scaling out to numerous servers performing small tasks, concurrently. Familiar with Google's MapReduce and Hadoop? So - why not parallelize on ARM-based servers? Can you imagine 20 iPads occupying the same space as a 1U rack-mounted "pizza" server, but with 5x the parallel computing power, 20x the electrical power efficiency, and 100x the cooling-cost efficiency?

So what does all this have to do with database scalability, you ask?

One quote from the article I don't agree with: "But ARM still cannot match up chips from Intel and AMD for resource-heavy tasks such as databases."

Oh, I remember those days when, in every data center of every organization I consulted for, I saw the same picture... All the machines were nice, neatly organized, tagged, blades, standard racks, mostly virtualized... But the DB? No... Those servers were non-standard, the biggest, ugliest, most expensive - capex and opex. Why? It's the DB! It's special! It has needs!! Specialized HW, specialized storage, $$$. Those days are over, as organizations' need to save reaches the shores of the sacred database. Today harder questions are being asked, more is done in commoditization, more databases are virtualized, and the cloud...

Big Data is everywhere - in the web, in the cloud, in the enterprise. Databases must scale, scale out or explode; it's a hard fact. Databases can be scaled with smart distribution and parallelism, and then they can use commodity hardware and be easily virtualized and cloud-ified. If distribution is done correctly, the sum is greater than its parts, and the parts in this case can be low-end... the lowest of the low... Database machines can definitely be ARM-based servers, each holding a portion of the data, attracting a portion of the concurrent sessions, and contributing to the overall processing.

A database on an iPad? Naa, I prefer breaking another record in Fruit Ninja.
A database on 20 ARM-based servers? If it's 5x faster and costs 60x less in electricity and cooling - then yes, definitely.

Monday, May 21, 2012

Scaling OLTP is nothing like scaling Analytics

We're in the big data business. OLTP applications and Analytics.

Scaling OLTP applications is nothing like scaling Analytics, as I posted here: http://database-scalability.blogspot.com/2012/05/oltp-vs-analytics.html. OLTP is a mixture of reads and writes, heavy session concurrency, and ever-growing amounts of data.

In my previous post, http://database-scalability.blogspot.com/2012/05/scale-differences-between-oltp-and.html, I mentioned that Analytics can be scaled using: columnar storage, RAM and query parallelism.

Columnar storage cannot be used for OLTP: while it makes read scans better, it hurts writes, especially INSERTs. The same goes for RAM - the "let's put everything in memory" approach is also problematic for writes, which must be durable (the D in ACID). There are databases that achieve durability by writing to the memory of at least 2 machines - I'll get to that in a later post - but in the simpler view, RAM is great for reads (Analytics) and very limited for writes (OLTP).

Query parallelism, which worked for Analytics, is limited for OLTP, mostly because of high concurrency and writes. OLTP is a mixture of reads and writes; ratios today reach 50%-50% and beyond. Every write operation is eventually at least 5 operations for the database, including the table, index(es), rollback segment, transaction log, row-level locking and more. Now multiply that by 1,000 concurrent transactions and 1TB of data. The database engine itself becomes the bottleneck! It puts so many resources into buffer management, locking, thread locks/semaphores, and recovery tasks that no resources are left for handling query data!

3 bullets why naïve parallelism is not a magic bullet for OLTP:
  1. Parallel query within the same database server will just turn the hard-to-manage 1,000 concurrent transactions into an impossible 1,000,000 concurrent sub-transactions… Good luck with that…
  2. Parallelizing the query across several database servers is a step in the right direction. However, it can't scale: if I have 10 servers and each of my 1,000 concurrent transactions needs to gather data from all servers in parallel, how many concurrent transactions will I have on each server? That's right: 1,000. What did I solve? Can I scale to 2,000 concurrent transactions? All my servers will die together. And what if I scale to 20 servers instead of 10? Then I'll have 20 servers with 2,000 concurrent transactions each… that will all die together.
  3. OLTP operations are not good candidates for parallelism:
    1. Scans - full table/index scans and range scans - are parallelized all the time in Analytics, but seldom in OLTP. In OLTP most accesses are short, pinpointed, index-based small-range and unique scans. Oracle's optimizer mode FIRST_ROWS (OLTP) will almost always prefer index access, while ALL_ROWS (Analytics) will have a hard time giving up its favorite full table scan. So what exactly do we parallelize in OLTP? An index rebuild once a day (a scan...)?
    2. 1,000 concurrent 1-row INSERT commands a second is a valid OLTP scenario. What exactly do I parallelize?
Parallelism cannot be the one and only complete solution. It plays a minor role in the solution, whose key factor is distribution.

OLTP databases can scale only through a smart distribution of the data, but also of the concurrent sessions, among numerous database servers. It's all in the distribution of the data: if data is distributed in a smart way, concurrent sessions will also be distributed across servers.

Go from 1 big, fat database server dealing with 1TB of data and 1,000 concurrent transactions, to 10 databases, each dealing with an easy 100GB and 100 concurrent transactions. I'll hit the jackpot if I manage to keep the databases isolated and shared-nothing, processing-wise, not only cable-wise. Best are transactions that start and finish on a single database.
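The arithmetic behind that move, as a quick sketch: with naive fan-out parallelism every transaction touches every server, so per-server concurrency never drops, while with routed distribution it divides by the number of servers:

```python
# Per-server concurrency: naive fan-out parallelism vs. routed distribution.
concurrent_txns = 1000

for servers in (10, 20):
    fan_out = concurrent_txns              # every transaction hits every server
    routed  = concurrent_txns / servers    # each transaction lands on one server
    print(f"{servers} servers: fan-out -> {fan_out} txns/server, "
          f"routed distribution -> {routed:.0f} txns/server")
```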

And if I’m lucky and my business is booming, I can scale:
  1. Data grew from 1TB to 1.5TB? Add more database servers.
  2. Concurrent sessions grew from 1,000 to 1,500? Add more database servers.
  3. Parallel query/update? Sure! If a session does need to scan data from all servers, or needs to perform an index rebuild, it can run in parallel on all servers and take a fraction of the time.
Ask Facebook (FB, as of today...). Each of their tens of thousands of databases handles a fraction of the data, in a way that only a fraction of all sessions access it at any point in time.

Each of the databases is still doing hard work on every update/insert/delete and on buffer management, locking, thread locks/semaphores, and recovery tasks. What can we do? It's OLTP, it's read/write, it's ACID... It's heavy! I trust each of my DBs to do what it does best; I just give it the optimal data size and session concurrency to do so.


Let's summarize here:

In my next post I'll dive deeper into implementation caveats (shared disk, shared memory, sharding) and pitfalls, do's and don'ts...

Stay tuned, join those who subscribed and get automatic updates, get involved!