Friday, December 21, 2012

The battle on the OS...

I came across this piece, and I'm all nostalgic now...

I had an Atari, my first computer was an Apple IIc, a bunch of my friends had a Commodore 64 (I envied them, it was much cooler than the Apple!) and an Amiga (wow!!).

Then came almost 2 lost decades when people knew* that Wintel was the only thing out there, and couldn't believe there was or ever would be anything else. Just like in the quote from MIB below...

Thank you smartphone, and thank you RIM, Apple and then Google. You made the world a better place.
I know you're not doing anything for us, you're doing it for your own good, to generate value and yield for your stockholders, while pumping ridiculous paychecks to the executives...

Still, you managed to make the world a better, more interesting place to live in, and more challenges to cope with... Big Data, OLTP velocity, app and data sprawl. More challenges for people like me to seek a solution for! You made scalability a big challenge! So, thank you.

BTW, as it turns out, I have a Wintel laptop and a Nokia Windows (splendid!) phone, but I'm doing it out of open-mindedness, and to celebrate plurality!

* - The unforgettable quote from "Men In Black":

"...Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you'll know tomorrow"

Monday, December 17, 2012

Database Performance, a Ferrari and a truck

In the last days I got several queries, from colleagues and customers, about one thing I thought was a given, well known, but found out differently: "What is database performance?". Is it speed? Is it throughput? What are the metrics and how do you measure them?

I tried to refer to an existing link, but then had to write and describe it myself. The nearest thing to describing what I think "Database Performance" really is, is this; it's not bad, yet I was able to make it even simpler for my esteemed colleagues and more esteemed customers.

Database performance, in essence, derives from 2 major metrics:
  • Latency: the time we wait for an operation to finish. Measured in milliseconds (ms) or any other time unit.
  • Throughput: the number of transactions/commands per time unit, usually a second or a minute.

In the classic world of Data Warehouse and Analytics, throughput is usually a non-issue and latency is king. As the database grows larger and larger, complex analytics queries take longer and longer to finish, and the demand is "I need speed!".

In the world of OLTP, throughput is the important measure. TPC-C benchmarks, for example, measure only throughput (New Order Transactions per Minute). Oracle made it to 30,249,688 NO Transactions Per Minute. Nice job, but we as readers of the results have no way to know if a single transaction took 1ms and they managed to squeeze thousands of those in parallel into 1 minute to meet this number, or maybe the scenario transaction took exactly 1 minute, and Oracle managed to perform 30,249,688 such transactions in parallel. The truth is somewhere in the middle, between 1 millisecond and 1 minute...
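A quick sketch of that middle ground, using Little's Law (concurrency = throughput × latency). The numbers below are just the published tpm figure plugged into the formula, not anything TPC actually discloses:

```python
# Little's Law: concurrency = throughput * latency.
# A reported throughput number alone doesn't pin down latency --
# the same tpm can come from fast transactions at low concurrency
# or from slow transactions at enormous concurrency.

def concurrency_needed(tpm: float, latency_sec: float) -> float:
    """How many in-flight transactions sustain `tpm` at a given latency."""
    per_second = tpm / 60.0
    return per_second * latency_sec

tpm = 30_249_688  # the TPC-C result mentioned above

# 1ms transactions: only ~504 concurrent transactions needed
print(round(concurrency_needed(tpm, 0.001)))
# 1-minute transactions: ~30.25 million concurrent transactions
print(round(concurrency_needed(tpm, 60.0)))
```

Same throughput number, wildly different latency stories - which is exactly why throughput alone doesn't answer "what is database performance?".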

In OLTP the latency should be bearable (for some it's 50ms, for some it's 500ms) and stable, while throughput must grow and grow as the number of users/sites/devices/accounts/profiles grows and grows.

Another key word is predictability. In my OLTP I need predictable, good-enough, bearable, constant latency. I can't afford a 50ms transaction to take 1 minute once in a while. I need transaction latency to be some X I can live with, and I need it constant and predictable - while throughput is growing.

Not a popular comparison, but a very relevant one: a Ferrari and a truck. Both have 500 horsepower.
A Ferrari will take you 200 miles per hour! A truck, however, will drive a good legal 70, and it'll go the same 70 miles per hour with 100 pounds, 1 ton or 20 tons. Constant, stable, predictable. Yeah, I'd like to have a Ferrari for my spare time, or to ace a benchmark, but when it comes to backend server infrastructure, databases are more like trucks to me... and they deliver...

Life's not fair sometimes... at least one of these has definitely got the looks:

Wednesday, October 3, 2012

"(Cloud) is complete gibberish. It's insane. When is this idiocy going to stop?"

This is from Larry Ellison's keynote at Oracle OpenWorld in September 2008. Only 4 years ago.
"The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"
"We'll make cloud computing announcements. I'm not going to fight this thing. But I don't understand what we would do differently in the light of cloud."
The above, along with additional gems, are all here:

Yesterday Larry stood on that same stage and announced Oracle 12c. The c stands for... Cloud!

And what makes Oracle 12c cloud ready?
12c is a "container database". Its function is to hold lots of other databases, keeping their data separate, but allowing them to share underlying hardware resources like memory or file storage. This way 12c can be used by software-as-a-service tech companies that need a way to let multiple customers access a single database. It's also geared toward large enterprises that may have hundreds of Oracle databases. It would let them consolidate their databases onto less hardware, saving them money and making all of those databases easier to manage.

So in short 2 words: multi-tenancy.

Oracle is still shared-everything, big boxes, allowing virtualization through internal division and allocation of those shared resources among multiple smaller "virtual databases". Indeed, it's great for consolidation and also multi-tenancy.

Cloud? IMHO, cloud is multi-tenancy, but also scale (out) and elasticity. Amazon calls its cloud service EC2; the E stands for Elastic. The new "DB made for the cloud" has no news about scale-out or elasticity. Maybe we'll need to wait for Oracle 13e... :)

Also there's a new Exadata database machine, called X3... Yet another bigger box to do all of the above. They say it's Oracle's answer to SAP HANA.

And finally, we have a new player in cloud services... Oracle! They'll have a public cloud offering (like Amazon, Rackspace, HP) and also a private cloud, which is a replica of Oracle's public cloud placed in the customer's own data center. Oracle would still own the hardware and be responsible for running it, securing it and updating it. "as a Service", on your premises. It's interesting and even rhymes...

So it seems someone regained his composure...

Friday, September 28, 2012

Being successful like Pinterest without its DB adventures...

I just came across this: "Scaling Pinterest and adventures in database sharding".
"Pinterest has learned about scaling the way most popular sites do — the architecture works until one day it doesn’t"
Pinterest found out that "the architecture" is not scalable, and they turned to developing a Scale Out mechanism, also called Sharding.

I find it amazing that sharding, or in other words, the idea of "scale out by splitting and parallelizing data across shared-nothing commodity hardware", is not supplied "out of the box" by "the architecture" (the database, the load-balancer, or any other IT stuff). I wonder who decided that an IT issue like scale-out should be outsourced from the database to the application developers...


When was the last time you heard about a PHP or Ruby developer writing code to enable Scale Out? NEVER! Scale Out in the application layer is enabled easily by a magical box called a load balancer, and you can get one from F5 or wherever for a low 4-digit USD figure. Commodity!

But to scale the database? To enjoy the obvious advantages of "scale out by splitting and parallelizing data across shared-nothing commodity hardware"? For this, the world still thinks developers need to stop investing effort in innovation, a better product, a competitive business. Instead they need to harness their how-databases-really-work skills to write band-aid code to scale the DB.
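Just to show what that band-aid looks like, here's a minimal sketch of hash-based sharding in Python. The shard names and the modulo policy are made up for illustration - not Pinterest's actual code:

```python
# A minimal sketch of hash-based sharding -- the kind of "band-aid code"
# application developers end up writing when the database doesn't
# supply scale-out "out of the box". Shard names are illustrative.

import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(user_id: int) -> str:
    """Route a user to a shard by hashing the shard key."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every data-access call in the app must now be shard-aware:
print(shard_for(42), shard_for(43))
```

And this is the easy part - the hard parts (resharding, cross-shard queries, transactions spanning shards) are exactly where the home-grown approach starts to hurt.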

Amazing... As you know, I took it personally, and I've been solving this paradox every day now, by bringing a complete, automatic, out-of-the-box "scale-out machine" that we like to call ScaleBase. I think the Pinterest story is great, with a great outcome, but that's not always the case with this complex matter. A generic, repeatable, IT-level solution for Scale Out can make it much easier for all the other "Pinterests" out there to be as successful, make the right choice and enjoy the great benefits - without the tremendous effort and labor of home-grown sharding.

Friday, August 31, 2012

Facebook makes big data look... big!

Oh I love these things:

Every day there are 2.5B content items shared, and 2.7B "Like"s. I care less about the GiGo content itself, but the metadata, connections and relations are kept transactionally in a relational database. The above 2 use-cases generate 5.2B transactions on the database, and since there are only 86,400 seconds in a day, we get over 60,000 write transactions per second on the database, from these 2 use-cases alone, not to mention all the other use-cases, such as new profiles, emails, queries...
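The back-of-envelope math, in case you want to check me - a tiny Python sketch of the paragraph above:

```python
# The back-of-envelope math from the paragraph above:
shares = 2.5e9          # content items shared per day
likes = 2.7e9           # "Like"s per day
seconds_per_day = 86_400

writes_per_sec = (shares + likes) / seconds_per_day
print(int(writes_per_sec))  # -> 60185, i.e. over 60,000 write tx/sec
```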

And what's the size of the new data, added on top of all the existing data that cannot be deleted so easily (remember why? Get a hint here)? A total of 500+TB is added every day. I would exaggerate and assume 98% is pictures and other GiGo content, which leaves us with a fuzzy 10TB of new data daily. There were times Oracle called a DB of over 1TB a VLDB, and here we have 10TB, every day.

So how does FB handle all this? They have a scaled-out grid of several 10,000s of MySQL servers.

The size alone is not the entire problem. Enough juice, memory, MPP, columnar - will do the trick.
But if we put this throughput of 100Ks of transactions per second on a single database, it'll rip the guts out of any database engine. Remember that the engine translates every write operation into at least 4 internal operations (table, index(es), undo, log) and also needs to do "buffer management, locking, thread locks/semaphores, and recovery tasks". Can't happen.

The only way to handle such data sizes and such load is scale-out: divide the big problem into 10,000s of smaller problems, and this is what FB is doing with their grid of 10,000s of MySQL servers.

You're probably thinking "naaa, it's not my problem", "hey, how many Facebooks are out there?". Take a look and try to put yourself, your organization, on the chart below:

Where are you today? Where will you be in 1 year? In 5 years? Things today go wild, and they go wild faster than ever. Big data is everywhere. New social apps aren't afraid of "what if no one shows up to my party?"; rather, they're afraid of "what if EVERYBODY shows up?"

You don't need to be Facebook to need a good solution to scale out your database.

Tuesday, August 28, 2012

Scale Up, Partitioning, Scale Out

On 8/16 I conducted a webinar titled "Scale Up vs. Scale Out".

The webinar was successful; we had many attendees and great participation in questions and answers throughout the session and at the end. Only after the webinar did it occur to me that one specific graphic was missing from the webinar deck. It occurred to me after answering several audience questions about "the difference between partitioning and sharding" or "why partitioning doesn't qualify as scale-out".

If I were giving the webinar today, I would definitely include the following picture, describing the core difference between Scale Up, Partitioning, and Scale Out:

In the above (poor) graphics, I used the black server box as the database server machine, the good old cylinder as the disk or storage device, and the colorful square thingy as the database engine. Believe it or not, that is a real, complete architecture chart of the Oracle 10gR2 SGA, miniaturized to a small scale. Yes, all databases, including Oracle and also MySQL, are complex beasts; a lot of stuff is going on inside the database engine for every command.

If my DB is like the "starting point", then I'm either really small, or I'm in really bad shape by now.
Partitioning works wonders as data grows towards being "big data". It optimizes data placement on separate files or disks, and it makes every partition optimized, "thin" and less fragmented than you would expect from a gigantic, busy, monolithic table. Still, although we're splitting the data across files, we're still "stuck" with a busy monolithic database engine that relies on a single box's "compute" or "computing power".

While we distributed the data, we didn't distribute the "compute". 
When there is a heavy join operation, there is one busy monolithic database engine that collects data from all partitions and processes the join.
When there are 10,000 concurrent transactions to handle right here and now, there is one busy monolithic database engine doing all the database-engine activities such as buffer management, locking, thread locks/semaphores, and recovery tasks. Buffers, locking queues, transaction queues... are still shared across all partitions.

This is where Scale-Out is different from partitioning. It enables distribution and parallelism of the data as well as the all-important compute; it brings the compute closer to the data, and enables several database engines to process different sets of data, each handling a different share of the overall session concurrency.
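A toy sketch of that idea: in-memory lists stand in for shard databases, and a thread pool stands in for independent engines. It's an illustration of scatter-gather under those assumptions, not any specific product:

```python
# A sketch of "bringing the compute to the data": each database engine
# processes only its own slice, and a thin coordinator merges small
# partial results -- instead of one monolithic engine scanning everything.
# The in-memory lists stand in for real database nodes.

from concurrent.futures import ThreadPoolExecutor

shards = [
    [("alice", 120), ("bob", 80)],
    [("carol", 200), ("dave", 50)],
    [("erin", 90), ("frank", 300)],
]

def local_top_spender(rows):
    """Runs on each shard: the per-shard part of the query."""
    return max(rows, key=lambda r: r[1])

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(local_top_spender, shards))

# The coordinator merges the small partial results, not the raw data.
print(max(partials, key=lambda r: r[1]))  # -> ('frank', 300)
```

Each "engine" touched only a third of the data, and the coordinator saw 3 rows instead of 6 - that's the compute being distributed along with the data.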

You can think of it as one step forward from partitioning, and it comes with great results. It's not a simple step though; an abstraction layer is required to present the database grid to the application as one database, just like the one it's used to.
In further posts I'll go more into this "Scale Out Abstraction Layer", and also into ScaleBase, which is a provider of such a layer.

Monday, August 6, 2012

Twitter and the new big data lifecycle

Recently I came across this fine article in The New York Times: "Twitter Is Working on a Way to Retrieve Your Old Tweets". Dick Costolo, Twitter’s chief executive, said:
"It’s a different way of architecting search, going through all tweets of all time. You can’t just put three engineers on it."
Mr. Costolo is right, and he pointed the spotlight at a very important change we're experiencing today, in these interesting times. The key word is expectations, and those are changing fast!

Not so long ago, Big Data was a synonym for Analytics, Data Warehouse, Business Intelligence. Traditionally, operational (OLTP) apps held limited amounts of data, only the "current" data relevant for ongoing operations. A cashier in a supermarket would hold only recent transactions, to enable lookup of a charge made 10 minutes ago, in case I need to return an item or dispute the charge while at the cashier. When I come back to the store the day after, I won't go to the cashier; I'll go to "customer service", where, with a different application and a different database, I will get service for my returns or disputes. A dispute after several months will not be handled by the customer service desk in the store, but by "the chain's dispute department", using a different, 3rd app with a 3rd cumulative, aggregative DB. And on and on it goes.

In this simplified example, the organization invested many resources in 3 different DBs and apps aggregating different levels of data, enabling similar and marginal additional functionality. Why? Data volume and concurrency.

At the cashiers, the only place where new data is really generated, there is also the highest concurrency. Globally, many thousands of items are "beeped" and sold through the cashiers every minute - data is kept small, generated and extracted out shortly after. The customer service reps handle tens of customers a minute over larger data, and "the chain's dispute department" overlooks the biggest data, but handles 1 or 2 cases an hour, and might also execute more "analytic-style" queries to determine the nature of a dispute...

This was, in a nutshell, the "lifecycle of the data" in the old world. But today, everything changes - it's all online, right here, right now!

Enormous amounts of (big) data are generated and also searched and analyzed at the same time. Everything is online, here and now. Every tweet (millions a day) is reported instantly to hundreds of followers, participates in saved searches, and is analyzed by numerous robots and engines throughout the web, and also by Twitter itself. The same goes for every search or e-mail I send in Google, and for every status or "like" in Facebook that is reported to my hundreds of friends and also analyzed at the same time, here and now. Hey, it's their way to make money, to push the right ads at the right time.

And now - we learn that users expect to see online data that is "old" in the terminology of the old days. I want to see statuses, likes and tweets from 2 and 4 months ago, in the same interface and with the same experience I'm used to. Don't send me to the "customer service department"!

The bottom line - it requires scale. Scale your online database to handle online data volumes and throughput, as well as older data, on the same grid, without interference, with the same applications. This is what scale-out is all about. Think outside the (one database server) box. If you have 10 databases for the current data, you can have 10 more with older data, and 100 more with even-older data, and so on. Giving a transparent, unified view to (or virtualizing) this database grid is the solution that occupies most of my time, and it's the missing link to making database scale-out a commodity.
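Here's a minimal sketch of that idea in Python: routing by the age of the data to a shard tier, so hot and cold data live on the same grid behind one interface. The tier names and the cutoffs are made-up assumptions for illustration:

```python
# A sketch of the "10 databases for current data, 10 more for older data"
# idea: route by the age of the data to a shard tier. Tier names and
# cutoffs are illustrative assumptions, not a real product's policy.

from datetime import date, timedelta

def tier_for(created: date, today: date) -> str:
    age = today - created
    if age <= timedelta(days=30):
        return "hot-tier"      # current data, highest concurrency
    if age <= timedelta(days=365):
        return "warm-tier"     # last year's data
    return "cold-tier"         # everything older, still online

today = date(2012, 8, 6)
print(tier_for(date(2012, 8, 1), today))   # -> hot-tier
print(tier_for(date(2012, 4, 1), today))   # -> warm-tier
print(tier_for(date(2010, 1, 1), today))   # -> cold-tier
```

The application never sees the tiers - the abstraction layer does the routing, and my 4-month-old tweets come back through the same interface as today's.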

Tuesday, July 10, 2012

So now Hadoop's days are numbered?

Earlier this week we all read GigaOM's article with this title:
"Why the days are numbered for Hadoop as we know it"
I know GigaOM likes to provoke a scandal sometimes, we all remember some other unforgettable pieces, but there is something behind it...

Hadoop today (after SOA not so long ago) is one of the worst cases of an abused buzzword ever known to man. It's everything, everywhere, can cure illnesses and do "big data" at the same time! Wow! Actually, Hadoop is a software framework that supports data-intensive distributed applications, derived from Google's MapReduce and Google File System (GFS) papers.

My take from the article is this: Hadoop is a foundational, low-level platform. I used the word "platform" just for lack of a better word. Wait, there is a great word that captures it all!

This word is Assembler

When computers began, 70 years ago or so, Assembly was the mother of all programming languages, and the Assembler made it work on real-world computers, silicon and copper. In the world of Big Data, map-reduce - massive distribution and parallelism - is the mother of all living things (Assembly). And Hadoop enables it to actually run in the real world (Assembler)...

Like Assembler, the Hadoop core is far from being really usable. Doing something real, good, working and repeatable with it requires skills that only a few people can really master (like good Assembler programmers, back in the 1960s).

While I consider myself lucky to have had the chance to actually punch cards with brilliant(?) Assembler code, many of today's brightest minds in Silicon Valleys around the world never wrote one opcode. They're all using PHP, Ruby, Java and node.js, which are great "wrappers" around good old Assembly that bring programming, innovation and disruptiveness to the masses, and make the whole world a better place. It's how it should be.

Hadoop will die only if data and big data die. Nonsense. Data is by far the most important asset organizations have. Facebook, as well as Bank Of America, would be worth a fraction of their value in minutes if they lost the same fraction of their data. Both won't be able to compete if they can't be intelligent and analyze their data, which multiplies every (low number of) days/weeks/months. Data makes a business intelligent, and Hadoop helps exactly there.

Hadoop is the Assembler of all analytical big-data processing, ETL and queries. The potential around it and its ecosystem is literally unlimited; tons of innovation and disruptiveness are being poured in by startups and communities all over, like Splunk, HBase, Cloudera, Hive, Hadapt, and many many more. And we're just in the "FORTRAN" phase...

Thursday, June 28, 2012

ARM based data center. Inspiring.

In a previous post I wrote about ARM-based servers. Since then, and thanks to all the comments and responses I got, I've looked more into this ARM thing, and it's absolutely fascinating...

Look at this beauty (taken from the site of Calxeda, the manufacturer):

What is it? A chip? A server? No, it's a cluster of 4 servers...

And this:

is the HP Redstone Server: 288 chips, 1,152 cores (Calxeda quad-core SoCs) in a 4U server, "dramatically reducing the cost and complexity of cabling and switching". Calxeda is talking about "cut energy and space by 90%" and "10x the performance at the same power, the same space", and it's just the beginning...

And this is from the last couple of days... From ISC'12 (International Supercomputing Conference): "ARM in Servers – Taming Big Data with Calxeda":

  • In the case of data intensive computing, re-balancing or ‘right-sizing’ the solution to eliminate bottlenecks can significantly improve overall efficiency
  • By combining a quad-core ARM® Cortex™-A series processor with topology agnostic integrated fabric interconnect (providing up to 50Gbits of bandwidth at latencies less than 200ns per hop), they can eliminate network bottlenecks and increase scalability

You still can't go to the store and buy a 4U ARM-based database server that performs 10x and uses 1/10th of the power (combine them and it's an order of magnitude of 100x...). It's not now, maybe not tomorrow, but it's not sci-fi. And technologies will have to adapt to this world of "multiple machines, shared nothing, commodity hardware". I think databases will be the hardest tech to adapt; the only way is to distribute the data wisely and then distribute the processing, sometimes parallelizing processing and access to harness those thousands of cores.

Wednesday, June 20, 2012

The catch-22 of read/write splitting

In my previous post I covered the shared-disk paradigm's pros and cons, and the conclusion is that it cannot really qualify as a scale-out solution when it comes to massive OLTP, big data, big session counts and a mixture of reads and writes.

Read/Write splitting is achieved when numerous replicated database servers are used for reads. This way the system can scale to cope with an increase in concurrent load. This solution qualifies as a scale-out solution as it allows expansion beyond the boundaries of one DB; the DB machines are shared-nothing and can be added as slaves to the replication "group" when required.

And, as a fact, read/write splitting is very popular and widely used by lots of high-traffic applications such as popular web sites, blogs, mobile apps, online games and social applications. 

However, today's extreme challenges of big data, increased load and advanced requirements expose vulnerabilities and flaws in this solution. Let's summarize them here:

  • All writes go to the master node = bottleneck: While reading sessions are distributed across several database servers (replication slaves), writing sessions all go to the same primary/master server, hence still a bottleneck; all of them will consume all the DB's resources for our well-known "buffer management, locking, thread locks/semaphores, and recovery tasks"
  • Scaled sessions' load, not big data: While I can take my X reading sessions and spread them over my 5 replication slaves, giving each only X/5 sessions to handle, my giant DB will have to be replicated as a whole to all servers. Prepare lots of disks...
  • Scale? Yes. Query performance? No: Queries on each read-replica need to cope with the entire data of the database. No parallelism, no smaller data sets to handle
  • Replication lag: Async replication will always introduce lag. Be prepared for a lag between the reads and the writes.
  • Reads after writes may show missing data: The transaction is not yet committed, so it's not written to the log, not propagated to the slave machine, not applied at the slave DB.
Above all, databases suffer from writes made by many concurrent sessions. Database engines themselves become the bottleneck because of their buffer management, locking, thread locks/semaphores, and recovery tasks. Reads are a secondary target. BTW, read performance and scale can very well be gained by good, smart caching - using NoSQL such as Memcached in the app, in front of the RDBMS. In modern applications we see more and more reads avoided this way, while the writes, which cannot be avoided or cached, keep storming the DB.
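For illustration, a naive read/write splitter sketched in Python. The connection names are placeholders, not a real driver API, and the simplistic SELECT test is exactly the kind of shortcut that breaks in real apps:

```python
# A minimal read/write splitter sketch: writes go to the master,
# reads round-robin over the slaves. It also illustrates the catches
# from the bullets above -- the single write master stays a bottleneck,
# and a read right after a write may hit a lagging slave.

import itertools

class ReadWriteSplitter:
    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)

    def route(self, sql: str) -> str:
        # Naive classification: anything that isn't a SELECT is a write.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._slaves)   # may lag behind the master!
        return self.master              # single master = bottleneck

r = ReadWriteSplitter("master", ["slave-1", "slave-2"])
print(r.route("INSERT INTO t VALUES (1)"))  # -> master
print(r.route("SELECT * FROM t"))           # -> slave-1
print(r.route("SELECT * FROM t"))           # -> slave-2
```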

R/W splitting is usually implemented today inside the application code, where it's easy to start but then becomes hard... I recommend using a specialized COTS product that does it 100 times better and may eliminate some or all of the limitations above (ScaleBase is one solution that provides that, among other things).

This is read/write splitting's catch-22. It's an OK scale-out solution and relatively easy to implement, but the improvement of caching systems, changing requirements in online applications, and big data and big concurrency are rapidly driving it towards its fate: becoming less and less relevant, and playing only a partial role in a complete scale-out plan.

In a complete scale-out solution, where data is distributed (not replicated) throughout a grid of shared-nothing databases, read/write splitting will play its part, but only a minor one. We'll get to that in future posts.

Thursday, June 7, 2012

Why shared-storage DB clusters don't scale

Yesterday a customer asked me why he had failed to achieve scale with a state-of-the-art "shared-storage" cluster. "It's a scale-out to 4 servers, but with a shared disk. And after tons of work and effort, I got 130% throughput, not even close to the expected 400%," he said.

Well, scale-out cannot be achieved with shared storage, and the word "shared" is the key. Scale-out is done with absolutely nothing shared - a "shared-nothing" architecture. This is what makes it linear and unlimited. Any shared resource creates a tremendous burden on each and every database server in the cluster.

In a previous post, I identified database engine activities such as buffer management, locking, thread locks/semaphores, and recovery tasks as the main bottleneck in the OLTP database, handling a mixture of reads and writes. No matter which database engine - Oracle, MySQL, all of them - they all have all of the above. "The database engine itself becomes the bottleneck!" I wrote.

With a shared disk, there is a single shared copy of my big data on the shared disk, and each database engine still has to maintain "buffer management, locking, thread locks/semaphores, and recovery tasks". But now - with a twist! Now all of the above need to be done "globally" between all participating servers, through network adapters and cables, introducing latency. Every database server in the cluster needs to update all the other nodes on every "buffer management, locking, thread locks/semaphores, and recovery task" it performs on a block of data. See here the number of "conversation paths" between the 4 nodes:
Blue dashed lines are data access; red lines are communications between nodes. Every node must notify and be notified by all the other nodes. It's a complete graph: with 4 nodes there are 6 edges, and with 10 nodes you'll find 45 red lines here (n(n - 1)/2, a reminder from Computer Science courses...). Imagine the noise, the latency, for every "buffer management, locking, thread locks/semaphores, and recovery task". Shared-storage becomes shared-everything. Node A makes an update to block X - all the other machines need to acknowledge. I wanted to scale, but instead I multiplied the initial problem.

And I didn't even mention the fact that the shared disk might become a SPOF and a bottleneck.
And I didn't even mention the limitations when you wanna go with this to cloud or virtualization.
And I didn't mention the tons of money this toy costs. I prefer buying 2 of these, one for me and one for a good friend, and hit the road together...

Shared-disk solutions give a very limited solution to OLTP scale. If the data is not distributed, the computing resources required to handle this data will not be distributed and thus reduced; on the contrary, they will be multiplied and will consume all available resources from all machines.

Real scale-out is achieved by distributing the data shared-nothing: every database node is independent with its own data - no duplications, no notifications, no ownership or acknowledgements over any network. If the data is distributed correctly, concurrent sessions will also be distributed across servers, and each node runs extremely fast on its small data and small load, with its own small "buffer management, locking, thread locks/semaphores, and recovery tasks".

My customer responded "Makes perfect sense! Tell me more about Scale-Out and distribution...". 

Wednesday, May 30, 2012

Scale-out your DB on ARM-based servers

Today, I think we witnessed a small sign for a big revolution...
"Dell announced a prototype low-power server with ARM processors, following a growing demand by Web companies for custom-built servers that can scale performance while reducing financial overhead on data centers"
In short, ARM (see the Wikipedia definition here) is an architecture standard for processors. ARM processors are slower compared to good old x86 processors from Intel and AMD, but have power-efficiency, density and price attributes that intrigue customers, especially in these days of green data centers, where carbon emissions are carefully measured, and of course, cost-saving economics.

Take iPhones and iPads for example: those amazing machines do fast real-time calculations with their relatively powerful ARM processors (Apple A4, A5, A5X), yet are extremely power-efficient and stay relatively cool. See the picture (credit to Wikipedia) of the newest Apple A5X chip, used in the new iPad:

Today, when the truly big web and cloud players build their data centers, the question is not "how big are your servers?" but rather "how many servers do you carry?". Ask Facebook, Google, Netflix, and more to come... For those guys, no single server can be big enough anyway, so they're built from the ground up to scale out to numerous servers performing small tasks, concurrently. Familiar with Google's MapReduce and Hadoop? So - why not parallelize on ARM-based servers? Can you imagine 20 iPads occupying the same space as a 1U rack-mounted "pizza" server, but with 5x the parallel computing power, 20x the electrical power efficiency, and 100x the cooling-cost efficiency?

So what all this has to do with database scalability you ask?

I don't agree with this quote from the article: "But ARM still cannot match up chips from Intel and AMD for resource-heavy tasks such as databases.".

Oh, I remember those days when, in every data center of every organization I consulted for, I saw the same picture... All machines were nice, neatly organized, tagged, blades, standard racks, mostly virtualized... But the DB? No... Those servers were non-standard: the biggest, ugliest, most expensive, capex and opex. Why? It's the DB! It's special! It has needs!! Specialized HW, specialized storage, $$$. Those days are over, as organizations' need to save arrives at the shores of the sacred database. Today, harder questions are being asked, more is being done in commoditization, more databases are virtualized, and the cloud...

Big Data is everywhere - on the web, in the cloud, in the enterprise. Databases must scale out or else explode; it's a hard fact. Databases can be scaled with smart distribution and parallelism, and then they can use commodity hardware and be easily virtualized and cloud-ified. If distribution is done correctly, the sum is greater than its parts, and the parts in this case can be low-end... the lowest of the low... Database machines can definitely be ARM-based servers, each holding a portion of the data, attracting a portion of the concurrent sessions, and contributing to the overall processing.

A database on an iPad? Naa, I prefer breaking another record in Fruit Ninja.
A database on 20 ARM-based servers? If it's 5x faster and costs 60x less in electricity and cooling - then yes, definitely.

Monday, May 21, 2012

Scaling OLTP is nothing like scaling Analytics

We're in the big data business. OLTP applications and Analytics.

Scaling OLTP applications is nothing like scaling Analytics, as I posted here: OLTP is a mixture of reads and writes, heavy session concurrency, and also growing amounts of data.

In my previous post, I mentioned that Analytics can be scaled using: columnar storage, RAM and query parallelism.

Columnar storage cannot be used for OLTP: while it makes read scans better, it hurts writes, especially INSERTs. The same goes for RAM; the approach of "let's put everything in memory" is also problematic for writes, which should be Durable (the D in ACID). There are databases that achieve Durability by writing to the memory of at least 2 machines, and I'll get to that in a later post, but in the simpler view: RAM is great for reads (Analytics), very limited for writes (OLTP).
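A toy sketch of that tradeoff, with plain Python lists standing in for row and column storage (the table and values are invented for illustration):

```python
# Row store: each row is one tuple; an INSERT appends a single record.
rows = [(1, "alice", 30), (2, "bob", 25)]
rows.append((3, "carol", 41))          # one write

# Column store: one array per column; the same INSERT touches every array.
ids, names, ages = [1, 2], ["alice", "bob"], [30, 25]
for col, val in ((ids, 3), (names, "carol"), (ages, 41)):
    col.append(val)                    # N writes for N columns

# The payoff is on the read side: an analytic scan of one column
# reads no bytes from the other columns.
avg_age = sum(ages) / len(ages)        # touches only the "ages" array
print(avg_age)  # 32.0
```

One INSERT became three writes in the columnar layout; multiply that by wide tables and high OLTP concurrency and the problem is clear.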

Query parallelism, which worked for Analytics, is limited for OLTP, mostly because of high concurrency and writes. OLTP is a mixture of reads and writes; ratios today reach 50%-50% and more. Every write operation is eventually at least 5 operations for the database, including table, index(es), rollback segment, transaction log, row-level locking and more. Now multiply that by 1000 concurrent transactions and 1TB of data. The database engine itself becomes the bottleneck! It puts so many resources into buffer management, locking, thread locks/semaphores, and recovery tasks that no resources are left for handling query data!

3 bullets on why naïve parallelism is not a magic bullet for OLTP:
  1. A parallel query within the same database server will just turn the hard-to-manage 1000 concurrent transactions into an impossible 1000000 concurrent sub-transactions… Good luck with that… 
  2. Parallelizing a query over several database servers is a step in a good direction. However, it can't scale: if I have 10 servers and each of my 1000 concurrent transactions needs to gather data from all servers in parallel, how many concurrent transactions will I have on each server? That's right, 1000. What did I solve? Can I scale to 2000 concurrent transactions? All my servers will die together. In that case, what if I scale to 20 servers instead of 10? Then I'll have 20 servers with 2000 concurrent transactions… that will all die together. 
  3. OLTP operations are not good candidates for parallelism:
    1. Scans (full table/index scans and range scans) are parallelized all the time in Analytics, but seldom in OLTP. In OLTP most accesses are short, pinpointed, index-based small-range and unique scans. Oracle's optimizer mode FIRST_ROWS (OLTP) will almost always prefer index access, while ALL_ROWS (Analytics) will have a hard time giving up its favorite full table scan. So what exactly do we parallelize in OLTP? An index rebuild once a day (a scan...)?
    2. 1000 concurrent 1-row INSERT commands a second is a valid OLTP scenario. What exactly do I parallelize?
Parallelism cannot be the one and only complete solution. It serves a minor role in the solution, whose key factor is: distribution.
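The arithmetic in bullet 2 above is worth spelling out; a tiny sketch (the function name and numbers are mine, for illustration):

```python
def sessions_per_server(total_transactions, servers, fan_out_all=True):
    """Concurrent sessions each server sees under a given strategy."""
    if fan_out_all:
        # Naive parallelism: every transaction touches every server,
        # so adding servers brings no relief.
        return total_transactions
    # Smart distribution: each transaction is pinned to one server,
    # so load divides as servers are added.
    return total_transactions // servers

print(sessions_per_server(1000, 10))                     # 1000
print(sessions_per_server(1000, 20))                     # 1000: still no relief
print(sessions_per_server(1000, 10, fan_out_all=False))  # 100
```

Same 1000 transactions, same 10 servers; only the distribution strategy changes the load each server must survive.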

OLTP databases can scale only by smart distribution of the data, and also of the concurrent sessions, among numerous database servers. It's all in the distribution of the data: if the data is distributed in a smart way, the concurrent sessions will also be distributed across the servers.

Go from 1 big fat database server dealing with 1TB of data and 1000 concurrent transactions, to 10 databases, each dealing with an easy 100GB and 100 concurrent transactions. I'll hit the jackpot if I manage to keep the databases isolated and shared-nothing, processing-wise, not only cables-wise. Best are transactions that start and finish on a single database.
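A minimal sketch of that kind of distribution: hash-route each session by a sharding key so both data and sessions spread evenly. The choice of key (a customer id) and the shard count are assumptions for the example, not a prescription.

```python
import hashlib

SHARDS = 10  # e.g. 10 databases of ~100GB each instead of one 1TB monster

def shard_for(customer_id):
    # Hash the sharding key so rows (and the sessions touching them)
    # spread evenly across the shards.
    digest = hashlib.md5(str(customer_id).encode()).hexdigest()
    return int(digest, 16) % SHARDS

# 1000 concurrent sessions, each pinned to its customer's shard:
load = [0] * SHARDS
for customer_id in range(1000):
    load[shard_for(customer_id)] += 1

print(min(load), max(load))  # each shard carries roughly 100 sessions
```

A transaction that starts and finishes on `shard_for(customer_id)` never touches the other 9 databases, which is exactly the isolation described above.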

And if I’m lucky and my business is booming, I can scale:
  1. Data grew from 1TB to 1.5TB? Add more database servers.
  2. Concurrent sessions grew from 1000 to 1500? Add more database servers.
  3. Parallel query/update? Sure! If a session does need to scan data from all servers, or needs to perform an index rebuild, it can run in parallel on all servers and will take a fraction of the time.
Ask Facebook (FB, as of today... ). Each of their 10,000s of databases handles a fraction of the data, in a way that only a fraction of all sessions access it at any point in time.

Each of those databases is still doing hard work on every update/insert/delete and on buffer management, locking, thread locks/semaphores, and recovery tasks. What can we do? It's OLTP, it's read/write, it's ACID... it's heavy! I trust every one of my DBs to do what it does best; I just give it the optimal data size and session concurrency to do it.

Let's summarize: OLTP can scale, but through smart distribution of data and sessions across many databases, not through naive parallelism.

In my next post I'll dive more into implementation caveats (shared disk, shared memory, sharding), pitfalls, do's and don'ts... 

Stay tuned, join those who subscribed and get automatic updates, get involved!

Tuesday, May 15, 2012

Scale differences between OLTP and Analytics

In my previous post, I reviewed the differences between OLTP and Analytics databases.

Scale challenges are different between those 2 worlds of databases.

Scale challenges in the Analytics world come with the growing amounts of data. Most solutions leverage these 3 main aspects: columnar storage, RAM and parallelism.
Columnar storage makes scans and data filtering more precise and focused. After that, it all comes down to the I/O: the faster the I/O, the faster the query finishes and brings results. Faster disks and also SSDs can play a good role, but above all: RAM! Specialized Analytics databases (such as Oracle Exadata and Netezza) have TBs of RAM. Then, in order to bring results for queries, data needs to be scanned and filtered, a great fit for parallelism. A big data range is divided into many smaller ranges handed to parallel worker threads; each performs its task in parallel, and the entire scan finishes in a fraction of the time.
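That divide-the-range pattern can be sketched with Python's `concurrent.futures`; the "table" here is just a list, and the worker count and filter are arbitrary choices for the example:

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000_000))  # stand-in for a big table column

def scan_range(lo, hi):
    # Each worker filters only its own slice, like one parallel scan worker.
    return sum(1 for v in data[lo:hi] if v % 7 == 0)

workers = 4
step = len(data) // workers
ranges = [(i * step, (i + 1) * step) for i in range(workers)]

with ThreadPoolExecutor(max_workers=workers) as pool:
    partial_counts = list(pool.map(lambda r: scan_range(*r), ranges))

print(sum(partial_counts))  # 142858, same answer as one big serial scan
```

Four workers each scan a quarter of the range, and merging their partial counts gives the same result as a single full scan, in roughly a quarter of the wall-clock time when the work is truly parallel.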

In the OLTP world, the scale challenges are in the growing transaction concurrency and throughput and… growing amounts of data. Again? Didn't we just say growing data is the problem of Analytics? Well, today's OLTP apps are required to hold more data to provide a larger span of online functionality. In the last couple of years OLTP data archiving has changed dramatically; OLTP data now covers years, not just days or weeks. Facebook recently launched its "timeline" feature; can you imagine your timeline ending after 1 week? Facebook's, probably the world's largest OLTP database, holds data of a billion users going years back. Today all data is required anywhere, anytime, right here, right now, online. Many of today's OLTP databases go well beyond the 1TB line. And what about transaction concurrency and throughput? Applications today are bombarded by millions of users shooting transactions from browsers, smartphones, tablets… I personally checked my bank account 3 times today. Why? Because I can…

What can be done to solve OLTP scale challenges?

In my next post, let's start answering this question by understanding why the solutions proposed for Analytics are limited in OLTP, and start reviewing relevant approaches.

Stay tuned, subscribe, get involved!

Monday, May 14, 2012

OLTP vs. Analytics

This blog is all about database scaling, but before we dive into scaling challenges, here's a quick alignment around those important terms; if you will, an introduction to OLTP and Analytics...

Traditionally, the RDBMS world was divided into 2 main categories:
OLTP, Online Transaction Processing: data is stored and processed by operational applications throughout the organization. In the example of a supermarket, the cashiers' application is clearly an operational application doing OLTP. Traditionally, OLTP databases were characterized by:

  1. Large number of concurrent users
  2. High throughput of concurrent transactional activity of reads and writes mixture
  3. Smaller amounts of data

On the other side we could find the Data Warehouse, or DSS (Decision Support Systems; Oracle likes DSS very much...), or by their newer name: Analytics.
In our supermarket, the Data Warehouse was loaded with data from all the operational systems, the cashiers system and also the logistics and suppliers systems, ERP, CRM, and Business Intelligence tools are used to allow convenient visual ways to query and analyze the enormous amounts of data, making aggregations, summaries, comparisons and so on. Traditionally, Analytics databases were characterized by:

  1. Small number of concurrent users
  2. Non transactional, read-only activity
  3. Huge amounts of data

While an RDBMS can be used for both OLTP and Analytics, each has different design methodologies, tuning parameters and hardware requirements. Every DBA will tell you that Oracle's BITMAP INDEX speeds up Analytics when the schema is a star-schema design; however, it can be hell for OLTP, where the schema design is totally different and concurrent write transactions will suffer from the BITMAP's segment-level locking. Columnar databases, likewise, are great for Analytics but a poor fit for OLTP.
On the other hand, in OLTP the DBA will set a goal and eventually reach a cache hit-ratio of 95% and a "sql-area" hit-ratio of 99.99%. In Analytics, the numbers above are impossible to reach simply because of the different activity imposed on the database, resulting in different behaviors, and that's OK. There is a difference.

Scale challenges and practices are also different. In future posts I'll address specific database scale issues relevant to OLTP on one hand and Analytics on the other.

Stay tuned.

Saturday, April 21, 2012

Impressions from Amazon's AWS Summit in NYC

Yesterday (4/19) I attended the AWS Summit in NYC.

I'm a big fan and also a heavy user of AWS, especially S3, EC2 and, naturally, RDS. At any point in time I have several dozen AWS machines running for me out there in the East region, and in some cases, when we do special benchmarks and tests, the number of EC2 and RDS machines can easily reach 3 digits. As I said, I'm a fan...

A few quotes I was able to catch and document on my laptop, on my lap...:
"When you develop an app for Facebook, you must be prepared (and be afraid) that at your party, not no one will show up, but everybody will show up!"
So true! Simple and true. We all want to succeed, to have success with our app. We have to think about scaling from day 1.
"Database was bottleneck for building of sophisticated apps. This is no longer the case when building DynamoDB".
The quote above was about DynamoDB, which is an excellent new NoSQL service by AWS. But we can all think about YesSQL databases and hope and wish and make it the same. Databases, good old RDBMSs, are great for applications: they offload a lot of complexity, SQL is a rich language and API for accessing data, it leverages existing skills, and it allows ACID. RDBMSs also should not be a bottleneck for building sophisticated apps! They should be able to scale.
"How people really want to interact w the database? not by 'how many servers' but with 'give me a DB to handle 1000 reads, 10000 writes'. That's all. Users want a situation when you cannot run out of space, you cannot run out of capacity."
Inspiring. I couldn't agree more. A service is a good service when it hides away all the complexities, gives me a URL and, boom, everything works. AWS is getting there, no doubt, and whoever provides a product or a service (including myself...) should work according to this quote!
"RDS has 2 push-button scaling options: Scale-Up or Scale-Out, read replica, or sharding... 'have the application go to the right shard'"
This quote is from the excellent Solutions track seminar "Building scalable database applications with Amazon RDS". As I said, RDS is an excellent service. Its capacity to be a "service", being automatically tuned, backed up, upgraded, etc., is impressive. The ability to ensure transparent high availability across Availability Zones (Multi-AZ) and to have read-replica(s) set up with a click is no less than phenomenal. However, in the scale-out department I think the solution is good, but not excellent. The support for read replicas is great, but it covers only the transportation of the data between the databases. It leaves the application with 2 or more IP addresses to deal with, to route reads and writes, handle replication-lag consistency and so on. In the sharding department it's even less complete: while I can spawn as many RDS servers as I like, the application needs to do all the command routing to the right shard and also handle the transportation of the data. It's quite far from the vision I see in the 3rd quote above, quoted from and inspired by Dr. Werner Vogels. I think this good service by AWS can be completed into an excellent service with 3rd-party products, such as ScaleBase.
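The read/write routing that RDS leaves to the application can be sketched in a few lines; the hostnames and the routing rule here are invented for illustration (a real router must also worry about transactions and replication lag):

```python
import random

WRITER = "primary.db.example.com"
READERS = ["replica-1.db.example.com", "replica-2.db.example.com"]

def route(sql):
    """Send writes to the primary; spread reads across the replicas."""
    verb = sql.lstrip().split()[0].upper()
    if verb in ("INSERT", "UPDATE", "DELETE"):
        return WRITER
    return random.choice(READERS)

print(route("INSERT INTO orders VALUES (1)"))  # primary.db.example.com
print(route("SELECT * FROM orders"))           # one of the two replicas
```

Even this toy version hints at the burden: every statement must be classified, and nothing here handles a read that must see its own just-committed write, which is exactly the replication-lag consistency problem mentioned above.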

In addition to the above quotes, I enjoyed hearing a good scale-out case study from Pinterest, who invested in sharding themselves over almost 70 RDS databases. See here a good article about Pinterest's case:

I just love those case studies. Every one of them, especially from my prospects, customers and partners, makes me much smarter and my products much, much better. If you have a scale-out story, don't be shy, share it!!

A quick update: look at this article and search for "Transformation three: We're moving from scaling by architecture to scaling by command". A good statement about database scale-out.

Wednesday, April 18, 2012

So how can we scale databases?

There are ways to scale databases; unfortunately, some are limited, some introduce complexities, and some do not fit the cloud...

By a scaling solution I mean a solution that helps me scale my existing environment, my existing RDBMS. Some magic or technology that will take my existing Oracle or MySQL, for example, to the next level, without porting to a new DB engine/vendor and without completely recoding my app.

Let's try to organize things a bit in this very summarized table, just to get a feel for it. I can't imagine covering it all in 1 table or even 100 pages, but it should be the start of a meaningful discussion to continue in the next posts:

Each solution should be judged on whether it scales reads, writes, data and sessions. The bottom line for each:

  1. Scale-Up (faster HW: CPU, memory, disks, SSD): costly, limited
  2. Shared-disk cluster (Oracle RAC and similar): costly, hard to implement, might damage non-read-mostly apps
  3. Replication based:
    1. Read/Write splitting: a valid solution, easy to implement, limited
    2. Multi-master replication: strict data ownership is a MUST to enjoy any advantage
  4. Scale-Out (sharding?): a valid solution, might introduce major complexities...

I think it's safe to say: there are solutions, there are supporting technologies, but database tech alone is not enough. To really benefit from a shared-disk cluster we need to really know what we're doing, and not be overthrown by disk latency or inter-db-machine network noise. Multi-master replication is a recipe for disaster (conflicts, split-brain, loss of data) without proper data-ownership definition and enforcement. Shared-disk clusters and multi-master replication (3 versions of it) have existed in Oracle for over 10 years. And still, they can't solve all RDBMS scaling limits, because those technologies alone are like double-edged swords: they should be handled gently by experienced craftsmen, with good integration with additional concepts and tools.

No, having the long-awaited multi-master replication available for MySQL will not solve MySQL's scale issues, and it will not bring peace to the force of geo-clustering etc. Without proper data ownership and proper design, it'll just introduce all the same flaws Oracle DBAs have been dealing with for the last decade...

And as for scale-out, it's absolutely great, but it's very hard to do with data and databases.
Sharding: it's as if the database outsourced scaling to the application. Application developers should concentrate on the application logic, strive to make it better, make the business competitive with new features. Every hour an app developer spends on database-specific matters is a sad case of wasted effort and a skill mismatch.

It shouldn't be a surprise, then, that in recent years I've been seeking a solution for scaling out over standard database building blocks. If you will, a solution that brings the obvious advantages of sharding, but without the pains of doing it in the application tier... I gather we have been quite successful in introducing that for the MySQL database.

In future posts I'll drill down and elaborate on the rows in the table above. Feel free to add and comment, I'll address every comment!!

The light at the end of the tunnel is that the basic building blocks for solving database scalability are there! They're still not a safe, well-packaged, polished solution, like an iPhone 4 that can be easily used by my 1-year-old youngest daughter. Those building blocks still need to be put together well into a solution by experienced professionals, or with tools and design, 3rd-party products and lots of thinking... It's not a simple task...

Tuesday, April 17, 2012

Applications come and go. Databases are here to scale.

In my heart, I'm a DBA; always was and always will be. People say I'm a database guy by the way I think, keep my car, and file my music and even my bank statements... However, I've done a great deal of development, design and architecture on the apps side. I (hope to) have some perspective.

Applications come and go. The second programming language I ever learned and worked in was COBOL; some still say most of the world's lines of code are written in that language, and maybe so, but since then I have known and written in dozens of programming languages: from Assembly to, from Pascal to Delphi, from functional C to Object-Oriented SmallTalk, C++, Java and , from compiled C/CGI to interpreted Perl, ASP and Ruby, back to compiled node.js... My first applications ran on a Main-Frame with a green screen; later I created beautiful graphical client-server applications; later I had to create hideous white web applications (like the green MF); later Ajax, Flex and HTML5 made it client-based again... And today we call them Apps...

Applications come and go: redesigned, refactored, rewritten. They should be. 2 things are constant in the business universe, and those are any business's real assets: users and data.

Applications are the pipe that delivers data to users and lets them generate and modify it. And in this universe, both are expanding. Any business wants more users, more customers, more business, more data. Data is never deleted; it is forever growing, written and updated, read many times, and it can give intelligence to the business when analyzed even years after it was generated. Data is always audited, backed up, 100% available. DBAs make the highest salaries in IT, and the DBMS is the most expensive software on the enterprise's shelf. Ask Larry Ellison; rumors say he took Sharon Stone to hang out in a MiG jet fighter. And he can relax: migrating and porting a database is one of the hardest, riskiest operations known to the CIO.

Applications should keep pace with times, trends and fashion, and make the users happy. Data, however, must NEVER be compromised. No data should be generated as a silo/an island/isolated. Data WILL live long after the application that generated it has been replaced with another, and the programmer who wrote it has changed jobs or retired. The data will be integrated by other apps, new-generation apps will use it, reports and online analytics will analyze it, and enterprise data marts and data warehouses will ETL it.

While there are hundreds of ways to develop an application, 97% of the world's structured data is in fewer than a dozen environments, all of them RDBMSs. I've worked with most, but Oracle and MySQL are the ones I have the most mileage in, and the most scars from (DBAs always count scars...).

Today, those good old RDBMSs are under attack. RDBMS cannot scale to cope with throughput, data size, concurrency, complexity, distribution, virtualization, consolidation...

The need of good, reliable, interchangeable data is stronger than any man or any trend. Nature will correct itself, and we're living in interesting times. Applications come and go. Databases are here to scale.

Further on in this blog, let's try to see why, and what can be done, is being done and will be done in database scalability.

Welcome to the database scalability blog.