Showing posts with label Latency. Show all posts

Wednesday, April 3, 2013

MySQL thread pool and scalability examples

Nice article about SimCity outage and ways to defend databases: http://www.mysqlperformanceblog.com/2013/03/16/simcity-outages-traffic-control-and-thread-pool-for-mysql/

The graphs showing throughput with and without the thread pool are taken from the benchmark performed by Oracle and taken from here:
http://www.mysql.com/products/enterprise/scalability.html

The main takeaway is this graph (all rights reserved to Oracle, picture original URL):
[Graph: "20x Better Scalability: Read/Write"]
Scalability is the ability of throughput to keep growing as demand grows. I need to get more from the database, and the question is: "can it scale to give it to me?" Scalability means the response time remains "acceptable" while throughput grows and grows.

Every database has a "knee point":
  1. In the best case scenario, at this knee point throughput flattens into a plateau, and at the same point response time starts climbing past the non-acceptable mark.
  2. In a worse case scenario, at this knee point throughput, instead of flattening into a plateau, takes a plunge, and response time climbs fast, through the roof.
Actually, the red "best case" scenario is pretty bad too... There's NO scalability there; throughput has a hard limit, around 6,500 transactions per second. I need to do more on my DB, there are additional connections - but the DB is not giving even one inch of additional throughput. It doesn't scale.
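To make the knee point concrete, here's a minimal sketch that locates it in a set of (connections, throughput) measurements. The benchmark numbers are made up to roughly match the shape of the graph described above, and the 10% threshold is an arbitrary assumption:

```python
# Hypothetical benchmark results: (connections, transactions/sec) pairs.
# The numbers are illustrative, not Oracle's actual measurements.
results = [(4, 1200), (8, 2400), (16, 4500), (32, 6200),
           (64, 6500), (128, 6450), (256, 6400)]

def find_knee(points, min_gain=0.10):
    """Return the first point where adding connections stops paying off:
    throughput grows by less than min_gain (10%) over the previous step."""
    for (c_prev, t_prev), (c, t) in zip(points, points[1:]):
        if (t - t_prev) / t_prev < min_gain:
            return c_prev, t_prev
    return points[-1]

knee_conns, knee_tps = find_knee(results)
print(f"Knee point: ~{knee_tps} TPS at {knee_conns} connections")
# Knee point: ~6200 TPS at 32 connections
```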

The thread pool feature is no more than a defense mechanism. It doesn't break the scalability limit of a single machine; rather, its job is to defend the database from death.

Real scalability is when the throughput graph neither drops nor flattens - it goes up and up and up with a stable response time. This can be achieved only by Scale Out. Get 7,500 TPS from 1 database with 32 connections, then add an additional database and the straight line going up will reach, say, 14,000. A system with 3 databases can support 96 connections and 21,000 TPS... and on and on it goes... 
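The scale-out arithmetic above can be sketched as an ideal linear model. The per-database figures here are illustrative assumptions (roughly matching the 3-database, 21,000 TPS example), and real systems lose a little to coordination overhead:

```python
TPS_PER_DB = 7000     # assumed throughput one database sustains
CONNS_PER_DB = 32     # assumed connection capacity per database

def scale_out(databases):
    """Ideal linear scale-out: connection and throughput capacity
    grow in a straight line with each database added."""
    return databases * CONNS_PER_DB, databases * TPS_PER_DB

for n in (1, 2, 3):
    conns, tps = scale_out(n)
    print(f"{n} database(s): {conns} connections, {tps} TPS")
```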

Data needs to be distributed across those databases, so the load can be distributed as well. Maintaining this distributed data on the scaled-out databases is the key... I'll touch on that in future posts. 
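One common way to distribute data (and with it, load) across scaled-out databases is to hash a row's key to pick its home. This is a minimal sketch of that idea; the shard names and count are hypothetical, and real systems layer rebalancing and routing on top:

```python
import hashlib

# Hypothetical shard list; in practice each entry would be a connection
# to a separate database server.
SHARDS = ["db0", "db1", "db2"]

def shard_for(key: str) -> str:
    """Route a row to a database by hashing its key, so data (and the
    load that follows it) spreads roughly evenly across the shards."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

The same key always hashes to the same shard, so reads know where to look without a central lookup table.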

Monday, December 17, 2012

Database Performance, a Ferrari and a truck

In the last days I got several queries, from colleagues and customers, about one thing I thought was a given, well known, but found out differently: "What is database performance?" Is it speed? Is it throughput? What are the metrics and how do you measure them?

I tried to refer to an existing link, but then had to write and describe it myself. The nearest thing to describing what I think "Database Performance" really is, is this; it's not bad, yet I was able to make it even simpler for my esteemed colleagues and even more esteemed customers.

Database performance, in essence, derives from 2 major metrics:
Latency: the time we wait for an operation to finish, measured in milliseconds (ms) or any other time unit.
Throughput: the number of transactions/commands per time unit, usually a second or a minute.
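To make the two metrics concrete, here's a minimal sketch computing both from a batch of per-transaction timings. The timing data is made up for illustration:

```python
# Each transaction's latency in milliseconds (illustrative numbers),
# all executed within a half-second wall-clock window.
durations_ms = [12, 9, 15, 11, 40, 10, 13, 9, 14, 12]
window_seconds = 0.5

# Latency: how long we wait for one operation (here, the average).
avg_latency_ms = sum(durations_ms) / len(durations_ms)

# Throughput: how many operations complete per time unit.
throughput_tps = len(durations_ms) / window_seconds

print(f"avg latency: {avg_latency_ms:.1f} ms, throughput: {throughput_tps:.0f} tx/sec")
# avg latency: 14.5 ms, throughput: 20 tx/sec
```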

In the classic world of Data Warehouse and Analytics, throughput is usually a non-issue and latency is king. When the database grows larger and larger, complex analytics queries take longer and longer to finish, and the demand is "I need speed!".

In the world of OLTP, throughput is the important measure. TPC-C benchmarks, for example, measure only throughput (New Order Transactions per Minute). Oracle managed to reach 30,249,688 New Order Transactions Per Minute. Nice job, but as readers of the results we have no way to know whether a single transaction took 1ms and they managed to squeeze thousands of those in parallel into 1 minute to reach this number, or whether the scenario transaction took exactly 1 minute and Oracle managed to perform 30,249,688 such transactions in parallel. The truth is somewhere in the middle, between the 1 millisecond and the 1 minute...
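Little's Law (throughput = concurrency / latency) makes that ambiguity concrete: very different latencies can produce roughly the same transactions-per-minute number if the concurrency differs. The concurrency figures below are back-of-the-envelope assumptions, not Oracle's actual configuration:

```python
def tpm(concurrency, latency_seconds):
    """Little's Law, scaled to per-minute: throughput = concurrency / latency."""
    return concurrency * 60 / latency_seconds

# ~505 concurrent streams of 1 ms transactions...
fast = tpm(concurrency=505, latency_seconds=0.001)        # ~30.3M TPM
# ...or 30,249,688 concurrent streams of 1-minute transactions:
slow = tpm(concurrency=30_249_688, latency_seconds=60.0)  # ~30.2M TPM
```

Both scenarios land near the published number, which is why a throughput figure alone says nothing about latency.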

In OLTP, latency should be bearable (for some it's 50ms, for some it's 500ms) and stable, while throughput must grow and grow as the number of users/sites/devices/accounts/profiles grows and grows.

Another key word is predictability. In my OLTP I need predictable, good enough, bearable, constant latency. I can't afford a 50ms transaction to take 1 minute every once in a while. I need transaction latency to be some X I can live with, constant and predictable - while throughput is growing.
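One way to check predictability, sketched below with made-up timings: look at a high percentile of latency, not just the average, because a handful of 1-minute outliers among 50ms transactions is exactly what "unpredictable" looks like:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

# 980 well-behaved 50 ms transactions, plus 20 that took a full minute
# (illustrative data, not a real workload).
latencies_ms = [50] * 980 + [60_000] * 20

avg = sum(latencies_ms) / len(latencies_ms)   # inflated by the outliers
p50 = percentile(latencies_ms, 50)            # typical transaction: fine
p99 = percentile(latencies_ms, 99)            # the tail: a full minute
```

Here the median still looks like a healthy 50ms system, while the 99th percentile exposes the 1-minute stalls the average only hints at.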

Not a popular comparison, but very, very relevant: a Ferrari and a truck. Both have 500 horsepower.
A Ferrari will take you at 200 miles per hour! A truck, however, will drive a good legal 70, and she'll go the same 70 miles per hour with 100 pounds, 1 ton or 20 tons. Constant, stable, predictable. Yeah, I'd like to have a Ferrari for my spare time, or to ace a benchmark, but when it comes to backend server infrastructure, they're more like a truck to me... and they deliver...

Life's not fair sometimes... at least one of these has definitely got the looks: