Category Archives: MySQL

Strange DB requests

Eclipse 5/21/12

I’m on the market so recruiters call me. I get the usual: “what, you can’t commute?”, “Really it’s not that far from San Francisco!”, “Why don’t you want to relocate to LA?”.

I’m open to small gigs as they can be really fun and you’re not on-call. I was recently asked if I could take SQL Server data and import it into MySQL.

Well, yeah. But I wanted to know why. Was this a good idea? What was the reason? I can set up a server and tune it for MySQL. I can recommend hardware. But first I needed some questions answered, and why is the first and foremost question.

A quick aside: anyone who knows me knows I'm not a big fan of Windows. I'm just not. Windows is always crashing on me and I find the UI cumbersome. I love a command line. So you'd think I'd be salivating at the chance to move from SQL Server to MySQL.

The answer to my why was to save money. It may seem like you can save money by not paying those pesky license fees. Really, it seems like a no-brainer, doesn't it? But SQL Server is different from MySQL and solves different problems. MySQL needs someone, anyone, who understands it. Many startups have DevOps/System Admins who can handle a fair amount of MySQL needs: they can run backups, make sure replication is running, and bring up a crashed server. So technically you can get by without a MySQL DBA. OK, those are really good System Admins who understand MySQL, and they seem to be rather scarce these days.

So I asked about performance and scalability. No need for scalability: only a dozen or fewer people use it. Well, that seemed odd to me. MySQL is a great database for the web, but it requires a lot from the apps that talk to it. This app is a Microsoft app that does a lot of number crunching in SQL Server and then spits it out to Excel. This is an excellent use of SQL Server! Many people need SQL Server just for this.

I had to break the news to them that MySQL would not save them money and probably wouldn’t do nearly as good of a job as SQL Server was doing for them. They would not be happy.

Damn! That advice just cost me some money….

Keep This Place CLEAN!

query-digest tool

First off, a DISCLAIMER! Always test this to be sure your servers can handle these tools. I try never to run Percona Toolkit tools on the production database server; there are ways to get around this with a little thought. Also, remember that passwords given on the command line will show up in a ps command.

I've always used Maatkit's query-digest tool, but I think Percona Toolkit has made it better. It's one of my favorite MySQL tools. The documentation is lacking for all the really great things you can do with this tool, but its ability to warm up the cache on a passive master or new slave is priceless.

Generally startups do not hire a DBA in the beginning. Let's face it: in the beginning it's usually one or two engineers, hopefully with a marketing person or PM. First hires tend to be on the business end, plus more engineers. Everyone thinks that DBAs aren't really necessary in the beginning. They'll finally hire a backend engineer to fix some of the queries before they get a DBA. Then the database begins to bottleneck as the site gets more popular. That's usually when the DBA comes in, along with the architect. Fair enough.

So while the backend re-architecture begins, the shoring up of the current system has to happen. This usually involves more servers. Perhaps in the beginning there was a master, a backup slave, and a couple of read slaves in the system. The CEO is trying to keep the budget down but is freaking out about the server cost, because database servers are more expensive than web servers/app servers. But the system needs help, and to keep it up more servers are thrown at the problem. This is where query-digest really shines.

First I always add a passive master (which is nothing more than a slave that no one reads from). It’s insurance – just in case there’s a hardware failure a fully caught up slave is ready to take the fallen master’s place. Or is it?

Generally the first thing I do is upgrade the version of MySQL so that the app can get prepared for the new architecture. I've never had this involve just one upgrade, but it's good practice. You want to be sure that the app can handle the newer versions, which are faster and have fewer bugs, so minor upgrades are started. This means I need two servers (OK, I can get by with one extra, but I usually need an extra read slave by now). The first thing I do is a mysqldump and import into a new slave with a newer version of MySQL. You don't have to do this for some upgrades unless you're also cleaning up charsets, and when you're not working with a MySQL specialist you almost ALWAYS have to clean up charsets. That means: mysqldump, then import the .sql file. But does your new slave have all the data that the master has?
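
The dump-and-import step might look roughly like this. This is a sketch only: the host names (old_slave, new_slave), user, and database are placeholders, and --single-transaction assumes InnoDB tables:

```shell
# Dump from an existing slave so the master is not loaded down;
# --single-transaction gives a consistent dump for InnoDB tables.
mysqldump --single-transaction --routines --triggers \
  -h old_slave -u myuser -p mydb > import.sql

# If you're also cleaning up charsets, this is the point to fix the
# DEFAULT CHARSET clauses in import.sql before loading it.

# Import into the new slave running the newer MySQL version.
mysql -h new_slave -u myuser -p mydb < import.sql
```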

That means you need to run pt-table-checksum first, which means you need to run this at a slow time on your master. Once you are sure that your new slave has the same data as your master you are ready to use this new slave, or are you?
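
A minimal pt-table-checksum run might look like this (host and database names are placeholders; in Percona Toolkit 2.x the tool runs against the master, and the checksum queries replicate so each slave checksums its own copy):

```shell
# Run against the master at a quiet time; tables are checksummed
# in chunks to limit the impact on production traffic.
pt-table-checksum h=master_host,u=myuser --databases mydb

# A nonzero value in the DIFFS column of the report means that
# a slave's data does not match the master for that table.
```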

Of course you’re not! That’s when I run pt-query-digest. I choose a slave that I’m gonna grab the read queries from, and on the NEW SLAVE (this is important!) I run this command:

/usr/local/bin/pt-query-digest --no-report --processlist h=my_running_slave -u myuser -psecret --filter '$event->{fingerprint} =~ m/^select/' --print | mysql -f >/dev/null &

Well, actually I script it and have a variable that runs the query digest for a certain number of minutes or hours & then kills it. I usually have a ~/.my.cnf for this user so I don’t have to have the user/password on the command line. This is just for documentation. I highly suggest you take security precautions.
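
The wrapper is roughly like this minimal sketch: run a command for a fixed number of seconds, then kill it (the pt-query-digest pipeline above would be the command; the names here are made up):

```shell
#!/bin/sh
# run_for SECONDS CMD [ARGS...]: start CMD in the background,
# sleep for SECONDS, then kill it. Caps how long the warmup runs.
run_for() {
  secs="$1"; shift
  "$@" &                       # start the warmup pipeline
  pid=$!
  sleep "$secs"
  kill "$pid" 2>/dev/null
  wait "$pid" 2>/dev/null || true
}

# e.g. warm the cache for two hours:
# run_for 7200 sh -c 'pt-query-digest ... --print | mysql -f >/dev/null'
```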

After I run this cache warmup for a couple of hours (I'm usually sleeping, to be honest), I make sure that replication is alive and well and that no writes got into this slave. I see how long a typical query takes on this server. I check one of the gnarly queries to see if it too is speedy. If all is well I put the slave into the read pool and watch it like a hawk.
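
Those sanity checks can be as simple as this (host and credentials are placeholders; checking that read_only stayed ON is one way I'd confirm stray writes couldn't have landed):

```shell
# Is replication healthy and caught up on the new slave?
mysql -h new_slave -e "SHOW SLAVE STATUS\G" \
  | egrep 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'

# The slave should have been read_only the whole time:
mysql -h new_slave -e "SHOW GLOBAL VARIABLES LIKE 'read_only'"
```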

Once all the slaves are upgraded, including the passive master, it's time to make the passive master the active master. Just do the same as you would to a slave, but this time you have to grab the queries from the master. That makes it a bit stickier, as you need to watch that you're not overloading your master. I tend to do it in spurts: 2-5 minutes every 15 minutes or so. I have to be awake for this to keep an eye on it; after the re-architecture I won't have to (well, I just set up a Nagios alert that will wake me up if things go wrong and sleep with my laptop). Voila! I will do this a number of times, as re-architecture takes time.

The other cool thing to do with query-digest is to use it with tcpdump. I found that the docs for tcpdump were not all that helpful either. I love tcpdump for those servers that are so overloaded I can't log slow queries. Usually I create a dump file that I'll offload to another server to look at. Here's how I get my dump file:

/usr/sbin/tcpdump -i eth0 port 3306 -s 65535 -xnq -tttt > tcpdump.out
/usr/sbin/tcpdump -i eth0 port 3306 -s 0 -xnq -tttt > tcpdump.out

You hit <ctrl>-c to kill this and it writes to the tcpdump.out file. I've used both of the above, and I'm not sure whether one got more of the slow queries into the tcpdump outfile than the other. tcpdump will truncate super-long queries, but it does give you a feeling for the query. I usually talk to an engineer who can give me the full query so I can see if there's any sort of ORDER BY or GROUP BY.
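
Once the dump file is offloaded, pt-query-digest can parse it directly with --type tcpdump (this is the usage the toolkit docs describe), so the overloaded server doesn't pay the parsing cost:

```shell
# On the analysis machine, not the overloaded database server:
pt-query-digest --type tcpdump tcpdump.out > digest_report.txt
```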

Oracle MySQL Connect 2012

Me and Giuseppe Maxia (aka Datacharmer)


This is the third year I've gone to a MySQL event held on the weekend prior to Oracle World. The first year was just a Sunday; I can't remember if last year was just a Sunday too. This year it was the whole weekend, jam-packed with speakers and labs. There were lots of talks to choose from, and it oftentimes made for difficult choices.

I attended both early morning keynotes. I had a killer cold and took my Zicam, cold calms, Excedrin, and Cold-Eeze with me. I also went to bed early each night. The first keynote was the usual rah-rah-rah stuff from Oracle. I worried I'd fall asleep as my coffee wasn't kicking in, but then Sarah Novotny led a panel of community folks that woke me up. Luckily I tweeted my thoughts, and from my tweets it seems that: the performance schema with table stats is kick-ass; the dump/load of the InnoDB buffer pool is too expensive in most cases; replication rocks with a Global Transaction ID (notice the word "TRANSACTION": it only works on InnoDB tables, so if you have MyISAM you're still outta luck); EXPLAIN has different output formats, and the JSON one is the coolest (do I see an app for this in the near future?); group commit also rocks (folks have been asking for this for years!); there are new security features (which are turned ON by default, and there are a couple of gotchas there); and Facebook loves the new data compression. This was really the summary of what's to come in 5.6. There were sessions and labs to go into all these cool things.

Lunch was provided (albeit a small lunch, even for me!) but no time was allotted, so you had to grab and eat in the 30 minutes between sessions.

The day was very long and the BOFs got out by 8pm which gave us 60 min to eat and talk shop with folks. I got home by 9:30 pm and I was beat. My cold was threatening to get worse but my head was bursting at the seams with all the information I got.

The next day the keynote had Jeremy Cole from Twitter, Daniel Austin from PayPal, and Ash Kanagat and Shivinder Singh from Verizon. I've heard Jeremy Cole speak through the years, and while laid back he's a compelling and interesting speaker. He didn't disappoint (later in the day I went to his session to hear more). I'd not heard Daniel Austin speak before. He has a big voice, and while it sounds more on the marketing side he is speaking tech. Maybe it's the words he chooses? I dunno. But I did go hear his session later too. The Verizon talk was hard to hear and understand, and what I did catch just sounded too lightweight and marketing-y. I didn't go to any Verizon talks later. I do have Verizon, and I'm happy to hear they use MySQL.

Again lunch was provided (again no time) but my salad seemed a wee bit bigger and better. I wasn’t really there to eat just make it through the day.

All in all I learned a lot at the sessions I attended. In one time slot I walked out of one talk into another that I also walked out of. With my cold I just had a lot of trouble sitting through talks where the speaker wasn't all that good, even if the material was. I took one lab and then regretted not taking more. They had Windows laptops for each person, and while I'm a long-time Mac user there were folks that helped me out with the keyboard and VMs! (Thanks Jeremy Cole and Darren Cassar!)

Overall my take on this event is:

IT’S GETTING BETTER and YOU SHOULD GO NEXT YEAR!

MySQL Connect still feels like the stepchild of Oracle World (luckily the product does NOT! They've added engineers to the product and are working hard on pushing out quality releases, so the product itself, including the community one, feels better and more robust than ever before!).

There were more high quality sessions and labs than I could attend. (OMG! the labs totally rocked!)

5.6 is an exciting release with a lot of features and fixes that folks have been asking for, or in the case of Facebook, Percona and Twitter, patching themselves.

For the most part MySQL Connect was super geeky.

First Post!

old chair in kitchen

I'm finally getting around to blogging in the techsphere. It's about time, I know, but that's because my job usually keeps me too busy. Then there's the side volunteer gig as a co-host of sfmysql.org (the SF MySQL meetup!). That takes a chunk of time (lately more than I'd like, but that's another post/rant).

I keep up on my blogs via Google Reader. Generally Planet MySQL keeps me up to date. Lately there's been a lot of talk about what happens when the master database fails (whether it's really failing or scripts think it's failing). The biggest surprise is that maintenance is causing problems when promoting the passive master (really just a slave no apps read from) due to a cold InnoDB cache. I mean, REALLY, FOLKS!! There's a tool for that!

I think Percona needs to blog more about how to use their tools. At a gig where I was working with a crackerjack SysAdmin who was always on my case to push the envelope, I told him about a script using Maatkit's mk-query-digest that I'd had for a bit, and it works well. (Ryan Lowe gave me some hints at the talk he gave at sfmysql.org, and I slightly modified it.) In the past this is how I warmed up the passive master. We didn't even have a passive master at the time, and I was making a case for it. So he gave me one. I piped all my reads over to the passive master. I also modified all writes to be reads and piped those over. We did a table checksum. All was good with the data. Then we failed over to the passive master.

Whoa! no dip in response time. Granted we piped over this traffic for 4 hours (or was it 6?). But it worked. I wondered why we couldn’t just keep that script running all the time. We hadn’t removed the analytics stuff yet so there was too much traffic but we crontab’d it to happen a couple of times in the day & more so during slow hours. If I know I’m doing maintenance to a Master, I always, always warm up the cache. And yes, there’s a tool for that! (thank you maatkit authors & maintainers & thank you Percona!)
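
The crontab we used looked roughly like this (the script name and times are made up for illustration; the script wraps the mk-query-digest pipeline and kills it after the given number of minutes):

```shell
# Warm the passive master's cache a few times during the day,
# and for longer during the slow overnight hours.
0 3 * * *  /usr/local/bin/warm_cache.sh 120
0 13 * * * /usr/local/bin/warm_cache.sh 15
0 19 * * * /usr/local/bin/warm_cache.sh 15
```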

The doc for pt-query-digest is here: http://www.percona.com/doc/percona-toolkit/2.1/pt-query-digest.html. In the near future I'll add parts of the script!

P.S. I'm currently looking for a MySQL DBA position in the CITY of San Francisco. I'm a startup freak because I like to do a bit more than just admin the MySQL servers. I'm interested in admin'ing Redis, MongoDB (and other NoSQL technology) and the scalability of data. I love learning new technology that works so well with MySQL!