
MySQL Percona 5.5 upgrade to 5.6

photo by © erin o'neill 2014, Las Vegas Int'l Airport

I like to prepare for the worst, especially with server upgrades. I read through the gotchas: changed defaults for server settings, old passwords being a problem, and running a 5.6 slave off a 5.5 master. But Percona made the upgrade sound super easy. Too easy.

This was a simple setup used for stats, and not even that large a dataset. I already wasn't using old passwords (hey, the install was a fresh 5.5 and I didn't allow old passwords there). I'd tuned the server a bit, specifically using innodb_file_per_table, but I've been using that since I could.

I backed up the my.cnf file, the schema, the mysql system database, and the data of my tables. I was good to go. I also dropped my databases for good measure. AND I put the SHOW SLAVE STATUS output into a file for later.
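Roughly, the prep looked something like this (paths and file names here are stand-ins, not my actual ones):

cp /etc/mysql/my.cnf ~/upgrade-backup/my.cnf.55
mysqldump --no-data --routines --events --all-databases > ~/upgrade-backup/schema.sql
mysqldump --no-create-info --all-databases > ~/upgrade-backup/data.sql
mysql -e 'STOP SLAVE; SHOW SLAVE STATUS\G' > ~/upgrade-backup/slave-status.txt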

Since I was already using Percona 5.5 I didn't need to add the repo. This was on Ubuntu, not my fav OS. In fact I find Ubuntu a pain in the rear and prefer CentOS. But this was not my choice. I just ran the following:

apt-get install -y percona-server-server-5.6 percona-server-client-5.6
apt-get install -y xtrabackup
apt-get install -y percona-toolkit

Turned out that while xtrabackup was up to date, percona-toolkit was not. Then I started MySQL. The apt-get turned out to have run mysql_upgrade already, so that wasn't even needed! I checked my MySQL error log and there were no serious errors.
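If you're as paranoid as me, the post-startup sanity check looks something like this (the error log path is the usual Ubuntu location; yours may differ):

mysql -e 'SELECT VERSION();'
tail -n 50 /var/log/mysql/error.log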

Importing the schema was where my big worry lay, because this is a data warehouse, which means they use evil stored procedures and views. I get that data warehouse setups really need stored procedures, but views?? I've already ranted enough about that. The import went fine. That kind of worried me, since stored procedures and views usually don't upgrade well, and they'd already given me a major headache in the upgrade from 5.1 to 5.5.
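One gotcha worth flagging: mysqldump only puts stored procedures and functions in the dump if you used --routines (events need --events), so check the schema dump before you drop anything. The import itself was just the usual (file and database names are stand-ins):

mysql < ~/upgrade-backup/schema.sql
mysql -e "SHOW PROCEDURE STATUS WHERE Db = 'warehouse';"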

Then I imported the data. Again, no errors. Hey, I thought this upgrade was supposed to be slightly gnarly???!!

Then I ran the CHANGE MASTER, which my dump got wrong. No worries: I'd saved SHOW SLAVE STATUS to a file after stopping the slave, so I had the info. I put the proper file and file position into my CHANGE MASTER command. Then it scolded me for putting the user name and password in it. WTF??!! Oh, I'm supposed to use an auth plugin. I ran START SLAVE as normal, and SHOW SLAVE STATUS showed that the two replication threads were running and included a WHOLE lot more info. Good thing I'm no longer using Perl to grab that info, as I would have had to modify my scripts! Not a fun exercise at all.
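For reference, the general shape of it (host, user, password, and coordinates are stand-ins; the file and position come from the Relay_Master_Log_File and Exec_Master_Log_Pos fields of the saved SHOW SLAVE STATUS, since the slave was stopped and caught up):

mysql -e "CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl', MASTER_PASSWORD='xxxx', MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456789;"
mysql -e 'START SLAVE;'
mysql -e 'SHOW SLAVE STATUS\G'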

I'm testing the Percona MySQL 5.5 master to Percona MySQL 5.6 slave setup now. Next week I'll upgrade the master. That'll make the devs happy, as they'll have more diagnostics for their stored procedures. I hope it's also faster and less CPU intensive. But it is a data warehouse, and one cannot escape full table scans…

Oh, and MySQL is gonna scold me every time I run a CHANGE MASTER as I don’t have an auth plugin. I’ll continue to ignore it, but it’s nice to know I can add auth.
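If I ever stop ignoring it, my reading of the 5.6 warning is that the fix is to leave the user name and password out of CHANGE MASTER and pass them at start time instead, so they never land in the master info repository. Something like:

mysql -e "START SLAVE USER='repl' PASSWORD='xxxx';"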


First Post!

photo: old chair in kitchen

I'm finally getting around to blogging in the techsphere. It's about time, I know, but that's because my job usually keeps me too busy. Then there's the side volunteer gig as a co-host of sfmysql.org (the SF MySQL meetup!). That takes a chunk of time (lately more than I'd like, but that's another post/rant).

I keep up on my blogs via Google Reader. Generally Planet MySQL keeps me up to date. Lately there's been a lot of talk about what happens when the master database fails (whether it's really failing or scripts just think it's failing). The biggest surprise is that maintenance is causing problems when promoting the passive master (really just a slave no apps read from) due to a cold InnoDB cache. I mean, REALLY FOLKS!! There's a tool for that!

I think Percona needs to blog more about how to use their tools. At a gig where I was working with this crackerjack SysAdmin who was always on my case to push the envelope, I told him about a script built on Maatkit's mk-query-digest that I'd been using for a while, and it works well. (Ryan Lowe gave me some hints at the talk he gave at sfmysql.org, and I slightly modified it.) This is how I'd warmed up the passive master in the past. We didn't even have a passive master at the time, and I was making a case for it. So he gave me one. I piped all my reads over to the passive master. I also modified all writes to be reads and piped those over. We did a table checksum. All was good with the data. Then we failed over to the passive master.

Whoa! No dip in response time. Granted, we piped over this traffic for 4 hours (or was it 6?). But it worked. I wondered why we couldn't just keep that script running all the time. We hadn't removed the analytics stuff yet, so there was too much traffic, but we crontab'd it to run a couple of times a day, and more often during slow hours. If I know I'm doing maintenance on a master, I always, always warm up the cache. And yes, there's a tool for that! (Thank you, Maatkit authors & maintainers, and thank you, Percona!)

The doc for pt-query-digest is here: http://www.percona.com/doc/percona-toolkit/2.1/pt-query-digest.html. In the near future I'll add parts of the script!
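Until then, here's the general shape of the warm-up, sketched from memory rather than pasted from the actual script (host name and user are stand-ins, and if I remember right --execute lived in the 2.1 series of pt-query-digest before being dropped later): capture traffic on the active master with tcpdump, keep only the SELECTs, and replay them against the passive master.

tcpdump -s 65535 -x -nn -q -tttt -i any -c 100000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump --filter '$event->{fingerprint} =~ m/^select/' --execute h=passive-master.example.com,u=warmup,p=xxxx mysql.tcp.txt

The writes-rewritten-as-reads part was a custom bit of the script; that's one of the pieces I'll post.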

P.S. I'm currently looking for a MySQL DBA position in the CITY of San Francisco. I'm a startup freak because I like to do a bit more than just admin the MySQL servers. I'm interested in admin'ing Redis, MongoDB (and other NoSQL technology) and the scalability of data. I love learning new technology that works so well with MySQL!