I want to copy a live production database into my local development database. Is there a way to do this without locking the production database?
I’m currently using:
mysqldump -u root --password=xxx -h xxx my_db1 | mysql -u root --password=xxx -h localhost my_db1
But it’s locking each table as it runs.
Does the --lock-tables=false option work?
According to the man page, if you are dumping InnoDB tables you can use the --single-transaction option instead:
--lock-tables, -l Lock all tables before dumping them. The tables are locked with READ LOCAL to allow concurrent inserts in the case of MyISAM tables. For transactional tables such as InnoDB and BDB, --single-transaction is a much better option, because it does not need to lock the tables at all.
For an InnoDB database:
mysqldump --single-transaction=TRUE -u username -p DB
This is ages too late, but good for anyone searching this topic. If you're not using InnoDB, and you're not worried about locking while you dump, simply use the option --lock-tables=false.
The answer varies depending on what storage engine you're using. The ideal scenario is if you're using InnoDB. In that case you can use the --single-transaction flag, which will give you a coherent snapshot of the database at the time the dump begins.
--skip-add-locks helped for me
To dump large tables, you should combine the --single-transaction option with --quick.
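Put together, a non-blocking copy pipeline might look like the sketch below. The host names, credentials, and database name are placeholders, and the command is only printed here rather than executed:

```shell
# --single-transaction: consistent InnoDB snapshot, no table locks
# --quick: stream rows one at a time instead of buffering whole tables
opts="--single-transaction --quick"
cmd="mysqldump -u root -p -h prod.example.com $opts my_db1 | mysql -u root -p -h localhost my_db1"
echo "$cmd"
```

Note that --single-transaction only gives a consistent, lock-free snapshot for transactional engines such as InnoDB; MyISAM tables are still copied as-is.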
This is about as late compared to the guy who said he was late as he was to the original answer, but in my case (MySQL via WAMP on Windows 7), I had to use --skip-lock-tables.
For InnoDB tables, use the --single-transaction flag:
it dumps the consistent state of the database at the time when BEGIN was issued without blocking any applications
Honestly, I would set up replication for this, because if you don't lock tables you will get inconsistent data out of the dump.
If the dump takes a long time, tables that were dumped early may have changed by the time the later tables are dumped.
So either lock the tables or use replication.
mysqldump -uuid -ppwd --skip-opt --single-transaction --max_allowed_packet=1G -q db | mysql -u root --password=xxx -h localhost db
When using MySQL Workbench, under Data Export, click Advanced Options and uncheck the “lock-tables” option.
Some options, such as --opt (which is enabled by default), automatically enable --lock-tables. If you want to override this, use --skip-lock-tables at the end of the option list.
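To illustrate (a sketch only; the database name is a placeholder, and the command is printed rather than run): mysqldump applies options left to right, so the skip flag must come after any group option that re-enables locking:

```shell
# --opt is on by default and implies --lock-tables; putting
# --skip-lock-tables after it overrides the implied locking.
cmd="mysqldump --opt --skip-lock-tables my_db1"
echo "$cmd"
```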
If you use the Percona XtraDB Cluster:
I found that adding --skip-add-locks to the mysqldump command allows the Percona XtraDB Cluster to run the dump file without any issue from LOCK TABLES commands in the dump file.
Another late answer:
If you are trying to make a hot copy of a server database (in a Linux environment) and the database engine of all tables is MyISAM, you should use mysqlhotcopy.
According to the documentation:
It uses FLUSH TABLES, LOCK TABLES, and cp or scp to make a database backup. It is a fast way to make a backup of the database or single tables, but it can be run only on the same machine where the database directories are located. mysqlhotcopy works only for backing up MyISAM and ARCHIVE tables.
The LOCK TABLES time depends on how long the server takes to copy the MySQL files (it does not make a dump).
As none of these approaches worked for me, I simply did a:
mysqldump [...] | grep -v "LOCK TABLE" | mysql [...]
It will exclude both LOCK TABLE <x> and UNLOCK TABLES commands.
Note: Hopefully your data doesn’t contain that string in it!
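As a quick sanity check of this filter (a minimal sketch with made-up dump contents):

```shell
# A few lines of the kind mysqldump emits around each table's rows
cat > /tmp/dump_sample.sql <<'EOF'
LOCK TABLES `users` WRITE;
INSERT INTO `users` VALUES (1,'alice');
UNLOCK TABLES;
EOF

# "LOCK TABLE" matches both the LOCK TABLES and UNLOCK TABLES lines,
# so only the data statements survive the filter
grep -v "LOCK TABLE" /tmp/dump_sample.sql
# → INSERT INTO `users` VALUES (1,'alice');
```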