I’ve arrived at the point where I realise that I must start versioning my database schemata and the changes to them. I have read the existing posts on SO about the topic, but I’m not sure how to proceed.
I’m basically a one-man company, and not long ago I didn’t even use version control for my code. I’m on a Windows environment, using Aptana (IDE) and SVN (with TortoiseSVN). I work on PHP/MySQL projects.
What’s an efficient and sufficient (no overkill) way to version my database schemata?
I do have a freelancer or two on some projects, but I don’t expect a lot of branching and merging going on. So basically I would like to keep my schemata in step with my code revisions.
[edit] Interim solution: for the moment I’ve decided I will simply make a schema dump, plus one dump with the necessary initial data, whenever I commit a tag (stable version). That seems to be just enough for me at the current stage. [/edit]
[edit2] Plus I’m now also using a third file called increments.sql, where I record all changes with dates etc., to make the change history easy to trace in one file. From time to time I integrate the changes into the other two files and empty increments.sql. [/edit2]
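The increments.sql convention above can be sketched as a tiny helper. The function and its comment format are hypothetical (only the file’s role comes from the edit); it just appends each change under a dated comment so the history stays traceable in one place:

```python
# Hypothetical helper for the increments.sql convention: append each
# schema change under a dated comment. Folding the file back into the
# base schema/data dumps is still done by hand, as described above.
from datetime import date

def record_change(statement, path="increments.sql"):
    """Append one schema change to increments.sql with a dated comment."""
    with open(path, "a") as f:
        f.write(f"-- {date.today().isoformat()}\n")
        f.write(statement.rstrip().rstrip(";") + ";\n\n")

record_change("ALTER TABLE users ADD COLUMN last_login DATETIME NULL")
```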
Simple way for a small company: dump your database to SQL and add it to your repository. Then, every time you change something, append the change to the dump file.
You can then use diff to see changes between versions, not to mention have comments explaining your changes. This will also make you virtually immune to MySQL upgrades.
The one downside I’ve seen to this is that you have to remember to manually add the SQL to your dumpfile. You can train yourself to always remember, but be careful if you work with others. Missing an update could be a pain later on.
This could be mitigated by creating some elaborate script to do it for you when committing to Subversion, but it’s a bit much for a one-man show.
Edit: In the year that’s gone by since this answer, I’ve had to implement a versioning scheme for MySQL for a small team. Manually adding each change was seen as cumbersome, much as was mentioned in the comments, so we went with dumping the database and adding that file to version control.
What we found was that test data was ending up in the dump and was making it quite difficult to figure out what had changed. This could be solved by dumping the schema only, but this was impossible for our projects since our applications depended on certain data in the database to function. Eventually we returned to manually adding changes to the database dump.
Not only was this the simplest solution, but it also solved certain issues that some versions of MySQL have with exporting/importing. Normally we would have to dump the development database, remove any test data, log entries, etc, remove/change certain names where applicable and only then be able to create the production database. By manually adding changes we could control exactly what would end up in production, a little at a time, so that in the end everything was ready and moving to the production environment was as painless as possible.
How about versioning a file generated by doing this:
mysqldump --no-data database > database.sql
Where I work, we have an install script for each new version of the app, which contains the SQL we need to run for the upgrade. This works well enough for six devs with some branching for maintenance releases. We’re considering moving to AutoPatch (http://autopatch.sourceforge.net/), which works out what patches to apply to any database you are upgrading. It looks like there may be some small complication in handling branching with AutoPatch, but it doesn’t sound like that will be an issue for you.
I’d guess a batch file like this should do the job (didn’t try it, though) …
mysqldump --no-data -ufoo -pbar dbname > path/to/app/schema.sql
svn commit path/to/app/schema.sql
Just run the batch file after changing the schema, or let cron/the task scheduler do it. Note that SVN only commits a file whose content has actually changed, not one whose timestamp changed; however, mysqldump writes a “Dump completed on …” timestamp into its output by default, so pass --skip-dump-date if you want an unchanged schema to produce a byte-identical dump.
The main idea is to have a folder with the following structure in your project base path:
/__DB
    /changesets
        /1123
    /data
    /tables
Now, how the whole thing works: you have three folders.

tables
Holds the table create queries. I recommend the naming “table_name.sql”.

data
Holds the table insert data queries. I recommend the same naming, “table_name.sql”.
Note: not all tables need a data file; you would only add the ones that need this initial data on project install.

changesets
This is the main folder you will work with. It holds the change sets made to the initial structure; it actually contains folders of changesets.
For example, I added a folder 1123, which will contain the modifications made in revision 1123 (the number comes from your source control) and may contain one or more SQL files.
I like to add them grouped by table, with the naming xx_tablename.sql, where xx is a number that tells the order they need to be run in, since sometimes you need the modifications run in a certain order.
When you modify a table, you also add those modifications to the table and data files, since those are the files that will be used to do a fresh install.
That’s the main idea. For more details you could check this blog post.
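A minimal sketch of applying one changeset folder from this layout, assuming the xx_ prefix is zero-padded so plain filename sorting gives the execution order (the helper and the demo folder contents are made up; a real script would feed each file to mysql instead of just listing them):

```python
# Hypothetical helper: return one changeset's .sql files in xx_ prefix
# order. A real runner would pipe each file to mysql in this order.
import os
import tempfile

def ordered_changeset(path):
    """List the .sql files of one changeset folder, sorted by xx_ prefix."""
    return sorted(f for f in os.listdir(path) if f.endswith(".sql"))

# demo: a throwaway __DB/changesets/1123 folder with two files
root = tempfile.mkdtemp()
cs = os.path.join(root, "__DB", "changesets", "1123")
os.makedirs(cs)
for name in ("02_orders.sql", "01_users.sql"):
    open(os.path.join(cs, name), "w").close()

print(ordered_changeset(cs))  # -> ['01_users.sql', '02_orders.sql']
```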
Take a look at SchemaSync. It will generate the patch and revert scripts (.sql files) needed to migrate and version your database schema over time. It’s a command line utility for MySQL that is language and framework independent.
Some months ago I searched for a tool for versioning MySQL schemata. I found many useful tools, like Doctrine migrations, Rails migrations, and some tools written in Java and Python. But none of them satisfied my requirements:
- No dependencies other than PHP and MySQL
- No schema configuration files, like schema.yml in Doctrine
- Able to read the current schema from a connection and create a new migration script that reproduces an identical schema in other installations of the application.
So I started to write my own migration tool, and today I have a beta version. Please try it if you have an interest in this topic, and send me feature requests and bug reports.
Source code: bitbucket.org/idler/mmp/src
Overview in English: bitbucket.org/idler/mmp/wiki/Home
Overview in Russian: antonoff.info/development/mysql-migration-with-php-project
Our solution is MySQL Workbench. We regularly reverse-engineer the existing database into a model with the appropriate version number. It is then easy to perform diffs between versions as needed. Plus, we get nice EER diagrams, etc.
At our company we did it this way:
We put all tables / db objects in their own file, like
tbl_Foo.sql. The files contain several “parts” that are delimited with
-- part: create
where create is just a descriptive identifier for the given part. The file looks like:
-- part: create
IF not exists ...
CREATE TABLE tbl_Foo ...

-- part: addtimestamp
IF not exists ...
BEGIN
  ALTER TABLE ...
END
Then we have an XML file that references every single part we want executed when we update the database to a new schema.
It looks pretty much like this:
<playlist>
  <classes>
    <class name="table" desc="Table creation" />
    <class name="schema" desc="Table optimization" />
  </classes>
  <dbschema>
    <steps db="a_database">
      <step file="tbl_Foo.sql" part="create" class="table" />
      <step file="tbl_Bar.sql" part="create" class="table" />
    </steps>
    <steps db="a_database">
      <step file="tbl_Foo.sql" part="addtimestamp" class="schema" />
    </steps>
  </dbschema>
</playlist>
The <classes/> part is for the GUI, and <steps/> is there to partition the changes. The <step/> elements are executed sequentially. We have some other entities, like sqlclr, for doing different things such as deploying binary files, but that’s pretty much it.
Of course we have a component that takes that playlist file and a resource/filesystem object, cross-references the two, extracts the wanted parts, and then runs them as admin on the database.
Since the “parts” in .sql’s are written so they can be executed on any version of DB, we can run all parts on every previous/older version of DB and modify it to be current.
Of course there are some cases where SQL Server parses column names “early” and we later have to turn some parts into exec_sqls, but it doesn’t happen often.
I think this question deserves a modern answer, so I’m going to give it myself. When I wrote the question in 2009, I don’t think Phinx existed yet, and Laravel most definitely didn’t.
Today, the answer to this question is very clear: write incremental DB migration scripts, each with an up and a down method, and run all these scripts (or a delta of them) when installing or updating your app. And obviously add the migration scripts to your VCS.
As mentioned in the beginning, there are excellent tools in the PHP world today which help you manage your migrations easily. Laravel has DB migrations built in, including the respective shell commands. Everyone else gets a similarly powerful, framework-agnostic solution with Phinx.
Both Artisan migrations (Laravel) and Phinx work the same way. For every change in the DB, create a new migration, use plain SQL or the built-in query builder to write the up and down methods, and run artisan migrate resp. phinx migrate in the console.
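The up/down idea behind both tools can be shown framework-agnostically. The class names and the integer version scheme below are invented for illustration, not Laravel’s or Phinx’s actual API; the point is that each migration knows how to apply and revert itself, and the runner executes only the delta that hasn’t been applied yet:

```python
# Sketch of the up/down migration pattern (hypothetical names, not the
# Phinx or Artisan API): the runner returns the SQL of pending
# migrations only, oldest first, given the set of applied versions.
class CreateUsers:
    version = 1
    def up(self):   return "CREATE TABLE users (id INT PRIMARY KEY)"
    def down(self): return "DROP TABLE users"

class AddLastLogin:
    version = 2
    def up(self):   return "ALTER TABLE users ADD last_login DATETIME NULL"
    def down(self): return "ALTER TABLE users DROP COLUMN last_login"

def pending_sql(migrations, applied_versions):
    """SQL of the not-yet-applied migrations, oldest first."""
    todo = sorted((m for m in migrations if m.version not in applied_versions),
                  key=lambda m: m.version)
    return [m().up() for m in todo]

print(pending_sql([AddLastLogin, CreateUsers], applied_versions={1}))
# -> ['ALTER TABLE users ADD last_login DATETIME NULL']
```

In the real tools, the set of applied versions lives in a bookkeeping table in the database itself, so the runner knows each environment’s state.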
I do something similar to Manos, except I have a “master” file (master.sql) that I update with some regularity (once every two months). Then, for each change, I build a version-named .sql file with the changes. This way I can start off with master.sql and apply each version-named .sql file until I reach the current version, and I can update clients using the version-named .sql files to keep things simple.
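A sketch of selecting which version-named files a given client still needs. The naming convention (files like 1.3.sql) and the helper are assumptions; the answer doesn’t specify its exact scheme:

```python
# Hypothetical helper for the master.sql + version-named files scheme:
# given a client's current version, return the .sql files still to apply,
# oldest first, comparing versions numerically (so 1.10 > 1.3).
def files_to_apply(current_version, version_files):
    """Version-named .sql files newer than current_version, oldest first."""
    def ver(name):  # "1.10.sql" -> (1, 10)
        return tuple(int(p) for p in name[:-len(".sql")].split("."))
    cur = tuple(int(p) for p in current_version.split("."))
    return sorted((f for f in version_files if ver(f) > cur), key=ver)

print(files_to_apply("1.2", ["1.1.sql", "1.2.sql", "1.3.sql", "1.10.sql"]))
# -> ['1.3.sql', '1.10.sql']
```

Note the numeric comparison: a plain string sort would wrongly place 1.10.sql before 1.3.sql.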