mysql – Creating rows vs. columns performance

I built an analytics engine that pulls 50-100 rows of raw data from my database (let's call it raw_table), runs a bunch of statistical measurements on it in PHP, and then comes up with exactly 140 data points that I then need to store in another table (let's call it results_table). All of these data points are very small numbers ("40", "2.23", "-1024" are good examples of the types of data).

I know the maximum number of columns for MySQL is quite high (4000+), but there appears to be a lot of grey area as far as when performance really starts to degrade.

So a few questions here on best performance practices:

1) The 140 data points could, if fewer columns is better, be broken up into 20 rows of 7 data points, all with the same experiment_id. HOWEVER, I would always need to pull ALL 20 rows (with 7 columns each, plus id, etc.), so I wouldn't think this would perform better than pulling 1 row of 140 columns. So the question: is it better to store 20 rows of 7-9 columns (that would all need to be pulled at once) or 1 row of 140-143 columns? (Rough sketches of both layouts follow after this question list.)

2) Given my data examples ("40", "2.23", "-1024" are good examples of what will be stored), I'm thinking SMALLINT for the column type. Any feedback there, performance-wise or otherwise? (The sketch after this list ends with a short note on types.)

3) Any other feedback on MySQL performance issues or tips is welcome.
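
For concreteness, here is a minimal sketch of the two layouts from question 1, followed by a note on the types from question 2. All table and column names (results_wide, results_grouped, group_no, v1 ... v140) are made up for illustration, and DECIMAL(6,2) is an assumed precision, not something given above.

    -- Option A (question 1): one row per experiment, 140 value columns
    -- (only three shown here; the rest are elided).
    CREATE TABLE results_wide (
        experiment_id INT UNSIGNED NOT NULL PRIMARY KEY,
        v1   DECIMAL(6,2),
        v2   DECIMAL(6,2),
        -- ... v3 through v139 elided ...
        v140 DECIMAL(6,2)
    ) ENGINE=InnoDB;

    -- Option B (question 1): 20 rows of 7 values per experiment.
    CREATE TABLE results_grouped (
        experiment_id INT UNSIGNED NOT NULL,
        group_no      TINYINT UNSIGNED NOT NULL,  -- 1..20
        v1 DECIMAL(6,2),
        v2 DECIMAL(6,2),
        v3 DECIMAL(6,2),
        v4 DECIMAL(6,2),
        v5 DECIMAL(6,2),
        v6 DECIMAL(6,2),
        v7 DECIMAL(6,2),
        PRIMARY KEY (experiment_id, group_no)
    ) ENGINE=InnoDB;

    -- Either way, one experiment is a single primary-key lookup or short range scan:
    SELECT * FROM results_wide    WHERE experiment_id = 42;
    SELECT * FROM results_grouped WHERE experiment_id = 42;  -- returns 20 rows

    -- On question 2: SMALLINT is a 2-byte integer type (-32768..32767), so it holds
    -- 40 and -1024 but would store 2.23 as just 2. If fractional values must be kept,
    -- DECIMAL(6,2) (3 bytes, exact) or FLOAT (4 bytes, approximate) are the usual choices.

Either way the whole experiment comes back via the primary key, so the practical difference is row count and per-row overhead rather than anything dramatic.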

Thanks in advance for your input.

Answers:

Method 1

I think the advantage of storing the data as more rows (i.e. normalized) comes down to design and maintenance considerations in the face of change.

It also depends on whether the 140 columns always mean the same thing or whether their meaning differs per experiment; in other words, on properly modeling the data according to normalization rules, i.e. how each piece of data relates to a candidate key.

As far as performance goes, if all the columns are used it makes very little difference. A pivot/unpivot operation can sometimes be expensive over a large amount of data, but it makes little difference for a single-key access pattern. A pivot in the database can also make your frontend code a lot simpler and your backend code more flexible in the face of change.

If you have a lot of NULLs, a normalized design might let you eliminate those rows entirely, which would save space. I don't know whether MySQL has support for a sparse-table concept, which could come into play there.
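
A minimal sketch of the fully normalized shape this answer alludes to, plus a pivot back to a wide result set. The names results_narrow, metric_id and metric_value are illustrative, not anything from the question, and the value type is assumed.

    -- One row per (experiment, metric). Metrics that are NULL are simply never
    -- inserted, which is where the space saving for sparse data comes from.
    CREATE TABLE results_narrow (
        experiment_id INT UNSIGNED NOT NULL,
        metric_id     SMALLINT UNSIGNED NOT NULL,  -- 1..140
        metric_value  DECIMAL(6,2) NOT NULL,
        PRIMARY KEY (experiment_id, metric_id)
    ) ENGINE=InnoDB;

    -- Pivoting back to one wide row in SQL (only three of the 140 columns shown):
    SELECT experiment_id,
           MAX(CASE WHEN metric_id = 1 THEN metric_value END) AS metric_1,
           MAX(CASE WHEN metric_id = 2 THEN metric_value END) AS metric_2,
           MAX(CASE WHEN metric_id = 3 THEN metric_value END) AS metric_3
    FROM results_narrow
    WHERE experiment_id = 42
    GROUP BY experiment_id;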

Method 2

You have 140 data items to return every time, each of type double.

It makes no practical difference whether this is 1×140, 20×7, 7×20, 4×35, etc. One shape could be infinitesimally quicker, of course, but have you considered the extra complexity in the PHP code needed to deal with a different shape?

Do you have a verified bottleneck, or is this just random premature optimisation?

Method 3

You’ve made no suggestion that you intend to store big data in the database, but for the purposes of this argument, I will assume that you have 1 billion (10^9) data points.

If you store them in 140 columns, you'll have a mere 7 million rows; however, if you want to retrieve a single data point from lots of experiments, the query will have to fetch a large number of very wide rows.

These very wide rows will take up more space in your innodb_buffer_pool, hence you won’t be able to cache so many; this will potentially slow you down when you access them again.

If you store one datapoint per row, in a table with very few columns (experiment_id, datapoint_id, value), then you'll need to pull out the same number of smaller rows.
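
To make that access pattern concrete, here is a hedged comparison using the illustrative results_wide and results_narrow tables sketched earlier (this answer's datapoint_id corresponds to metric_id there).

    -- Wide layout: even though only one of the 140 values is returned, InnoDB
    -- reads and caches the full wide row for every experiment in the range.
    SELECT experiment_id, v2
    FROM results_wide
    WHERE experiment_id BETWEEN 1 AND 1000000;

    -- Narrow layout: the touched rows are tiny, and a covering index on the
    -- datapoint id lets the same query read only the relevant index entries.
    CREATE INDEX idx_by_metric ON results_narrow (metric_id, experiment_id, metric_value);

    SELECT experiment_id, metric_value
    FROM results_narrow
    WHERE metric_id = 2
      AND experiment_id BETWEEN 1 AND 1000000;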

However, the size of the rows makes little difference to the number of IO operations required. If we assume that your 1 billion data points don't fit in RAM (which is NOT a safe assumption nowadays), the resulting performance will probably be approximately the same.

Using few columns is probably the better database design, but using lots of columns will take less disc space and will perhaps be faster to populate.
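
On the "faster to populate" point, a schematic illustration with the same hypothetical tables: the wide shape is one row per experiment, the narrow shape is 140 rows per experiment (most of them elided here).

    -- Wide: one experiment = one row, one INSERT.
    INSERT INTO results_wide (experiment_id, v1, v2, v140)
    VALUES (42, 40, 2.23, -1024);

    -- Narrow: one experiment = 140 rows; batching them into a single multi-row
    -- INSERT keeps round-trip and transaction overhead down.
    INSERT INTO results_narrow (experiment_id, metric_id, metric_value)
    VALUES (42, 1, 40), (42, 2, 2.23), (42, 140, -1024);  -- remaining 137 rows elided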


All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under cc by-sa 2.5, cc by-sa 3.0 and cc by-sa 4.0.
