I need to store potentially hundreds of millions of URLs in a database. Every URL should be unique, hence I will use ON DUPLICATE KEY UPDATE and count the duplicate URLs.
However, I am not able to create an index on the URL field because my varchar field is 400 characters. MySQL complains: “#1071 – Specified key was too long; max key length is 767 bytes”. (A VARCHAR(400) in utf8 takes 1,200 bytes.)
What is the best way to do this if you need to process a minimum of 500,000 URLs per day on a single server?
We are already considering MongoDB for the same application, so we could simply query MongoDB to find the duplicate URL and update the row. However, I am not in favor of solving this problem with MongoDB; I’d like to use just MySQL at this stage, as I want to stay as lean as possible in the beginning and finish this section of the project much faster. (We haven’t played with MongoDB yet and don’t want to spend time on it at this stage.)
Is there any other way to do this using fewer resources and less time? I was thinking of storing an MD5 hash of the URL as well and making that field UNIQUE instead. I know there will be collisions, but it is OK to have 5-10-20 duplicates among the 100 million URLs, if that’s the only problem.
Do you have any suggestions? I also don’t want to spend 10 seconds inserting a single URL, since I will be processing 500k URLs per day.
What would you suggest?
Edit: As requested, here is the table definition. (I am not using MD5 at the moment; it is there for testing.)
mysql> DESC url;
+-------------+-----------------------+------+-----+-------------------+-----------------------------+
| Field       | Type                  | Null | Key | Default           | Extra                       |
+-------------+-----------------------+------+-----+-------------------+-----------------------------+
| url_id      | int(11) unsigned      | NO   | PRI | NULL              | auto_increment              |
| url_text    | varchar(400)          | NO   |     |                   |                             |
| md5         | varchar(32)           | NO   | UNI |                   |                             |
| insert_date | timestamp             | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| count       | mediumint(9) unsigned | NO   |     | 0                 |                             |
+-------------+-----------------------+------+-----+-------------------+-----------------------------+
5 rows in set (0.00 sec)
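For illustration, the insert I have in mind would look something like this (the URL value is just an example):

-- relies on the UNIQUE key on md5 to detect duplicates
INSERT INTO url (url_text, md5, count)
VALUES ('http://example.com/some/path',
        MD5('http://example.com/some/path'),
        1)
ON DUPLICATE KEY UPDATE count = count + 1;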
Answers:
Method 1
According to the DNS spec, the maximum length of a domain name is:
“The DNS itself places only one restriction on the particular labels that can be used to identify resource records. That one restriction relates to the length of the label and the full name. The length of any one label is limited to between 1 and 63 octets. A full domain name is limited to 255 octets (including the separators).”
255 characters × 3 bytes per character (utf8) = 765 < 767 (just barely 🙂)
However, notice that each label can only be 63 characters long, so I would suggest chopping the URL into its component parts.
Something like this would probably be adequate (a schema sketch follows the list):
- protocol flag [“http” -> 0 ] ( store “http” as 0, “https” as 1, etc. )
- subdomain [“foo” ] ( 255 – 63 = 192 characters : I could subtract 2 more because min tld is 2 characters )
- domain [“example”], ( 63 characters )
- tld [“com”] ( 4 characters to handle “info” tld )
- path [ “a/really/long/path” ] (as long as you want; store in a separate table)
- queryparameters [“with=lots&of=query&parameters=that&goes=on&forever&and=ever”] (store in a separate key/value table)
- portnumber / authentication stuff that is rarely used can be in a separate keyed table if actually needed.
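A minimal sketch of what that decomposition could look like as tables (table names, column sizes, and the choice of index are illustrative assumptions, not a prescription):

-- host parts of the URL: short columns that are cheap to index
CREATE TABLE url_host (
    url_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    protocol  TINYINT UNSIGNED NOT NULL DEFAULT 0,   -- 0 = http, 1 = https, ...
    subdomain VARCHAR(192) NOT NULL DEFAULT '',      -- 255 - 63 = 192
    domain    VARCHAR(63)  NOT NULL,                 -- one label is at most 63 octets
    tld       VARCHAR(4)   NOT NULL,                 -- 4 chars covers "info"
    count     MEDIUMINT UNSIGNED NOT NULL DEFAULT 0,
    KEY idx_host (domain, tld, subdomain)            -- index only the parts you search on
) ENGINE=InnoDB DEFAULT CHARSET=ascii;

-- long, mostly-empty parts kept out of the main table
CREATE TABLE url_path (
    url_id INT UNSIGNED NOT NULL,
    path   TEXT NOT NULL,
    KEY idx_url (url_id)
) ENGINE=InnoDB;

CREATE TABLE url_query_param (
    url_id      INT UNSIGNED NOT NULL,
    param_name  VARCHAR(255) NOT NULL,
    param_value TEXT,
    KEY idx_url (url_id)
) ENGINE=InnoDB;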
This gives you some nice advantages:
- The index is only on the parts of the url that you need to search on (smaller index! )
- queries can be limited to the various URL parts (find every URL in the facebook domain, for example)
- any URL that has too long a subdomain/domain is bogus
- easy to discard query parameters.
- easy to do case insensitive domain name/tld searching
- discard the syntax sugar ( “://” after protocol, “.” between subdomain/domain, domain/tld, “/” between tld and path, “?” before query, “&” “=” in the query)
- Avoids the major sparse-table problem. Most URLs will not have query parameters or long paths. If those fields are in separate tables, your main table does not take the size hit; more records fit into memory when querying, hence faster query performance.
- (more advantages here).
Method 2
To index a field up to 767 characters wide, its charset must be ascii or similar; it can’t be utf8, because utf8 uses 3 bytes per character, so the maximum width for an indexed utf8 field is 255 characters.
Of course, a 767-character ascii URL field exceeds your initial 400-character spec, and some URLs exceed the 767 limit anyway. Perhaps you can store and index the first 735 characters plus the MD5 hash. You can also keep a TEXT full_url field to preserve the original value.
Notice that the ascii charset is good enough for URLs.
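A rough sketch of that idea against the table in the question (the extra full_url column and the exact sizes are assumptions; each indexed column stays under the 767-byte limit):

-- keep the complete original value, since url_text is now truncated to 735 chars
ALTER TABLE url ADD COLUMN full_url TEXT;

ALTER TABLE url
    MODIFY url_text VARCHAR(735) CHARACTER SET ascii NOT NULL,  -- 767 - 32 = 735
    MODIFY md5      CHAR(32)     CHARACTER SET ascii NOT NULL,  -- MD5 hex digest of full_url
    ADD UNIQUE KEY uq_url_md5 (url_text, md5);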
Method 3
A well-formed URL can only contain characters within the ASCII range; other characters need to be encoded. So, assuming the URLs you intend to store are well formed (and if they are not, you may want to fix them prior to inserting them into the database), you could set your url_text column character set to ASCII (latin1 in MySQL). With ASCII, one character is one byte, and you will be able to index the whole 400 characters as you want.
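A sketch of that change against the question’s table (the collation is an assumption; a binary or case-sensitive collation avoids treating URLs that differ only in case as duplicates):

ALTER TABLE url
    MODIFY url_text VARCHAR(400) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
    ADD UNIQUE KEY uq_url_text (url_text);  -- 400 bytes, comfortably under the 767-byte limit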
Method 4
The odds of a spurious collision with MD5 (128 bits) can be phrased this way:
“If you have 9 Trillion different items, there is only one chance in 9 Trillion that two of them have the same MD5.”
To phrase it another way, you are more likely to be hit by a meteor while winning the mega-lottery.
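For scale (a back-of-the-envelope birthday estimate, assuming MD5 behaves like a uniform random 128-bit function): with n = 100 million URLs there are about n(n−1)/2 ≈ 5 × 10^15 pairs, each colliding with probability 1/2^128 ≈ 1/(3.4 × 10^38), so the expected number of collisions is roughly 1.5 × 10^−23, i.e. effectively zero at this volume.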
Method 5
You can change url_text from VARCHAR(400) to TEXT, and then add a full-text index against it, allowing you to search for the existence of the URL before you insert it.
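A minimal sketch of that approach (note that FULLTEXT indexes tokenize on punctuation, so an existence check done this way is approximate rather than an exact-match guarantee, and InnoDB only supports FULLTEXT from MySQL 5.6; earlier versions need MyISAM):

ALTER TABLE url
    MODIFY url_text TEXT NOT NULL,
    ADD FULLTEXT INDEX ft_url_text (url_text);

-- look for an existing URL before inserting (phrase search in boolean mode)
SELECT url_id
FROM url
WHERE MATCH (url_text) AGAINST ('"http://example.com/some/path"' IN BOOLEAN MODE)
LIMIT 1;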
All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.