Importing Wikimedia dumps


We are trying to gather some particular statistics about Portuguese Wikipedia usage.
I volunteered to import the ptwiki-20090926-stub-meta-history dump, which is an XML file, and we'll be running very heavy queries against it (it's my task to optimize them, somehow).

What I'd like to mention is that the importing mechanism seems to have been tremendously simplified. I remember testing a couple of tools in the past, without much success (or robustness). This time, however, I gave mwdumper a try, and it seems to do the job. Note that there have been schema changes since the last mwdumper release, so you should have a look at WMF Bug #18328: mwdumper java.lang.IllegalArgumentException: Invalid contributor, which includes a proposed fix that seems to work well. A special note on mwdumper's memory efficiency: RAM is barely touched!

The xml.gz file is ~550MB, and was converted to a ~499MB sql.gz:

1,992,543 pages (3,458.297/sec), 15,713,915 revs (27,273.384/sec)

I copied the schema from a running (up-to-date!) MediaWiki installation to save some time. The tables default to InnoDB, so let's simplify I/O a bit (I'm on my laptop); this will also speed up loading times a lot:

MySQL:
  mysql> ALTER TABLE `text` ENGINE=BLACKHOLE;
  Query OK, 0 rows affected (0.01 sec)
  Records: 0  Duplicates: 0  Warnings: 0

  mysql> ALTER TABLE page DROP INDEX page_random, DROP INDEX page_len;
  Query OK, 0 rows affected (0.01 sec)
  Records: 0  Duplicates: 0  Warnings: 0

  mysql> ALTER TABLE revision DROP INDEX rev_timestamp, DROP INDEX page_timestamp, DROP INDEX user_timestamp, DROP INDEX usertext_timestamp;
  Query OK, 0 rows affected (0.01 sec)
  Records: 0  Duplicates: 0  Warnings: 0

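By the way, if those indexes are ever needed again for other queries, they can be re-added once the load finishes. Here's a sketch, mirroring the stock MediaWiki definitions (double-check against your version's maintenance/tables.sql); the text table would likewise have to go back to InnoDB and be re-imported if you ever wanted real content in it:

MySQL:
  mysql> ALTER TABLE page ADD INDEX page_random (page_random), ADD INDEX page_len (page_len);
  mysql> ALTER TABLE revision
      ->   ADD INDEX rev_timestamp (rev_timestamp),
      ->   ADD INDEX page_timestamp (rev_page, rev_timestamp),
      ->   ADD INDEX user_timestamp (rev_user, rev_timestamp),
      ->   ADD INDEX usertext_timestamp (rev_user_text, rev_timestamp);
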
The important thing here is to avoid heavy I/O you don't need at all. The text table holds the page/revision content, which I'm not interested in at all. As for MySQL's configuration (and as a personal note, anyway), the following settings will give you great InnoDB speeds:

CODE:
  key_buffer = 512K
  sort_buffer_size = 16K
  read_buffer_size = 2M
  read_rnd_buffer_size = 1M
  myisam_sort_buffer_size = 512K
  query_cache_size = 0
  query_cache_type = 0
  bulk_insert_buffer_size = 2M

  innodb_file_per_table
  transaction-isolation = READ-COMMITTED
  innodb_buffer_pool_size = 2700M
  innodb_additional_mem_pool_size = 20M
  innodb_autoinc_lock_mode = 2
  innodb_flush_log_at_trx_commit = 0
  innodb_doublewrite = 0
  skip-innodb-checksum
  innodb_locks_unsafe_for_binlog=1
  innodb_log_file_size=128M
  innodb_log_buffer_size=8388608
  innodb_support_xa=0
  innodb_autoextend_increment=16

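Before kicking off the load, it's worth confirming that the relaxed-durability settings actually took effect after the restart. A quick, generic check:

MySQL:
  mysql> SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
  mysql> SHOW VARIABLES LIKE 'innodb_doublewrite';
  mysql> SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
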
Now I'd recommend uncompressing the dump, so it's easier to trace the whole process if it takes too long:

CODE:
  [myself@speedy ~]$ gunzip ptwiki-20090926-stub-meta-history.sql.gz
  [myself@speedy ~]$ cat ptwiki-20090926-stub-meta-history.sql | mysql wmfdumps

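While the import is running, you can also keep an eye on it from a second session. A small sketch (for InnoDB the Rows figure in SHOW TABLE STATUS is only an estimate):

MySQL:
  mysql> SHOW PROCESSLIST;
  mysql> SHOW TABLE STATUS LIKE 'revision'\G
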
After some minutes on a dual quad-core Xeon 2.0GHz, and with 2.4 GB of datafiles, we are ready to rock! I will probably also need the user table later, which Wikimedia doesn't distribute, so I'll rebuild it now:

MySQL:
  mysql> ALTER TABLE user modify COLUMN user_id INT(10) UNSIGNED NOT NULL;
  Query OK, 0 rows affected (0.12 sec)
  Records: 0  Duplicates: 0  Warnings: 0

  mysql> ALTER TABLE user DROP INDEX user_email_token, DROP INDEX user_name;
  Query OK, 0 rows affected (0.03 sec)
  Records: 0  Duplicates: 0  Warnings: 0

  mysql> INSERT INTO user(user_id,user_name) SELECT DISTINCT rev_user,rev_user_text FROM revision WHERE rev_user <> 0;
  Query OK, 119140 rows affected, 4 warnings (2 min 4.45 sec)
  Records: 119140  Duplicates: 0  Warnings: 0

  mysql> ALTER TABLE user DROP PRIMARY KEY;
  Query OK, 0 rows affected (0.13 sec)
  Records: 0  Duplicates: 0  Warnings: 0

  mysql> INSERT INTO user(user_id,user_name) VALUES(0,'anonymous');
  Query OK, 1 row affected, 4 warnings (0.00 sec)

It's preferable to join on INTs rather than on VARCHAR(255), which is why I rebuilt the user table. I actually removed the PRIMARY KEY here, but I set it back after the process. What happens is that some users have been renamed, so they show up with the same id but different user_names. The query to list them all is this:

MySQL:
  mysql> SELECT a.user_id,a.user_name FROM user a JOIN (SELECT user_id,count(1) as counter FROM user GROUP BY user_id HAVING counter > 1 ORDER BY counter desc) as b on a.user_id = b.user_id ORDER BY user_id DESC;
  ....
  14 rows in set (0.34 sec)

  mysql> UPDATE user a JOIN (SELECT user_id,GROUP_CONCAT(user_name) as user_name,count(1) as counter FROM user GROUP BY user_id HAVING counter > 1) as b SET a.user_name = b.user_name WHERE a.user_id = b.user_id;
  Query OK, 14 rows affected (2.49 sec)
  Rows matched: 14  Changed: 14  Warnings: 0

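After that UPDATE, each renamed user is left as a set of identical rows sharing the same user_id, so the extra copies have to go before the primary key can come back. A scripted version could look like this sketch (user_dedup is a hypothetical scratch table; check that the rows really are identical before dropping anything):

MySQL:
  mysql> CREATE TABLE user_dedup LIKE user;
  mysql> INSERT INTO user_dedup SELECT DISTINCT * FROM user;
  mysql> DROP TABLE user;
  mysql> RENAME TABLE user_dedup TO user;
  mysql> ALTER TABLE user ADD PRIMARY KEY (user_id);
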
In practice, the duplicates were removed manually (there were just 7) and the primary key was re-added. Now, let's start going deeper. I'm not concerned with optimization for now; what I wanted to run right away was the query I had asked for on the Toolserver more than a month ago:

MySQL:
  mysql> CREATE TABLE `teste` (
      ->   `rev_user` INT(10) UNSIGNED NOT NULL DEFAULT '0',
      ->   `page_namespace` INT(11) NOT NULL,
      ->   `rev_page` INT(10) UNSIGNED NOT NULL,
      ->   `edits` INT(1) UNSIGNED NOT NULL,
      ->   PRIMARY KEY (`rev_user`,`page_namespace`,`rev_page`)
      -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
  Query OK, 0 rows affected (0.04 sec)

  mysql> INSERT INTO teste SELECT r.rev_user, p.page_namespace, r.rev_page, count(1) AS edits FROM revision r JOIN page p ON r.rev_page = p.page_id GROUP BY r.rev_user,p.page_namespace,r.rev_page;
  Query OK, 7444039 rows affected (8 min 28.98 sec)
  Records: 7444039  Duplicates: 0  Warnings: 0

  mysql> CREATE TABLE edits_per_namespace SELECT STRAIGHT_JOIN u.user_id,u.user_name, page_namespace,count(1) as edits FROM teste JOIN user u on u.user_id = rev_user GROUP BY rev_user,page_namespace;
  Query OK, 187624 rows affected (3.65 sec)
  Records: 187624  Duplicates: 0  Warnings: 0

  mysql> SELECT * FROM edits_per_namespace ORDER BY edits desc limit 5;
  +---------+---------------+----------------+--------+
  | user_id | user_name     | page_namespace | edits  |
  +---------+---------------+----------------+--------+
  |   76240 | Rei-bot       |              0 | 365800 |
  |       0 | anonymous     |              0 | 253238 |
  |   76240 | Rei-bot       |              3 | 219085 |
  |    1740 | LeonardoRob0t |              0 | 145418 |
  |  170627 | SieBot        |              0 | 121647 |
  +---------+---------------+----------------+--------+
  5 rows in set (0.09 sec)

Well, that's funny: Rei-artur's bot beats all the anonymous edits summed together on the main namespace :) I still need to set up a way of discarding the bots, since they usually don't count for stats. I'll probably just set a flag on the user table myself, but this is enough to get us started.

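For the record, here is a hedged sketch of how that flag could work: user_is_bot is a hypothetical column, the two names are just examples taken from the table above, and the real bot list would come from somewhere like the API query Rei-artur mentions in the comments below:

MySQL:
  mysql> ALTER TABLE user ADD COLUMN user_is_bot TINYINT(1) UNSIGNED NOT NULL DEFAULT 0;
  mysql> UPDATE user SET user_is_bot = 1 WHERE user_name IN ('Rei-bot','SieBot');
  mysql> SELECT e.user_name, e.page_namespace, e.edits
      ->   FROM edits_per_namespace e
      ->   JOIN user u ON u.user_id = e.user_id
      ->  WHERE u.user_is_bot = 0
      ->  ORDER BY e.edits DESC LIMIT 5;
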



3 Responses to “Importing Wikimedia dumps”

  1. hehehe, I had no idea about this…

    I don't know if it helps, but you can filter the bots using the API,

    list of bots: http://pt.wikipedia.org/w/api.php?action=query&list=allusers&aufrom=A&augroup=bot&aulimit=500&format=xml

    or individually:
    http://pt.wikipedia.org/w/api.php?action=query&list=users&ususers=Rei-bot&usprop=editcount|groups&format=xml

    a big hug

  2. Hi Rei-artur! Indeed, I hadn't thought of exploring the API. The last time I looked at it, it was under intense development. The first link already helps me a lot, but I'm still going to squeeze the API and see what comes out ehhehehe. A hug, and thanks!

  3. [...] this time in Portuguese, I decided to give [some] continuity to what I started a few days ago with the import of the Wikipedia dumps. Thanks to Rei-artur's tip it was easy to extract the list of bots, to exclude them from the statistics. [...]
