Still on the size of VARCHAR fields…

January 25th, 2010 ntavares Posted in mysql, performance, pt_PT No Comments »

In September I wrote about the consequences of leaving the size and character set of VARCHAR fields to chance. I would now like to complement those observations with what follows below.

Mixing charsets — having a table in one charset and a particular column of that table in a different charset, for example — should be carefully analysed. There is a fair chance of complications if this matter is treated lightly.

Leaving aside what one should or shouldn't do, what I'd like is for the achievable benefit to be understood in the particular case of UUIDs. We've already seen that UTF-8 is a waste for this particular case; what I'd like to demonstrate is whether it actually pays off to convert specific columns when not all of them can be converted.

Let's see: two MyISAM tables will be created in UTF-8, one with a Latin1 column and the other with the natural column, in UTF-8. I'll use a table I had lying around with 2M UUIDs just to speed up the process. That table, table_uuid, is originally Latin1, although the focus isn't on loading the data, but on joining the first two tables:

MySQL:
  1. mysql> CREATE TABLE charset_latin1 ( id CHAR(36) CHARSET latin1, PRIMARY KEY (id) ) CHARSET utf8;
  2. Query OK, 0 rows affected (0.01 sec)
  3.  
  4. mysql> CREATE TABLE charset_utf8 ( id CHAR(36) CHARSET utf8, PRIMARY KEY (id) ) CHARSET utf8;
  5. Query OK, 0 rows affected (0.01 sec)
  6.  
  7. mysql> INSERT INTO charset_utf8 SELECT id FROM table_uuid ;
  8. Query OK, 2097152 rows affected (2 min 29.77 sec)
  9. Records: 2097152  Duplicates: 0  WARNINGS: 0
  10.  
  11. mysql> INSERT INTO charset_latin1 SELECT id FROM table_uuid ;
  12. Query OK, 2097152 rows affected (50.15 sec)
  13. Records: 2097152  Duplicates: 0  WARNINGS: 0

Now, suppose that charset_latin1 is a table originally in UTF-8 whose column (the one that will hold UUIDs) we have already converted to Latin1. What we want to demonstrate are the gains/losses of joining tables with the same characteristics versus different ones:

MySQL:
  1. mysql> SELECT count(1) FROM charset_utf8 a, charset_latin1 b WHERE a.id = b.id;
  2. +----------+
  3. | count(1) |
  4. +----------+
  5. |  2097152 |
  6. +----------+
  7. 1 row in SET (1 min 8.82 sec)
  8.  
  9. mysql> SELECT count(1) FROM charset_utf8 a, charset_utf8 b  WHERE a.id = b.id;
  10. +----------+
  11. | count(1) |
  12. +----------+
  13. |  2097152 |
  14. +----------+
  15. 1 row in SET (58.00 sec)
  16.  
  17. mysql> SELECT count(1) FROM charset_latin1 a, charset_latin1 b  WHERE a.id = b.id;
  18. +----------+
  19. | count(1) |
  20. +----------+
  21. |  2097152 |
  22. +----------+
  23. 1 row in SET (24.43 sec)

So, as already stated, doing JOINs in Latin1 is very fast compared to the other options. But notice what happens when different charsets are joined (1st JOIN): there is a degradation of about 19% compared to the worse of the two homogeneous cases (UTF-8 joined with UTF-8).

In other words, given that these may be pure relation tables [by which I mean they hold no data other than what is used for joins], performance can be expected to degrade whenever they are joined with tables whose key is not in the same charset. From this we can deduce that there is, therefore, overhead from charset conversion during the JOIN.

Even though we are looking at an index scan (i.e., with all the data in cache), it also occurred to me to try InnoDB.

MySQL:
  1. mysql> ALTER TABLE charset_utf8 engine=INNODB;
  2. Query OK, 2097152 rows affected (48.18 sec)
  3. Records: 2097152  Duplicates: 0  WARNINGS: 0
  4.  
  5. mysql> ALTER TABLE charset_latin1 engine=INNODB;
  6. Query OK, 2097152 rows affected (39.43 sec)
  7. Records: 2097152  Duplicates: 0  WARNINGS: 0

It's worth noting that I ran a few SELECTs first to make sure the datafiles were loaded into the buffer pool.

MySQL:
  1. mysql> SELECT count(1) FROM charset_utf8 a, charset_latin1 b WHERE a.id = b.id;
  2. +----------+
  3. | count(1) |
  4. +----------+
  5. |  2097152 |
  6. +----------+
  7. 1 row in SET (21.65 sec)
  8.  
  9. mysql> SELECT count(1) FROM charset_utf8 a, charset_utf8 b  WHERE a.id = b.id;
  10. +----------+
  11. | count(1) |
  12. +----------+
  13. |  2097152 |
  14. +----------+
  15. 1 row in SET (12.61 sec)
  16.  
  17. mysql> SELECT count(1) FROM charset_latin1 a, charset_latin1 b  WHERE a.id = b.id;
  18. +----------+
  19. | count(1) |
  20. +----------+
  21. |  2097152 |
  22. +----------+
  23. 1 row in SET (8.25 sec)

The conclusion to draw from this is that we should not only think about the optimisation gained by choosing the charset, but also analyse the impact of not being able to carry the conversion task through completely. Having to join columns with different charsets effectively becomes more costly. What remains is to compare, case by case, the benefit obtained in poorly/non-optimised queries versus the performance penalty that may show up in joins on columns with different charsets.
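To weigh that case by case, it helps to start by listing the candidate columns. A minimal sketch, assuming the UUID columns are declared as CHAR/VARCHAR(36) in utf8 (adjust the filters to your schema):

MySQL:
  SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, CHARACTER_SET_NAME
  FROM information_schema.COLUMNS
  WHERE DATA_TYPE IN ('char','varchar')
    AND CHARACTER_MAXIMUM_LENGTH = 36
    AND CHARACTER_SET_NAME = 'utf8'
    AND TABLE_SCHEMA NOT IN ('mysql','information_schema');

Each hit is a potential conversion, to be weighed against the joins that column participates in.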

MySQL's gifts for 2009

December 30th, 2009 ntavares Posted in clustering, mysql, pt_PT 1 Comment »

Well, it's been a little while, but I took advantage of the holiday break to give some feedback on what's new in the MySQL world.

I'll start by pointing out the apparent discontinuation of the MySQL 5.0.x branch, at least as far as active maintenance goes. It was about time, too, since version 5.1 brought too many good things to be ignored.

Next should come 5.4.x; its novelties were very well received, especially by the Solaris and InnoDB crowds - and since InnoDB is now developed as a plugin, it should evolve on these fronts independently of the core. But the really big news...

... is the appearance of version 5.5 as milestone 2 on the way to the next GA branch which, besides merging the 5.4 novelties, offers us (finally!):

  • the possibility of extending partitioning beyond the partitioning functions, which were very few - even partitioning by date required a few tricks;
  • support for per-partition key caches (fantastic!);
  • the long-awaited SIGNAL and RESIGNAL statements - the hacks needed to work around this gap were terrible to maintain :) (see the small sketch after this list);
  • support for semi-synchronous replication in the form of a plugin;
  • and others which, even though I don't list them here, shouldn't be skipped over when you read up on it!!
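As a taste of what SIGNAL enables in 5.5, here is a minimal sketch (hypothetical procedure and message - before 5.5 this kind of error had to be faked, e.g. by deliberately running an invalid statement):

MySQL:
  DELIMITER $$
  CREATE PROCEDURE check_qty(IN qty INT)
  BEGIN
    -- raise a user-defined error condition with a readable message
    IF qty < 0 THEN
      SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'qty cannot be negative';
    END IF;
  END$$
  DELIMITER ;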

Of course, some people have already been experimenting with these technologies, and here is some more reading material:

I invite you to leave links here to your own articles and comments on the remaining features.

Finding, for each time interval, how many records are “occurring” during that interval

September 28th, 2009 ntavares Posted in en_US, mysql, performance No Comments »

This is a complex problem: you are mapping events (of some kind) with a start and end timestamp, but how do you know, for a specific interval [ti,tf] (timeslot), how many of those records have start<ti and end>tf? The problem is complex because you have no records defining the timeslot to serve either as a grouping field or as a comparison field. It's a problem I've seen people tend to approach procedurally, and that's the big hurdle in understanding SQL, which typically deals with set problems.

The main issue with this problem is that you need to count occurrences against a list you don't have. In my real scenario, there are some restrictions to keep in mind:

  • The data set is extremely large, so this operation is daily generated for a specific day.
  • Due to the above, the table is partitioned on a filtering field (stoptime below).

Immediately, some solutions popped into my head:

  • Use a summary table of time slots: when a record is inserted, increment all the respective time slots by one. This is cool, but I'd like to avoid insert delay. This solution also implies keeping a persistent row for every timeslot across the whole time span of the records, right? That could be from 2009-08 to 2009-09, but it could just as well run from 1989-09 to 2009-09, which represents ~10.5M rows, some of them possibly zero.
  • Another option could be to use cursors to iterate through the selection of records which cross a specific minute and perhaps fill a temporary table with the results. Cursors are slow, it's a procedural approach, and it means programming overhead.

But then again, these are both procedural solutions, and that's why they don't seem so effective - actually, the first is not quite like the second and is fairly widely used, but it brings some extra effort and schema changes.
The solution I'm proposing is a set-theory approach: IF we had a table of timeslots (minute slots), we could just join the two tables and apply whatever rules we want. But we don't have one. Perhaps, though, we can generate it. The idea came up after reading the brilliant examples in Roland Bouman's MySQL: Another Ranking trick and Shlomi Noach's SQL: Ranking without self join.

Let's build an example table:

MySQL:
  1. mysql> CREATE TABLE `phone_calls` (
  2.     ->   `starttime` DATETIME NOT NULL,
  3.     ->   `stoptime` DATETIME NOT NULL,
  4.     ->   `id` INT(11) NOT NULL,
  5.     ->   PRIMARY KEY (`id`),
  6.     ->   KEY `idx_stoptime` (`stoptime`)
  7.     -> ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
  8. Query OK, 0 rows affected (0.04 sec)

Now manually insert some interesting records:

MySQL:
  1. mysql> SELECT * FROM phone_calls;
  2. +---------------------+---------------------+----+
  3. | starttime           | stoptime            | id |
  4. +---------------------+---------------------+----+
  5. | 2009-08-03 09:23:42 | 2009-08-03 09:24:54 |  0 |
  6. | 2009-08-03 11:32:11 | 2009-08-03 11:34:55 |  2 |
  7. | 2009-08-03 10:23:12 | 2009-08-03 10:23:13 |  1 |
  8. | 2009-08-03 16:12:53 | 2009-08-03 16:20:21 |  3 |
  9. | 2009-08-03 11:29:09 | 2009-08-03 11:34:51 |  4 |
  10. +---------------------+---------------------+----+
  11. 5 rows in SET (0.00 sec)

As an example, you may verify that record id=2 crosses only time slot '2009-08-03 11:33:00' and no other, and record id=0 crosses none. These are perfectly legitimate call start and end timestamps.

Let's see a couple of premises:

  • A record that traverses a single minute can be described by this:

    MINUTESLOT(stoptime) - MINUTESLOT(starttime) >= 2

    You can think of MINUTESLOT(x) as the timeslot associated with field x in the record. It actually stands for CONCAT(LEFT(x,16),":00"), and the difference is computed with TIMESTAMPDIFF(MINUTE, ...) - see the quick check after this list;

  • A JOIN will give you a product of records for each match, which means if I could "know" a specific timeslot I could multiply it by the number of records that cross it and then GROUP BY with a COUNT(1). But I don't have the timeslots...
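As a quick sanity check of the first premise against the sample rows - this is nothing more than the expression above applied to each record:

MySQL:
  SELECT id,
         TIMESTAMPDIFF(MINUTE,
                       CONCAT(LEFT(starttime,16),":00"),
                       CONCAT(LEFT(stoptime,16),":00")) AS slot_diff
  FROM phone_calls;

Only ids 2, 3 and 4 come out with slot_diff >= 2, which matches the filtering shown next.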

As I've said, I'm generating this recordset for a specific day, and that's why these records all refer to 2009-08-03. Let's confirm I can select the recordspace I'm interested in:

MySQL:
  1. mysql> SELECT starttime,stoptime
  2.     -> FROM phone_calls
  3.     -> WHERE
  4.     ->    /* partition pruning */
  5.     ->    stoptime >= '2009-08-03 00:00:00'
  6.     ->    AND stoptime <= DATE_ADD('2009-08-03 23:59:59', INTERVAL 1 HOUR)
  7.     ->
  8.     ->    /* the real filtering:
  9.    /*>       FIRST: only consider call where start+stop boundaries are out of the
  10.    /*>       minute slot being analysed (seed.timeslot)
  11.    /*>    */
  12.     ->    AND TIMESTAMPDIFF(MINUTE, CONCAT(LEFT(starttime,16),":00"), CONCAT(LEFT(stoptime,16),":00")) >= 2
  13.     ->
  14.     ->    /* consequence of the broader interval that we had set to cope
  15.    /*>       with calls taking place beyond midnight
  16.    /*>    */
  17.     ->    AND starttime <= '2009-08-03 23:59:59';
  18. +---------------------+---------------------+
  19. | starttime           | stoptime            |
  20. +---------------------+---------------------+
  21. | 2009-08-03 11:32:11 | 2009-08-03 11:34:55 |
  22. | 2009-08-03 16:12:53 | 2009-08-03 16:20:21 |
  23. | 2009-08-03 11:29:09 | 2009-08-03 11:34:51 |
  24. +---------------------+---------------------+
  25. 3 rows in SET (0.00 sec)

These are the 'calls' that cross any minute in the selected day. I deliberately showed specific restrictions so you understand the many aspects involved:

  • Partition pruning is fundamental, unless you want to scan the whole 500GB table. This means you are forced to limit the scope of analysed records. Now, if you have a call starting at 23:58:00 and stopping at 00:01:02 the next day, pruning would leave that record out. So I've given 1 HOUR of margin to catch those records;
  • We had to set stoptime later than the end of the day being analysed. That also means we might catch unwanted records starting between 00:00:00 and that 1 HOUR margin, so we'll need to filter them out;
  • Finally, there's also our rule about "crossing a minute".

In the end, maybe some of these restrictions (WHERE clauses) can be removed as redundant.

Now let's see if we can generate a table of timeslots:

MySQL:
  1. mysql> SELECT CONVERT(@a,DATETIME) AS timeslot
  2.     ->    FROM phone_calls_helper, (
  3.     ->       SELECT @a := DATE_SUB('2009-08-03 00:00:00', INTERVAL 1 MINUTE)) as init
  4.     ->    WHERE (@a := DATE_ADD(@a, INTERVAL 1 MINUTE)) <= '2009-08-03 23:59:59'
  5.     ->    LIMIT 1440;
  6. +---------------------+
  7. | timeslot            |
  8. +---------------------+
  9. | 2009-08-03 00:00:00 |
  10. | 2009-08-03 00:01:00 |
  11. ....
  12. | 2009-08-03 23:58:00 |
  13. | 2009-08-03 23:59:00 |
  14. +---------------------+
  15. 1440 rows in SET (0.01 sec)

This is the exciting part: we generate the timeslots using user variables, and this might only be possible to do in MySQL. Notice that I need to resort to a table, since I can't produce results out of thin air: its records are actually used as a product in my join to generate what I want. You can use any table, as long as it has at least 1440 records (the number of minutes in a day). But you should also keep in mind the kind of access being made to that table, because it can translate into unnecessary I/O if you're not careful:

MySQL:
  1. mysql> EXPLAIN SELECT CONVERT(@a,DATETIME) AS timeslot
  2.     ->    FROM phone_calls_helper, (
  3.     ->       SELECT @a := DATE_SUB('2009-08-03 00:00:00', INTERVAL 1 MINUTE)) as init
  4.     ->    WHERE (@a := DATE_ADD(@a, INTERVAL 1 MINUTE)) <= '2009-08-03 23:59:59'
  5.     ->    LIMIT 1440;
  6. +----+-------------+--------------------+--------+---------------+---------+---------+------+------+--------------------------+
  7. | id | select_type | table              | type   | possible_keys | key     | key_len | ref  | rows | Extra                    |
  8. +----+-------------+--------------------+--------+---------------+---------+---------+------+------+--------------------------+
  9. |  1 | PRIMARY     | <derived2>         | system | NULL          | NULL    | NULL    | NULL |    1 |                          |
  10. |  1 | PRIMARY     | phone_calls_helper | index  | NULL          | PRIMARY | 4       | NULL | 1440 | Using where; Using index |
  11. |  2 | DERIVED     | NULL               | NULL   | NULL          | NULL    | NULL    | NULL | NULL | No tables used           |
  12. +----+-------------+--------------------+--------+---------------+---------+---------+------+------+--------------------------+
  13. 3 rows in SET (0.00 sec)

In my case I can see that scanning the 1440 records is done on the PRIMARY key, which is great. You should choose a table whose key cache has a high probability of being in RAM, so the index scan doesn't go I/O bound either. Scanning 1440 PRIMARY KEY entries shouldn't be much of an I/O effort even on cold datasets, but if you can avoid it anyway, all the better. (A sketch of what such a helper table might look like follows below.)
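The creation of phone_calls_helper isn't shown in this post; any table with at least 1440 rows will do. A minimal sketch consistent with the EXPLAIN above (an INT primary key, hence key_len=4) could be:

MySQL:
  CREATE TABLE phone_calls_helper (
    id INT NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (id)
  ) ENGINE=MyISAM;

  -- seed a few rows and keep doubling until there are at least 1440
  INSERT INTO phone_calls_helper VALUES (NULL),(NULL),(NULL),(NULL),(NULL),(NULL);
  INSERT INTO phone_calls_helper SELECT NULL FROM phone_calls_helper;
  -- ... repeat the previous INSERT until COUNT(*) >= 1440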

At this moment you are probably starting to see the solution: whichever table the optimizer picks first or last, it's always a win-win case, since the 1440 rows are RAM-based: you can either think of 1440 timeslots being generated and then multiplied by the number of records that cross each timeslot (Rc), or think of the 3 records that cross any timeslot and generate the timeslots that fall between the start/stop boundaries of each record (Tr). The mathematical result is

timeslots_per_records vs. records_per_timeslots

Well, they might not represent the same effort. Remember that the timeslots are in memory, and seeking back and forth over them is less costly than seeking back and forth over possibly I/O-bound data. However, due to our "imaginary" way of generating the timeslots (which aren't made persistent in any way by that subquery), we'd need to materialize them so that we could seek on them. But that would also give us the chance to optimize some other issues, like the CONVERT(), the DATE_ADD()s, etc, and to scan only the timeslots that are crossed by a specific call, which is optimal. On the other hand, if you're going to GROUP BY the timeslot, you could use an index on the timeslot table and fetch each record that crosses each timeslot. Tough decision, eh? I have both solutions; I won't benchmark them here, but since the "timeslots per record" approach made me materialize the table, I'll leave it here as an example:

MySQL:
  1. mysql> CREATE TEMPORARY TABLE `phone_calls_helper2` (
  2.     ->   `tslot` DATETIME NOT NULL,
  3.     ->   PRIMARY KEY (`tslot`)
  4.     -> ) ENGINE=MEMORY DEFAULT CHARSET=latin1 ;
  5. Query OK, 0 rows affected (0.00 sec)
  6.  
  7. mysql> INSERT INTO phone_calls_helper2 SELECT CONVERT(@a,DATETIME) AS timeslot
  8.     ->    FROM phone_calls_helper, (
  9.     ->       SELECT @a := DATE_SUB('2009-08-03 00:00:00', INTERVAL 1 MINUTE)) as init
  10.     ->    WHERE (@a := DATE_ADD(@a, INTERVAL 1 MINUTE)) <= '2009-08-03 23:59:59'
  11.     ->    LIMIT 1440;
  12. Query OK, 1440 rows affected (0.01 sec)
  13. Records: 1440  Duplicates: 0  WARNINGS: 0

So now, the "timeslots per records" query should look like this:

MySQL:
  1. mysql> EXPLAIN SELECT tslot
  2.     -> FROM phone_calls FORCE INDEX(idx_stoptime)
  3.     -> JOIN phone_calls_helper2 FORCE INDEX (PRIMARY) ON
  4.     ->       tslot > CONCAT(LEFT(starttime,16),":00")
  5.     ->       AND tslot < CONCAT(LEFT(stoptime,16),":00")
  6.     ->
  7.     -> WHERE
  8.     ->       /* partition pruning */
  9.     ->       stoptime >= '2009-08-03 00:00:00'
  10.     ->       AND stoptime <= DATE_ADD('2009-08-03 23:59:59', INTERVAL 1 HOUR)
  11.     ->
  12.     ->       /* the real filtering:
  13.    /*>       FIRST: only consider call where start+stop boundaries are out of the
  14.    /*>              minute slot being analysed (seed.timeslot)
  15.    /*>       */
  16.     ->       AND TIMESTAMPDIFF(MINUTE, CONCAT(LEFT(starttime,16),":00"), CONCAT(LEFT(stoptime,16),":00")) >= 2
  17.     ->
  18.     ->       /* consequence of the broader interval that we had set to cope
  19.    /*>          with calls taking place beyond midnight
  20.    /*>       */
  21.     ->       AND starttime <= '2009-08-03 23:59:59'
  22.     -> GROUP BY tslot;
  23. +----+-------------+---------------------+-------+---------------+--------------+---------+------+------+------------------------------------------------+
  24. | id | select_type | table               | type  | possible_keys | key          | key_len | ref  | rows | Extra                                          |
  25. +----+-------------+---------------------+-------+---------------+--------------+---------+------+------+------------------------------------------------+
  26. |  1 | SIMPLE      | phone_calls         | range | idx_stoptime  | idx_stoptime | 8       | NULL |    4 | Using where; Using temporary; Using filesort   |
  27. |  1 | SIMPLE      | phone_calls_helper2 | ALL   | PRIMARY       | NULL         | NULL    | NULL | 1440 | Range checked for each record (index map: 0x1) |
  28. +----+-------------+---------------------+-------+---------------+--------------+---------+------+------+------------------------------------------------+
  29. 2 rows in SET (0.00 sec)

It's interesting to see «Range checked for each record (index map: 0x1)» for which the manual states:

MySQL found no good index to use, but found that some of indexes might be used after column values from preceding tables are known.

I can't explain why it wouldn't use the PRIMARY KEY - I tried using CONVERT() on the CONCAT()s to ensure the same data type, but no luck - but I'm probably safe, as it will most likely end up using it. And this is the final result:

MySQL:
  1. mysql> SELECT tslot,count(1) FROM phone_calls FORCE INDEX(idx_stoptime) JOIN phone_calls_helper2 FORCE INDEX (PRIMARY) ON       tslot > CONVERT(CONCAT(LEFT(starttime,16),":00"),DATETIME)       AND tslot < CONVERT(CONCAT(LEFT(stoptime,16),":00"),DATETIME)  WHERE              stoptime >= '2009-08-03 00:00:00'       AND stoptime <= DATE_ADD('2009-08-03 23:59:59', INTERVAL 1 HOUR)               AND TIMESTAMPDIFF(MINUTE, CONCAT(LEFT(starttime,16),":00"), CONCAT(LEFT(stoptime,16),":00")) >= 2               AND starttime <= '2009-08-03 23:59:59' GROUP BY tslot;
  2. +---------------------+----------+
  3. | tslot               | count(1) |
  4. +---------------------+----------+
  5. | 2009-08-03 11:30:00 |        1 |
  6. | 2009-08-03 11:31:00 |        1 |
  7. | 2009-08-03 11:32:00 |        1 |
  8. | 2009-08-03 11:33:00 |        2 |
  9. | 2009-08-03 16:13:00 |        1 |
  10. | 2009-08-03 16:14:00 |        1 |
  11. | 2009-08-03 16:15:00 |        1 |
  12. | 2009-08-03 16:16:00 |        1 |
  13. | 2009-08-03 16:17:00 |        1 |
  14. | 2009-08-03 16:18:00 |        1 |
  15. | 2009-08-03 16:19:00 |        1 |
  16. +---------------------+----------+
  17. 11 rows in SET (0.00 sec)

Notice that I already did the GROUP BY, and that it forces a temporary table and a filesort, so it's better to be careful about how many records this will generate. In my (real) case the grouping is done on more phone_calls fields, so I can probably reuse the index later. As for post-execution, since the helper table is TEMPORARY, everything is dismissed automatically without further programming overhead.

I hope you can see that this solution opens up a wide range of "set"-based solutions to problems you might otherwise try to solve in a procedural way - which is the reason your solution might turn out to be painful.

Importing wikimedia dumps

September 28th, 2009 ntavares Posted in en_US, mysql, wikipedia 3 Comments »

We are trying to gather some particular statistics about Portuguese Wikipedia usage.
I volunteered to import the ptwiki-20090926-stub-meta-history dump, which is an XML file, and we'll be running some very heavy queries on it (it's my task to optimize them, somehow).

What I'd like to mention is that the importing mechanism seems to have been tremendously simplified. I remember testing a couple of tools in the past, without much success (or robustness). However, I gave mwdumper a try this time, and it seems to do the job. Note, however, that there have been schema changes since the last mwdumper release, so you should have a look at WMF Bug #18328: mwdumper java.lang.IllegalArgumentException: Invalid contributor, which carries a proposed fix that seems to work well. A special note on its memory efficiency: RAM is barely touched!

The xml.gz file is ~550MB, and was converted to a ~499MB sql.gz:

1,992,543 pages (3,458.297/sec), 15,713,915 revs (27,273.384/sec)

I've copied the schema from a running (updated!) mediawiki to spare some time. The tables default to InnoDB, so let's simplify I/O a bit (I'm on my laptop). This will also speed up loading times a lot:

MySQL:
  1. mysql> ALTER TABLE `TEXT` ENGINE=Blackhole;
  2. Query OK, 0 rows affected (0.01 sec)
  3. Records: 0  Duplicates: 0  WARNINGS: 0
  4.  
  5. mysql> ALTER TABLE page DROP INDEX page_random, DROP INDEX page_len;
  6. Query OK, 0 rows affected (0.01 sec)
  7. Records: 0  Duplicates: 0  WARNINGS: 0
  8.  
  9. mysql> ALTER TABLE revision DROP INDEX rev_timestamp, DROP INDEX page_timestamp, DROP INDEX user_timestamp, DROP INDEX usertext_timestamp;
  10. Query OK, 0 rows affected (0.01 sec)
  11. Records: 0  Duplicates: 0  WARNINGS: 0

The important thing here is to avoid the larger I/O if you don't need it at all. The text table holds page/revision content, which I'm not interested in at all. As for MySQL's configuration (and as a personal note, anyway), the following settings will give you great InnoDB speeds:

CODE:
  1. key_buffer = 512K
  2. sort_buffer_size = 16K
  3. read_buffer_size = 2M
  4. read_rnd_buffer_size = 1M
  5. myisam_sort_buffer_size = 512K
  6. query_cache_size = 0
  7. query_cache_type = 0
  8. bulk_insert_buffer_size = 2M
  9.  
  10. innodb_file_per_table
  11. transaction-isolation = READ-COMMITTED
  12. innodb_buffer_pool_size = 2700M
  13. innodb_additional_mem_pool_size = 20M
  14. innodb_autoinc_lock_mode = 2
  15. innodb_flush_log_at_trx_commit = 0
  16. innodb_doublewrite = 0
  17. skip-innodb-checksum
  18. innodb_locks_unsafe_for_binlog=1
  19. innodb_log_file_size=128M
  20. innodb_log_buffer_size=8388608
  21. innodb_support_xa=0
  22. innodb_autoextend_increment=16

Now I'd recommend uncompressing the dump, so it's easier to trace the whole process if it's taking too long:

CODE:
  1. [myself@speedy ~]$ gunzip ptwiki-20090926-stub-meta-history.sql.gz
  2. [myself@speedy ~]$ cat ptwiki-20090926-stub-meta-history.sql | mysql wmfdumps

After some minutes on a dual quad-core Xeon 2.0GHz, and with 2.4 GB of datafiles, we are ready to rock! I will probably also need the user table later, which Wikimedia doesn't distribute, so I'll rebuild it now:

MySQL:
  1. mysql> ALTER TABLE user modify COLUMN user_id INT(10) UNSIGNED NOT NULL;
  2. Query OK, 0 rows affected (0.12 sec)
  3. Records: 0  Duplicates: 0  WARNINGS: 0
  4.  
  5. mysql> ALTER TABLE user DROP INDEX user_email_token, DROP INDEX user_name;
  6. Query OK, 0 rows affected (0.03 sec)
  7. Records: 0  Duplicates: 0  WARNINGS: 0
  8.  
  9. mysql> INSERT INTO user(user_id,user_name) SELECT DISTINCT rev_user,rev_user_text FROM revision WHERE rev_user <> 0;
  10. Query OK, 119140 rows affected, 4 WARNINGS (2 min 4.45 sec)
  11. Records: 119140  Duplicates: 0  WARNINGS: 0
  12.  
  13. mysql> ALTER TABLE user DROP PRIMARY KEY;
  14. Query OK, 0 rows affected (0.13 sec)
  15. Records: 0  Duplicates: 0  WARNINGS: 0
  16.  
  17. mysql> INSERT INTO user(user_id,user_name) VALUES(0,'anonymous');
  18. Query OK, 1 row affected, 4 WARNINGS (0.00 sec)

It's preferable to join on INTs rather than on a VARCHAR(255); that's why I reconstructed the user table. I actually removed the PRIMARY KEY, but I set it back after the process. What happens is that there are users that have been renamed, and thus they appear with the same id and different user_name. The query to list them all is this:

MySQL:
  1. mysql> SELECT a.user_id,a.user_name FROM user a JOIN (SELECT user_id,count(1) as counter FROM user GROUP BY user_id HAVING counter > 1 ORDER BY counter desc) as b on a.user_id = b.user_id ORDER BY user_id DESC;
  2. ....
  3. 14 rows in SET (0.34 sec)
  4.  
  5. mysql> UPDATE user a JOIN (SELECT user_id,GROUP_CONCAT(user_name) as user_name,count(1) as counter FROM user GROUP BY user_id HAVING counter > 1) as b SET a.user_name = b.user_name WHERE a.user_id = b.user_id;
  6. Query OK, 14 rows affected (2.49 sec)
  7. Rows matched: 14  Changed: 14  WARNINGS: 0
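Once the duplicated user_ids are dealt with (they are removed manually, as noted just below), the PRIMARY KEY dropped earlier can presumably go back on; a minimal sketch of that step:

MySQL:
  ALTER TABLE user ADD PRIMARY KEY (user_id);
  -- the secondary indexes dropped before the load could be restored the same way, if needed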

The duplicates were removed manually (there are just 7). Now, let's start to dig deeper. I'm not concerned about optimizing for now. What I wanted to run right away was the query I had asked for on the Toolserver more than a month ago:

MySQL:
  1. mysql>  CREATE TABLE `teste` (
  2.     ->   `rev_user` INT(10) UNSIGNED NOT NULL DEFAULT '0',
  3.     ->   `page_namespace` INT(11) NOT NULL,
  4.     ->   `rev_page` INT(10) UNSIGNED NOT NULL,
  5.     ->   `edits` INT(1) UNSIGNED NOT NULL,
  6.     ->   PRIMARY KEY (`rev_user`,`page_namespace`,`rev_page`)
  7.     -> ) ENGINE=INNODB DEFAULT CHARSET=latin1 ;
  8. Query OK, 0 rows affected (0.04 sec)
  9.  
  10. mysql> INSERT INTO teste SELECT r.rev_user, p.page_namespace, r.rev_page, count(1) AS edits FROM revision r JOIN page p ON r.rev_page = p.page_id GROUP BY r.rev_user,p.page_namespace,r.rev_page;
  11. Query OK, 7444039 rows affected (8 min 28.98 sec)
  12. Records: 7444039  Duplicates: 0  WARNINGS: 0
  13.  
  14. mysql> CREATE TABLE edits_per_namespace SELECT STRAIGHT_JOIN u.user_id,u.user_name, page_namespace,count(1) as edits FROM teste JOIN user u on u.user_id = rev_user GROUP BY rev_user,page_namespace;
  15. Query OK, 187624 rows affected (3.65 sec)
  16. Records: 187624  Duplicates: 0  WARNINGS: 0
  17.  
  18. mysql> SELECT * FROM edits_per_namespace ORDER BY edits desc limit 5;
  19. +---------+---------------+----------------+--------+
  20. | user_id | user_name     | page_namespace | edits  |
  21. +---------+---------------+----------------+--------+
  22. |   76240 | Rei-bot       |              0 | 365800 |
  23. |       0 | anonymous     |              0 | 253238 |
  24. |   76240 | Rei-bot       |              3 | 219085 |
  25. |    1740 | LeonardoRob0t |              0 | 145418 |
  26. |  170627 | SieBot        |              0 | 121647 |
  27. +---------+---------------+----------------+--------+
  28. 5 rows in SET (0.09 sec)

Well, that's funny: Rei-artur's bot beats all anonymous edits summed together on the main namespace :) I still need to set up a way of discarding the bots; they usually don't count for stats. I'll probably set a flag on the user table myself (a sketch below), but this is enough to get us started..
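A possible sketch of that flag - the column name and the name-based heuristic below are mine, and the heuristic is naive (the proper way would be to import the actual bot group membership):

MySQL:
  ALTER TABLE user ADD COLUMN user_is_bot TINYINT(1) NOT NULL DEFAULT 0;

  -- naive heuristic: most bot account names end in "bot"
  UPDATE user SET user_is_bot = 1 WHERE user_name LIKE '%bot';

Stats queries would then simply add WHERE user_is_bot = 0.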

Side-effect of mysqlhotcopy and LVM snapshots on active READ server

September 26th, 2009 ntavares Posted in en_US, monitorização, mysql, performance No Comments »

I just came across a particular feature of MySQL while inspecting a Query Cache that was being wiped out at backup time. Whenever you run FLUSH TABLES, the whole Query Cache gets flushed as well, even if you FLUSH TABLES a particular table. And guess what, mysqlhotcopy issues FLUSH TABLES so the tables get in sync on storage.

I actually noticed the problem with Query Cache on a server reporting the cache flush at a [too] round time (backup time).

(Screenshot: flush_tables_affects_query_cache)
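It's easy to reproduce from any client; a minimal sketch, assuming the query cache is enabled and warm:

MySQL:
  SHOW GLOBAL STATUS LIKE 'Qcache_queries_in_cache';
  FLUSH TABLES;
  SHOW GLOBAL STATUS LIKE 'Qcache_queries_in_cache';   -- drops to 0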

My first thought was «there's something wrong with mysqlhotcopy». But actually this is expected behaviour:

When no tables are named, closes all open tables, forces all tables in use to be closed, and flushes the query cache. With one or more table names, flushes only the given tables. FLUSH TABLES also removes all query results from the query cache, like the RESET QUERY CACHE statement.

I got curious about why the heck closing a table should invalidate the cache - maybe the "close table" mechanism is overly cautious?

Anyway, it's not mysqlhotcopy's fault. And since you should issue FLUSH TABLES for LVM snapshots as well, for consistency, that method is affected too, which renders both methods pretty performance-unfriendly on a single production server compared to mysqldump, unless you run a post-backup warmup process. For that, it would be interesting to be able to dump the QC contents and reload them after the backup - which is not possible at the moment... bummer...
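The query cache itself can't be pre-filled, but a warmup can at least bring indexes and data back; a rough sketch (table and column names are placeholders):

MySQL:
  -- MyISAM: preload index blocks into the key cache
  LOAD INDEX INTO CACHE some_hot_table;

  -- any engine: touch the hottest data with a representative query
  SELECT COUNT(*) FROM some_hot_table WHERE some_indexed_col >= '2009-09-01';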

Size and character set of VARCHAR fields, and the consequences

September 24th, 2009 ntavares Posted in mysql, performance, pt_PT 3 Comments »

Although the declared size of VARCHAR fields doesn't influence index size, this data type (and other similar ones) carries a series of characteristics that can seriously affect performance. Let's test the impact of these fields in two scenarios: with different character sets and with different sizes.

Bearing in mind that the MEMORY storage engine only works with fixed-length rows, and that this is the engine used for temporary tables (something to avoid, though not always possible), the consequences can be disastrous.

For this demonstration, let's set the maximum size of MEMORY tables to a value we can hit in a few seconds - the minimum:

MySQL:
  1. mysql> SET max_heap_table_size = 1;
  2. Query OK, 0 rows affected, 1 warning (0.00 sec)
  3.  
  4. mysql> SHOW variables like '%heap%';
  5. +---------------------+-------+
  6. | Variable_name       | Value |
  7. +---------------------+-------+
  8. | max_heap_table_size | 16384 |
  9. +---------------------+-------+
  10. 1 row in SET (0.00 sec)

We can see that the minimum we can get is 16KB. Let's see what happens with VARCHAR fields (i.e., of [VAR]iable length):

MySQL:
  1. mysql> CREATE TABLE `varchar_small` (
  2.     ->   `id` VARCHAR(36) NOT NULL
  3.     -> ) ENGINE=MEMORY DEFAULT CHARSET=utf8;
  4. Query OK, 0 rows affected (0.01 sec)
  5.  
  6. mysql> INSERT INTO varchar_small VALUES('abcd');
  7. Query OK, 1 row affected (0.00 sec)
  8.  
  9. mysql> -- Keep filling it up until it won't take any more...
  10. mysql> INSERT INTO varchar_small SELECT * FROM varchar_small;
  11. ERROR 1114 (HY000): The table 'varchar_small' is full
  12.  
  13. mysql> CREATE TABLE `var_char` (
  14.     ->   `id` VARCHAR(36) NOT NULL
  15.     -> ) ENGINE=MEMORY DEFAULT CHARSET=utf8;
  16. Query OK, 0 rows affected (0.00 sec)
  17.  
  18. mysql> INSERT INTO var_char VALUES('abcdefgh-ijkl-mnop-qrst-uvwxyzabcedf');
  19. Query OK, 1 row affected (0.00 sec)
  20.  
  21. mysql> -- Same again: keep filling it up until it won't take any more...
  22. mysql> INSERT INTO var_char SELECT * FROM var_char;
  23. ERROR 1114 (HY000): The table 'var_char' is full
  24.  
  25. mysql> SELECT count(1) FROM var_char;
  26. +----------+
  27. | count(1) |
  28. +----------+
  29. |      320 |
  30. +----------+
  31. 1 row in SET (0.00 sec)
  32.  
  33. mysql> SELECT count(1) FROM varchar_small;
  34. +----------+
  35. | count(1) |
  36. +----------+
  37. |      320 |
  38. +----------+
  39. 1 row in SET (0.00 sec)

Right, what I did was fill the first table with content much shorter than 36 characters (just 4) and, in the second, content that filled the whole field. What we can observe is that, in this storage engine, the space taken by a barely-filled or a completely-filled field is always the same: it's the full size associated with the field. That's what fixed-length means, and the consequence is that the size of a VARCHAR field in MEMORY tables (read also: temporary tables) is invariably the maximum size of the field. Consequently, even though a maximum length of 255 doesn't influence the size of an index on that field, whenever that data has to be carried into temporary tables a huge amount of space can be wasted (multiply that waste by the number of rows that have to be carried into the temporary table!).
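One way to see this is to compare what SHOW TABLE STATUS reports for the two tables - a quick check (the exact figures depend on the version and per-row overhead):

MySQL:
  SHOW TABLE STATUS LIKE 'varchar_small';
  SHOW TABLE STATUS LIKE 'var_char';
  -- Avg_row_length comes out the same for both, despite varchar_small holding 4-character values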

But it doesn't stop here: besides care and good sense when defining a field's size, you also need to pay attention to its encoding. The second point is about the influence of the charset. We typically used to work in Latin1, but with the spread of localization (i18n) practically everything moved to UTF-8. All very well - we got rid of a whole set of problems with string formatting. But it's not all roses: the UTF-8 charset can take up to 4 bytes per character (in MySQL's utf8, 3), whereas Latin1 used just 1:

MySQL:
  1. mysql> CREATE TABLE `varchar_utf8` (
  2.     ->   `id` VARCHAR(36) NOT NULL
  3.     -> ) ENGINE=MEMORY DEFAULT CHARSET=utf8;
  4. Query OK, 0 rows affected (0.01 sec)
  5.  
  6. mysql> CREATE TABLE `varchar_latin1` (
  7.     ->   `id` VARCHAR(36) NOT NULL
  8.     -> ) ENGINE=MEMORY DEFAULT CHARSET=latin1;
  9. Query OK, 0 rows affected (0.00 sec)

The table's general encoding clause propagates to the columns as a default. In any case, I could specify the encoding for each column individually. To make sure we're not being fooled by the current encoding in the mysql client, I used two PHP scripts to create a single row in each of these tables (my terminal's encoding is UTF-8, as is common nowadays):

PHP:
  1. mysql_connect('localhost','root','');
  2. mysql_query("SET NAMES UTF8");
  3. mysql_query("INSERT INTO varchar_utf8 VALUES('ãããããããã-ãããã-ãããã-ãããã-ãããããããããããã')");

PHP:
  1. mysql_connect('localhost','root','');
  2. mysql_query("SET NAMES Latin1");
  3. mysql_query("INSERT INTO varchar_latin1 VALUES('ãããããããã-ãããã-ãããã-ãããã-ãããããããããããã')");

I saved the two scripts above as varchar_utf8.php and varchar_latin1.php.temp, respectively. Finally, I converted the encoding of the second script to Latin1 (because whatever I do, my terminal's encoding is UTF-8):

CODE:
  1. [root@speedy ~]# cat varchar_latin1.php.temp | iconv -f utf8 -t iso-8859-1 > varchar_latin1.php

And ran both:

CODE:
  1. [root@speedy ~]# php -f varchar_utf8.php
  2. [root@speedy ~]# php -f varchar_latin1.php

OK, now I have 1 record in each table. Let's use that single record in each to fill the tables up until no more records can be added:

MySQL:
  1. mysql> -- ...
  2. mysql> INSERT INTO varchar_utf8 SELECT * FROM varchar_utf8;
  3. ERROR 1114 (HY000): The table 'varchar_utf8' is full
  4.  
  5. mysql> -- ...
  6. INSERT INTO varchar_latin1 SELECT * FROM varchar_latin1;
  7. ERROR 1114 (HY000): The table 'varchar_latin1' is full
  8.  
  9. mysql> SELECT count(1) FROM varchar_utf8;
  10. +----------+
  11. | count(1) |
  12. +----------+
  13. |      126 |
  14. +----------+
  15. 1 row in SET (0.00 sec)
  16.  
  17. mysql> SELECT count(1) FROM varchar_latin1;
  18. +----------+
  19. | count(1) |
  20. +----------+
  21. |      320 |
  22. +----------+
  23. 1 row in SET (0.00 sec)

There we go - we've confirmed the second problem: in the MEMORY engine, far fewer rows fit into the first table (UTF-8) than into the second (Latin1), because the fields take their maximum possible size and, in the case of UTF-8 VARCHARs, each character can occupy up to 3 bytes. So, to fit 36 characters, at least 36*3 = 108 bytes are needed! That's 3x the consumption, which may not have been accounted for when sizing the memory allowed for temporary tables.

Both scenarios are quite frequent, especially in applications that use frameworks - and in those cases, given the variety of backends they try to support (MySQL, PostgreSQL, Oracle, etc.), the queries involved usually aren't optimised for any particular engine; with MySQL those queries often generate temporary tables needlessly, and optimising this kind of query may require changes to the framework's core or, at the very least, creating exceptions to the rule.

The choice of column name in the examples above wasn't random: indeed, UUID() fields tend to be UTF-8 merely because the whole table is UTF-8 and yet, imagine that, there isn't a single character in a UUID() that isn't ASCII (in fact, there isn't a single character outside [0-9a-f] and the hyphen!):

MySQL:
  1. mysql> SELECT UUID();
  2. +--------------------------------------+
  3. | UUID()                               |
  4. +--------------------------------------+
  5. | 27d6a670-a64b-11de-866d-0017083bf00f |
  6. +--------------------------------------+
  7. 1 row in SET (0.00 sec)

Since the character set of a UUID() is compatible across charsets, this is one of the optimisations you can easily make in your applications, contributing to better MySQL performance without any impact on your application.
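In practice the change is a single ALTER per affected column; a minimal sketch with hypothetical table/column names (remember to convert both sides of any join):

MySQL:
  ALTER TABLE some_table MODIFY some_uuid_col CHAR(36) CHARACTER SET latin1 NOT NULL;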

Automatically cleaning up SPAM Wordpress comments

September 6th, 2009 ntavares Posted in dri, en_US, mysql 2 Comments »

Doing the maintenance of our blogs (Wordpress), I bumped into one that had fallen prey to an active botnet. It was receiving 5 or 6 spam comments per minute. It was nearly the only one under such harassment, so I suspect the botnet loved it for having commenting open.

Since I activated reCaptcha I've been monitoring my "spam folder" and I'm really confident in its guesses, so I just wrote a STORED PROCEDURE to clean up these spam comments on a periodic basis, so I can do a sitewide cleanup:

MySQL:
  1. DELIMITER $$
  2.  
  3. DROP PROCEDURE IF EXISTS `our_blog_db`.`REMOVE_OLD_SPAM`$$
  4. CREATE PROCEDURE `our_blog_db`.`REMOVE_OLD_SPAM` ()
  5.     MODIFIES SQL DATA
  6.     COMMENT 'remove comentarios marcados como SPAM'
  7. BEGIN
  8.  
  9. DECLARE done BIT(1) DEFAULT FALSE;
  10. DECLARE commtbl VARCHAR(50);
  11. DECLARE comments_tbls CURSOR FOR SELECT TABLE_NAME
  12.     FROM information_schema.TABLES  
  13.     WHERE TABLE_SCHEMA = 'our_blog_db' AND TABLE_NAME LIKE '%comments';
  14. DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
  15.  
  16.  
  17. OPEN comments_tbls;
  18.  
  19. REPEAT
  20.     FETCH comments_tbls INTO commtbl;
  21.     SET @next_tbl = CONCAT('DELETE FROM our_blog_db.',commtbl,'
  22.         WHERE comment_approved = "spam"
  23.         AND comment_date_gmt < DATE_SUB(UTC_TIMESTAMP(), INTERVAL 15 DAY)');
  24.     PREPARE get_next_tbl FROM @next_tbl;
  25.     EXECUTE get_next_tbl;
  26.  
  27. UNTIL done END REPEAT;
  28.  
  29. CLOSE comments_tbls;
  30.  
  31.  
  32. END$$
  33.  
  34. DELIMITER ;

It's very easy to stick it into an EVENT, if you have MySQL 5.1 or newer and wish to do the daily cleanup automatically:

MySQL:
  1. CREATE EVENT `EV_REMOVE_OLD_SPAM` ON SCHEDULE EVERY 1 DAY STARTS '2009-08-01 21:00:00' ON COMPLETION NOT PRESERVE ENABLE
  2. COMMENT 'remove comentarios marcados como SPAM' DO
  3. BEGIN
  4.  
  5. SELECT GET_LOCK('remove_spam',5) INTO @remove_spam_lock;
  6.  
  7. IF @remove_spam_lock THEN
  8.     CALL REMOVE_OLD_SPAM();
  9.  
  10. END IF;
  11.  
  12. END
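For the event to actually fire, the event scheduler must be enabled (it is off by default in 5.1); a quick check:

MySQL:
  SET GLOBAL event_scheduler = ON;
  SHOW PROCESSLIST;   -- an "event_scheduler" daemon thread should now be listed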

Enjoy!

Multiple field index vs external primary key calculation

August 29th, 2009 ntavares Posted in en_US, mysql, performance No Comments »

I bumped into a client that was using a VARCHAR(200) as a MyISAM PRIMARY KEY for detecting collisions, calculated externally (by the app) and fed into MySQL. I suspected the reason for this was to [try to] speed up inserts, or fear that there would be collisions combined with not wanting to carry unnecessary data fields. The table was just a bunch of e-mail log rows.

The rows were to be quickly processed (selected within a DATETIME range) and archived. The obvious consequences are big index size and poor performance. Let's see why:

  • The index was the concatenation of some fields, plus an extra one: "Message-ID" - which could be non-unique, AFAIWT. In the end, the key itself was randomly inserted, since it started with the 'sender' address (see the maillogsmiorig schema below). This obviously led to poor key insertion (random order), of course, but also to duplicate content in the key itself, as the table engine was MyISAM.
  • Moreover, after loading, the operations were slower, being based on MyISAM: an index would eventually be used, but the data access was unavoidable. That could also be circumvented with InnoDB, since the data sits next to the index.

Due to various facts - including that DATETIME field - I suggested migrating to InnoDB with a PK composed of the necessary fields. Moreover, the first field would be the DATETIME which would impose some kind of chronological order for better key insertion - at the same time, the processing was being done for specific intervals of that field.

I've sampled 1M rows of the live data for a test: I wanted to compare one method against my overall solution, which you can check in maillogsinnonew below. I took the opportunity to isolate the storage engine benefit by running the test on both versions of both solutions (previous/orig and new):

MySQL:
  1. mysql> SELECT * FROM maillogs INTO OUTFILE '/export/home/gpshumano/teste/maillogsorig.sql';
  2. Query OK, 1000000 rows affected (6.47 sec)
  3.  
  4. mysql> SELECT ip,sender,receiver,date_time,SUBSTRING(chave,LENGTH(SUBSTRING_INDEX(chave,'|',2))+2) FROM maillogsminew INTO OUTFILE '/export/home/gpshumano/teste/maillogsnew.sql';
  5. Query OK, 1000000 rows affected (4.42 sec)
  6.  
  7.  
  8. mysql> CREATE TABLE `maillogsinnonew` (
  9.     ->   `ip` VARCHAR(16) NOT NULL,
  10.     ->   `sender` VARCHAR(320) NOT NULL,
  11.     ->   `receiver` VARCHAR(320) NOT NULL,
  12.     ->   `date_time` DATETIME NOT NULL,
  13.     ->   `message_id` VARCHAR(200) NOT NULL,
  14.     ->   PRIMARY KEY (`date_time`,`sender`,`receiver`,`message_id`)
  15.     -> ) ENGINE=INNODB DEFAULT CHARSET=latin1;
  16. Query OK, 0 rows affected (1.27 sec)
  17.  
  18.  
  19. mysql> CREATE TABLE `maillogsminew` (
  20.     ->   `ip` VARCHAR(16) NOT NULL,
  21.     ->   `sender` VARCHAR(320) NOT NULL,
  22.     ->   `receiver` VARCHAR(320) NOT NULL,
  23.     ->   `date_time` DATETIME NOT NULL,
  24.     ->   `message_id` VARCHAR(200) NOT NULL,
  25.     ->   PRIMARY KEY (`date_time`,`sender`,`receiver`,`message_id`)
  26.     -> ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
  27. Query OK, 0 rows affected (0.13 sec)
  28.  
  29. mysql> CREATE TABLE `maillogsmiorig` (
  30.     ->   `chave` VARCHAR(200) NOT NULL,
  31.     ->   `ip` VARCHAR(16) NOT NULL,
  32.     ->   `sender` VARCHAR(320) NOT NULL,
  33.     ->   `receiver` VARCHAR(320) NOT NULL,
  34.     ->   `date_time` DATETIME NOT NULL,
  35.     ->   PRIMARY KEY (`chave`),
  36.     ->   KEY `maillogs_date_time_idx` (`date_time`)
  37.     -> ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
  38. Query OK, 0 rows affected (0.14 sec)
  39.  
  40. mysql> CREATE TABLE `maillogsinnoorig` (
  41.     ->   `chave` VARCHAR(200) NOT NULL,
  42.     ->   `ip` VARCHAR(16) NOT NULL,
  43.     ->   `sender` VARCHAR(320) NOT NULL,
  44.     ->   `receiver` VARCHAR(320) NOT NULL,
  45.     ->   `date_time` DATETIME NOT NULL,
  46.     ->   PRIMARY KEY (`chave`),
  47.     ->   KEY `maillogs_date_time_idx` (`date_time`)
  48.     -> ) ENGINE=INNODB DEFAULT CHARSET=latin1;
  49. Query OK, 0 rows affected (2.81 sec)
  50.  
  51. mysql> LOAD DATA INFILE '/export/home/gpshumano/teste/maillogsnew.sql' INTO TABLE maillogsminew;
  52. Query OK, 1000000 rows affected (34.83 sec)
  53. Records: 1000000  Deleted: 0  Skipped: 0  WARNINGS: 0
  54.  
  55. mysql> LOAD DATA INFILE '/export/home/gpshumano/teste/maillogsnew.sql' INTO TABLE maillogsinnonew;
  56. Query OK, 1000000 rows affected (1 min 40.56 sec)
  57. Records: 1000000  Deleted: 0  Skipped: 0  WARNINGS: 0
  58.  
  59. mysql> LOAD DATA INFILE '/export/home/gpshumano/teste/maillogsorig.sql' INTO TABLE maillogsinnoorig;
  60. Query OK, 1000000 rows affected (6 min 54.14 sec)
  61. Records: 1000000  Deleted: 0  Skipped: 0  WARNINGS: 0
  62.  
  63. mysql> LOAD DATA INFILE '/export/home/gpshumano/teste/maillogsorig.sql' INTO TABLE maillogsmiorig;
  64. Query OK, 1000000 rows affected (1 min 17.06 sec)
  65. Records: 1000000  Deleted: 0  Skipped: 0  WARNINGS: 0

This was the aftermath. Comparing the results you'll see a 75% gain in speed for InnoDB, and 55% for MyISAM. Moreover, right after the loads I ran an OPTIMIZE TABLE to get a more precise table status:

MySQL:
  1. mysql> SHOW table status; 
  2. +----------------------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
  3. | Name                 | Engine | Version | Row_format | Rows    | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | AUTO_INCREMENT | Create_time         | Update_time         | Check_time          | Collation         | Checksum | Create_options | Comment |
  4. +----------------------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
  5. | maillogsinnonew      | INNODB |      10 | Compact    | 1048709 |            144 |   152043520 |               0 |            0 |      6144 |           NULL | 2009-08-24 20:00:44 | NULL                | NULL                | latin1_swedish_ci |     NULL |                |         |
  6. | maillogsinnoorig     | INNODB |      10 | Compact    | 1041388 |            392 |   408944640 |               0 |    216006656 |      4096 |           NULL | 2009-08-24 20:01:35 | NULL                | NULL                | latin1_swedish_ci |     NULL |                |         |
  7. | maillogsminew        | MyISAM |      10 | Dynamic    | 1000000 |            126 |   126120088 | 281474976710655 |    100151296 |         0 |           NULL | 2009-08-24 20:00:55 | 2009-08-24 20:06:08 | NULL                | latin1_swedish_ci |     NULL |                |         |
  8. | maillogsmiorig       | MyISAM |      10 | Dynamic    | 1000000 |            173 |   173720872 | 281474976710655 |    166667264 |         0 |           NULL | 2009-08-24 20:01:01 | 2009-08-24 20:18:24 | 2009-08-24 20:18:31 | latin1_swedish_ci |     NULL |                |         |
  9. +----------------------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+---------------------+-------------------+----------+----------------+---------+
  10. 4 rows in SET (0.03 sec)

CODE:
  1. bash-3.00# ls -lah /iscsipool/DATA/tavares/
  2. total 2288740
  3. drwx------   2 mysql    mysql         13 ago 24 20:37 .
  4. drwxr-xr-x  10 mysql    mysql         17 ago 24 19:45 ..
  5. -rw-rw----   1 mysql    mysql         65 ago 24 19:45 db.opt
  6. -rw-rw----   1 mysql    mysql       8,5K ago 24 20:29 maillogsinnonew.frm
  7. -rw-rw----   1 mysql    mysql       156M ago 24 20:32 maillogsinnonew.ibd
  8. -rw-rw----   1 mysql    mysql       8,5K ago 24 20:31 maillogsinnoorig.frm
  9. -rw-rw----   1 mysql    mysql       420M ago 24 20:40 maillogsinnoorig.ibd
  10. -rw-rw----   1 mysql    mysql       8,5K ago 24 20:00 maillogsminew.frm
  11. -rw-rw----   1 mysql    mysql       120M ago 24 20:06 maillogsminew.MYD
  12. -rw-rw----   1 mysql    mysql        96M ago 24 20:37 maillogsminew.MYI
  13. -rw-rw----   1 mysql    mysql       8,5K ago 24 20:01 maillogsmiorig.frm
  14. -rw-rw----   1 mysql    mysql       166M ago 24 20:18 maillogsmiorig.MYD
  15. -rw-rw----   1 mysql    mysql       159M ago 24 20:37 maillogsmiorig.MYI

As you can see, you not only gain in speed, but also in smaller data file sizes - 63% smaller for InnoDB and 33% for MyISAM. These smaller sizes also contribute to a longer stay in the buffer pool and reduced I/O. In any case, since the rows were processed immediately after the loading, they would probably all still be in InnoDB's buffer pool, avoiding the extra I/O of refetching the data from the MyISAM data files.

Also notice how maillogsinnonew reports its index (PK) size as 0 - that's because in InnoDB the PK fields are actually part of the data node itself (the clustered index). In practical terms, this index is just a matter of data ordering, i.e., no extra space is required to store the index itself!
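That clustered PK is also exactly what the processing pattern wants: a range scan on date_time reads the rows in PK order, with no secondary-index lookups. A sketch of such a query (the interval is illustrative):

MySQL:
  SELECT sender, receiver, message_id
  FROM maillogsinnonew
  WHERE date_time >= '2009-08-24 00:00:00'
    AND date_time <  '2009-08-25 00:00:00';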

Update on mysql-query-browser “affected rows”

August 27th, 2009 ntavares Posted in en_US, mysql No Comments »

Yesterday I tried massaging mysql-gui-tools a bit to see if I could make affected rows show up on modification DMLs, such as INSERT, DELETE and UPDATE. Here is a briefing about it, along with all the patches I used - some taken from the current Fedora 10 source RPM, plus a rude one by myself to show affected rows.

Update

Fixing the problem at the root will take a bit more time. This kind of protocol information, such as affected_rows, is lost because the guys at MySQL considered that DML changing data never returns useful results - but actually it does: the response protocol packet comes with lots of info, like whether you used an index, whether you are in a transaction, etc. It could be that the protocol changed over time and the Query Browser didn't catch up.

This translates to a lot of fixes: adapting the methods that discard results whenever the result variable is NULL so they no longer do that, and finding a way to keep affected_rows set somewhere.

So, for the moment, here is a list of patches I used, the bold ones being my own. The RPM will have to wait, since I wanted to release it against mysql-gui-tools-5.0r14 (which is the version you should apply these patches to) instead of the r12 currently available. In the meantime, if I find more patches to submit, maybe I'll have a real look at it. Here are the patches:

Compiling mysql-gui-tools

August 27th, 2009 ntavares Posted in en_US, mysql 1 Comment »

A colleague and I were missing some features in mysql-query-browser, and since no one at MySQL AB is very interested in supporting it, I thought I could have a look at it myself. The system is Fedora 10 (still), and I use it mainly because it's small, simple to use, and it's GTK!

Setting up the building (compile) environment

Got the sources from the notes at MySQL Forge: Building MySQL GUI Tools on Linux:

CODE:
  1. svn co http://svn.mysql.com/svnpublic/mysql-gui-common/trunk mysql-gui-common
  2. svn co http://svn.mysql.com/svnpublic/mysql-query-browser/trunk mysql-query-browser

You'll need a patch from Oden Eriksson attached to Bug #32184, or you can use the one from the RPM - otherwise you'll get the «error: ‘SigC’ has not been declared» error reported on that bug. I had to trim it to build from the SVN tree, and patched mysql-gui-common and mysql-query-browser independently (split the patch).

Building mysql-gui-common is straightforward:

CODE:
  1. ./autogen.sh
  2. ./configure --prefix=/home/nmct/mysql-query-browser/fake 
  3. make -j 2
  4. make install

Building mysql-query-browser seems to require explicitly pointing to libgtkhtml, besides the packages it mentions in the error:

CODE:
  1. [root@speedy ~]# rpm -qa | grep gtkhtml | grep devel
  2. gtkhtml3-devel-3.24.5-1.fc10.i386
  3. [root@speedy ~]# rpm -ql gtkhtml3-devel | grep pc
  4. /usr/lib/pkgconfig/gtkhtml-editor.pc
  5. /usr/lib/pkgconfig/libgtkhtml-3.14.pc

So it's easy to spot the needed --with switch. I had to apply several other patches that I just took from the source RPM. Most of them were applied with -p2.

CODE:
  1. [nmct@speedy mysql-query-browser]$ patch -p 2 < mysql-gui-tools-5.0_p12-libsigc++-2.2.patch
  2. patching file source/linux/MQResultTab.h
  3. [nmct@speedy mysql-query-browser]$ patch -p2 < mysql-gui-tools-gtksourceview-cflags.patch
  4. patching file source/linux/Makefile.in
  5. Hunk #1 succeeded at 119 (offset 17 lines).
  6. [nmct@speedy mysql-query-browser]$ patch -p2 < gtk_deprecated_typedefs.patch
  7. patching file source/linux/gtksourceview/gtksourceview/Makefile.in
  8. ...
  9.  
  10. ...
  11. ./configure --with-gtkhtml=libgtkhtml-3.14 --prefix=/home/nmct/mysql-query-browser/fake
  12. make -j 2
  13. make install

And that should be it - actually there was a path concatenation issue (it was looking for ...fake/usr/local/share...) which I quickly fixed with symlinks. After that, we should be ready to rock.

First patch: mysql_affected_rows

One of the features I miss most is the number of affected rows of some DML commands, such as UPDATE and INSERT. This was not easy to do in five minutes because of the UI split: mysql_affected_rows() doesn't seem to reach the GUI. So I've made a simple test, and succeeded.

(Screenshot: mysql-query-browser-affected-rows1)

This looks promising. I just set a global var, which will do for now. I still have to check for potential race conditions, but expect a polished patch, along with a new RPM for at least Fedora 10, in the near future.
