Berlin Wikimedia Chapters Meeting 2010 a.k.a. [well actually...] VulcanoCon and Porto Wikipedia Academy 2010

August 14th, 2010 ntavares Posted in en_US, wikipedia No Comments »


Berlin, Germany - Wikimedia Chapters Meeting 2010

These have been busy days! At the same time the first Portuguese Wikipedia Academy was running, I was supposed to be in Berlin for the 2010 Wikimedia Chapters Meeting. I actually got there, indeed, but we were in the air when the massive ash cloud from the Eyjafjallajökull glacier/volcano spread over Central Europe, and we got stuck as soon as we landed in Frankfurt. But I don’t regret catching a 10h train to Berlin, arriving in the morning, even though it meant missing the whole morning at the Meeting. The fellow chapter folks from around the world are just awesome people doing a great job, and they have shown once again how excellent they are to work with.

Here is the wrapup photo we took in the end (after 20min trying to position ourselves according to the Mapa Mundi and.. well.. having given up :)):

[Group photo from the Chapters Meeting]

Picking a train to Portugal was very risky, as the French saw an excellent opportunity to go on strike (I was told some friends took 3 days to get from the Netherlands to Portugal, so I’m glad we considered the odds)! While stranded, along with a lot of (mainly) intercontinental fellows, we spent the days getting to know Berlin and the famous Bratwurst. I definitely won’t forget the trio I had near Charlottenburg, yummm.. :P

Wikimedia Deutschland has proven to be an excellent host for such an event. Everything was handled smoothly, and Don and Anjia were also great facilitators, careful enough to document each session in detail.

A special remark about the strange title of this post: I think no one will forget the good laughs we had over some expressions… like VulcanoCon… hrm hrm :)

Hugs to everyone I met, I hope you arrived safely (AFAIK, you *did* arrive…). Hope to see you all next year, or maybe at Wikimania.

Porto, Portugal - Wikipedia Academy

As we already confirmed in person at the last WMP General Assembly (GA), the First Wikipedia Academy was definitely a success. The media push was spectacular and, of course, it had to yield some results. We got an increase of associates right there at the GA, with more people willing to help with the tasks. I thank all the participants for their input, it was really great, and I would also like to thank them for the opportunity of presenting everything we learned at the Chapters Conference (above). We also discussed the plan for the upcoming year, now, more than ever, targeting specific goals from the list of ideas we have been gathering - I’ll be participating in a specific GLAM project (more on this later), I’ll be the IT lead, I’ll try to help Gil on a joint Communication project targeting transportation companies, and I’ll be trying to push forward a possible UMIC partnership (more on this later).


Update on KTEmailPush KnowledgeTree Plugin

February 17th, 2010 ntavares Posted in en_US, knowledgetree No Comments »


I took the time to make some improvements to the KTEmailPush KnowledgeTree Plugin. It’s now a real KT plugin (instead of an external script) with its own scheduler task. Version 1.0 publishes it as a pluggable KT plugin with its own scheduler task; only the configuration is still file-based. It should be a matter of 4 or 5 steps to get it up and running on your system.

Features

  • Upload to specific folder, to user’s group/unit folder, to user’s Dropped Documents folder or you may use an external script to determine where the file should be uploaded to;
  • Upload message body as document, optionally setting links between body and attachments;
  • Restrict emails based on sender or recipient;
  • Attachments can be uniquely distinguished, or be uploaded as new revisions of same document (based on filename);
  • Multiple notifications can be sent: for messages where no attachments were detected (and which get removed), for mail discarded because it originated from a non-existing KT user, sender notification of new documents (which can be complemented with Workflow notifications for notifying recipients), sender notification of uploaded documents, and system administration notifications;
  • Full debugging support and exclusive lock to avoid multiple runs;

The default behaviour is to upload to a specific folder, based on the address found in ‘From:’, which should be a unique email address bound to a username.

It can be complemented with KT Workflows to fire notifications.

Also, there are lots of options (in config.inc.php) to decide what exactly the plugin should do with attachments, debugging, etc.

Installation

  • You should create the following mail addresses to receive the documents, which should match $mail_username, $mail_cc and $mail_admin in config.inc.php: kt_queue@domain.com, kt_copy@domain.com, kt_admin@domain.com;
  • Unpack the source (or SVN checkout) to the directory ‘plugins’
      svn checkout https://svn.forge.dri.pt/svn/kt-emailpush/trunk kt-emailpush
    	

    Note that kt_copy should keep an ever-growing copy of all email sent (which should be maintained by YOU, since it’s left untouched), and that kt_admin is the admin’s destination address for notifications.

  • Adjust the values in the config.inc.php script.
  • Go to “Administration > Miscellaneous > Manage plugins” and reread plugins. The refreshed listing should now show “KT EmailPush Plugin”. Activate it and click on the “Update” button.
  • Visit “Administration > Miscellaneous > Manage Task Scheduler” to ensure the “Fetch Mail” task is installed and active, and that it’s running with the desired frequency.
  • Be sure to have your KT cron running.

Maintenance

The file specified in the config’s $lockfile will prevent concurrent copies from running. You can also use this mutex facility to block runs if you need to perform some kind of maintenance.

→ Find out more at KT’s Wiki: http://wiki.knowledgetree.com/Email_Gateway_Plugin.


Acer Aspire One D250

January 29th, 2010 ntavares Posted in en_US, hardware, linux driver No Comments »


I did it. I went to FNAC and bought it. I really did it :)

I just got this new Acer Aspire One (D250).

I’ve installed UNetbootin on my Fedora box and downloaded an Ubuntu 9.10 image. After deploying the image to a 2GB USB stick and booting the netbook, it just went on booting without any trouble at all. That’s what I like about Ubuntu, and that’s what I need for this laptop. After booting, I just went through the installer and, after a couple of questions, all was ready: Wi-Fi, Ethernet, microphone, webcam, everything worked out of the box. Even the Wi-Fi LED is working, against all odds :)

I went with a shared installation to keep things simple. I still have to investigate this further, but it’s also awesome that the Ubuntu installer respected both the Android and Windows 7 installations. Both are still bootable: Android boots first without prompting (I think that’s my fault from when I was asked how I’d like it to boot), then GRUB, and from there I can go wherever I want: Ubuntu or Windows 7.

Windows 7, as usual, didn’t seem prepared for partition adjustments. After the first boot it started a file check (still the good old CHKDSK, just imagine :)… after rebooting again, it was ready to stay. Lucky for it that it handled things after all, as its days of existence here are probably limited.

So, this page needs an update. Pending testing now are the card reader and the webcam. After that, I should update the page.

In the meantime, between reboots, I just found my newest addiction (on the phone, so far) online as well: Tower Bloxx :)


NRPE for Endian Firewall

December 31st, 2009 ntavares Posted in en_US, monitorização, nagios No Comments »


Finally I had the time to compile NRPE (the nagios addon) for Endian Firewall. If you are in a hurry, look for the download section below.

First of all, an important finding is that Endian Firewall (EFW) seems to be based on Fedora Core 3, so maybe sometimes you can spare some work by installing FC3 RPMs directly. And that’s what we’ll do right away, so we can move around EFW more easily.

Packaging and installing NRPE

Packaging and installing nagios-plugins

  • Grab the source at: http://sourceforge.net/projects/nagiosplug/files/


    cd /root
    wget http://sourceforge.net/projects/nagiosplug/files/nagiosplug/1.4.14/nagios-plugins-1.4.14.tar.gz/download

  • Repeat the procedure you did for NRPE: place the tarball on SOURCES and the spec file on SPECS:

    cp nagios-plugins-1.4.14.tar.gz /usr/src/endian/SOURCES/
    cd /usr/src/endian/SOURCES/
    tar xfvz nagios-plugins-1.4.14.tar.gz
    chown -R root:root nagios-plugins-1.4.14
    cp nagios-plugins-1.4.14/nagios-plugins.spec ../SPECS/

  • This bundle of plugins includes the so-called standard plugins for nagios. There are a lot of them, and you can probably cut some out so the build is quicker. You may also want to avoid depending on perl(Net::SNMP), perl(Crypt::DES) and perl(Socket6) - or grab them from DAG’s RPM repo (remember the FC3 branching):

  • cd /root
    wget http://dag.wieers.com/rpm/packages/perl-Net-SNMP/perl-Net-SNMP-5.2.0-1.1.fc3.rf.noarch.rpm
    wget http://dag.wieers.com/rpm/packages/perl-Crypt-DES/perl-Crypt-DES-2.05-3.1.fc3.rf.i386.rpm
    wget http://dag.wieers.com/rpm/packages/perl-Socket6/perl-Socket6-0.19-1.1.fc3.rf.i386.rpm

  • Finally, install everything:

    rpm -ivh perl-Net-SNMP-5.2.0-1.1.fc3.rf.noarch.rpm \
    perl-Crypt-DES-2.05-3.1.fc3.rf.i386.rpm \
    perl-Socket6-0.19-1.1.fc3.rf.i386.rpm \
    /usr/src/endian/RPMS/i386/nagios-plugins-1.4.14-1.i386.rpm

Final notes

Be aware that this is a sample demonstration. I was more interested in getting it done for my case - since I can fix future problems myself - than in doing a proper/full EFW integration. If you think you can contribute by tweaking this build process, just drop me a note.


Download


Here are the RPMs which include the above-mentioned tweaks (this required extra patching of the .spec file and including the patch within the source):


Finding, for each time interval, how many records are “occurring” during that interval

September 28th, 2009 ntavares Posted in en_US, mysql, performance No Comments »


This is a complex problem: you are mapping events (of some kind) with a start and end timestamp, but how do you know, for a specific interval [ti,tf] (timeslot), how many of those records have start<ti and end>tf? This problem is complex because you have no records defining the timeslot to serve either as a grouping field or as a comparison field. It's a problem I've seen people tend to approach procedurally, and that's the big obstacle to understanding SQL, whose problems are typically set problems.

The main issue around this problem is that you need to count existences for a list you don't have. In my real scenario, there are some restrictions to have in mind:

  • The data set is extremely large, so this operation is run daily for a specific day.
  • Due to the above, the table is partitioned on a filtering field (stoptime below).

Immediately, some solutions pop into my head:

  • Use a summary table for each time slot: when a record is inserted, increment all the respective time slots by one (see the sketch right after this list). This is cool, but I'd like to avoid insert delay. This solution also implies having a persistent row for each timeslot across the whole time range of the records, right? That could be from 2009-08 to 2009-09, but it could also start at 1989-09 and run to 2009-09, which represents ~10.5M records, some of them possibly zero.
  • Another option could be to use cursors to iterate through the selection of records which cross a specific minute, and perhaps fill a temporary table with the results. Cursors are slow, it's a procedural approach, and it represents programming overhead.
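
For illustration, here's a minimal sketch of what the first option could look like (hypothetical table and trigger names; this per-insert work is precisely the delay I want to avoid):

MySQL:
  CREATE TABLE timeslot_counts (
    tslot DATETIME NOT NULL,
    calls INT UNSIGNED NOT NULL DEFAULT 0,
    PRIMARY KEY (tslot)
  ) ENGINE=MyISAM;

  DELIMITER $$
  CREATE TRIGGER trg_phone_calls_slots AFTER INSERT ON phone_calls
  FOR EACH ROW
  BEGIN
    DECLARE slot DATETIME;
    -- first whole minute strictly inside the call
    SET slot = DATE_ADD(CONCAT(LEFT(NEW.starttime,16),':00'), INTERVAL 1 MINUTE);
    WHILE slot < CONCAT(LEFT(NEW.stoptime,16),':00') DO
      INSERT INTO timeslot_counts (tslot, calls) VALUES (slot, 1)
        ON DUPLICATE KEY UPDATE calls = calls + 1;
      SET slot = DATE_ADD(slot, INTERVAL 1 MINUTE);
    END WHILE;
  END$$
  DELIMITER ;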

But then again, these are both procedural solutions, and that's why they don't seem so effective - actually, the first is not quite the same as the second and is pretty widely used, but it implies some extra effort and schema changes.
The solution I'm proposing is a set-theory approach: IF we had a table of timeslots (minute slots), we could just join the two tables and apply the rules we want. But we don't have one. Perhaps, though, we can generate it. This idea came out after reading the brilliant examples in Roland Bouman's MySQL: Another Ranking trick and Shlomi Noach's SQL: Ranking without self join.

Let's build an example table:

MySQL:
  1. mysql> CREATE TABLE `phone_calls` (
  2.     ->   `starttime` DATETIME NOT NULL,
  3.     ->   `stoptime` DATETIME NOT NULL,
  4.     ->   `id` INT(11) NOT NULL,
  5.     ->   PRIMARY KEY (`id`),
  6.     ->   KEY `idx_stoptime` (`stoptime`)
  7.     -> ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
  8. Query OK, 0 rows affected (0.04 sec)

Now manually insert some interesting records:

MySQL:
  1. mysql> SELECT * FROM phone_calls;
  2. +---------------------+---------------------+----+
  3. | starttime           | stoptime            | id |
  4. +---------------------+---------------------+----+
  5. | 2009-08-03 09:23:42 | 2009-08-03 09:24:54 |  0 |
  6. | 2009-08-03 11:32:11 | 2009-08-03 11:34:55 |  2 |
  7. | 2009-08-03 10:23:12 | 2009-08-03 10:23:13 |  1 |
  8. | 2009-08-03 16:12:53 | 2009-08-03 16:20:21 |  3 |
  9. | 2009-08-03 11:29:09 | 2009-08-03 11:34:51 |  4 |
  10. +---------------------+---------------------+----+
  11. 5 rows in SET (0.00 sec)

As an example, you may verify that record id=2 crosses only time slot '2009-08-03 11:33:00' and no other, and record id=0 crosses none. These are perfectly legitimate call start and end timestamps.

Let's look at a couple of premises:

  • A record that traverses a single minute can be described by this (see the quick check after this list):

    MINUTESLOT(stoptime) - MINUTESLOT(starttime) >= 2

    You can think of MINUTESLOT(x) as the timeslot associated with field x in the record. It actually represents CONCAT(LEFT(x,16),":00"), and the difference is actually a TIMESTAMPDIFF() in minutes;

  • A JOIN will give you a product of records for each match, which means if I could "know" a specific timeslot I could multiply it by the number of records that cross it and then GROUP BY with a COUNT(1). But I don't have the timeslots...
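
As a quick sanity check of this rule against record id=2 above, using the same expressions the queries below rely on (a sketch):

MySQL:
  SELECT TIMESTAMPDIFF(MINUTE,
           CONCAT(LEFT(starttime,16),":00"),
           CONCAT(LEFT(stoptime,16),":00")) AS slot_diff
  FROM phone_calls
  WHERE id = 2;

This returns 2 (11:32 to 11:34), so id=2 passes the filter and, as noted earlier, fully covers only the 11:33 slot.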

As I've said, I'm generating this recordset for a specific day, and that's why these records all refer to 2009-08-03. Let's confirm I can select the recordspace I'm interested in:

MySQL:
  1. mysql> SELECT starttime,stoptime
  2.     -> FROM phone_calls
  3.     -> WHERE
  4.     ->    /* partition pruning */
  5.     ->    stoptime >= '2009-08-03 00:00:00'
  6.     ->    AND stoptime <= DATE_ADD('2009-08-03 23:59:59', INTERVAL 1 HOUR)
  7.     ->
  8.     ->    /* the real filtering:
  9.    /*>       FIRST: only consider call where start+stop boundaries are out of the
  10.    /*>       minute slot being analysed (seed.timeslot)
  11.    /*>    */
  12.     ->    AND TIMESTAMPDIFF(MINUTE, CONCAT(LEFT(starttime,16),":00"), CONCAT(LEFT(stoptime,16),":00")) >= 2
  13.     ->
  14.     ->    /* consequence of the broader interval that we had set to cope
  15.    /*>       with calls taking place beyond midnight
  16.    /*>    */
  17.     ->    AND starttime <= '2009-08-03 23:59:59';
  18. +---------------------+---------------------+
  19. | starttime           | stoptime            |
  20. +---------------------+---------------------+
  21. | 2009-08-03 11:32:11 | 2009-08-03 11:34:55 |
  22. | 2009-08-03 16:12:53 | 2009-08-03 16:20:21 |
  23. | 2009-08-03 11:29:09 | 2009-08-03 11:34:51 |
  24. +---------------------+---------------------+
  25. 3 rows in SET (0.00 sec)

These are the 'calls' that cross any minute in the selected day. I deliberately showed specific restrictions so you understand the many aspects involved:

  • Partition pruning is fundamental, unless you want to scan the whole 500GB table. This means you are forced to limit the scope of analysed records. Now, if you have a call starting at 23:58:00 and stopping at 00:01:02 the next day, pruning would leave that record out. So I've given 1 HOUR of margin to catch those records;
  • We had to set stoptime later than the end of the day being analysed. That also means we might catch unwanted records starting between 00:00:00 and that 1 HOUR margin, so we'll need to filter them out;
  • Finally, there's also our rule about "crossing a minute".

In the end, maybe some of these restrictions (WHERE clauses) can be removed as redundant.

Now let's see if we can generate a table of timeslots:

MySQL:
  1. mysql> SELECT CONVERT(@a,DATETIME) AS timeslot
  2.     ->    FROM phone_calls_helper, (
  3.     ->       SELECT @a := DATE_SUB('2009-08-03 00:00:00', INTERVAL 1 MINUTE)) as init
  4.     ->    WHERE (@a := DATE_ADD(@a, INTERVAL 1 MINUTE)) <= '2009-08-03 23:59:59'
  5.     ->    LIMIT 1440;
  6. +---------------------+
  7. | timeslot            |
  8. +---------------------+
  9. | 2009-08-03 00:00:00 |
  10. | 2009-08-03 00:01:00 |
  11. ....
  12. | 2009-08-03 23:58:00 |
  13. | 2009-08-03 23:59:00 |
  14. +---------------------+
  15. 1440 rows in SET (0.01 sec)

This is the exciting part: we generate the timeslots using user variables, and this might only be possible to do in MySQL. Notice that I need to resort to a table, since I can't produce results out of thin air: its records are actually used as a product in my join to generate what I want. You can use any table, as long as it has at least 1440 records (the number of minutes in a day). But you should also keep in mind the kind of access being made to that table, because it can translate into unnecessary I/O if you're not careful:

MySQL:
  1. mysql> EXPLAIN SELECT CONVERT(@a,DATETIME) AS timeslot
  2.     ->    FROM phone_calls_helper, (
  3.     ->       SELECT @a := DATE_SUB('2009-08-03 00:00:00', INTERVAL 1 MINUTE)) as init
  4.     ->    WHERE (@a := DATE_ADD(@a, INTERVAL 1 MINUTE)) <= '2009-08-03 23:59:59'
  5.     ->    LIMIT 1440;
  6. +----+-------------+--------------------+--------+---------------+---------+---------+------+------+--------------------------+
  7. | id | select_type | table              | type   | possible_keys | key     | key_len | ref  | rows | Extra                    |
  8. +----+-------------+--------------------+--------+---------------+---------+---------+------+------+--------------------------+
  9. |  1 | PRIMARY     | <derived2>         | system | NULL          | NULL    | NULL    | NULL |    1 |                          |
  10. |  1 | PRIMARY     | phone_calls_helper | index  | NULL          | PRIMARY | 4       | NULL | 1440 | Using where; Using index |
  11. |  2 | DERIVED     | NULL               | NULL   | NULL          | NULL    | NULL    | NULL | NULL | No tables used           |
  12. +----+-------------+--------------------+--------+---------------+---------+---------+------+------+--------------------------+
  13. 3 rows in SET (0.00 sec)

In my case I can see that scanning the 1440 records is done on the PRIMARY key, which is great. You should choose a table whose key cache has a high probability of being in RAM, so the index scan doesn't go I/O bound either. Scanning 1440 PRIMARY KEY entries shouldn't be much of an I/O effort even on cold datasets, but if you can avoid it anyway, all the better.
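
As a side note, the phone_calls_helper table used here is never shown; any table with at least 1440 rows and a small PRIMARY KEY will do. A minimal sketch of one way to build such a table (hypothetical structure, matching the INT key with key_len 4 seen in the EXPLAIN above):

MySQL:
  CREATE TABLE phone_calls_helper (
    id INT NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (id)
  ) ENGINE=MyISAM;

  -- seed one row, then keep doubling until there are more than 1440 rows
  INSERT INTO phone_calls_helper VALUES (NULL);
  INSERT INTO phone_calls_helper SELECT NULL FROM phone_calls_helper; -- run this ~11 times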

At this moment you are probably starting to see the solution: whether the Optimizer chooses the first or the last table, it's always a win-win case, since the 1440 timeslots are RAM based: you can choose to think of 1440 timeslots being generated and then multiplied by the number of records that cross each timeslot (Rc), or you can choose to think of the 3 records that cross any timeslot and generate the timeslots that fall between the start/stop boundaries of each record (Tr). The mathematical result is the same either way:

timeslots per record vs. records per timeslot

Well, they might not represent the same effort. Remember that the timeslots are in memory, and seeking back and forth through them is less costly than seeking back and forth through possibly I/O-bound data. However, due to our "imaginary" way of generating the timeslots (which aren't made persistent anywhere by that subquery), we'd need to materialize them so that we could seek on them. But that would also give us the chance to optimize some other issues, like the CONVERT()s, the DATE_ADD()s, etc., and to scan only the timeslots that are crossed by a specific call, which is optimal. However, if you're going to GROUP BY the timeslot, you could instead use an index on the timeslot table and fetch each record that crosses each timeslot. Tough decision, eh? I have both solutions and I won't benchmark them here, but since the "timeslots per record" approach made me materialize the table, I'll leave it here as an example:

MySQL:
  1. mysql> CREATE TEMPORARY TABLE `phone_calls_helper2` (
  2.     ->   `tslot` DATETIME NOT NULL,
  3.     ->   PRIMARY KEY (`tslot`)
  4.     -> ) ENGINE=MEMORY DEFAULT CHARSET=latin1 ;
  5. Query OK, 0 rows affected (0.00 sec)
  6.  
  7. mysql> INSERT INTO phone_calls_helper2 SELECT CONVERT(@a,DATETIME) AS timeslot
  8.     ->    FROM phone_calls_helper, (
  9.     ->       SELECT @a := DATE_SUB('2009-08-03 00:00:00', INTERVAL 1 MINUTE)) as init
  10.     ->    WHERE (@a := DATE_ADD(@a, INTERVAL 1 MINUTE)) <= '2009-08-03 23:59:59'
  11.     ->    LIMIT 1440;
  12. Query OK, 1440 rows affected (0.01 sec)
  13. Records: 1440  Duplicates: 0  WARNINGS: 0

So now, the "timeslots per records" query should look like this:

MySQL:
  1. mysql> EXPLAIN SELECT tslot
  2.     -> FROM phone_calls FORCE INDEX(idx_stoptime)
  3.     -> JOIN phone_calls_helper2 FORCE INDEX (PRIMARY) ON
  4.     ->       tslot > CONCAT(LEFT(starttime,16),":00")
  5.     ->       AND tslot < CONCAT(LEFT(stoptime,16),":00")
  6.     ->
  7.     -> WHERE
  8.     ->       /* partition pruning */
  9.     ->       stoptime >= '2009-08-03 00:00:00'
  10.     ->       AND stoptime <= DATE_ADD('2009-08-03 23:59:59', INTERVAL 1 HOUR)
  11.     ->
  12.     ->       /* the real filtering:
  13.    /*>       FIRST: only consider call where start+stop boundaries are out of the
  14.    /*>              minute slot being analysed (seed.timeslot)
  15.    /*>       */
  16.     ->       AND TIMESTAMPDIFF(MINUTE, CONCAT(LEFT(starttime,16),":00"), CONCAT(LEFT(stoptime,16),":00")) >= 2
  17.     ->
  18.     ->       /* consequence of the broader interval that we had set to cope
  19.    /*>          with calls taking place beyond midnight
  20.    /*>       */
  21.     ->       AND starttime <= '2009-08-03 23:59:59'
  22.     -> GROUP BY tslot;
  23. +----+-------------+---------------------+-------+---------------+--------------+---------+------+------+------------------------------------------------+
  24. | id | select_type | table               | type  | possible_keys | key          | key_len | ref  | rows | Extra                                          |
  25. +----+-------------+---------------------+-------+---------------+--------------+---------+------+------+------------------------------------------------+
  26. |  1 | SIMPLE      | phone_calls         | range | idx_stoptime  | idx_stoptime | 8       | NULL |    4 | Using where; Using temporary; Using filesort   |
  27. |  1 | SIMPLE      | phone_calls_helper2 | ALL   | PRIMARY       | NULL         | NULL    | NULL | 1440 | Range checked for each record (index map: 0x1) |
  28. +----+-------------+---------------------+-------+---------------+--------------+---------+------+------+------------------------------------------------+
  29. 2 rows in SET (0.00 sec)

It's interesting to see «Range checked for each record (index map: 0x1)» for which the manual states:

MySQL found no good index to use, but found that some of indexes might be used after column values from preceding tables are known.

I can't explain why it wouldn't use the PRIMARY KEY - I tried using CONVERT() on the CONCAT()s to ensure the same data type, but no luck - but I'm probably safe, as it will probably end up using it. And this is the final result:

MySQL:
  1. mysql> SELECT tslot,count(1) FROM phone_calls FORCE INDEX(idx_stoptime) JOIN phone_calls_helper2 FORCE INDEX (PRIMARY) ON       tslot > CONVERT(CONCAT(LEFT(starttime,16),":00"),DATETIME)       AND tslot < CONVERT(CONCAT(LEFT(stoptime,16),":00"),DATETIME)  WHERE              stoptime >= '2009-08-03 00:00:00'       AND stoptime <= DATE_ADD('2009-08-03 23:59:59', INTERVAL 1 HOUR)               AND TIMESTAMPDIFF(MINUTE, CONCAT(LEFT(starttime,16),":00"), CONCAT(LEFT(stoptime,16),":00")) >= 2               AND starttime <= '2009-08-03 23:59:59' GROUP BY tslot;
  2. +---------------------+----------+
  3. | tslot               | count(1) |
  4. +---------------------+----------+
  5. | 2009-08-03 11:30:00 |        1 |
  6. | 2009-08-03 11:31:00 |        1 |
  7. | 2009-08-03 11:32:00 |        1 |
  8. | 2009-08-03 11:33:00 |        2 |
  9. | 2009-08-03 16:13:00 |        1 |
  10. | 2009-08-03 16:14:00 |        1 |
  11. | 2009-08-03 16:15:00 |        1 |
  12. | 2009-08-03 16:16:00 |        1 |
  13. | 2009-08-03 16:17:00 |        1 |
  14. | 2009-08-03 16:18:00 |        1 |
  15. | 2009-08-03 16:19:00 |        1 |
  16. +---------------------+----------+
  17. 11 rows in SET (0.00 sec)

Notice that I already did the GROUP BY, and that it forces a temporary table and a filesort, so it's better to be careful about how many records this will generate. In my (real) case the grouping is done on more phone_calls fields, so I can probably reuse the index later. As for post-execution cleanup, since the helper table is TEMPORARY, everything will be discarded automatically without further programming overhead.

I hope you can see that this solution opens up a whole range of "set"-based solutions to problems you might otherwise try to solve in a procedural way - which is the reason your solution might turn out to be painful.


Importing wikimedia dumps

September 28th, 2009 ntavares Posted in en_US, mysql, wikipedia 3 Comments »


We are trying to gather some particular statistics about Portuguese Wikipedia usage.
I volunteered to import the ptwiki-20090926-stub-meta-history dump, which is an XML file, and we'll be running some very heavy queries over it (it's my task to optimize them, somehow).

What I'd like to mention is that the importing mechanism seems to have been tremendously simplified. I remember testing a couple of tools in the past, without much success (or robustness). However, I gave mwdumper a try this time, and it seems to do the job. Note, however, that there have been schema changes since the last mwdumper release, so you should have a look at WMF Bug #18328: mwdumper java.lang.IllegalArgumentException: Invalid contributor, which carries a proposed fix that seems to work well. A special note on its memory efficiency: RAM is barely touched!

The xml.gz file is ~550MB, and was converted to a ~499MB sql.gz:

1,992,543 pages (3,458.297/sec), 15,713,915 revs (27,273.384/sec)

I've copied the schema from a running (updated!) MediaWiki to spare some time. The tables default to InnoDB, so let's simplify I/O a bit (I'm on my laptop). This will also speed up loading times a lot:

MySQL:
  1. mysql> ALTER TABLE `TEXT` ENGINE=Blackhole;
  2. Query OK, 0 rows affected (0.01 sec)
  3. Records: 0  Duplicates: 0  WARNINGS: 0
  4.  
  5. mysql> ALTER TABLE page DROP INDEX page_random, DROP INDEX page_len;
  6. Query OK, 0 rows affected (0.01 sec)
  7. Records: 0  Duplicates: 0  WARNINGS: 0
  8.  
  9. mysql> ALTER TABLE revision DROP INDEX rev_timestamp, DROP INDEX page_timestamp, DROP INDEX user_timestamp, DROP INDEX usertext_timestamp;
  10. Query OK, 0 rows affected (0.01 sec)
  11. Records: 0  Duplicates: 0  WARNINGS: 0

The important thing here is to avoid the larger I/O if you don't need it at all. The text table holds the page/revision content, which I'm not interested in at all. As for MySQL's configuration (and as a personal note, anyway), the following settings will give you great InnoDB speeds:

CODE:
  1. key_buffer = 512K
  2. sort_buffer_size = 16K
  3. read_buffer_size = 2M
  4. read_rnd_buffer_size = 1M
  5. myisam_sort_buffer_size = 512K
  6. query_cache_size = 0
  7. query_cache_type = 0
  8. bulk_insert_buffer_size = 2M
  9.  
  10. innodb_file_per_table
  11. transaction-isolation = READ-COMMITTED
  12. innodb_buffer_pool_size = 2700M
  13. innodb_additional_mem_pool_size = 20M
  14. innodb_autoinc_lock_mode = 2
  15. innodb_flush_log_at_trx_commit = 0
  16. innodb_doublewrite = 0
  17. skip-innodb-checksum
  18. innodb_locks_unsafe_for_binlog=1
  19. innodb_log_file_size=128M
  20. innodb_log_buffer_size=8388608
  21. innodb_support_xa=0
  22. innodb_autoextend_increment=16

Now I'd recommend uncompressing the dump so it's easier to trace the whole process if it's taking too long:

CODE:
  1. [myself@speedy ~]$ gunzip ptwiki-20090926-stub-meta-history.sql.gz
  2. [myself@speedy ~]$ cat ptwiki-20090926-stub-meta-history.sql | mysql wmfdumps
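
Once the load finishes, a quick sanity check against the figures mwdumper reported above is worthwhile (a sketch; note that COUNT(*) on InnoDB does a full scan, so the second query takes a while):

MySQL:
  SELECT COUNT(*) FROM page;      -- should be close to 1,992,543
  SELECT COUNT(*) FROM revision;  -- should be close to 15,713,915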

After some minutes on a dual quad-core Xeon 2.0GHz, and with 2.4 GB of datafiles, we are ready to rock! I will probably also need the user table later, which Wikimedia doesn't distribute, so I'll rebuild it now:

MySQL:
  1. mysql> ALTER TABLE user modify COLUMN user_id INT(10) UNSIGNED NOT NULL;
  2. Query OK, 0 rows affected (0.12 sec)
  3. Records: 0  Duplicates: 0  WARNINGS: 0
  4.  
  5. mysql> ALTER TABLE user DROP INDEX user_email_token, DROP INDEX user_name;
  6. Query OK, 0 rows affected (0.03 sec)
  7. Records: 0  Duplicates: 0  WARNINGS: 0
  8.  
  9. mysql> INSERT INTO user(user_id,user_name) SELECT DISTINCT rev_user,rev_user_text FROM revision WHERE rev_user <> 0;
  10. Query OK, 119140 rows affected, 4 WARNINGS (2 min 4.45 sec)
  11. Records: 119140  Duplicates: 0  WARNINGS: 0
  12.  
  13. mysql> ALTER TABLE user DROP PRIMARY KEY;
  14. Query OK, 0 rows affected (0.13 sec)
  15. Records: 0  Duplicates: 0  WARNINGS: 0
  16.  
  17. mysql> INSERT INTO user(user_id,user_name) VALUES(0,'anonymous');
  18. Query OK, 1 row affected, 4 WARNINGS (0.00 sec)

It's preferable to join on INTs rather than on a VARCHAR(255), which is why I reconstructed the user table. I actually removed the PRIMARY KEY, but I set it back after the process. What happens is that there are users that have been renamed and thus appear with the same id but different user_name values. The query to list them all is this:

MySQL:
  1. mysql> SELECT a.user_id,a.user_name FROM user a JOIN (SELECT user_id,count(1) as counter FROM user GROUP BY user_id HAVING counter > 1 ORDER BY counter desc) as b on a.user_id = b.user_id ORDER BY user_id DESC;
  2. ....
  3. 14 rows in SET (0.34 sec)
  4.  
  5. mysql> UPDATE user a JOIN (SELECT user_id,GROUP_CONCAT(user_name) as user_name,count(1) as counter FROM user GROUP BY user_id HAVING counter > 1) as b SET a.user_name = b.user_name WHERE a.user_id = b.user_id;
  6. Query OK, 14 rows affected (2.49 sec)
  7. Rows matched: 14  Changed: 14  WARNINGS: 0

The duplicates were removed manually (there are just 7). Now, let's start to go deeper. I'm not concerned about optimizing for now. What I wanted to run right away was the query I asked for on the Toolserver more than a month ago:

MySQL:
  1. mysql>  CREATE TABLE `teste` (
  2.     ->   `rev_user` INT(10) UNSIGNED NOT NULL DEFAULT '0',
  3.     ->   `page_namespace` INT(11) NOT NULL,
  4.     ->   `rev_page` INT(10) UNSIGNED NOT NULL,
  5.     ->   `edits` INT(1) UNSIGNED NOT NULL,
  6.     ->   PRIMARY KEY (`rev_user`,`page_namespace`,`rev_page`)
  7.     -> ) ENGINE=INNODB DEFAULT CHARSET=latin1 ;
  8. Query OK, 0 rows affected (0.04 sec)
  9.  
  10. mysql> INSERT INTO teste SELECT r.rev_user, p.page_namespace, r.rev_page, count(1) AS edits FROM revision r JOIN page p ON r.rev_page = p.page_id GROUP BY r.rev_user,p.page_namespace,r.rev_page;
  11. Query OK, 7444039 rows affected (8 min 28.98 sec)
  12. Records: 7444039  Duplicates: 0  WARNINGS: 0
  13.  
  14. mysql> CREATE TABLE edits_per_namespace SELECT STRAIGHT_JOIN u.user_id,u.user_name, page_namespace,count(1) as edits FROM teste JOIN user u on u.user_id = rev_user GROUP BY rev_user,page_namespace;
  15. Query OK, 187624 rows affected (3.65 sec)
  16. Records: 187624  Duplicates: 0  WARNINGS: 0
  17.  
  18. mysql> SELECT * FROM edits_per_namespace ORDER BY edits desc limit 5;
  19. +---------+---------------+----------------+--------+
  20. | user_id | user_name     | page_namespace | edits  |
  21. +---------+---------------+----------------+--------+
  22. |   76240 | Rei-bot       |              0 | 365800 |
  23. |       0 | anonymous     |              0 | 253238 |
  24. |   76240 | Rei-bot       |              3 | 219085 |
  25. |    1740 | LeonardoRob0t |              0 | 145418 |
  26. |  170627 | SieBot        |              0 | 121647 |
  27. +---------+---------------+----------------+--------+
  28. 5 rows in SET (0.09 sec)

Well, that's funny: Rei-artur's bot beats all the anonymous edits summed together on the main namespace :) I still need to set up a way of discarding the bots; they usually don't count for stats. I'll probably set a flag on the user table myself (see the sketch below), but this is enough to get us started.
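
One possible way to do that - a rough sketch based on a naming heuristic, since the proper source for bot status would be the user_groups table, which I haven't imported (the column name below is made up):

MySQL:
  ALTER TABLE user ADD COLUMN user_is_bot TINYINT(1) NOT NULL DEFAULT 0;

  -- crude heuristic: most bots follow the "...bot" naming convention;
  -- the second pattern catches renamed users whose names were GROUP_CONCAT'ed above
  UPDATE user SET user_is_bot = 1
  WHERE user_name LIKE '%bot' OR user_name LIKE '%bot,%';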


Listing miscellaneous Apache parameters in SNMP

September 28th, 2009 ntavares Posted in en_US, monitorização No Comments »


We recently had to look at a server which occasionally died from DoS. I was manually monitoring a lot of stuff when I noticed a persistent, BIG Apache worker popping up occasionally and then disappearing (probably being recycled). More rarely still, I caught two of them. This machine was being flooded with blog spam from a botnet. I did the math and soon found that if the currently allowed number of workers filled up the way this one did, the machine would start swapping like nuts. This seemed to be the cause.

After correcting the problem (many measures were taken, see below), I searched for Cacti templates that could evidence this behaviour. I found that neither ApacheStats nor the better Apache templates report Virtual Memory Size (VSZ) or Resident Set Size (RSS), which is explained by mod_status not reporting them either (and they fetch their data by querying mod_status).

So here's a simple way of monitoring these. Suppose there is a server running some Apache workers you want to monitor, and a machine to which you want to collect the data:

Edit your server's /etc/snmp/snmpd.conf

CODE:
  1. # .... other configuration directives
  2. exec .1.3.6.1.4.1.111111.1 ApacheRSS /usr/local/bin/apache-snmp-rss.sh

The '.1.3.6.1.4.1.111111.1' OID is a branch of '.1.3.6.1.4.1', which was assigned the meaning '.iso.org.dod.internet.private.enterprises' and is where an enterprise without an IANA-assigned code should place its OIDs. Anyway, you can use any sequence you want.

Create a file named /usr/local/bin/apache-snmp-rss.sh with following contents:

CODE:
  1. #!/bin/sh
  2. WORKERS=4
  3. ps h -C httpd -o rss | sort -rn | head -n $WORKERS

Notice that httpd is Apache's process name on CentOS; on Debian, e.g., that would be apache. Now give the script execution rights and go to your poller machine, from where you'll run the SNMP queries:

CODE:
  1. [root@poller ~]# snmpwalk -v 2c -c public targetserver .1.3.6.1.4.1.111111.1.101
  2. SNMPv2-SMI::enterprises.111111.1.101.1 = STRING: "27856"
  3. SNMPv2-SMI::enterprises.111111.1.101.2 = STRING: "25552"
  4. SNMPv2-SMI::enterprises.111111.1.101.3 = STRING: "24588"
  5. SNMPv2-SMI::enterprises.111111.1.101.4 = STRING: "12040"

So this is reporting the 4 most memory-consuming workers (that's the value specified in the WORKERS script variable) with their RSS usage (that's the output of '-o rss' in the script).

Now, graphing these values is a bit more complicated, especially because the graphs are usually created on a "fixed number of values" basis. That means that whenever your worker count increases or decreases, the script has to cope with it. That's why there is filtering going on in the script: first we reverse-order the workers by RSS size, then we take only the first 4 - this means you'll be listing the most memory-consuming ones. To avoid having your graphs ask for more values than the script generates, the WORKERS script variable should be adjusted to the minimum number of Apache workers you'll ever have on your system - that should be the httpd.conf StartServers directive.

Now for the graphs: this is the tricky part, as I find Cacti a little overcomplicated. However, you should be OK with this Netuality post. You should create individual data sources for each of the workers, and group the four in a Graph Template. This is the final result, after lots of struggling to get the correct values (I still haven't managed to get the right values, which are ~22KB):

[Graph: Apache workers' RSS usage in Cacti]

In this graph you won't notice the events I described at the beginning, because other measures were taken, including dynamic firewalling, Apache tuning, and auditing the blogs for comment and track/pingback permissions - we had a user wide open to spam, and that was when the automatic process of cleaning up blog spam was implemented. In any case, this graph will evidence future similar situations, which I hope are over.

I'll try to post the cacti templates as well, as soon as I recover from the struggling :) Drop me a note if you're interested.


Side-effect of mysqlhotcopy and LVM snapshots on active READ server

September 26th, 2009 ntavares Posted in en_US, monitorização, mysql, performance No Comments »


I just came across a particular feature of MySQL while inspecting a Query Cache that was being wiped out at backup time. Whenever you run FLUSH TABLES, the whole Query Cache gets flushed as well, even if you FLUSH TABLES for a single table. And guess what: mysqlhotcopy issues FLUSH TABLES so the tables get in sync on storage.

I actually noticed the problem with Query Cache on a server reporting the cache flush at a [too] round time (backup time).

[Graph: the query cache being flushed at backup time]

My first thought was «there's something wrong with mysqlhotcopy». But actually this is expected behaviour:

When no tables are named, closes all open tables, forces all tables in use to be closed, and flushes the query cache. With one or more table names, flushes only the given tables. FLUSH TABLES also removes all query results from the query cache, like the RESET QUERY CACHE statement.

I got curious about why the heck closing a table should invalidate the cache - maybe the "close table" mechanism is overly cautious?

Anyway, it's not mysqlhotcopy's fault. And since you should issue FLUSH TABLES for LVM snapshots as well, for consistency, that method is also affected, which makes both methods pretty counterproductive performance-wise on a single production server compared to mysqldump, unless you do a post-backup warmup. For that, it would be interesting to be able to dump the QC contents and reload them after the backup - which is not possible at the moment... bummer...
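
A quick way to watch the effect for yourself, on a server with the query cache enabled (a minimal sketch; 'some_table' is just a placeholder):

MySQL:
  SHOW GLOBAL STATUS LIKE 'Qcache_queries_in_cache';
  FLUSH TABLES some_table;  -- flushing even a single table...
  SHOW GLOBAL STATUS LIKE 'Qcache_queries_in_cache';  -- ...and the counter drops back to 0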


Automatically cleaning up SPAM Wordpress comments

September 6th, 2009 ntavares Posted in dri, en_US, mysql 2 Comments »


While doing maintenance on our blogs (Wordpress), I bumped into one that had fallen prey to an active botnet. It was receiving something like 5 or 6 spam comments per minute. It was nearly the only one under such harassment, so I suspect the botnet loved it for being open to commenting.

Since I activated reCaptcha I've been monitoring my "spam folder", and I'm really confident in its guesses, so I just wrote a STORED PROCEDURE to clean up these spam comments on a periodic basis, so I can do a sitewide cleanup:

MySQL:
  1. DELIMITER $$
  2.  
  3. DROP PROCEDURE IF EXISTS `our_blog_db`.`REMOVE_OLD_SPAM`$$
  4. CREATE PROCEDURE `our_blog_db`.`REMOVE_OLD_SPAM` ()
  5.     MODIFIES SQL DATA
  6.     COMMENT 'remove comentarios marcados como SPAM'
  7. BEGIN
  8.  
  9. DECLARE done BIT(1) DEFAULT FALSE;
  10. DECLARE commtbl VARCHAR(50);
  11. DECLARE comments_tbls CURSOR FOR SELECT TABLE_NAME
  12.     FROM information_schema.TABLES  
  13.     WHERE TABLE_SCHEMA = 'our_blog_db' AND TABLE_NAME LIKE '%comments';
  14. DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
  15.  
  16.  
  17. OPEN comments_tbls;
  18.  
  19. REPEAT
  20.     FETCH comments_tbls INTO commtbl;
  21.     SET @next_tbl = CONCAT('DELETE FROM our_blog_db.',commtbl,'
  22.         WHERE comment_approved = "spam"
  23.         AND comment_date_gmt < DATE_SUB(UTC_TIMESTAMP(), INTERVAL 15 DAY)');
  24.     PREPARE get_next_tbl FROM @next_tbl;
  25.     EXECUTE get_next_tbl;
  26.  
  27. UNTIL done END REPEAT;
  28.  
  29. CLOSE comments_tbls;
  30.  
  31.  
  32. END$$
  33.  
  34. DELIMITER ;

It's very easy to stick it into an EVENT, if you have MySQL 5.1 or higher, and have it do the daily cleanup automatically:

MySQL:
  1. CREATE EVENT `EV_REMOVE_OLD_SPAM` ON SCHEDULE EVERY 1 DAY STARTS '2009-08-01 21:00:00' ON COMPLETION NOT PRESERVE ENABLE
  2. COMMENT 'remove comentarios marcados como SPAM' DO
  3. BEGIN
  4.  
  5. SELECT GET_LOCK('remove_spam',5) INTO @remove_spam_lock;
  6.  
  7. IF @remove_spam_lock THEN
  8.     CALL REMOVE_OLD_SPAM();
  9.  
  10. END IF;
  11.  
  12. END
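
One detail worth checking is that the event scheduler is actually turned on, otherwise the event will never fire (an assumption about your setup: in MySQL 5.1 it ships disabled by default):

MySQL:
  SHOW VARIABLES LIKE 'event_scheduler';
  SET GLOBAL event_scheduler = ON;  -- requires the SUPER privilege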

Enjoy!


About cloud computing

September 2nd, 2009 ntavares Posted in en_US, linux driver, scaling No Comments »


Last Sunday I commented on Pedro's opinion about cloud computing and thought I could give my blog a reverse trackback :) Here it goes:

I think Pedro's message is important. Cloud marketing and buzz seem to be targeted at business decision-making personnel. However, no matter what they make it look like, this is a technical decision, and I really think that companies just following this marketing hype will eventually get caught by the contract's fine print. As a technician, I agree with Pedro about the enterprise [not] moving its core to the cloud, and that the prices are [still] overrated.

However, for medium-to-large traffic platforms, the kind that require a complex setup (meaning >4 machines), the cloud can be a solution very similar to what could be called Hardware-as-a-Service. Unavoidably, you have to move this kind of platform outside the core, even if it sits on a DMZ. Moreover, you don't usually want to mix corporate traffic with specific platforms (e.g. a multinational's CRM, the company's website, etc.). In this context, the cloud adds as much value as a regular hosting company would, IMO. No more, no less.

Having said that, I still think it has lots of potential for intermediate companies (and again, this lives in the technical scope) to provide HW solutions to customers by clicking and adding “resources” to a [kind of] shopping cart and then splitting them according to their needs. That's pretty much how Amazon seems to work – not the VPS/sliced hosting we are getting used to. I also see a benefit for large hosting companies (now these could be those VPS/sliced ones :) ), because they can turn their income into a periodic flow that matches their periodic costs. From this intermediary's perspective, one of the great features of this cloud thing is that these providers have set up quite heterogeneous provisioning systems, which a regular company can't handle – that is to say, you could set up a small/medium/full-blown pile of servers with a few clicks. Time also costs money.

Of course, this is all theoretical while the prices remain so high. It seemed even worse from my searches (although I confess I didn't explore in depth): with the cloud you will pay much more to have the same resources available that you can find on typical dedicated hosting servers – but it's also true that you rarely use them at 100%, so you may eventually get more cost/performance benefit in the near future (because when you buy or rent hardware it's very difficult to recover the cost).

My conclusion is that the cloud is trying to attract customers on hype, and that makes our technical advice more needed than ever: explain to the client how to plan, how to implement, how to scale, and where exactly the cloud fits in. To them, my recommendation is this: being on the cloud just because “it's cool”, or because it (seems) so simple that you won't need specialized IT staff, will eventually turn against you.
