[Mod_log_sql] How many records is too many?

Travis Morgan mls at bigfiber.net
Mon May 16 22:48:46 EST 2005


It depends on whether you plan ahead and what you index.

I added indexes on several columns so that I can pull data out of the
table quickly, without queries taking forever even when the table grows
quite large.
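For example, something like this (just a sketch -- the table and column
names below assume mod_log_sql's default access_log schema; index
whatever columns your own reports actually filter or join on):

```sql
-- Speed up time-ranged and per-vhost queries against the log table.
-- 'access_log', 'time_stamp', and 'virtual_host' are assumptions based
-- on mod_log_sql's default schema; adjust to your own setup.
CREATE INDEX idx_time  ON access_log (time_stamp);
CREATE INDEX idx_vhost ON access_log (virtual_host);
```

Keep in mind each index slows INSERTs a little, so only add the ones
your SELECTs actually use.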

One of the systems I manage had an interesting hiccup, though. I ran
into the 4 GB size limit for the table and my Apache was segfaulting all
over the place. It took me a while to figure out what was going on
because there were no meaningful error messages. I actually figured it
was a problem with mod_log_sql because I had, coincidentally, just moved
from Apache 1.3.32 to 1.3.33 and thought maybe it wasn't compiling
against it. I only found the real problem after I had taken the time to
migrate the whole server to Apache 2 and then tried a manual insertion
of a log entry, with no better results. :P

        *** glibc detected *** corrupted double-linked list: 0x095719a0 ***
        [Sun May  1 02:18:52 2005] [notice] child pid 19964 exit signal Aborted (6)
        *** glibc detected *** corrupted double-linked list: 0x095719a0 ***
        *** glibc detected *** corrupted double-linked list: 0x095719a0 ***
and..

        [Mon May  2 13:44:31 2005] [notice] child pid 7015 exit signal Segmentation fault (11)
        [Mon May  2 13:44:32 2005] [notice] child pid 6218 exit signal Segmentation fault (11)
        [Mon May  2 13:44:32 2005] [notice] child pid 12995 exit signal Segmentation fault (11)
        [Mon May  2 13:44:32 2005] [notice] child pid 6290 exit signal Segmentation fault (11)
        [Mon May  2 13:44:32 2005] [notice] child pid 1152 exit signal Segmentation fault (11)
        [Mon May  2 13:44:33 2005] [notice] child pid 8841 exit signal

etc..

Anyway, you can avoid that problem if you want your tables to grow
larger than 4 GB; see:

http://dev.mysql.com/doc/mysql/en/table-size.html
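In short, for MyISAM tables the practical size cap comes from the
default row-pointer size, and you can raise it by telling MySQL to
expect a bigger table. A sketch (the table name is my assumption, and
AVG_ROW_LENGTH is just a rough guess at your average log-row size):

```sql
-- Make MySQL use larger internal row pointers so the MyISAM table can
-- exceed 4 GB. This rebuilds the table, so expect it to take a while
-- on a large log. 'access_log' and the numbers here are examples only.
ALTER TABLE access_log
    MAX_ROWS = 1000000000
    AVG_ROW_LENGTH = 250;
```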

If you have many clients that will be accessing the logs, or you do a
lot of stats generation on them, I'd suggest keeping the tables small so
you don't kill your server when you run queries.
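One simple way to keep them small is to rotate the table periodically,
say once a month. A sketch (mod_log_sql doesn't do this for you; the
table names are hypothetical, and CREATE TABLE ... LIKE needs MySQL
4.1+):

```sql
-- Archive the current log table and start a fresh, empty one.
-- RENAME TABLE swaps both names in one atomic step, so logging picks up
-- the new empty table without a gap.
CREATE TABLE access_log_new LIKE access_log;
RENAME TABLE access_log TO access_log_2005_04,
             access_log_new TO access_log;
```

Your stats jobs can then query the dated archive tables at leisure
without competing with live INSERTs.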

If you haven't seen it yet, I have a nice tool for grabbing logs from
the db and piping them into analyzers such as awstats or webalizer:

http://sourceforge.net/projects/mls2clf/

Cheers,
Travis Morgan
BigFiber.net


On Mon, 2005-05-16 at 16:28 -0700, Carl Edwards wrote:
> Hello,
> 
> Could someone give me a feel for how much data you store in
> one table?  Do you move your stats to other tables by month,
> year, or some threshold #?  There must be a tradeoff between
> not having indexes which makes the INSERTs faster and having
> them which makes the SELECTs faster, but at only 70k records
> my multiple table joins take about 3 min.
> 
> Thanks,
> /Carl
> _______________________________________________
> Download the latest version at http://www.outoforder.cc/projects/apache/mod_log_sql/
> 
> To unsubscribe send an e-mail to 
> mod_log_sql-unsubscribe at lists.outoforder.cc
