Archive

Archive for the ‘devel’ Category

OpenVPN OCC ping patch

May 28th, 2012 No comments

Hello,

I’ve created a simple patch for OpenVPN that implements OCC ping. The main difference between OCC ping and the existing OpenVPN ping is that an OCC ping is actively replied to by the other side of the communication channel. This way you can configure various per-client channel reliability policies:

  • Non-mobile clients might ping more frequently to ensure a stable connection, and reconnect as soon as possible in case of failure.
  • Mobile clients (e.g. Android phones) might ping less frequently to save battery.

OCC ping can be enabled with the (boolean) occ-ping directive, and it integrates with all existing ping settings (the ping/ping-restart/… directives) – OCC pings are simply sent instead of ‘normal’ pings.

Additionally, the occ-ping-compat directive enables backward-compatible OCC pings: instead of the newly implemented OCC_PING message, the already existing OCC_REQUEST message is sent, which the other side always answers with OCC_REPLY. This makes it possible to use the new behavior with peers running OpenVPN builds that don’t implement OCC ping.
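For illustration, a patched client configuration might contain something like this (the occ-ping and occ-ping-compat directives come from the patch; the timer values are arbitrary examples):

# non-mobile client – ping often, reconnect quickly on failure
occ-ping            # send OCC pings instead of normal pings
ping 10             # send a ping after 10 seconds of send inactivity
ping-restart 60     # restart if nothing is received for 60 seconds

# a mobile client would use much larger values, e.g.:
#ping 300
#ping-restart 900

# when the other side runs an unpatched OpenVPN:
#occ-ping-compat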

The patch can be found here: openvpn-2.2.2-occ-ping.patch.

Categories: devel Tags:

Fixing pg_dump invalid memory alloc request size

May 19th, 2012 No comments

I’ve encountered an unusual problem while dumping one PostgreSQL database. Every attempt to run pg_dump resulted in this:

sam@cerberus:~/backup$ pg_dump -v -c js1 >x
pg_dump: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
pg_dump: The command was: COPY job.description (create_time, job_id, content, hash, last_check_time) TO stdout;
pg_dump: *** aborted because of error

or this:

sam@cerberus:~/backup$ pg_dump -v -c --inserts js1 >x
pg_dump: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
pg_dump: *** aborted because of error

A few weeks ago I experienced an almost fatal failure of two(!) disks in my RAID5 array – a really funny situation, and a rather uncomfortable way to find out why not having a monitoring and alerting system is a really bad idea ;)
Fortunately no serious data damage occurred; PostgreSQL seemed to recover without any problems and my databases worked fine – until I tried to perform a database dump.

And now comes the important question: how do you find out which table rows are corrupted (pg_filedump says everything is OK)?
The answer is to use a custom function:

CREATE OR REPLACE FUNCTION
find_bad_row(tableName TEXT)
RETURNS tid
AS $find_bad_row$
DECLARE
    result tid;
    curs REFCURSOR;
    row1 RECORD;
    row2 RECORD;
    tabName TEXT;
    count BIGINT := 0;
BEGIN
    -- strip the schema prefix; the unqualified name is needed to
    -- reference the row value inside hstore() below
    SELECT reverse(split_part(reverse($1), '.', 1)) INTO tabName;

    OPEN curs FOR EXECUTE 'SELECT ctid FROM ' || tableName;

    count := 1;
    FETCH curs INTO row1;

    WHILE row1.ctid IS NOT NULL LOOP
        -- remember the ctid fetched in the previous round, then move on
        result := row1.ctid;
        count := count + 1;
        FETCH curs INTO row1;

        -- expanding the row through hstore() forces all columns to be
        -- read (and detoasted), which fails with an exception on a
        -- corrupted row
        EXECUTE 'SELECT (each(hstore(' || tabName || '))).* FROM '
                || tableName || ' WHERE ctid = $1' INTO row2
                USING row1.ctid;

        IF count % 100000 = 0 THEN
            RAISE NOTICE 'rows processed: %', count;
        END IF;
    END LOOP;

    CLOSE curs;
    RETURN row1.ctid;

EXCEPTION
    WHEN OTHERS THEN
        -- report the ctid of the row fetched just before the failing one
        RAISE NOTICE 'LAST CTID: %', result;
        RAISE NOTICE '%: %', SQLSTATE, SQLERRM;
        RETURN result;
END
$find_bad_row$
LANGUAGE plpgsql;

It goes over all records in the given table and expands them one by one – which raises an exception as soon as a corrupted row is expanded. The exception notice contains the CTID of the last row fetched before the failing one, so the corrupted row is the next one with a higher CTID – in the session below, (78497,6) was evidently the last tuple on its heap page, which makes the corrupted row (78498,1).
Like this:

js1=# select find_bad_row('job.description');
NOTICE: LAST CTID: (78497,6)
NOTICE: XX000: invalid memory alloc request size 18446744073709551613
find_bad_row
--------------
(78497,6)
(1 row)
js1=# select * from job.description where ctid = '(78498,1)';
ERROR: invalid memory alloc request size 18446744073709551613
js1=# delete from job.description where ctid = '(78498,1)';
DELETE 1
js1=# select find_bad_row('job.description');
NOTICE: rows processed: 100000
NOTICE: rows processed: 200000
NOTICE: rows processed: 300000
NOTICE: rows processed: 400000
NOTICE: rows processed: 500000
NOTICE: rows processed: 600000
NOTICE: rows processed: 700000
NOTICE: rows processed: 800000
find_bad_row
--------------

(1 row)


Note: this function requires the hstore PostgreSQL extension – it is part of the standard PostgreSQL distribution, but you may need to enable it with:

CREATE EXTENSION hstore;


The records in this table are not that important and I can restore them from an external source, so I could simply delete the corrupted row. If you can’t, you will have to work directly with the data files – as described here – good luck :)

Categories: admin, rdbms, time saver Tags:

Monitoring disk drives via collectd

April 17th, 2012 No comments

I’ve made two simple (but useful) disk drive monitoring scripts for the collectd exec plugin. You can find them at http://devel.dob.sk/collectd-scripts/.

smartmon.sh

This script monitors the SMART attributes of the given disks using smartctl (from smartmontools).

megamon.sh

This one monitors some interesting values of MegaRAID adapter physical drives using the MegaCli tool.


A description of how to use them can be found within the scripts themselves – enjoy ;)
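For orientation, hooking one of the scripts into collectd might look roughly like this – the user, script path and device argument below are placeholders, so check the comments inside the scripts for the exact invocation they expect:

LoadPlugin exec
<Plugin exec>
  # run smartmon.sh as an unprivileged user, watching /dev/sda
  Exec "nobody" "/usr/local/bin/smartmon.sh" "/dev/sda"
</Plugin>

Keep in mind that collectd’s exec plugin refuses to run scripts as root, so reading SMART data will typically require a sudo rule or similar for the monitoring user.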

Categories: admin, devel, time saver Tags:

log-malloc2 v0.2 released

February 9th, 2012 No comments

I’ve just released log-malloc2, a library for Linux that logs calls to memory allocation functions; it should be very helpful when trying to locate memory leaks. It can be used without recompiling the application, simply by preloading it using LD_PRELOAD.

Logged functions:

  • malloc
  • realloc
  • calloc
  • free
  • memalign

Every function call is logged with its parameters, the amount of allocated/deallocated memory, the total amount of allocated memory, a copy of the /proc/self/statm content and, if possible, a backtrace (call stack). Additionally, per-function call counts are logged and printed out when the application exits.

Typical usage (the log is written to file descriptor 1022):

LD_PRELOAD=../liblog-malloc2.so PROGRAM ARGUMENTS 1022>/tmp/malloc.log

A more complete description of the library, its usage, logging format and internals can be found in the README file.

The library can be downloaded from the project homepage. At the moment there is no script for analyzing the malloc log file included, but the logfile format is very simple and it should be no big problem to write your own that does exactly what you want.

You should note that using this library hurts the performance of applications that use the allocation functions intensively. Consider logging to a file located on a tmpfs (in-memory) filesystem to improve logging I/O throughput ;)
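For example, assuming /dev/shm is a tmpfs mount (as it is on most Linux distributions):

LD_PRELOAD=../liblog-malloc2.so PROGRAM ARGUMENTS 1022>/dev/shm/malloc.log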

Categories: devel, log-malloc2 Tags:

perl-XML::LibXML + global external entity loader

May 15th, 2011 No comments

Just created a quick patch against the Perl XML::LibXML module that adds support for a global external entity loader. Until now it was only possible to have a per-instance entity loader, but that is not enough if you want e.g. XML::LibXSLT to also use your entity loader for imports, and input callbacks don’t suit all your needs.

Usage is simple:

XML::LibXML::externalEntityLoader(\&_entity_handler);

where _entity_handler is a subroutine like the one described for the ext_ent_handler option in (http://search.cpan.org/dist/XML-LibXML/lib/XML/LibXML/Parser.pod#PARSER_OPTIONS).

Note: when you define a global entity loader, per-instance entity loaders are simply ignored.
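As a rough sketch (the handler contract follows the ext_ent_handler description linked above – it is called with the system identifier and the public identifier of the entity and returns the entity content as a string; the local-directory mapping here is purely made up):

use XML::LibXML;

# hypothetical resolver: serve external entities (e.g. DTDs) from a
# local directory instead of fetching them over the network
sub _entity_handler {
    my ($system_id, $public_id) = @_;
    (my $file = $system_id) =~ s{^.*/}{};   # keep only the file name
    open(my $fh, '<', "/usr/share/local-dtds/$file")
        or die "cannot resolve external entity: $system_id";
    local $/;                               # slurp the whole file
    return <$fh>;
}

XML::LibXML::externalEntityLoader(\&_entity_handler);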

You can download the patch from http://devel.dob.sk/patches/perl-XML::LibXML+global_entity_loader-0.1.diff. Just download XML::LibXML from CPAN, patch it and install. I’ll try to push it to CPAN if possible ;-)

Categories: devel, perl, xml Tags: ,