The latest version of the log-malloc2 library provides an (IMHO) unique little feature that makes it well suited for unit testing memory allocations: a simple API for inquiring about actual memory usage at runtime. This makes it possible to compare usage before entering and after leaving a function, to ensure that there are no memory leaks inside it.
The new version of log-malloc2 also provides helpful functions and scripts that make backtrace printing and analysis easy and convenient.
log-malloc2_util.h provides a few fully inlined functions:
- Pre-initializes the backtrace() function, to avoid any later memory allocations. Using this function is optional, but it is good to call it at program start if you want to generate a backtrace in a SIGSEGV signal handler (memory allocations in a SIGSEGV handler should be avoided if possible).
2. ssize_t log_malloc_backtrace(int fd)
- Prints the current backtrace to the given file descriptor, including the process memory map (/proc/self/maps) to make backtrace symbol conversion easier (this is needed because of ASLR).
- The generated output can be fed directly to the backtrace2line script, which will convert it to a human-readable stack trace (ASLR is supported).
Because both functions are inlined, there is no need to link the program against the log-malloc2 library, which also makes them a bit easier to use in a segfault (SIGSEGV) signal handler.
An answer to the question of how to display zero instead of NaN in XSLT for a non-existing node containing numeric values (a kind of the ifnull or coalesce functions available in SQL).
You can do it the standard, expressive XSLT way, using a variable and <xsl:choose>, or abuse the built-in sum() function and do the whole thing in one line.
<!-- read the value -->
<xsl:variable name="num">
  <xsl:choose>
    <xsl:when test="//number"><xsl:value-of select="//number"/></xsl:when>
    <xsl:otherwise>0</xsl:otherwise>
  </xsl:choose>
</xsl:variable>
<!-- print the value out -->
<xsl:value-of select="$num"/>

<!-- read and printout -->
<xsl:value-of select="sum(//number[1])"/>
Both snippets print the value of the first node named number, or zero if the node is not present. Because sum() adds up a whole node-set, it's a good idea to limit the node-set to the first node only, otherwise you will get the sum of all existing number nodes.
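For easy experimenting, the snippets can be wrapped into a minimal, complete stylesheet (the element name number and the text output method are just illustrative choices):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <!-- variant 1: variable + xsl:choose -->
    <xsl:variable name="num">
      <xsl:choose>
        <xsl:when test="//number"><xsl:value-of select="//number"/></xsl:when>
        <xsl:otherwise>0</xsl:otherwise>
      </xsl:choose>
    </xsl:variable>
    <xsl:value-of select="$num"/>
    <xsl:text>&#10;</xsl:text>
    <!-- variant 2: the sum() one-liner -->
    <xsl:value-of select="sum(//number[1])"/>
  </xsl:template>
</xsl:stylesheet>
```

Run it with e.g. xsltproc; with no number element in the input, both variants print 0 instead of NaN.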
Btw. do you know the best XSLT reference out there? No? Look at the ZVON XSLT reference.
I've created a simple patch for OpenVPN implementing OCC ping. The main difference between OCC ping and the existing OpenVPN ping is that an OCC ping is actively replied to by the other side of the communication channel. This way you can configure various per-client channel reliability policies:
- Non-mobile clients might ping more frequently to ensure stable connection, and reconnect as soon as possible in case of failure.
- Mobile clients (e.g. Android phones) might ping less frequently to save battery.
OCC ping can be enabled with the (boolean) occ-ping directive, and it integrates with all existing ping settings (the ping/ping-restart/… directives) – instead of ‘normal’ pings, OCC pings will simply be sent.
Additionally, the occ-ping-compat directive makes it possible to use backward-compatible OCC pings: instead of the newly implemented OCC_PING message, it sends the already existing OCC_REQUEST, which is always answered by the other side with OCC_REPLY. This makes it possible to use the new behavior with clients running OpenVPN versions that don't have OCC ping implemented.
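With the patch applied, a client config combining the new directives with the existing ping settings might look like this (the directive names come from the patch; the interval values are just an illustration):

```
# ping every 10 s, reconnect after 60 s without a reply
ping 10
ping-restart 60
# send reply-requiring OCC pings instead of plain pings
occ-ping
# fall back to OCC_REQUEST/OCC_REPLY for unpatched peers
occ-ping-compat
```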
Patch can be found here: openvpn-2.2.2-occ-ping.patch.
I've encountered an unusual problem while dumping one PostgreSQL database. Every attempt to run pg_dump resulted in this:
sam@cerberus:~/backup$ pg_dump -v -c js1 >x
pg_dump: Error message from server: ERROR:  invalid memory alloc request size 18446744073709551613
pg_dump: The command was: COPY job.description (create_time, job_id, content, hash, last_check_time) TO stdout;
pg_dump: *** aborted because of error
sam@cerberus:~/backup$ pg_dump -v -c --inserts js1 >x
pg_dump: Error message from server: ERROR:  invalid memory alloc request size 18446744073709551613
pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
pg_dump: *** aborted because of error
A few weeks ago I experienced an almost fatal failure of two(!) disks in my RAID5 array – a really funny situation, and a rather uncomfortable way to find out why not having a monitoring and alerting system is a really bad idea 😉
Fortunately no serious data damage happened; PostgreSQL seemed to recover from it without any problems and my databases worked fine – until I tried to perform a database dump.
And now comes the important question: how do you find out which table rows are corrupted (when pg_filedump says everything is OK)?
And the answer is: use a custom function:
CREATE OR REPLACE FUNCTION find_bad_row(tableName TEXT)
RETURNS tid AS $find_bad_row$
DECLARE
    result tid;
    curs REFCURSOR;
    row1 RECORD;
    row2 RECORD;
    tabName TEXT;
    count BIGINT := 0;
BEGIN
    -- strip the schema part; hstore() below needs the bare table name
    SELECT reverse(split_part(reverse($1), '.', 1)) INTO tabName;

    OPEN curs FOR EXECUTE 'SELECT ctid FROM ' || tableName;

    count := 1;
    FETCH curs INTO row1;
    WHILE row1.ctid IS NOT NULL LOOP
        result = row1.ctid;
        count := count + 1;
        FETCH curs INTO row1;

        -- expanding the row with hstore() forces every column to be read,
        -- so a corrupted value raises an exception here
        EXECUTE 'SELECT (each(hstore(' || tabName || '))).* FROM '
                || tableName || ' WHERE ctid = $1' INTO row2 USING row1.ctid;

        IF count % 100000 = 0 THEN
            RAISE NOTICE 'rows processed: %', count;
        END IF;
    END LOOP;
    CLOSE curs;

    RETURN row1.ctid;
EXCEPTION
    WHEN OTHERS THEN
        RAISE NOTICE 'LAST CTID: %', result;
        RAISE NOTICE '%: %', SQLSTATE, SQLERRM;
        RETURN result;
END
$find_bad_row$ LANGUAGE plpgsql;
It goes over all records in the given table and expands them one by one – which raises an exception if some expansion fails. The exception notice also contains the CTID of the last correctly processed row; the next row, with the next higher CTID, is the corrupted one.
js1=# select find_bad_row('public.description');
NOTICE:  LAST CTID: (78497,6)
NOTICE:  XX000: invalid memory alloc request size 18446744073709551613
 find_bad_row
--------------
 (78497,6)
(1 row)

js1=# select * from job.description where ctid = '(78498,1)';
ERROR:  invalid memory alloc request size 18446744073709551613

js1=# delete from job.description where ctid = '(78498,1)';
DELETE 1

js1=# select find_bad_row('job.description');
NOTICE:  rows processed: 100000
NOTICE:  rows processed: 200000
NOTICE:  rows processed: 300000
NOTICE:  rows processed: 400000
NOTICE:  rows processed: 500000
NOTICE:  rows processed: 600000
NOTICE:  rows processed: 700000
NOTICE:  rows processed: 800000
 find_bad_row
--------------

(1 row)
Note: this function requires the hstore PostgreSQL extension – it is part of the PostgreSQL distribution, but you may need to create it first:
CREATE EXTENSION hstore;
The records in this table are not that important, and I can restore them from an external source – so I could simply delete the corrupted row. If you can't, you will have to work directly with the data files – as described here – good luck!