Fixing “has stopped” on Lollipop

Another fix for “The process has stopped unexpectedly…” that can (also) appear after you upgrade to Lollipop (Android 5.0) from some lower Android version without doing a factory reset.

Try uninstalling the “DRM Protected Content Storage” app via Settings -> Apps. In my case, this was a left-over from Android 4.3, and something was wrong with it, resulting in that super annoying informative dialog…

Btw. if that doesn’t help – and nothing else does either – you’ll have to play with logcat to discover the problem. “Simply” look for FATAL EXCEPTIONs.
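For the record, a minimal sketch of that logcat hunt (assumes adb is on your PATH; log.txt is just an example name):

```shell
# Dump the current device log once (the redirect creates log.txt
# even if no device is attached, so the grep below still runs)
adb logcat -d > log.txt 2>/dev/null || true

# Search the dump for crash headers, with a few lines of context
grep -A 5 "FATAL EXCEPTION" log.txt || echo "no fatal exceptions found"
```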

firefox 31 + self-signed certificate = sec_error_ca_cert_invalid

If you are trying to access a site with a self-signed certificate in Firefox 31 (or later) and get an Issuer certificate is invalid error (sec_error_ca_cert_invalid), you have to disable the new mozilla::pkix certificate verification.

In about:config set

security.use_mozillapkix_verification = false

To find out more about mozilla::pkix and why your firefox just got so super secure and paranoid that it doesn’t allow you to access your own site without googling. I’m only wondering why they renamed it from insanity::pkix to mozilla::pkix – do they confess that ‘mozilla’ is slowly becoming a synonym for ‘insane’ ?-) Throwing such an error without any hint or possibility to add an exception (as usual) is IMHO insane – but who cares about power users today…

Update: As noted in the comments, this no longer works in Firefox 33 (or later).

Update2: As noted in comment #29 and the referenced bugs, there seem to be (at least) 2 major cases where the new insane::pkix will refuse to accept a https site.

  1. Your internal CA certificate doesn’t specify CA:TRUE in the X509v3 Basic Constraints section.
  2. Your self-signed server certificate (the last one in the certificate chain) specifies CA:TRUE – which is the default for certificates generated by the pkitool script from the easy-rsa suite – and you have your CA certificate installed in FF.
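You can check which case applies by inspecting the Basic Constraints of your certificate with openssl. A sketch – the file names are examples, and the throwaway certificate is generated here only so the inspect command has something to look at:

```shell
# Generate a throwaway self-signed certificate (example names);
# recent openssl versions mark such certs CA:TRUE by default
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
    -days 1 -subj "/CN=test.example" 2>/dev/null

# Show the X509v3 Basic Constraints section of the certificate
openssl x509 -in cert.pem -noout -text | grep -A 1 "Basic Constraints"
```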

See also FF bug #1042889.

Update3: Thanks to the work of Kai Engert, there is a fix for this in Firefox 31 ESR, and hopefully the same fix also comes with Firefox 33.1.


Simple XSLT ifnull for numbers

An answer to the question of how to display zero instead of NaN in XSLT for a non-existing node containing number values (a kind of the ifnull or coalesce functions that are available in SQL).

You can do it the standard, expressive XSLT way, using a variable and <xsl:choose>, or abuse the built-in sum() function and do the whole thing in one line.

Standard way:

<!-- read the value -->
<xsl:variable name="val">
  <xsl:choose>
    <xsl:when test="//number[1]"><xsl:value-of select="//number[1]"/></xsl:when>
    <xsl:otherwise>0</xsl:otherwise>
  </xsl:choose>
</xsl:variable>
<!-- print the value out -->
<xsl:value-of select="$val"/>


Quick way:

<!-- read and printout -->
<xsl:value-of select="sum(//number[1])"/>


Both snippets print the value of the first node named number, or zero if the node is not present. Because sum() is used, it’s a good idea to limit the node-set to the first node only; otherwise you will get a sum of all existing number nodes.
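A quick way to try the one-liner out, sketched with xsltproc (assumed to be installed; the file names are examples):

```shell
# An input document with no <number> node at all (example data)
cat > data.xml <<'EOF'
<root><other>x</other></root>
EOF

# A minimal stylesheet using the sum() trick from above
cat > ifnull.xsl <<'EOF'
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <xsl:value-of select="sum(//number[1])"/>
  </xsl:template>
</xsl:stylesheet>
EOF

# Prints 0, since sum() over an empty node-set is zero
xsltproc ifnull.xsl data.xml
```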

Btw. do you know the best XSLT reference out there ? No ? Have a look at the ZVON XSLT reference.


Fixing pg_dump invalid memory alloc request size

I’ve encountered an unusual problem while dumping one postgresql database. Every attempt to run pg_dump resulted in this:

sam@cerberus:~/backup$ pg_dump -v -c js1 >x
pg_dump: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
pg_dump: The command was: COPY job.description (create_time, job_id, content, hash, last_check_time) TO stdout;
pg_dump: *** aborted because of error

or this

sam@cerberus:~/backup$ pg_dump -v -c --inserts js1 >x
pg_dump: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
pg_dump: *** aborted because of error

A few weeks ago I encountered an almost fatal failure of two(!) disks in my RAID5 array – a really funny situation, and a slightly uncomfortable way to find out why not having a monitoring and alerting system is a really bad idea 😉
Fortunately no serious data damage happened; postgresql seemed to recover from this without any problems and my databases worked fine – until I tried to perform a database dump.

And now comes the important question: how to find out which table rows are corrupted (pg_filedump says everything is OK) ?
The answer is to use a custom function:

CREATE OR REPLACE FUNCTION find_bad_row(tableName TEXT)
RETURNS tid
AS $find_bad_row$
DECLARE
  result tid;
  curs REFCURSOR;
  row1 RECORD;
  row2 RECORD;
  tabName TEXT;
  count BIGINT := 0;
BEGIN
  -- strip the schema part; hstore() needs the bare table name
  SELECT reverse(split_part(reverse($1), '.', 1)) INTO tabName;

  OPEN curs FOR EXECUTE 'SELECT ctid FROM ' || tableName;

  count := 1;
  FETCH curs INTO row1;

  WHILE row1.ctid IS NOT NULL LOOP
    result = row1.ctid;

    count := count + 1;
    FETCH curs INTO row1;

    -- expand the row into key/value pairs; this throws on a corrupted row
    EXECUTE 'SELECT (each(hstore(' || tabName || '))).* FROM '
      || tableName || ' WHERE ctid = $1' INTO row2
      USING row1.ctid;

    IF count % 100000 = 0 THEN
      RAISE NOTICE 'rows processed: %', count;
    END IF;
  END LOOP;

  CLOSE curs;
  RETURN row1.ctid;

EXCEPTION
  WHEN OTHERS THEN
    RAISE NOTICE 'LAST CTID: %', result;
    RAISE NOTICE '%: %', SQLSTATE, SQLERRM;
    RETURN result;
END
$find_bad_row$
LANGUAGE plpgsql;

It goes over all records in the given table, expanding them one by one – which raises an exception when a corrupted row is hit. The exception output also contains the CTID of the last correctly processed row; the next row, with the next higher CTID, is the corrupted one.
Like this:

js1=# select find_bad_row('public.description');
NOTICE: LAST CTID: (78497,6)
NOTICE: XX000: invalid memory alloc request size 18446744073709551613
(1 row)

js1=# select * from job.description where ctid = '(78498,1)';
ERROR: invalid memory alloc request size 18446744073709551613
js1=# delete from job.description where ctid = '(78498,1)';
js1=# select find_bad_row('job.description');
NOTICE: rows processed: 100000
NOTICE: rows processed: 200000
NOTICE: rows processed: 300000
NOTICE: rows processed: 400000
NOTICE: rows processed: 500000
NOTICE: rows processed: 600000
NOTICE: rows processed: 700000
NOTICE: rows processed: 800000

(1 row)


Note: this function requires the hstore postgresql extension – it is part of the postgresql distribution, but you may need to create it first with:

CREATE EXTENSION hstore;

Records in this table are not that important, and I can restore them from an external source – so I could simply delete the corrupted row. If you can’t, you will have to play directly with the data files – as described here – good luck :)