Archive

Archive for the ‘admin’ Category

firefox 31 + self-signed certificate = sec_error_ca_cert_invalid

July 23rd, 2014

If you are trying to access a site with a self-signed certificate using Firefox 31 (or later) and get an Issuer certificate is invalid error (sec_error_ca_cert_invalid), you have to disable the new mozilla::pkix certificate verification.

In about:config set

security.use_mozillapkix_verification = false
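
The same preference can be pre-seeded from the shell by appending it to user.js in your profile directory – a minimal sketch, assuming a Linux profile path (the <profile> part is a placeholder):

$ echo 'user_pref("security.use_mozillapkix_verification", false);' \
    >> ~/.mozilla/firefox/<profile>.default/user.js

Firefox re-applies user.js on every start, so the setting also survives anything that rewrites prefs.js.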


To find out more about mozilla::pkix and why your firefox just got so super secure and paranoid that it doesn’t allow you to access your own site without googling, see https://wiki.mozilla.org/SecurityEngineering/Certificate_Verification. I’m only wondering why they renamed it from insanity::pkix to mozilla::pkix – do they confess that ‘mozilla’ is slowly becoming a synonym for ‘insane’?-) Throwing such an error without any hint or possibility to add an exception (as usual) is IMHO insane – but who cares about power users today…

Update: As noted in the comments, this will not work in Firefox 33 (or later).

Categories: admin, how-to, security, time saver

Fixing pg_dump invalid memory alloc request size

May 19th, 2012

I’ve encountered an unusual problem while dumping one postgresql database. Every attempt to run pg_dump resulted in this:

sam@cerberus:~/backup$ pg_dump -v -c js1 >x
pg_dump: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
pg_dump: The command was: COPY job.description (create_time, job_id, content, hash, last_check_time) TO stdout;
pg_dump: *** aborted because of error

or this

sam@cerberus:~/backup$ pg_dump -v -c --inserts js1 >x
pg_dump: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
pg_dump: *** aborted because of error

A few weeks ago I encountered an almost fatal failure of two(!) disks in my RAID5 array – a really funny situation, and a rather uncomfortable way to find out why not having a monitoring and alerting system is a really bad idea ;)
Fortunately no serious data damage happened; postgresql seemed to recover from it without any problems and my databases worked fine – until I tried to perform a database dump.

And now comes the important question: how to find out which table rows are corrupted (when pg_filedump says everything is OK)?
And the answer is: use a custom function:

CREATE OR REPLACE FUNCTION
find_bad_row(tableName TEXT)
RETURNS tid
AS $find_bad_row$
DECLARE
    result tid;
    curs REFCURSOR;
    row1 RECORD;
    row2 RECORD;
    tabName TEXT;
    count BIGINT := 0;
BEGIN
    -- strip an optional schema prefix; hstore() below needs the bare table name
    SELECT reverse(split_part(reverse(tableName), '.', 1)) INTO tabName;

    OPEN curs FOR EXECUTE 'SELECT ctid FROM ' || tableName;
    LOOP
        FETCH curs INTO row1;
        EXIT WHEN row1.ctid IS NULL;

        -- expand the whole row into key/value pairs; reading a corrupted
        -- row makes this statement throw
        EXECUTE 'SELECT (each(hstore(' || tabName || '))).* FROM '
                || tableName || ' WHERE ctid = $1' INTO row2
                USING row1.ctid;

        -- remember the last row that expanded cleanly
        result := row1.ctid;
        count := count + 1;
        IF count % 100000 = 0 THEN
            RAISE NOTICE 'rows processed: %', count;
        END IF;
    END LOOP;
    CLOSE curs;

    -- whole table scanned without an error
    RETURN NULL;
EXCEPTION
    WHEN OTHERS THEN
        RAISE NOTICE 'LAST CTID: %', result;
        RAISE NOTICE '%: %', SQLSTATE, SQLERRM;
        RETURN result;
END
$find_bad_row$
LANGUAGE plpgsql;

It goes over all records in the given table and expands them one by one, which raises an exception as soon as a corrupted row is read. The exception notice also contains the CTID of the last correctly processed row; the next row with a higher CTID is the corrupted one (tuple numbers restart on each heap page, which is why the row after (78497,6) below turns out to be (78498,1)).
Like this:

js1=# select find_bad_row('job.description');
NOTICE: LAST CTID: (78497,6)
NOTICE: XX000: invalid memory alloc request size 18446744073709551613
find_bad_row
--------------
(78497,6)
(1 row)
js1=# select * from job.description where ctid = '(78498,1)';
ERROR: invalid memory alloc request size 18446744073709551613
js1=# delete from job.description where ctid = '(78498,1)';
DELETE 1
js1=# select find_bad_row('job.description');
NOTICE: rows processed: 100000
NOTICE: rows processed: 200000
NOTICE: rows processed: 300000
NOTICE: rows processed: 400000
NOTICE: rows processed: 500000
NOTICE: rows processed: 600000
NOTICE: rows processed: 700000
NOTICE: rows processed: 800000
find_bad_row
--------------

(1 row)


Note: this function requires the hstore postgresql extension – it is part of the postgresql distribution, but you may need to create it first:

CREATE EXTENSION hstore;


Records in this table are not that important and I can restore them from an external source, so I could simply delete the corrupted row. If you can’t, you will have to play directly with the data files – as described here – good luck :)
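
If you only need a dumpable database again and can afford to lose whole damaged pages, there is also a cruder built-in knob than hand-editing data files – a sketch, not the method from the linked article (zero_damaged_pages is superuser-only, and it only helps when a page header itself is broken, which may not cover every corruption of this kind):

SET zero_damaged_pages = on;
-- a full read now zeroes (i.e. discards) pages with invalid headers
SELECT count(*) FROM job.description;
SET zero_damaged_pages = off;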

Categories: admin, rdbms, time saver

Monitoring disk drives via collectd

April 17th, 2012

I’ve made two simple (but useful) disk drive monitoring scripts for the collectd exec plugin. You can find them at http://devel.dob.sk/collectd-scripts/.

smartmon.sh

This script monitors SMART attributes of the given disks using smartctl (smartmontools).

megamon.sh

This one monitors some interesting values of MegaRAID adapter physical drives using the MegaCli tool.


A description of how to use them can be found within the scripts themselves – enjoy ;)
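
For the plumbing itself: an exec plugin script is just a long-running process printing PUTVAL lines on stdout. A minimal sketch of the idea – this is not one of the scripts above, and the device /dev/sda, the awk field and the generic temperature type are illustrative assumptions:

#!/bin/sh
# collectd exports these two variables to exec plugin scripts
HOSTNAME="${COLLECTD_HOSTNAME:-$(hostname -f)}"
INTERVAL="${COLLECTD_INTERVAL:-60}"

while sleep "$INTERVAL"; do
    # grab one SMART attribute (raw value of Temperature_Celsius)
    temp=$(smartctl -A /dev/sda | awk '$2 == "Temperature_Celsius" { print $10 }')
    echo "PUTVAL \"$HOSTNAME/smartmon-sda/temperature\" interval=$INTERVAL N:$temp"
done

Such a script is then hooked into collectd.conf along these lines (the path is made up, and the user must not be root):

LoadPlugin exec
<Plugin exec>
    Exec "nobody" "/usr/local/bin/disk-temp.sh"
</Plugin>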

Categories: admin, devel, time saver

windows – exporting non-exportable private key

April 5th, 2012

If you are trying to export a windows certificate together with its private key, and the windows export wizard provides no such possibility (export with the private key is grayed out) because the private key has been installed as non-exportable (which is the default when importing, and which almost nobody changes), there is a great tool mimikatz that makes this possible.

Download it from http://blog.gentilkiwi.com/mimikatz.

And follow this procedure:

  1. crypto::patchcapi (or crypto::patchcng if previous did not work)
  2. crypto::listKeys (or crypto::listCertificates) to list keys/certificates
  3. crypto::exportKeys (or crypto::exportCertificates) to export what you want

That’s all. Exported keys will be protected with the password ‘mimikatz’ – you will need to enter it when importing the certificate again.
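
Side note: should you need the key material outside windows, a PFX exported this way can be unwrapped with openssl (the file name is an assumption; the password is the fixed ‘mimikatz’ from above):

$ openssl pkcs12 -in exported.pfx -passin pass:mimikatz -nodes -nocerts -out private.key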


Categories: admin, how-to, security, time saver

solution: subversion not working under redmine

July 5th, 2011

If you have problems using subversion under redmine, but the svn command itself works OK, the problem might be an incorrect home directory configured for the user running redmine (the apache user, the fcgi user id, etc.). Incorrect here means the home directory points to a file instead of a directory (e.g. /dev/null). One can reproduce this by setting HOME to point to a file.

Example:

$ HOME=/dev/null svn --version
svn: Can't open file '/dev/null/.subversion/servers': Not a directory
$ HOME=/dev/null svn --version --quiet
svn: Can't open file '/dev/null/.subversion/servers': Not a directory

The solution is pretty simple: change the user’s home directory configuration (via usermod), or make the redmine execution environment set the $HOME variable to point to some directory (e.g. HOME=/var/empty).
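
For example, assuming redmine runs under the apache user (the user name is an assumption – adjust it to your setup):

$ sudo usermod -d /var/empty apache

After restarting the web server, svn invoked from redmine no longer trips over $HOME.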

This misbehavior has been reported as a subversion defect.

Categories: admin, how-to, time saver