Check if the network is allowing inbound UDP fragments

while looking for additional fragments with tcpdump

tcpdump -i any -nnvvXS  '((ip[6:2] > 0) and (not ip[6] = 64))'

run a DNS query that produces a fragmented reply

dig ANY financialresearch.gov @208.67.222.222
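The BPF filter above is dense, so here is a Python sketch of what it actually tests. Bytes 6-7 of the IPv4 header carry the flags and fragment offset; `ip[6:2] > 0` matches anything with those bits set (which would also catch unfragmented packets that merely have DF set), and `not ip[6] = 64` excludes exactly that DF-only case. The 20-byte header below is hand-made for illustration, not captured traffic.

```python
import struct

def is_fragment(ip_header: bytes) -> bool:
    """True when the packet is part of a fragmented datagram."""
    flags_frag = struct.unpack('!H', ip_header[6:8])[0]
    more_fragments = bool(flags_frag & 0x2000)   # MF flag
    frag_offset = flags_frag & 0x1FFF            # offset in 8-byte units
    return more_fragments or frag_offset > 0

header = bytearray(20)                   # zeroed 20-byte IPv4 header stub
header[6:8] = struct.pack('!H', 0x4000)  # DF set, offset 0: not a fragment
print(is_fragment(bytes(header)))        # False
header[6:8] = struct.pack('!H', 0x2000)  # MF set: first/middle fragment
print(is_fragment(bytes(header)))        # True
header[6:8] = struct.pack('!H', 0x00B9)  # nonzero offset: trailing fragment
print(is_fragment(bytes(header)))        # True
```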

CentOS 7 RAID5 rebuild + GRUB from a live Ubuntu 18.04 rescue env

sudo su -
 
load the raid5 kernel module (it is provided by raid456)
modprobe raid456
mdadm --stop /dev/md0
mdadm --stop /dev/md1
stop any other active md devices (you can see them in /proc/mdstat)
mdadm --stop /dev/md127
mdadm --stop /dev/md126
check if they are all stopped
cat /proc/mdstat
run a new scan
mdadm --assemble --scan
check if the raid5 is properly detected
cat /proc/mdstat
copy the partition table from the working disk (sdc here) to the new disk (sdb here)
sfdisk -d /dev/sdc | sfdisk /dev/sdb
add the devices to their corresponding arrays (start with the /boot one if you have a dedicated boot one)
mdadm /dev/md126 -a /dev/sdb2
mdadm /dev/md127 -a /dev/sdb3
now let's get ready to fix grub on the new disk, in our example md126 is /boot, md127 is /
mkdir /mnt-boot
mount /dev/md127 /mnt
mount /dev/md126 /mnt-boot
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
mount --bind /mnt-boot /mnt/boot
chroot /mnt
you may get an error when doing the chroot if the shell lives at a different path; this one works for a default centos install
chroot /mnt /usr/bin/bash
install grub on all the disks
grub2-install /dev/sda
grub2-install /dev/sdb
grub2-install /dev/sdc
regenerate grub.cfg
grub2-mkconfig > /boot/grub2/grub.cfg
now monitor the progress of the array rebuild process and reboot once completed
watch cat /proc/mdstat
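If you would rather script the monitoring than watch it, a small Python sketch can pull the rebuild percentage out of /proc/mdstat. The sample text below is made up to mimic a typical recovery line; on a real box you would read open('/proc/mdstat').read() instead.

```python
import re

def recovery_progress(mdstat_text):
    """Return {md_device: percent} for arrays currently rebuilding."""
    progress = {}
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r'^(md\d+)\s*:', line)
        if m:
            current = m.group(1)          # remember which array we're in
        m = re.search(r'recovery\s*=\s*([\d.]+)%', line)
        if m and current:
            progress[current] = float(m.group(1))
    return progress

sample = """\
md127 : active raid5 sdb3[3] sdc3[1] sda3[0]
      976224256 blocks level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [=====>...............]  recovery = 27.1% (132412/976224256) finish=12.3min
"""
print(recovery_progress(sample))   # {'md127': 27.1}
```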

Plesk unpack split backups

mkdir unpacked_data
find ./ -name "backup_1805230147.tar*" | sort -V | xargs cat | tar --overwrite -xvf - -i -C ./unpacked_data/

tested on Plesk 11.5 through 17.5
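The `sort -V` in the pipeline matters: Plesk numbers the parts, and a plain lexicographic sort would feed part 10 to tar before part 2, corrupting the concatenated stream. A small Python sketch of the same natural ordering (the filenames are invented for the example):

```python
import re

def version_key(name: str):
    # split into text/number runs so numeric parts compare numerically,
    # roughly what GNU `sort -V` does for these names
    return [int(p) if p.isdigit() else p for p in re.split(r'(\d+)', name)]

parts = ['backup_1805230147.tar.10', 'backup_1805230147.tar',
         'backup_1805230147.tar.2', 'backup_1805230147.tar.1']
print(sorted(parts))                    # lexicographic: .10 before .2
print(sorted(parts, key=version_key))   # natural order: tar, .1, .2, .10
```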

Blocking outgoing WordPress bruteforces

Just an emergency fix to deploy while searching for the root cause of outgoing brute-force attacks

iptables -I OUTPUT -p tcp --dport 80 -m string --algo bm --string "wp-login.php" -j DROP

Convert PrestaShop tables from MyISAM to InnoDB using phpMyAdmin

First run this query, replacing databasetoconvert with the name of the database you want to convert

SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=InnoDB;') AS sql_statements 
FROM information_schema.tables AS tb 
WHERE table_schema = 'databasetoconvert' 
AND `ENGINE` = 'MyISAM' 
AND `TABLE_TYPE` = 'BASE TABLE' 
ORDER BY table_name DESC LIMIT 0, 10000 ;

then copy the output and run it against the database you want to convert
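If you would rather not copy-paste through phpMyAdmin, the same two steps are easy to script. The sketch below only builds the statements; the table list would come from the information_schema query above via whatever MySQL driver you use (PyMySQL, mysqlclient, ...), and the ps_* table names here are just made-up examples.

```python
def alter_statements(table_names):
    """Build the same ALTER TABLE statements the SELECT CONCAT query prints."""
    return ['ALTER TABLE {} ENGINE=InnoDB;'.format(t) for t in table_names]

# example table names, as they would come back from information_schema.tables
myisam_tables = ['ps_cart', 'ps_orders', 'ps_product']
for stmt in alter_statements(myisam_tables):
    print(stmt)
```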

WHMCS {php}base64decode tickets

create a .php file with this content:

<?php
// drop ticket submissions carrying the {php} template-injection payload
$checkvars = array('subject', 'message');
foreach ($checkvars as $checkvar) {
	if (isset($_REQUEST[$checkvar]) && strpos($_REQUEST[$checkvar], '{php}') !== false) {
		header('Location: http://www.interpol.int/');
		exit;
	}
}
?>

and place it into the whmcs /includes/hooks/ directory

Processing MySQL dumps in a hurry (convert single inserts to extended inserts)

Most times there's little time, sometimes there's NO TIME!

A few days ago I had no time and had to manipulate a badly exported database (2 million+ single-row MyISAM INSERT statements). Tuning mysqld was useless, INSERT DELAYED was useless, increasing buffers was useless… and so on. The import was taking hours (many hours) on the target box due to impressively high disk I/O!

So I just fired up a VMware instance with 32 GB of RAM, a 10 GB HDD and 8 CPU cores (of a Xeon L56xx) and did everything in RAM.
What was going to take hours on the target box took just 2 minutes on the VMware instance…
Then I did a proper "mysqldump --opt" and imported it back into the target box in just 20 seconds 😀

yum upgrade -y
wget -q -O - http://www.atomicorp.com/installers/atomic | sh
mkdir -p /var/lib/mysql && mount -v -t tmpfs -o size=24G none /var/lib/mysql
yum install mysql mysql-server -y
nano -w /etc/my.cnf

tune it up a little, in my case

thread_concurrency=16

was enough 🙂

service mysqld restart
mysql_secure_installation

and you are good to go!

import the bad export, then export it again making use of all the proper settings (extended inserts, locking and so on) … --opt enables all of them by default 🙂
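For the curious, the extended-insert rewrite that --opt gives you is simple enough to sketch in Python. This toy version merges consecutive single-row INSERTs for the same table; it assumes one statement per line and will not cope with `);` inside quoted values, so treat it as an illustration, not a dump tool.

```python
import re

def extend_inserts(dump_lines, batch=1000):
    """Merge consecutive "INSERT INTO t VALUES (...);" lines per table."""
    out, prefix, values = [], None, []

    def flush():
        if values:
            out.append(prefix + ' VALUES ' + ','.join(values) + ';')
        values.clear()

    pat = re.compile(r'^(INSERT INTO \S+) VALUES (\(.*\));$')
    for line in dump_lines:
        m = pat.match(line.strip())
        if m and m.group(1) == prefix and len(values) < batch:
            values.append(m.group(2))         # same table: extend the batch
        elif m:
            flush()                           # new table or full batch
            prefix = m.group(1)
            values.append(m.group(2))
        else:
            flush()                           # non-INSERT line: pass through
            prefix = None
            out.append(line)
    flush()
    return out

dump = ["INSERT INTO t VALUES (1,'a');",
        "INSERT INTO t VALUES (2,'b');",
        "INSERT INTO t VALUES (3,'c');"]
print(extend_inserts(dump))
# one statement: INSERT INTO t VALUES (1,'a'),(2,'b'),(3,'c');
```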

So yes… sometimes I make use of "the cloud" too :O

PS: I do the same (storage on ramdisk) when I have to compile a linux kernel.

wget FTP download of a specific directory's content, no recursion

This single command downloads the content of a remote directory to a local one without doing recursive searches

wget -np -N --cut-dirs=1 -A .dem ftp://user:password@host.tld/tf2/orangebox/tf/*

specifically, this one downloads all the ".dem" files (-A .dem) (Team Fortress demo files) located in the remote "/tf2/orangebox/tf/" directory.
Files are saved into the current directory (--cut-dirs=1)

Additionally it makes use of timestamping (-N) so that already-downloaded files are skipped on subsequent runs.

Map a network: PTR / reverse DNS values [php]

<?php
 
$start = '149.3.176.1';
$end = '149.3.177.254';
 
$first_ip = ip2long($start);
$last_ip = ip2long($end);
$current_ip = ip2long($start);
 
if ($last_ip <= $first_ip) {
	die('I saved you from an infinite loop.');
}
 
echo "IP\t\tREVERSE\n";
while ($current_ip < $last_ip){
	echo long2ip($current_ip)."\t\t".gethostbyaddr(long2ip($current_ip))."\n";
	$current_ip++;
}
 
?>