while looking for additional fragments with tcpdump
tcpdump -i any -nnvvXS '((ip[6:2] > 0) and (not ip[6] = 64))'
run a DNS query that produces a fragmented reply
dig ANY financialresearch.gov @208.67.222.222
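For reference, ip[6:2] is the 16-bit flags + fragment-offset field, so the filter above catches everything except packets with only the DF bit (0x40) set. A tighter variant that matches only actual fragments (MF bit set, or a non-zero 13-bit offset) should work too:
# match only real IPv4 fragments
tcpdump -i any -nnvvXS '((ip[6] & 0x20 != 0) or (ip[6:2] & 0x1fff != 0))'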
sudo su -
modprobe raid5
mdadm --stop /dev/md0
mdadm --stop /dev/md1
mdadm --stop /dev/md127
mdadm --stop /dev/md126
cat /proc/mdstat
mdadm --assemble --scan
cat /proc/mdstat
sfdisk -d /dev/sdc | sfdisk /dev/sdb
mdadm /dev/md126 -a /dev/sdb2
mdadm /dev/md127 -a /dev/sdb3
mkdir /mnt-boot
mount /dev/md126 /mnt
mount /dev/md127 /mnt-boot
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
mount --bind /mnt-boot /mnt/boot
chroot /mnt /usr/bin/bash
grub2-install /dev/sda
grub2-install /dev/sdb
grub2-install /dev/sdc
grub2-mkconfig > /boot/grub2/grub.cfg
watch cat /proc/mdstat
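Once the rebuild finishes it's worth confirming both arrays are clean before rebooting; a quick check along these lines (not part of the original session) does the job:
mdadm --detail /dev/md126 | grep -E 'State|Devices'
mdadm --detail /dev/md127 | grep -E 'State|Devices'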
mkdir unpacked_data
find ./ -name "backup_1805230147.tar*" | sort -V | xargs cat | tar --overwrite -xvf - -i -C ./unpacked_data/
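For context, a multi-part archive like this is normally produced with split; a minimal sketch (the source directory and the 1G chunk size are assumptions):
# produces backup_1805230147.tar.aa, .ab, ... which the find | sort | cat line reassembles
tar -cvf - ./data | split -b 1G - backup_1805230147.tar.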
tested from 11.5 to 17.5
Just an emergency fix to deploy while searching for the root cause of outgoing brute-force attacks:
iptables -I OUTPUT -p tcp -m multiport --dports 80 -m tcp -m string --algo bm --string "wp-login.php" -j DROP
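To actually hunt down the offending script, a matching LOG rule in front of the DROP helps (a sketch reusing the same string match; the log prefix is my own choice):
# log matching packets before they are dropped, rate-limited to keep the kernel log sane
iptables -I OUTPUT -p tcp --dport 80 -m string --algo bm --string "wp-login.php" -m limit --limit 5/min -j LOG --log-prefix "WP-BRUTE-OUT: "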
First run this query, replacing databasetoconvert with the name of the database you want to convert:
SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=InnoDB;') AS sql_statements
FROM information_schema.tables AS tb
WHERE table_schema = 'databasetoconvert'
  AND `ENGINE` = 'MyISAM'
  AND `TABLE_TYPE` = 'BASE TABLE'
ORDER BY table_name DESC
LIMIT 0, 10000;
then copy the output and run it against the database you want to convert
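Both steps can be collapsed into a single shell pipeline (a sketch; assumes credentials come from ~/.my.cnf and uses the same databasetoconvert placeholder):
mysql -N -B -e "SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=InnoDB;') FROM information_schema.tables WHERE table_schema = 'databasetoconvert' AND ENGINE = 'MyISAM' AND TABLE_TYPE = 'BASE TABLE';" | mysql databasetoconvert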
create a .php file with this content:
<?php
// block template-injection attempts that smuggle a {php} tag
// through the contact form's subject or message field
$checkvars = array('subject', 'message');
foreach ($checkvars as $checkvar) {
    if (isset($_REQUEST[$checkvar]) && strpos($_REQUEST[$checkvar], '{php}') !== false) {
        header('Location: http://www.interpol.int/');
        die('now');
    }
}
?>
and place it into the WHMCS /includes/hooks/ directory
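A quick way to check the hook fires (the URL and form field here are just placeholders, adjust them to your install):
# expect a 302 pointing at interpol.int when the filter triggers
curl -s -o /dev/null -w '%{http_code} %{redirect_url}\n' -d 'subject={php}test' https://example.com/whmcs/contact.php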
Most of the time there’s little time, sometimes there’s NO TIME!
A few days ago I had no time, and had to manipulate a badly exported database (2 million+ single MyISAM INSERT statements). Tuning mysqld was useless, INSERT DELAYED useless, increasing buffers useless… and so on. The import was taking hours (many hours) on the target box due to impressively high disk I/O!
So I just fired up a VMware instance with 32 GB of RAM, a 10 GB HDD and 8 CPU cores (of a Xeon L56xx) and did everything in RAM.
What was going to take hours on the target box took just 2 minutes on the VMware instance…
Then I did a proper “mysqldump --opt” and imported it back into the target box in just 20 seconds 😀
yum upgrade -y
wget -q -O - http://www.atomicorp.com/installers/atomic | sh
mkdir -p /var/lib/mysql && mount -v -t tmpfs -o size=24G none /var/lib/mysql
yum install mysql mysql-server -y
nano -w /etc/my.cnf
tune it up a little, in my case
thread_concurrency=16
was enough 🙂
service mysqld restart
mysql_secure_installation
and you are good to go!
import the bad export and after that export it making use of all the proper settings (extended inserts, locking and so on)… --opt handles all of them by default 🙂
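The whole round trip, as a sketch (file and database names are assumptions):
# on the tmpfs-backed VM: load the slow dump entirely in RAM
mysql -e 'CREATE DATABASE baddb'
time mysql baddb < bad_export.sql
# re-export with sane defaults (--opt implies extended inserts, locking and so on)
mysqldump --opt baddb > clean_export.sql
# the clean dump then imports in seconds on the real target box
scp clean_export.sql target-box: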
So yes… sometimes I make use of “the cloud” too :O
PS: I do the same (storage on a ramdisk) when I have to compile a Linux kernel.
This one command allows you to download the contents of a directory to a local directory without doing recursive searches
wget -np -N --cut-dirs=1 -A .dem ftp://user:password@host.tld/tf2/orangebox/tf/*
specifically this one downloads all the “.dem” files (-A .dem) (Team Fortress demo files) located in the remote “/tf2/orangebox/tf/” directory.
Files are saved into the current directory (--cut-dirs=1).
Additionally it makes use of timestamping (-N) so that files which already exist locally are not downloaded again on a subsequent run.
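That makes it cron-friendly; something like this (schedule, path and credentials are assumptions) keeps a local mirror fresh:
# fetch new demos every night at 04:00
0 4 * * * cd /home/user/demos && wget -np -N --cut-dirs=1 -A .dem 'ftp://user:password@host.tld/tf2/orangebox/tf/*' >/dev/null 2>&1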
<?php
// walk an IP range and print the reverse DNS (PTR) entry for every address
$start = '149.3.176.1';
$end   = '149.3.177.254';
$first_ip   = ip2long($start);
$last_ip    = ip2long($end);
$current_ip = $first_ip;
if ($last_ip <= $first_ip) {
    die('I saved you from an infinite loop.');
}
echo "IP\t\tREVERSE\n";
while ($current_ip < $last_ip) {
    echo long2ip($current_ip) . "\t\t" . gethostbyaddr(long2ip($current_ip)) . "\n";
    $current_ip++;
}
?>
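Typical usage (the script name is an assumption); note that gethostbyaddr() simply returns the IP unchanged when no PTR record exists:
php reverse-range.php | tee reverse-dns.txt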
This is what happens when you move a munin master node from CRON to CGI graphs:
😀