Create a raw backup of a device with dd

I am sure there are lots of clever programs with pretty graphical interfaces for backing up memory cards or disks, but there is a very simple command-line tool for doing it called "dd". It simply copies an entire file to another as a complete mirror image. How is that useful? Remember that Linux sees every device as a file. This means the SD card you have plugged in can be copied away wholesale, and so can that USB disk or even your hard disk. All you need is a spare place to copy the device to and the dd command.

Before you start, make sure the device is not mounted or being used. The clone will still work, but the resulting image may be horribly corrupt.
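One quick way to check is to look in the kernel's mount table (a sketch; sdc1 is the example device name used below):

```shell
# Look for the device name in the kernel's mount table.
if grep -q sdc1 /proc/mounts; then
    echo "sdc1 is mounted - unmount it first"
else
    echo "sdc1 is not mounted"
fi
```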

run
df -h
and maybe run
ps aux | grep DEVICE_NAME
e.g.
ps aux | grep sdc1

to make sure the device is not being used by a process. If everything is ok, run

dd if=DEVICE of=/path/file

e.g.
dd if=/dev/sdc of=/pi-sdcard.dd

The file does not need to be called .dd; it is just a handy way to end the filename so you recognise it when you come back to it.
Once it is finished you can compress the file, as it will be the same size as the source: a 16 MB SD card will create a 16 MB file and a 20 GB USB disk will create a 20 GB file.

An image file like this can be investigated just like a real device, so

sfdisk -l /pi-sdcard.dd

will show what partitions are in the image

You restore an image by running a very similar command, just with the if and of reversed, so

dd if=/path/file of=DEVICE

e.g.
dd if=/pi-sdcard.dd of=/dev/sdc

Finally for storage you probably want to compress the file. For example using gzip –

gzip /pi-sdcard.dd
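If you are short on space for the full-size intermediate file, you can also pipe dd straight into gzip so the uncompressed image is never written out. A sketch, using a small file of zeros standing in for the device so it is safe to run anywhere (on the real device you would use if=/dev/sdc, as root):

```shell
# Make a 4 MB dummy "device" (a file of zeros standing in for /dev/sdc).
dd if=/dev/zero of=/tmp/fake-device bs=1M count=4 2>/dev/null

# Image the "device" and compress on the fly, so the uncompressed
# copy is never written to disk.
dd if=/tmp/fake-device bs=1M 2>/dev/null | gzip > /tmp/backup.dd.gz

# Restore by decompressing straight back into dd.
gunzip -c /tmp/backup.dd.gz | dd of=/tmp/restored bs=1M 2>/dev/null

# The restored copy is byte-for-byte identical to the original.
cmp /tmp/fake-device /tmp/restored && echo "identical"
```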

Stop the attacks with Fail2ban

There are lots of ways to protect servers. It is best to follow the way the army works, or how animals work in the wild. First and foremost you armour up: the firewall and good passwords are IT armour. Next you camouflage. Although it's less effective nowadays, I would still say put things like SSH on a different port. You will still get attacked (so set strict passwords) but it slows the attacks down. Finally, use a sentry to keep a look out. In server terms that means a program that watches for people trying to hack into the server and blocks them. Enter Fail2ban. It watches your log files and, if it sees consistent attacks from a certain host, blocks that host.

There are therefore two main things to configure in Fail2ban: the filter (which logs to follow and what to look for) and the action (what to do). These go in the jail configuration file. For Fail2ban I would recommend leaving the default configuration alone and creating a new file for this install.

Start by installing it.

apt update

apt install fail2ban

Fail2ban's configuration files are in /etc/fail2ban/. The default configuration is jail.conf; however, rather than changing that file you are advised to create a new configuration based on it called jail.local, so we create it by copying the default file.

cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

now edit /etc/fail2ban/jail.local

The first line to change is "ignoreip =". This allows you to define IP addresses to ignore in the logs. Obviously local networks and any remote servers or networks you control should go in here.

so change the lines under [DEFAULT] –

[DEFAULT]

# "ignoreip" can be an IP address, a CIDR mask or a DNS host
ignoreip = 127.0.0.1/8 192.168.0.0/24 8.8.8.8 [YOUR IPs HERE]

bantime = 360
maxretry = 3

Next we change the banaction parameter. This sets the action to apply if Fail2ban sees an active attack. I suggest at this point just going for route, which simply creates a dummy (unreachable) route to the offending IP address. Other actions use things like iptables and can block just the port being attacked, but for now stick to route.

Again, make sure you put your local IP range in "ignoreip" in case you lock yourself out!

#
# ACTIONS
#

banaction = route

Next set some other defaults.

# Default protocol
protocol = tcp

# Specify chain where jumps would need to be added in iptables-* actions
chain = INPUT

Now add some specific rules for specific services. For example –

[ssh]

enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3

So this says: keep watching /var/log/auth.log for failed SSH logins, and if you see three failures from a certain IP address, ban it.

If you have SSH enabled (on whatever port you want) I would recommend always enabling this as SSH is attacked a lot.

After a while running

ip route

might show lines such as

unreachable 89.252.246.3
unreachable 193.56.28.145
unreachable 193.56.28.193
unreachable 193.169.252.206
unreachable 212.70.149.5

meaning these hosts have tried to SSH in, failed three times, and so have been banned with an unreachable route.

Take a look under /etc/fail2ban/filter.d/ for other filters you can use. For email servers I suggest using the postfix or exim filters as they cover lots of attempted attacks. Just follow the ssh config in jail.local.

For example

[apache]

enabled = true
port = http,https
filter = apache-auth
logpath = /var/log/apache*/*error.log
maxretry = 6
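Or, for a Postfix mail server, something along these lines (a sketch; check /etc/fail2ban/filter.d/ for the exact filter name and adjust the log path to your distribution):

```
[postfix]

enabled = true
port = smtp,ssmtp,submission
filter = postfix
logpath = /var/log/mail.log
maxretry = 6
```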

Just run

service fail2ban restart

when done.

The Joy of NMAP

A program I use a huge amount on Linux is NMAP. It was first written 21 years ago! It can be used in a variety of ways, but its simplest use is a basic scan of what is on your network. I will document some very simple uses.

First of all see what is responding to pings on your local network.

nmap -sn 192.168.1.0/24 (I used to always use -sP. This was the old way and it still works)

gives a variety of hosts – e.g.

Nmap scan report for host1 (192.168.1.10)
Host is up (0.00021s latency).
Nmap scan report for host2 (192.168.1.11)
Host is up (0.00044s latency).

Very useful for a basic scan of what is out there. It simply uses ping to find each host, and ping may well be blocked by a local firewall, so some hosts might not appear.

The next use of nmap requires root permission. Let's see the MAC address of each host. We will run the same scan, but this time with sudo.

sudo nmap -sn 192.168.1.0/24

gives the MAC address of each host. You should see under each host –

MAC Address: AA:BB:CC:DD:EE:11 (Unknown)

Right, now let us look at a particular host and see what ports are open.

nmap 192.168.1.10

Starting Nmap 6.40 ( http://nmap.org ) at 2020-10-20 13:28 BST
Nmap scan report for host1 (192.168.1.10)
Host is up (0.0066s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
3128/tcp open squid-http

We can see that ports 22, 111 and 3128 are open: ssh, rpcbind and squid. You do not want SSH open to the whole world without other limitations, but this is a local scan, so fine.

What about seeing what other servers are listening to SSH? Again assuming we are in the local network 192.168.1.0/24 run

nmap -p 22 192.168.1.0/24

The lowercase -p is followed by the port you want to check, in this case 22 for SSH. You will see all hosts whether they have port 22 open or not. On hosts that are up but have no SSH server you will see "closed".

On hosts that have a firewall you will see "filtered", meaning the host is up but does not allow a check of whether the port is open or not.
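As an aside, if nmap is not installed on a machine you can do a crude single-port check with bash's built-in /dev/tcp pseudo-device. This is a bash feature, not part of nmap, and a sketch only (check_port is a made-up helper name); it cannot tell "closed" apart from "filtered" the way nmap can:

```shell
# Crude single-port check using bash's /dev/tcp (no nmap needed).
check_port() {
    host=$1
    port=$2
    if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port closed or filtered"
    fi
}

check_port 127.0.0.1 22
```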

It is often handy to check that your host is firewalled correctly by running nmap from a host outside your network against your external IP address, to check the right ports are blocked or open.

Finally, with root permissions you can get nmap to check a host and try to work out what OS it is running. This is not perfect but actually works better than you would think. Run it with

sudo nmap -O 192.168.1.11

You will see back something like

Running: Linux 3.X|4.X
OS CPE: cpe:/o:linux:linux_kernel:3 cpe:/o:linux:linux_kernel:4
OS details: Linux 3.2 - 4.0
Network Distance: 1 hop

Handy for that rogue host on the network that you have forgotten is plugged in!

Remote access to SystemrescueCD

There are lots of good rescue CDs out there. The one I have used for the last 12 years is System Rescue CD. Why? As well as lots of handy tools, the key feature I have used countless times is that it easily runs an SSH server. Imagine you need to fix a machine that is in a difficult place to work on. Imagine it is in a different office, even a different country! You can SSH in and try to fix it. I have used this disk to work on servers thousands of miles away. It works happily on workstations and laptops even if they do not run Linux. Importantly, it also works well on VMs that you can boot from CD.

Getting remote access is easy. Boot from the CD, choose a keymap (or take the default), set the root password with

passwd

and finally find the IP address you have been given with

ifconfig

Connect to it as root. From there you have full access to command-line tools to try to fix the machine or recover files from it. Often it is a case of mounting the disk and using rsync to copy everything off. For Windows networks the CD can run a Samba share (just look at /etc/smb.conf for more information).

Being a Linux-based CD, you can chroot into a mounted disk and sometimes run a service from the broken install, if it too is Linux based.

One very useful tool that only runs on the supplied X Windows desktop is gparted. This allows you to easily modify partitions, including allocating more space to them, as long as they are Ext3 or Ext4. As I say it only works under X, but if you have a Linux desktop you can use SSH to run graphical programs from one machine on another, so this works too. Simply connect to the machine running SystemRescue with ssh -X.

Go to the homepage at – https://www.system-rescue.org/ – for lots of really useful ideas on how to fix things for both Linux and Windows.

Emergency Reboot/Power off

I posted this before on an old blog but I thought I would put it here. Nowadays most machines run as VMs so it is not an issue to reboot them, but just in case you have a physical server that you do not have access to, or one that is stuck in a server room.

You can force an immediate reboot, as root, with the following (note this does not sync or cleanly unmount filesystems, so only use it when a normal reboot is impossible):

echo 1 > /proc/sys/kernel/sysrq
echo b > /proc/sysrq-trigger

For an immediate power off

echo 1 > /proc/sys/kernel/sysrq
echo o > /proc/sysrq-trigger

Using “Screen”

A command I use a lot is the screen command. What does this do? It creates a completely separate session on the machine you are on. You can leave it and come back to it, even if you log off the machine. To start, type screen. It will take you into the new session. If you type exit it will take you back to the shell you were in. If instead you simply want to disconnect from the session but leave it running, hit Ctrl + a then immediately hit the d key. You can reconnect by running screen -r.

This is a very useful way of using sessions for two reasons. Firstly, if you are on a wobbly connection (for example mobile) or a connection that times out. I use this on a server whose connection disconnects after 5 minutes of inactivity: fine for email or web browsing, but annoying if you are using SSH and need to go and do something else for a little while. If you are in one office, in the middle of working, and need to head to another office, you can disconnect and then reconnect from somewhere else using Ctrl + a, d and screen -r.

For example you can run top in a session, disconnect, and reconnect to find it still running. This brings us to the second use of the screen command: as a cheating way of running a daemon process. If you start a process running you can disconnect the session and leave it running. When you want to stop it, reconnect and stop it.

It is often useful to create multiple sessions to reconnect to. If you do that, running screen -r will show you a list of sessions. For example –

22188.pts-16.xen1 (01/10/20 13:35:41) (Detached)
22175.pts-16.xen1 (01/10/20 13:35:32) (Detached)

To reconnect to a particular session run

screen -r NAME

e.g.

screen -r 22188.pts-16.xen1

It is useful to name each of these sessions when you create them. You do this with screen -S NAME, for example screen -S database. Reconnect with screen -r database.

The only real difference between a screen session and a normal session is that when text spools off the screen you cannot just scroll back; instead hit Ctrl + a then Esc to enter copy mode, scroll with the arrow keys, and press Esc again to leave.

Keep your history…

I was looking through Twitter recently and saw a good post on the Bash history –

https://anto.online/guides/bash-history/

However I thought I would say how I use my Bash History…

Firstly, a warning on using Bash (or most shells, actually): leave them properly! I was recently showing someone some tips on using Linux and I told him to have a look at his history. He didn't have one. Why? Because once he finished his changes he just closed the SSH client; he did not log out first. Although this will not break anything, it is nasty, may leave processes lying around, and will not leave a useful personal history. Be sensible, and before just closing the SSH client type
exit
or hit Ctrl + d.

Right, back to the history. As the other post suggests, apart from just looking at your history with "history" it is handy to look for a particular command. For example, if you are looking for the last times you used the mount command, run
history | grep mount
Maybe you ran that a few days ago and it was a pretty easy command. But what if you ran it six months ago and it involved a lot of options?
The history goes back a long time, so it may still be there, but on a machine you use a lot maybe not. I would suggest therefore every so often saving your history to a file so you can go back to it. Run
history > /home/user/info_install.txt
to create a file "info_install.txt" in your home directory (assuming /home/user/ is your home directory) with all your commands in it. Maybe you are migrating to a new server and want to keep a note of what you did. You really should document things properly, but this is a start.
Create a file of your history regularly, in different files, and remember to copy them away. I often have a number of files, and I often type echo "COMMENT" after something useful, such as

apt install program
vi /etc/program.cfg
run program 2
service restart program
echo "Add config to /etc/program.cfg"
history > history_program.txt

Remember that when you redirect output to a file, > will overwrite whereas >> appends.
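A quick demonstration of the difference:

```shell
echo "first"  >  /tmp/demo.txt   # > creates (or overwrites) the file
echo "second" >> /tmp/demo.txt   # >> appends to it
cat /tmp/demo.txt                # prints: first, then second

echo "third"  >  /tmp/demo.txt   # > overwrites everything again
cat /tmp/demo.txt                # prints just: third
```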

Create a secure backup media with LUKS

Linux helps you create a very secure disk backup. This lets you make a safe backup of files that can be stored away or even given to friends! It will not be readable by anyone without the password.

I will use the example of a USB portable disk

First install the software

apt-get install cryptsetup

You can now create an encrypted partition with
cryptsetup -v luksFormat DISK-PARTITION

For example, our new USB disk appears on the computer as /dev/sdh1, so

cryptsetup -v luksFormat /dev/sdh1

You will now see

WARNING!
========
This will overwrite data on /dev/sdh1 irrevocably.

Now enter a passphrase; in this example we will use "newpassword".

Are you sure? (Type uppercase yes): YES
Enter passphrase:
Verify passphrase:

Command successful.

Make a directory /mnt/NAME. In this case /mnt/encrypt/

Now open the disk for mounting with

echo PASSWORD | /sbin/cryptsetup luksOpen DISK-PARTITION NAME

In this case

echo newpassword | /sbin/cryptsetup luksOpen /dev/sdh1 crypt

We have decided to use "crypt" as the mapped name, but this is arbitrary. The echo command simply feeds the password in without it prompting you. That is handy in a carefully protected script, but do not do it interactively on a shared machine: the password ends up in your shell history.

Make a new file system on this the FIRST time only!

mkfs.ext4 /dev/mapper/crypt

Use mount /dev/mapper/NAME /mnt/NAME/ so in this case

mount /dev/mapper/crypt /mnt/encrypt/

Once finished, make sure you unmount it (umount /mnt/NAME) and close the encrypted device:

umount /mnt/encrypt
/sbin/cryptsetup luksClose crypt

If you look at the USB disk the partition table now shows

Name Flags Part Type FS Type [Label] Size (MB)
sdh1 Boot Primary crypto_LUKS 7751.08

If you try and mount this you will get

mount: unknown filesystem type 'crypto_LUKS'

Using acme.sh to generate SSL certificates

Although the standard Let's Encrypt client "certbot" is very easy and runs well on a webserver, sometimes you want to generate certificates on an old server or a VM not running a web server. This script runs well on an old Ubuntu 10.04 VM I have at home.

First get the script –

git clone https://github.com/acmesh-official/acme.sh.git

You might be best installing it on your own machine using git and then copying it over to the machine that needs to generate the SSL certificates.

Now move it to a sensible place

cd acme.sh/

mv acme.sh /usr/local/bin

Install a required program

apt-get install socat

This needs to be done on the machine running the acme script, and obviously that command only works on Debian/Ubuntu! Other distros install socat their own way.

Also create a directory for the acme certificates to go in. I suggest –

mkdir /etc/acme

To create the first certificates stop your web server (if running) e.g.

/etc/init.d/apache2 stop

Now run acme.sh for the first time.

acme.sh --issue --standalone --home /etc/acme -d HOSTNAME.org.uk

or if there are multiple addresses for the one domain –

acme.sh --issue --standalone --home /etc/acme -d HOSTNAME.org.uk -d www.HOSTNAME.org.uk

or even

acme.sh --issue --standalone --home /etc/acme -d HOSTNAME -d mail.HOSTNAME -d otherservice.HOSTNAME

The --issue asks for a certificate for that domain, and --standalone starts a simple server listening on port 80. The --home option tells acme.sh to store the certificates under /etc/acme.

NOTE: see the end about using the --staging parameter.

It will create a bunch of directories under /etc/acme

The SSLCertificateFile

/etc/acme/hostname/hostname.cer

The SSLCertificateKeyFile

/etc/acme/hostname/hostname.key

and the SSLCertificateChainFile

/etc/acme/hostname/fullchain.cer
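An Apache virtual host would then reference them along these lines (a sketch; "hostname" is the placeholder used above):

```
SSLEngine on
SSLCertificateFile      /etc/acme/hostname/hostname.cer
SSLCertificateKeyFile   /etc/acme/hostname/hostname.key
SSLCertificateChainFile /etc/acme/hostname/fullchain.cer
```

Note that on Apache 2.4.8 and later, SSLCertificateChainFile is deprecated and the chain is usually appended to the file given in SSLCertificateFile instead.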

Now start the web server again. Eg –

/etc/init.d/apache2 start

You will want to run this script once a week or so –

/usr/local/bin/acme.sh --home /etc/acme --cron

You can run it now to test it without causing any problems.
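The easiest way to schedule it is a crontab entry along these lines (the 3am-every-Sunday schedule here is just an example):

```
0 3 * * 0 /usr/local/bin/acme.sh --home /etc/acme --cron > /dev/null
```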

Something you might like to use the first time (OK, a bit late now!) is the "--staging" parameter when you create your domain certificate. This uses the staging server and lets you try the service out without consequences. If you issue certificates too many times, Let's Encrypt will rate-limit you for a while. If you are just testing, USE THIS!