Homelab Cheatsheet
ZFS
Running a S.M.A.R.T. test
# Running a long test
smartctl -t long /dev/disk_name
# Running a short test
smartctl -t short /dev/disk_name
Checking progress of a S.M.A.R.T. test
smartctl -a /dev/disk_name | grep "progress" -i -A 1
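To keep an eye on it without re-running the command by hand, the check can be wrapped in watch (assumes the watch utility is installed):
# Re-run the progress check every 60 seconds
watch -n 60 'smartctl -a /dev/disk_name | grep -i "progress" -A 1'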
Test duration – how long a test is going to take (the output lists the recommended polling time for each test type)
smartctl -c /dev/disk_name
List S.M.A.R.T. results
smartctl -a /dev/disk_name
# Only list SMART attributes
smartctl -A /dev/disk_name
# Only view the self-test results
smartctl -l selftest /dev/disk_name
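For a quick pass/fail overview across several disks, smartctl's health check can be looped (a sketch; the /dev/ada? glob is an example and depends on your device naming):
# Print the overall SMART health self-assessment for each matching disk
for d in /dev/ada?; do smartctl -H "$d"; done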
List all pools
# Display information for all pools
zpool list
# Display statistics for a specific pool
zpool list pool_name
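zpool list can also print a custom set of columns with -o (property names as listed in the zpool man page):
# Show only the name, size, allocation, free space and health of each pool
zpool list -o name,size,allocated,free,health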
Check pool status
# Check status of all pools, or a single pool; -v prints verbose error details, -x shows only pools with problems
zpool status [-v] [-x] [pool_name]
Clear device error
zpool clear pool_name device_id
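For example, with a hypothetical pool named tank and device ada0:
zpool clear tank ada0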
Script to find the GPTID of disks in FreeNAS
#!/bin/sh
echo
echo "$(basename "$0") - Mounted Drives on $(hostname)"
cat /etc/version
date
echo
# Map each device name (e.g. ada0) to its GPTID, stripping the partition suffix
diskinfo="$(glabel status | tail -n +2 | awk '{split($3,a,"p"); print a[1],$1}')"
echo "+========+==========================+==================+============================================+"
echo "| Device | DISK DESCRIPTION         | SERIAL NUMBER    | GPTID                                      |"
echo "+========+==========================+==================+============================================+"
for d in $(echo "$diskinfo" | cut -d" " -f 1)
do
    diskinf=$(diskinfo -v "$d" | grep '# Disk ')
    diskdescription=$(echo "$diskinf" | grep '# Disk desc' | cut -d# -f 1 | xargs)
    diskserialno=$(echo "$diskinf" | grep '# Disk ident' | cut -d# -f 1 | xargs)
    diskgptid=$(echo "$diskinfo" | grep "^$d" | cut -d" " -f 2)
    printf "| %-6s | %-24s | %-16s | %-42s |\n" "$d" "$diskdescription" "$diskserialno" "$diskgptid"
    echo "+--------+--------------------------+------------------+--------------------------------------------+"
done
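To use the script, save it to a file (the name below is just an example), make it executable, and run it:
chmod +x disklist.sh
./disklist.sh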
How to test HDDs before using them in production. The ‘standard’ burn-in routine is a SMART short test, a SMART long test, badblocks, then another SMART long test. Let each step finish before starting the next.
# Time to finish 2-3 minutes on a 10TB disk
smartctl -t short /dev/adaX
# 16-17 hours on a 10TB disk
smartctl -t long /dev/adaX
# 5 days on a 10TB disk. WARNING: -w runs a destructive write test that erases all data on the disk
badblocks -ws -b 4096 /dev/adaX
smartctl -t long /dev/adaX
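The long tests run unattended for hours, so a small polling loop can wait for a self-test to finish before the next step starts (a sketch; checks every 10 minutes):
# Block until smartctl no longer reports a self-test in progress
while smartctl -a /dev/adaX | grep -qi 'in progress'
do
    sleep 600
done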
Rclone
Copy file from source to dest
rclone copy source:path dest:destpath
# Example - this copies all the content from the D: drive to secret_folder on the remote named secret
rclone copy D: secret:secret_folder
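Two handy flags here are --dry-run, which shows what would be copied without transferring anything, and -P/--progress, which shows live transfer stats:
# Preview the transfer first, then run it with live progress
rclone copy --dry-run D: secret:secret_folder
rclone copy -P D: secret:secret_folder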
Batch script for copying files from source to dest when the config has a password set (Windows and PowerShell). Courtesy of pg1.
# Generate your secure password to a disk file (for the purpose of this example, U:\rcpw.txt):
Read-Host -Prompt 'Enter rclone configuration password' -AsSecureString | ConvertFrom-SecureString | Out-File -FilePath U:\rcpw.txt
# Create a Powershell script (for the purpose of this example, C:\xx\rcpw.ps1) to return the decrypted password from the file you created in the previous step (notice how this file is referenced in the -Path parameter). Contents of C:\xx\rcpw.ps1:
(New-Object -TypeName PSCredential -ArgumentList @( 'user', ((Get-Content -Path U:\rcpw.txt) | ConvertTo-SecureString))).GetNetworkCredential().Password
# Test it:
rclone -vv --password-command "powershell C:\xx\rcpw.ps1" about Secretplex:
# Once this works, you can default the password-command parameter via setting the environment variable RCLONE_PASSWORD_COMMAND to:
powershell C:\xx\rcpw.ps1
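The same environment variable can also be set inside the batch file itself (a sketch; paths match the example above):
# In the .bat file: set the variable once, then rclone picks it up without --password-command
set RCLONE_PASSWORD_COMMAND=powershell C:\xx\rcpw.ps1
C:\rclone-v1.53.2\rclone.exe -v about Secretplex: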
# Use --password-command in your batch file
C:\rclone-v1.53.2\rclone.exe -v --password-command "powershell C:\rclone-v1.53.2\rcpw.ps1" copy A: Secretplex:A --log-file C:\rclone-v1.53.2\RcloneLogFile\RcloneA.txt
Elastic Stack
Test that Filebeat can connect to the output using the current settings
filebeat test output
Test filebeat configuration settings
filebeat test config
Verify logstash config
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
List Elasticsearch indices
# List indices
curl 'localhost:9200/_cat/indices?v'
# List indices with username and password
curl -u username:password 'localhost:9200/_cat/indices?v'
# Delete an index (shop here is an example index name)
curl -XDELETE localhost:9200/shop
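A related endpoint for a quick overall cluster check (standard Elasticsearch API):
# Show cluster health: status (green/yellow/red), node and shard counts
curl 'localhost:9200/_cluster/health?pretty'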
Netplan
Set static IP for host – example
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: no
      addresses: [192.168.1.222/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
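After editing the file (it normally lives under /etc/netplan/), validate and apply it:
# netplan try rolls back automatically if the new config locks you out
sudo netplan try
# Apply the configuration permanently
sudo netplan apply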
Snort
Test Snort config
snort -c /usr/local/etc/snort/snort.lua
Storage analyzer on Linux
To find what’s using storage on a Linux system, you can use various commands to analyze disk usage. Here are some commonly used commands:
df command: The df (disk free) command shows the disk space usage of file systems. It displays information about mounted filesystems: their sizes, used space, available space, and mount points.
df -h
The -h option makes the output human-readable, with sizes shown in KB, MB, GB, etc.
du command: The du (disk usage) command is used to estimate file and directory space usage.
To check the disk usage of the current directory:
du -h
To check the disk usage of a specific directory:
du -h /path/to/directory
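To find where the space actually went, du pairs well with sort (standard coreutils; the -h flag on sort needs GNU sort):
# Show the 20 largest items directly under the path, biggest first
du -sh /path/to/directory/* | sort -rh | head -20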
ncdu command (NCurses Disk Usage): ncdu is a more advanced disk usage analyzer with a text-based user interface. It provides a more detailed and interactive view of disk usage.
To install ncdu on Ubuntu/Debian:
sudo apt update
sudo apt install ncdu
To use ncdu:
ncdu /path/to/directory
lsof command (List Open Files): The lsof command can be used to list all open files and the processes that are using them. This can be useful to identify processes that might be holding onto large log files or other data.
sudo lsof | grep deleted
This command will list files marked as “(deleted)” that are still held open by processes. These files may not be visible in the file system but are still using disk space until the processes release them.
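lsof can also select such files directly: +L1 lists open files whose link count is below one, i.e. deleted but still held open:
sudo lsof +L1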
find command: The find command can be used to search for files based on various criteria, including size.
To find large files in a directory:
find /path/to/directory -type f -size +1G
This will list all files larger than 1GB in the specified directory.
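To also see how large each match is, find can hand the results to du and sort (same 1GB threshold as above):
# List files over 1GB with their sizes, largest first
find /path/to/directory -type f -size +1G -exec du -h {} + | sort -rh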