Homelab Cheatsheet

== ZFS ==

Running a S.M.A.R.T. test

<syntaxhighlight lang="bash">
# Run a long self-test
smartctl -t long /dev/disk_name
# Run a short self-test
smartctl -t short /dev/disk_name
</syntaxhighlight>

Checking the progress of a S.M.A.R.T. test

<syntaxhighlight lang="bash">
smartctl -a /dev/disk_name | grep "progress" -i -A 1
</syntaxhighlight>

Test procedure - how long a test is going to take

<syntaxhighlight lang="bash">
smartctl -c /dev/disk_name
</syntaxhighlight>

List S.M.A.R.T. results

<syntaxhighlight lang="bash">
smartctl -a /dev/disk_name
# Only list SMART attributes
smartctl -A /dev/disk_name
# Only view self-test results
smartctl -l selftest /dev/disk_name
</syntaxhighlight>

List all pools

<syntaxhighlight lang="bash">
# Display all information for all pools
zpool list
# Display statistics for a specific pool
zpool list pool_name
</syntaxhighlight>
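
For a per-vdev breakdown and live I/O statistics, the standard <code>zpool list -v</code> and <code>zpool iostat</code> subcommands are handy (assuming a pool named pool_name):

<syntaxhighlight lang="bash">
# Capacity and health broken down per vdev
zpool list -v pool_name
# Per-vdev I/O statistics, refreshed every 5 seconds
zpool iostat -v pool_name 5
</syntaxhighlight>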


Check pool status

<syntaxhighlight lang="bash">
# Check status of all pools, or of a single pool; -v lists errors verbosely, -x shows only unhealthy pools
zpool status [pool_name] [-v] [-x]
</syntaxhighlight>

Clear device errors

<syntaxhighlight lang="bash">
zpool clear pool_name device_id
</syntaxhighlight>
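
After clearing errors it is usually worth scrubbing the pool so ZFS re-reads and verifies every block, then re-checking the status (a minimal follow-up using the standard <code>zpool scrub</code> subcommand):

<syntaxhighlight lang="bash">
# Start a scrub and watch its progress in the status output
zpool scrub pool_name
zpool status pool_name
</syntaxhighlight>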


Script to find the GPTID of disks in FreeNAS

<syntaxhighlight lang="bash">
#!/bin/sh
echo
echo $(basename $0) - Mounted Drives on $(hostname)
cat /etc/version
date
echo
# Map each device name to its gptid label
diskinfo="$(glabel status | tail -n +2 | awk '{split($3,a,"p"); print a[1],$1}')"
echo "+========+==========================+==================+============================================+"
echo "| Device |     DISK DESCRIPTION     |  SERIAL  NUMBER  |                   GPTID                    |"
echo "+========+==========================+==================+============================================+"
for d in $(echo "$diskinfo" | cut -d" " -f 1)
do
   diskinf=$(diskinfo -v $d | grep '# Disk ')
   diskdescription=$(echo "$diskinf" | grep '# Disk desc' | cut -d# -f 1 | xargs)
   diskserialno=$(echo "$diskinf" | grep '# Disk ident' | cut -d# -f 1 | xargs)
   diskgptid=$(echo "$diskinfo" | grep "^$d" | cut -d" " -f 2)
   printf "| %-6s | %-24s | %-16s | %-42s |\n" "$d" "$diskdescription" "$diskserialno" "$diskgptid"
   echo "+--------+--------------------------+------------------+--------------------------------------------+"
done
</syntaxhighlight>

How to test HDDs before using them in production. The 'standard' test routine is a SMART test, badblocks, then another SMART test. Let each one finish before starting the next.

<syntaxhighlight lang="bash">
# Takes 2-3 minutes on a 10TB disk
smartctl -t short /dev/adaX
# Takes 16-17 hours on a 10TB disk
smartctl -t long /dev/adaX
# Takes about 5 days on a 10TB disk; -w is a destructive write test
badblocks -ws -b 4096 /dev/adaX
smartctl -t long /dev/adaX
</syntaxhighlight>
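
The waiting between steps can be scripted. A minimal sketch, assuming a FreeBSD-style device name such as /dev/ada0 and that the drive reports a running self-test with an "in progress" line in the <code>smartctl</code> output (the same text the progress check above greps for):

<syntaxhighlight lang="bash">
#!/bin/sh
# Burn-in sketch: short test, long test, destructive badblocks pass, final long test.
# WARNING: badblocks -w overwrites the entire disk.
disk=/dev/ada0   # assumption: adjust to the disk under test

wait_for_test() {
    sleep 60   # give the drive a moment to register the new self-test
    # Poll until smartctl no longer reports a self-test in progress
    while smartctl -a "$disk" | grep -qi 'in progress'; do
        sleep 600
    done
}

smartctl -t short "$disk" && wait_for_test
smartctl -t long  "$disk" && wait_for_test
badblocks -ws -b 4096 "$disk"
smartctl -t long  "$disk" && wait_for_test
smartctl -l selftest "$disk"
</syntaxhighlight>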


== Rclone ==

Copy files from source to dest

<syntaxhighlight lang="bash">
rclone copy source:path dest:destpath
# Example - this will copy all the content from the D: drive to secret_folder
rclone copy D: secret:secret_folder
</syntaxhighlight>
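
To preview a transfer before running it for real, rclone's standard <code>--dry-run</code> and <code>--progress</code> flags are useful:

<syntaxhighlight lang="bash">
# Show what would be copied without writing anything
rclone copy D: secret:secret_folder --dry-run
# Show live transfer statistics during the real run
rclone copy D: secret:secret_folder --progress
</syntaxhighlight>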


Batch script for copying files from source to dest when the config has a password set (Windows and PowerShell). [https://forum.rclone.org/t/how-to-use-rclone-password-command-with-windows-powershell-for-config-password/15950 Courtesy of pg1.]

<syntaxhighlight lang="bash">
# Generate your secure password to a disk file (for the purpose of this example, U:\rcpw.txt):
Read-Host -Prompt 'Enter rclone configuration password' -AsSecureString | ConvertFrom-SecureString | Out-File -FilePath U:\rcpw.txt
# Create a PowerShell script (for the purpose of this example, C:\xx\rcpw.ps1) that returns the decrypted password from the file created in the previous step (notice how that file is referenced in the -Path parameter). Contents of C:\xx\rcpw.ps1:
(New-Object -TypeName PSCredential -ArgumentList @( 'user', ((Get-Content -Path U:\rcpw.txt) | ConvertTo-SecureString))).GetNetworkCredential().Password
# Test it:
rclone -vv --password-command "powershell C:\xx\rcpw.ps1" about Secretplex:
# Once this works, you can default the password-command parameter by setting the environment variable RCLONE_PASSWORD_COMMAND to:
powershell C:\xx\rcpw.ps1
# Use --password-command in your batch file
C:\rclone-v1.53.2\rclone.exe -v --password-command "powershell C:\rclone-v1.53.2\rcpw.ps1" copy A: Secretplex:A --log-file C:\rclone-v1.53.2\RcloneLogFile\RcloneA.txt
</syntaxhighlight>

== Elastic Stack ==

Test that Filebeat can connect to the output using the current settings

<syntaxhighlight lang="bash">
filebeat test output
</syntaxhighlight>

Test the Filebeat configuration settings

<syntaxhighlight lang="bash">
filebeat test config
</syntaxhighlight>

Verify the Logstash config

<syntaxhighlight lang="bash">
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
</syntaxhighlight>

List Elasticsearch indices

<syntaxhighlight lang="bash">
# List indices
curl 'localhost:9200/_cat/indices?v'
# List indices with username and password
curl -u username:password 'localhost:9200/_cat/indices?v'
# Delete an index
curl -XDELETE localhost:9200/shop
</syntaxhighlight>
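
A couple of related endpoints that are often useful next to the index listing (standard Elasticsearch APIs; add <code>-u username:password</code> on secured clusters):

<syntaxhighlight lang="bash">
# Overall cluster health (green/yellow/red), pretty-printed
curl 'localhost:9200/_cluster/health?pretty'
# Disk usage and shard count per node
curl 'localhost:9200/_cat/allocation?v'
</syntaxhighlight>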


== Netplan ==

Set a static IP for a host - example

<syntaxhighlight lang="yaml">
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: no
      addresses: [192.168.1.222/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8,8.8.4.4]
</syntaxhighlight>
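
To activate the configuration, use netplan's standard <code>try</code> and <code>apply</code> commands; <code>try</code> rolls the change back automatically if it is not confirmed, which protects against locking yourself out:

<syntaxhighlight lang="bash">
# Apply with automatic rollback unless confirmed within the timeout
sudo netplan try
# Apply permanently
sudo netplan apply
</syntaxhighlight>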


== Snort ==

Test Snort config

<syntaxhighlight lang="bash">
snort -c /usr/local/etc/snort/snort.lua
</syntaxhighlight>

== Storage analyzer on Linux ==

To find what is using storage on a Linux system, you can use various commands to analyze disk usage. Here are some commonly used ones:

* <code>df</code> command: The <code>df</code> (disk free) command shows the disk space usage of file systems. It displays information about mounted filesystems, their sizes, used space, available space, and mount points.

<syntaxhighlight lang="bash">
df -h
</syntaxhighlight>

The <code>-h</code> option makes the output human-readable, with sizes in KB, MB, GB, etc.
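
If space looks free but writes still fail, the filesystem may be out of inodes; <code>df</code>'s standard <code>-i</code> flag reports inode usage instead of block usage:

<syntaxhighlight lang="bash">
df -i
</syntaxhighlight>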


* <code>du</code> command: The <code>du</code> (disk usage) command is used to estimate file and directory space usage.

To check the disk usage of the current directory:

<syntaxhighlight lang="bash">
du -h
</syntaxhighlight>

To check the disk usage of a specific directory:

<syntaxhighlight lang="bash">
du -h /path/to/directory
</syntaxhighlight>
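
To spot the biggest items quickly, summarize each entry and sort by size (assuming GNU coreutils, whose <code>sort -h</code> understands human-readable sizes):

<syntaxhighlight lang="bash">
du -sh /path/to/directory/* | sort -h
</syntaxhighlight>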


* <code>ncdu</code> command (NCurses Disk Usage): <code>ncdu</code> is a more advanced disk usage analyzer with a text-based user interface. It provides a more detailed and interactive view of disk usage.

To install ncdu on Ubuntu/Debian:

<syntaxhighlight lang="bash">
sudo apt update
sudo apt install ncdu
</syntaxhighlight>

To use ncdu:

<syntaxhighlight lang="bash">
ncdu /path/to/directory
</syntaxhighlight>

* <code>lsof</code> command (List Open Files): The <code>lsof</code> command can be used to list all open files and the processes that are using them. This can be useful to identify processes that might be holding onto large log files or other data.

<syntaxhighlight lang="bash">
sudo lsof | grep deleted
</syntaxhighlight>

This command lists files marked as "(deleted)" that are still held open by processes. Such files are no longer visible in the file system but keep using disk space until the processes release them.
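
To release that space without restarting the whole process, the open file can be truncated through <code>/proc</code>. A sketch, where the PID and file-descriptor number are placeholders taken from the lsof output (note that this empties a file the process may still be writing to):

<syntaxhighlight lang="bash">
# Truncate a deleted-but-open file via its /proc entry; replace <PID> and <FD> with values from lsof
: > /proc/<PID>/fd/<FD>
</syntaxhighlight>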


* <code>find</code> command: The <code>find</code> command can be used to search for files based on various criteria, including size.

To find large files in a directory:

<syntaxhighlight lang="bash">
find /path/to/directory -type f -size +1G
</syntaxhighlight>

This will list all files larger than 1GB in the specified directory.
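
To see how big each match actually is, the results can be piped through <code>du</code> and sorted (assuming GNU findutils and coreutils):

<syntaxhighlight lang="bash">
# Print a human-readable size for every file over 1GB, largest last
find /path/to/directory -type f -size +1G -exec du -h {} + | sort -h
</syntaxhighlight>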
[[Category:Cheatsheets]]
[[Category:HomeLab]]
