Linux:Bash-Ivo
This page was written and contributed by Ivo Palli.
Template
#!/usr/bin/env bash
set -o errexit # Exit on error, do not continue running the script
set -o nounset # Trying to access a variable that has not been set generates an error
set -o pipefail # When a pipe fails generate an error
if [[ "${1-}" =~ ^-*h(elp)?$ ]]; then
echo 'Usage: ./script.sh arg-one arg-two
This is an awesome bash script to make your life better.
'
exit
fi
main() {
echo do awesome stuff
}
main "$@"
Terminology
{ } brace / curly brackets (defines variable scope)
[ ] bracket
( ) parenthesis
' single quote (string literal)
" quotation mark (allows variable expansion)
` backtick (executes command. Equal to $() )
: colon
; semicolon (indicates end of command)
< less than (this is NOT an 'angle bracket')
> greater than (this is NOT an 'angle bracket')
Bash keyboard shortcuts
Shortcut            Alternative  Result
<home>              <ctrl>a      Jump to the beginning of the line
<end>               <ctrl>e      Jump to the end of the line
<ctrl><left>        <alt>b       Jump to the beginning of the current or previous word
<ctrl><right>       <alt>f       Jump to the end of the current or next word
<ctrl>k                          Delete everything from the cursor to the end of the line
<ctrl><alt><shift>3              Put a comment in front of this line and 'execute' it (so that you can go back to it later via the history)
Print parameters
{
id
echo "---"
args=("$@")
ELEMENTS=${#args[@]}
for (( i=0;i<$ELEMENTS;i++)); do
echo ${args[${i}]}
done
} > /tmp/bliep
Debugging
Echo all commands
To make bash print out every command it executes, use this at the beginning of your script:
#!/bin/bash -x
Log function
Log function for use in a script itself:
#!/bin/bash
log_now="$(date +%s)sec"
function log {
>&2 echo -e "$(TZ=UTC date --date now-$log_now +%H:%M:%S.%N)\t${BASH_SOURCE}:${BASH_LINENO}:${FUNCNAME[1]}\t${*}"
}
function bla {
log "123"
}
bla
log "tra la la la a"
Stop running on exitcode
set -e
set -o pipefail
or even
set -euxo pipefail
Comments
While it's not possible to use normal comments in a multi-line command, there is a sneaky way you can fit a comment basically anywhere:
echo 1 2 `# This is a comment`\
3 4
echo abc | tr "a-z" "A-Z" `# Uppercase all text` | rev `# Then reverse it` | cat
This seems to work (sometimes):
echo "Hello world" | \ # A comment
rev
A kludge for regular multi-line comments is
: ' Note that the space between the colon and the quote is important!
blah
blah
blah '
WARNING: Comments REQUIRE a space before the hash, otherwise they are seen as part of the command. E.g. the example below prints #321
echo '123'# | rev
Functions
function my_func
{
echo $1
}
my_func Hello
Notes:
- Functions can NOT be empty or you'll get an error.
- Arguments passed to the function can be accessed via $1, $2, etc.
- All variables defined/changed inside the function are available outside the function once called. There is no scoping, unless you declare a variable with 'local'.
- Alternatively you can write 'my_func()' instead of 'function my_func'.
- $0 will always remain the name of the script, even in a function.
- Functions can only return a numeric exit status (0-255) via 'return'.
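Since 'return' only carries a numeric exit status, the usual way to get a string out of a function is to echo it and capture it with command substitution ('get_greeting' is a made-up example):

```shell
get_greeting() {
    # "Return" a string by writing it to stdout
    echo "Hello, $1"
}

# Capture the function's output with $()
result="$(get_greeting "world")"
echo "$result"   # Hello, world
```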
Including functions from another script
SELF="`readlink -f $0`"
SELF_SCRIPT="`readlink -f $0 | sed 's~.*/~~'`"
SELF_DIR="`readlink -f $0 | sed 's/\(.*\/\).*/\1/'`"
source "$SELF_DIR/common.sh"
It's smart to either specify the absolute path to whatever you are including, or to use 'readlink' to find out where your script is located.
Piping data into a function
This works:
function reverse
{
cat | rev
}
echo "Hello world!" | reverse
By checking if "$*" is empty, you can switch between 'cat' and 'echo' and allow a function to accept both piped data or arguments
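A sketch of that idea, extending the 'reverse' function so it accepts both piped data and arguments:

```shell
function reverse
{
    if [ -z "$*" ]; then
        cat | rev          # no arguments: read from the pipe
    else
        echo "$*" | rev    # arguments given: reverse those
    fi
}

echo "Hello world!" | reverse   # !dlrow olleH
reverse "Hello world!"          # !dlrow olleH
```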
stdout, stderr and redirection
{ mkdir /as/as; echo "Some stdout text"; } 2>&1 | rev # This will output both stdout and stderr in reverse
Exclamation mark
echo '!${USER}' # !${USER}
echo "!${USER}" # The '!' gets parsed
echo "\!${USER}" # \!user1
echo '!'"${USER}" # user1
Word splitting
IFS=' '
line=" this is my example "
a=( $line )
set | grep "^a=" # a=([0]="this" [1]="is" [2]="my" [3]="example")
echo ${a[2]} # my
echo ${#a[@]} # 4
unset a
IFS=$'\n'
Looping through an array
Example 1 - Stepping through with a counter
#!/bin/bash
CMDS=( "something"
"another thing"
"some more" )
COUNT=0
TOTAL=${#CMDS[@]} # Number of elements in the array, in this case 3
while [ $COUNT -lt $TOTAL ]; do # element 0 1 2
{
echo "-${CMDS[$COUNT]}-"
COUNT=$(($COUNT + 1)) # bash built-in addition
}
done
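The same counter loop can be written more compactly with Bash's C-style for:

```shell
CMDS=( "something" "another thing" "some more" )
# C-style for loop: initializer, condition, increment
for (( i = 0; i < ${#CMDS[@]}; i++ )); do
    echo "-${CMDS[$i]}-"
done
```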
Example 2 - foreach
#!/bin/bash
IFS=$'\n'
A=(
line1
line2
line3
line4
)
for x in ${A[@]}; do
{
echo "$x"
}
done
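The IFS juggling above is only needed because ${A[@]} is unquoted; quoting the expansion keeps each element intact regardless of IFS:

```shell
A=( "line 1" "line 2" "line 3" )
for x in "${A[@]}"; do    # the quotes preserve elements containing spaces
    echo "$x"
done
```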
Example 3 - Shifting the array
MAX_JOBS=5
LABEL=( HK69C HK67C HK59C HK58C HK41C HK33C DX-1731 DX-1729 D52427 D48985 )
while [ ${#LABEL[@]} -gt 0 ]; do
{
JOB_COUNT=(`jobs -p`)
if [ ${#JOB_COUNT[@]} -lt $MAX_JOBS ]; then
{
echo ${LABEL[0]} & # Do something here instead of just 'echo'
unset LABEL[0] # Remove first element
LABEL=( ${LABEL[@]} ) # pack array
}
else
sleep 1 # don't take up a lot of CPU time checking. Once a second is enough
fi
}
done
wait # Wait for remaining jobs to finish
Incrementally adding to an array
See 'TESTC' in Linux:Bash-Ivo#Arguments_to_shell_scripts, or:
IFS=$'\n'
V=( $(some_program_that_outputs_tsv) )
for x in ${V[@]}; do
{
IFS=$'\t' # This needs to be in the loop to work
y=( $x )
y[3]="$(another_program_that_outputs_some_more_fields)"
echo -e "${y[0]}\t${y[1]}\t${y[2]}\t${y[3]}" # Cannot output ${y[@]} since it will not preserve TABs
}
done
Multi-line variables and arrays
THIS NEEDS FIXING UP!
The difference between $@ and $*:
- Unquoted, the results are unspecified. In Bash, both expand to the separate arguments, which are then word-split and globbed.
- Quoted, "$@" expands each element as a separate argument, while "$*" expands to the args merged into one argument: "$1c$2c..." (where c is the first char of IFS).
Always quote them!
VARIABLE="a b
c d
e f
"
echo "${VARIABLE[*]}" # Separate lines
echo "${VARIABLE[@]}" # Separate lines
echo "${VARIABLE[1]} is the 2nd item in an array of ${#VARIABLE[@]} items" # Doesn't work because VARIABLE is not an array, just a multi-line variable
IFS=$'\n' # needed, or you won't get separate lines
ARRAY=(
a b
c d
e f
)
echo "${ARRAY[*]}" # Separate lines
echo "${ARRAY[@]}" # One line
echo "${ARRAY[1]} is the 2nd item in an array of ${#ARRAY[@]} items" #
EXEC="$(echo -e "hello world\nthis file\nhas 3 lines")"
$ echo $EXEC
hello world this file has 3 lines
$ echo "$EXEC"
hello world
this file
has 3 lines
$ echo "${#EXEC[@]}"
1
$ EXEC=( $EXEC )
$ echo "${#EXEC[@]}"
7
Reading lines
From a pipe
Make sure your IFS is what you want it to be. Also note that the while loop runs in a subshell, so variables set inside it (like IPTABLES below) are gone once the loop ends.
iptables -L -n | while read LINE; do
{
echo "--$LINE"
export IPTABLES[${#IPTABLES[*]}]="$LINE"
echo ${#IPTABLES[*]}
}
done
echo ${#IPTABLES[*]}
echo ==${IPTABLES[2]}
From a file
Make sure your IFS is what you want it to be. Also be warned that with 'set -e' active, an error code from one of the statements will break out of the loop.
IFS=$'\n'
while read LINE || [ -n "$LINE" ]; do
{
IFS=' ' # This will not affect the IFS of 'read'
TOKENS=( $LINE )
echo ${TOKENS[0]} # The first word of every line
}
done < myfile.txt # Put the name of your file here
Please note that you need to do something extra in case the input file doesn't end in a newline:
IFS=$'\n'; echo -en "line1\nline2" | while read LINE ; do { echo $LINE; } done # Only prints 'line1'
IFS=$'\n'; echo -en "line1\nline2" | while read LINE || [ -n "$LINE" ]; do { echo $LINE; } done # Prints both lines
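On Bash 4 and newer, 'mapfile' (a.k.a. 'readarray') reads a whole file into an array in one go, avoiding the IFS and subshell pitfalls above:

```shell
printf 'line1\nline2\nline3\n' > /tmp/mapfile_demo.txt
mapfile -t LINES < /tmp/mapfile_demo.txt   # -t strips the trailing newline of each line
echo "${#LINES[@]}"                        # number of lines read: 3
echo "${LINES[1]}"                         # second line: line2
rm -f /tmp/mapfile_demo.txt
```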
Via substitution
IFS=$'\n'
for line in $(cat filename); do
    echo $line
done
Testing
If a file exists
if [ ! -f this_file ]; then echo "File does not exist"; fi # File exists and is a regular file (i.e. not a block device for example)
Use:
- -d to test for directories
- -s to test that the file exists and is not empty
- -w to test that the file exists and is writable
Testing if a PID exists
PID="32322"
if [ "`ps --no-headers -p ${PID}`" \!= "" ]; then
echo "Program with PID ${PID} is running"
fi
or
PID="32322"
if [ -d "/proc/$PID" ]; then
echo "Program with PID ${PID} is running"
fi
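Another common idiom is 'kill -0', which sends no signal but checks whether the process exists (and whether you are allowed to signal it):

```shell
PID="$$"   # our own shell's PID, guaranteed to be running
if kill -0 "$PID" 2> /dev/null; then   # -0 sends no signal, only checks existence
    echo "Program with PID ${PID} is running"
fi
```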
Testing true/false
if (( 1 )); then echo Wheee; fi if true; then echo Wheee; fi
Also, it's better to use 'expr' than a regular test inside an 'if' statement, because expr also gives back a true/false when the variable you are testing against is empty.
TESTME=""
if [ "$TESTME" -lt 10 ]; then echo Wheee; fi
bash: [: : integer expression expected
if expr "$TESTME" \< 10 > /dev/null; then echo Wheee; fi
Wheee
Testing error codes
Short way:
true || echo blah    # no output
false || echo blah   # blah
true && echo blah    # blah
false && echo blah   # no output
Long way:
# no output
true
if [ $? -ne 0 ]; then echo blah; fi
# blah
false
if [ $? -ne 0 ]; then echo blah; fi
Testing return codes from functions
function yes1 { return 0; }
function no1 { return 1; }
if no1 ; then echo weeee; fi
if yes1 ; then echo weeee; fi
weeee
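The numeric code is also available in $? right after the call; capturing it with '|| rc=$?' keeps scripts with 'set -e' from aborting:

```shell
function no1 { return 1; }
rc=0
no1 || rc=$?     # capture the return code without tripping 'set -e'
echo "no1 returned $rc"   # no1 returned 1
```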
Tee, fifo's and pipes
Objective: Creating a tar.bz2 while at the same time calculating the MD5 and file size. We create files with the script's PID in them so that we can run multiple copies simultaneously.
PID="$$"
mkfifo /tmp/DataExport.$PID.pipe1 /tmp/DataExport.$PID.pipe2
nice tar c --files-from "${RUN_ID}.txt" | pv -s $ORI_SIZE | pbzip2 -9 | tee -i /tmp/DataExport.$PID.pipe1 /tmp/DataExport.$PID.pipe2 > $TARGET &
< /tmp/DataExport.$PID.pipe1 md5sum | cut -f 1 -d ' ' > /tmp/DataExport.$PID.md5sum &
< /tmp/DataExport.$PID.pipe2 wc -c > /tmp/DataExport.$PID.wc
MD5SUM="`cat /tmp/DataExport.$PID.md5sum`"
FSIZE="`cat /tmp/DataExport.$PID.wc`"
rm -f /tmp/DataExport.$PID.*
Full path of a file (or even your own script)
SELF="`readlink -f $0`"
SELF_SCRIPT="`echo $SELF | sed 's/.*\///'`"
SELF_DIR="`echo $SELF | sed 's/\(.*\/\).*/\1/'`"
echo "self: $SELF"
echo "self_dir: $SELF_DIR"
echo "self_script: $SELF_SCRIPT"
Check if script is executed as root
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root"
    exit 1
fi
Running parallel processes
There's also a 'parallel' in the 'moreutils' package, which is crap; you want GNU parallel instead. You can do a:
apt install parallel
Or install from source, since the version from the repo is 3 years older:
rm /usr/bin/parallel   # Relog after this so it's out of your path
wget http://ftpmirror.gnu.org/parallel/parallel-latest.tar.bz2
tar xjf parallel-latest.tar.bz2
cd parallel-*/
./configure
make
src/parallel --version
make install
echo "will cite" | parallel --citation
Examples
IMPORTANT: Do NOT quote the data (i.e. '{}') because parallel makes the effort of doing it for you. If it really doesn't work, use the '-q' option.
Also, use '--dry-run' before, so you can see what actually gets executed.
1
find . -type f | parallel -j6 'echo {}'
echo -e "1\n2\n3\n4\n5\n6\n7\n8\n9\n10" | parallel -j3 --bar "echo {} ; sleep {}" > result.txt # The result file will only contain the output of the commands
echo "hello" | parallel "echo '{}' ; echo \"{}\"" # This will echo 'hello' twice. So single quotes will not stop substitution
2
# parallel will try to make sure bash doesn't mess with the input if you use the '-q' option
echo '$$' | parallel -q echo {} # This will print '$$'
Move files into their own directory
ls *.avi | parallel "mkdir {.} && mv {} {.}"
Adding the job number with leading zeroes
echo {a..z} | \
tr ' ' '\n' | \
parallel \
--rpl '{0#} $f=1+int("".(log(total_jobs())/log(10))); $_=sprintf("%0${f}d",seq())' \
'echo {0#} {}'
01 a
02 b
03 c
...
Adding up file sizes
TOTAL_SIZE=0
TOTAL_FILES=0
for x in *; do
if [ -f $x ]; then
TOTAL_SIZE=$(($TOTAL_SIZE + `stat -c%s $x`))
TOTAL_FILES=$(($TOTAL_FILES + 1))
fi
done
echo "Total: $TOTAL_FILES files, $TOTAL_SIZE bytes"
Alternatively:
find / -xdev -type f -printf "%s\n" | awk '{total = total + $1}END{print total}'
Where $1 is the column with the numbers.
Print sizes human readable
function printSize
{
if [ "$1" == "" ]; then
NUMBER="`cat`"
else
NUMBER="$1"
fi
php -r '$n=$argv[1];if($n)$p=floor(log($n, 1024)); else $p=0; printf("%.2f%s", $n/pow(1024,$p),substr("BKMGTPEZY",$p,1));' "$NUMBER"
}
printSize 4589598734
echo 4589598734 | printSize
See also the Awk equivalent.
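If coreutils' 'numfmt' is available, it can do a similar conversion without PHP ('--to=iec' uses base-1024 suffixes):

```shell
numfmt --to=iec 4589598734          # prints the size with a K/M/G suffix
echo 4589598734 | numfmt --to=iec   # also works on piped input
```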
Print percentage
function printPercentage
{
php -r 'printf("%.2f%%", ($argv[1] * 100) / $argv[2]);' "$1" "$2"
}
Makedir
MAKEDIR_PREVIOUS=""
function makeDir
{
MAKEDIR_NEW="`echo "$1" | sed 's~/[^/]*$~~'`"
if [ "${MAKEDIR_PREVIOUS}" \!= "${MAKEDIR_NEW}" ]; then
{
echo "==Creating: ${MAKEDIR_NEW}"
mkdir -p "${MAKEDIR_NEW}"
MAKEDIR_PREVIOUS="${MAKEDIR_NEW}"
}
fi
}
Arguments to shell scripts
#!/bin/bash
# Limitations to getopts
#
# 1 - getopts stops parsing the moment it finds a non-option argument
# 2 - getopts does not remove parsed options from the arguments list
# Set defaults
TESTA="off"
TESTB=""
while getopts "ab:c:" opt; do
{
case $opt in
a) TESTA="on"
;;
b) TESTB="$OPTARG"
;;
c) TESTC[${#TESTC[@]}]="$OPTARG"
;;
?) echo "Unknown option $OPTARG"
exit 1
;;
esac
}
done
shift $(( OPTIND-1 ))
# Test if variable exists
[[ $TESTB ]] || { echo "You forgot to specify -b" && exit 1; }
echo "TESTA: $TESTA"
echo "TESTB: $TESTB"
echo "TESTC has ${#TESTC[@]} options:"
for x in ${TESTC[@]}; do
echo $x;
done
echo "Left: $*"
NOTE: Bash's internal getopts does not handle long parameters, use the external CLI utility 'getopt' for those cases (http://www.bahmanm.com/blogs/command-line-options-how-to-parse-in-bash-using-getopt)
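A minimal sketch of the external 'getopt' for long options (the option names here are made up for illustration):

```shell
set -- --file=config.txt -v leftover   # pretend these are the script's arguments
# Normalize the arguments; 'file:' means --file takes a value
PARSED="$(getopt -o v --long verbose,file: -- "$@")" || exit 1
eval set -- "$PARSED"                  # replace the arguments with the normalized list
VERBOSE="off"
FILE=""
while true; do
    case "$1" in
        -v|--verbose) VERBOSE="on"; shift ;;
        --file)       FILE="$2";    shift 2 ;;
        --)           shift; break ;;
    esac
done
echo "VERBOSE=$VERBOSE FILE=$FILE left: $*"
```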
stuff
A text file with 2 columns. We squeeze the repeated spaces so that sort understands how to sort, then we make it pretty again with 'column'
cat dir_dbname4.txt | tr -s ' ' | sort -k 2 | column -t
useful text utils: join, paste, colrm, column, rev, tac, cat, uniq, sort
Sorting on position
If you use a lot of sort in your script, you might want to
export TMPDIR="/somewhere/with/a/lot/of/space"
export LC_ALL="C"
sort -n -k1.9,1.12
Fill out a colon separated file
If you have a file separated on colons, but not every row has the same number of columns:
cat old_file.txt | awk 'BEGIN { FS = ":" } ; { print $1 ":" $2 ":" $3 ":" $4 ":" $5 }' > new_file.txt
Executing a command with a different working directory
The parentheses invoke a subshell. The '&&' checks the exit status of 'cd' before executing 'du'. The 'exec' command removes the memory footprint of the now-unneeded subshell.
(cd /media/* && exec du -sh *)
strlen
echo ${#PATH}
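The same ${...} family also does substrings:

```shell
S="Hello world"
echo "${#S}"       # length: 11
echo "${S:6}"      # from offset 6: world
echo "${S:0:5}"    # offset 0, length 5: Hello
echo "${S: -5}"    # last 5 characters (note the space before the minus)
```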
Awk: addressing a field via a variable
This will print a '2' and the second field for each line.
awk 'BEGIN{FS=OFS="\t"; field=2;} {print field OFS $field;}' my_file.csv
Awk: Checking field in a loop
Check if certain fields contain invalid values. Only print lines that are 100% valid.
awk '{
FS="\t" # Include this or the script will choke on empty fields (i.e. it will look in the next field instead)
fout = 0
if($9 != "a" && $9 != "b" && $9 != "c")
fout++
for(veld=10; veld<=57; veld++)
if($veld == "a" || $veld == "b" || $veld =="c")
fout++
if(fout == 0)
printf $0 "\n"
}'
Creating passwords
#!/bin/bash
if [ \! "$#" == "1" ]; then
{
echo -e "\nUsage $0 <length>\n"
exit
}
fi
cat /dev/urandom | tr -dc A-Za-z0-9 | head -c $1
echo
Reading from multiple files
#!/bin/bash
# Set variables
FD1=7
FD2=8
file1="$1"
file2="$2"
count1=0
count2=0
eof1=0
eof2=0
# Open files.
eval exec "$FD1<\"$file1\""
eval exec "$FD2<\"$file2\""
while [[ $eof1 -eq 0 || $eof2 -eq 0 ]]; do
{
if read data1 <&$FD1; then
let count1++
printf "%s, line %d: %s\n" "$file1" $count1 "$data1"
else
eof1=1
fi
if read data2 <&$FD2; then
let count2++
printf "%s, line %d: %s\n" "$file2" $count2 "$data2"
else
eof2=1
fi
}
done
Testing for the existence of multiple files (AND)
if ! [ -f my_file1 -a -f my_file2 ]; then
{
echo "Error, an input file is missing"
exit
}
fi
Preventing Bash substitution in CAT << EOF
Note that using the 'EOF' method is called 'heredoc'
cat << EOF     # This will output '/root'
$HOME
EOF
cat << 'EOF'   # This will output '$HOME'
$HOME
EOF
Merging directories
This needs fixing:
#!/bin/bash
# usage source1 .. sourceN dest
length=$(($#-1))
sources=${@:1:$length}
DEST=$(readlink -f ${!#})
for SRC in $sources; do
pushd $SRC;
find . -type d -exec mkdir -p ${DEST}/{} \;
find . -type f -exec mv {} ${DEST}/{} \;
find . -type d -empty -delete
popd
done
Getting the Nth line
cat file.txt | tail -n +500 | head -1 # Print only the 500th line
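'sed' can do the same in one step ('-n' suppresses output, 'p' prints the matching line, 'q' quits so the rest of the file isn't read):

```shell
seq 1 1000 | sed -n '500{p;q}'   # prints only line 500
```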
Menus
- Use 'percol' to select one or more things in a pipe
- Use 'whiptail' to create menus like you have during Linux kernel compilation
csvkit
# Create a SQL template from the data in the CSV
csvsql -i sqlite list.csv
# Create an SQLite database with a table equal to the filename that is imported (minus the .csv)
csvsql --db sqlite:///leso.db --insert list.csv
# Perform SQL queries directly on a CSV file. (SQLite is used in-memory)
csvsql -t --query 'select Disk,`Vault Copy` from list where `Vault Copy` == "-"' list.csv
Combining output of commands
You can combine the output of multiple commands by using braces:
{
echo 1
echo 3
echo 2
} | sort
This can also be used to output to a file.
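Parentheses work as well, but they spawn a subshell, so variables set inside are lost afterwards; braces run in the current shell:

```shell
( X=1; echo "in subshell: $X" )
echo "after subshell: ${X:-unset}"   # X did not survive the subshell

{ Y=1; echo "in group: $Y"; }
echo "after group: $Y"               # Y is still set
```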
Copy to and from clipboard
apt-get install xsel
xsel                                     # Echo whatever is in your ctrl-insert/mouse buffer
xsel --clipboard                         # Echo whatever is in your copy/paste buffer
cat large_file.txt | xsel --clipboard    # You can put this file into an email for example with CTRL-V
Printing a large amount of text to the screen
cat - << 'EOF'
Hello world
EOF
Safety switches for scripts
set -u           # Treat unset variables as an error when substituting
set -e           # Exit on command error
set -o noclobber # Don't let bash overwrite existing files. Turn off with 'set +o noclobber', or for a single override 'echo hello >| world'
Doing something before the script exits
This prints "hello ivo":
#!/bin/bash
NAME="ivo"
trap "echo '${NAME}'" EXIT
echo -n "hello "
Running functions in the background
#!/bin/bash
function myfunc1
{
echo "Start $*"
sleep 5
echo "End $*"
}
myfunc1 `date` &
thread1=$!
sleep 2
myfunc1 `date` &
thread2=$!
wait
echo "Done"
Job scheduler
#!/bin/bash
IFS=$'\n'
JOB=(
item1
item2
item3
item4
item5
)
function myfunc1
{
sleep 2
}
RUNNING_MAX=2
JOB_TOTAL=${#JOB[@]}
JOB_CURRENT=0
while [ $JOB_CURRENT -lt $JOB_TOTAL ]; do
{
RUNNING="$(jobs | wc -l)"
if [ $RUNNING -lt $RUNNING_MAX ]; then
{
# Spawn job
JOB_THIS="${JOB[$JOB_CURRENT]}"
myfunc1 ${JOB_THIS} &
JOB_CURRENT=$(($JOB_CURRENT + 1))
echo "Started job ${JOB_THIS} [$JOB_CURRENT/$JOB_TOTAL]"
}
else
wait -n 2> /dev/null # Wait for any running job to finish
fi
}
done
wait 2> /dev/null # Wait for all running jobs to finish
echo "Done"
Another one that supports handing out slices of work
JOB_MAX=10
function worker
{
POS=$1
NUM=$2
doing_work $POS $(( ${POS} + ${NUM} ))
# You can skip a return if you want to react to the exit status of the last program run
if [ some_test ]; then
return 1; # No more work
else
return 0; # Maybe more work
fi
}
function schedule
{
POS=0
NUM=100
JOB_CURRENT=0
while true; do
{
if [ ${JOB_CURRENT} -lt ${JOB_MAX} ]; then
{
# Spawn job
worker $1 ${POS} ${NUM} &
POS=$(( ${POS} + ${NUM} ))
JOB_CURRENT=$(($JOB_CURRENT + 1))
}
else
{
wait -n 2> /dev/null # Wait until at least one job is complete
if [ $? -eq 1 ]; then
break;
fi
JOB_CURRENT=$(($JOB_CURRENT - 1))
}
fi
}
done
wait 2> /dev/null # Wait for all running jobs to finish
}
schedule # You can pass arguments if needed
Directly sending data over the network
You can send data over TCP and UDP directly via bash using a pseudo-device:
echo "Hello" > /dev/tcp/example.com/5555
Clearing scrollback buffer in Putty
alias reset="printf '\033\143\033[3J'" # Clear screen, reset scrollback
Angle brackets
cat << 'EOF'    # here-doc
hello
EOF
cat <<< 'hello' # here-string, basically synonymous with: echo 'hello' | cat
Making a command's output unbuffered
stdbuf -i0 -o0 -e0 command
# alternatively
sudo apt install socat
socat EXEC:long_running_command,pty,ctty STDIO
Although note: stdbuf uses LD_PRELOAD tricks to do this, so it does not work with statically linked or setuid executables, nor with programs like 'tee' that adjust their own stream buffering.
Using Bash's own string splitting
filename=$(basename -- "$fullfile")
extension="${filename##*.}"
filename="${filename%.*}"
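The same '%' and '##' operators can replace 'dirname' and 'basename' entirely:

```shell
fullfile="/path/to/archive.tar.gz"
dir="${fullfile%/*}"     # /path/to       (like dirname)
base="${fullfile##*/}"   # archive.tar.gz (like basename)
ext="${base##*.}"        # gz
stem="${base%.*}"        # archive.tar
echo "$dir | $base | $ext | $stem"
```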
Keeping track of execution time
function log_test
{
TEST="$1"
shift
TIME="{\"time\":$(date '+%s'), \"test\": \"${TEST}\", \"real\": %e, \"user\": %U, \"sys\": %S}" /usr/bin/time -q -a -o log.json $*
}
log_test "Finding" find /
Note that this uses the 'time' program, not the bash built-in. While you can set the format of the bash built-in, you cannot make it output to a file.
To turn these separate JSON fields into an array, do
cat log.json | jq -s .
Random numbers
sleep $(( 60 + $RANDOM % 60 ))m                    # Sleep 60-120 minutes
shuf -i 1-100000 -n 10                             # 10 random numbers from the given (inclusive) range
cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 10   # 10 character long string with the characters specified. Cryptographically secure
Printing all but the last X characters
echo -e "123\n456\n789" | head --bytes=-2 # This prints '78' as the last line without a newline
Advanced testing
true && echo y || echo n       # y
false && echo y || echo n      # n
[[ 0 ]] && echo y || echo n    # y
[[ 1 ]] && echo y || echo n    # y
[[ "" ]] && echo y || echo n   # n
(true) && echo y || echo n     # y
!(true) && echo y || echo n    # n, note that this works only in scripts. It will not work on the commandline
Other useful commands
zless zgrep
tee
Tee writes/appends any data passed through its pipe to a file. It will continue to write even if its pipe is broken:
for x in {a..z}; do echo $x; done | tee log.txt | head # Will only see 10 lines on screen, but all 26 will be in log.txt
Tee is often (mis-)used to write somewhere as root, since you can sudo it.
Links
- https://devhints.io/bash Cheatsheet