My Notes from a 5-Day Unix Training

shell is the command interpreter.
use 'putty' (terminal emulation software) to connect to the machine.
to connect we need the server ip, the port (22 by default) and the SSH protocol (encrypted)
server will share applications, storage, cpu, printer, etc.
Filesystem is a collection of files plus information about those files. In Unix, directories are also treated as files.

dev – device files
character device files : terminal, mouse, kb are files too!
block device files: hdd, cdrom

if the regular user is 'amit' then the home directory will be '/home/amit/'
online communication is chatting
offline communication is mail

more than 200 unix utilities.

unix code is roughly 90% C and the other 10% assembly language.

kernel uses device drivers to talk to the hardware on behalf of shell commands.
kernel and shell are connected by the system call interface [open(), read()] (we can also use the standard C library [fopen(), fread()], which wraps the kernel routines [sys_open()])

all programs will be put in /bin (user binaries)
/etc has startup and shutdown scripts, etc
/mnt for mounting directories
/media for media devices

echo -e ~enable the use of escape char processing like \n for newline, \t for tab
# ~ for root user
$ ~ for normal user
useradd amit ~to add a new user
passwd amit ~to set or change the password
su - amit ~to switch to the user (there is no 'loginas' command)
userdel amit ~to delete the user
adduser amit ~to add a user (a friendlier front end to useradd on some systems)
pwd #print current directory ~ print current directory
ls ~ list files and directories
ls -a ~ list all files with hidden files as well

. means current directory
.. means parent directory
.profile, .bashrc are startup files read when the user logs in

echo $SHELL ~display a system variable's value
echo $HOME ~
name=amit ~assign a variable value
echo $name ~display it
echo $PS1 ~\u@\h:\w\$ i.e. user, host, working dir, prompt char. it's the primary prompt.
PS1="$ " ~change it to just $. its default is set in the .profile file
date ~display the date
date '+Date:%d/%m/%y%n Time: %H:%M:%S' ~customize the date format
which date ~display its file location
whatis date ~single-line help for date
man date ~help manual for date
date --help ~same help
cal ~calendar

$ mkdir computer
$ ls
$ pwd
$ cd computer
$ pwd
$ mkdir hw sw
$ ls
hw sw
$ mkdir hw/cpu hw/ram sw/os sw/appcl
$ ls
hw sw
$ cd ..
$ pwd
$ tree
.
`-- computer
    |-- hw
    |   |-- cpu
    |   `-- ram
    `-- sw
        |-- appcl
        `-- os

Changing path:
from anywhere to cpu,
$ cd /home/amitb4thumar/computer/hw/cpu
this is called the absolute path method, which always starts from root

the relative path method is where the path is followed from the current dir.
to change from cpu to os
$ cd ../../sw/os

from anywhere to home
cd $HOME
cd ~

create DIR1/DIR2/DIR3 under home
$ mkdir DIR1/DIR2/DIR3 ~this won't work as DIR1 and DIR2 do not exist yet; use the below instead
$ mkdir -p DIR1/DIR2/DIR3

rmdir can't remove a non-empty dir
use rm -r DIR1 instead
rm -ri DIR1 ~ask for confirmation before each removal

making files

ed – interactive line editor
ex – improved version of ed
vi – interactive screen editor, visual editor
vim – improved vi
sed – non interactive line editor

vi editor:
1. command mode - press the escape key; then e.g. type :wq i.e. write and quit
2. text editing (insert) mode
i ~insert at the cursor position
I ~insert at start of line
a ~append at next char position
A ~append at end of line
o ~create blank line after current line
O ~create blank line above current line

Save and exit commands:
:w ~ save and continue
:w <file> ~ same as save as
:q ~quit, provided changes are not made
:q! ~ quit without saving

Cursor movement commands:
h or left arrow
l or right arrow
j or down arrow
k or up arrow
^ for beginning of line
$ for end of line
w for beginning of next word
e for end of current/next word
b for beginning of previous word
G for beginning of last line
<n>G for beginning of nth line

text deletion commands:
x delete current character
X delete previous char
dw delete word
D delete rest of the line
dd delete current line
<n>dd delete n lines starting with the current line

replacing/substituting commands:
r<newchar> current char replaced with new char
cw<text> change current word to <text>. to end, press [esc]
R<text> overwrite existing text with new <text>. to end, press [esc]
:s/<old>/<new> replace first occurrence of old with new in current line
:s/<old>/<new>/g replace all occurrences of old with new in current line
:2,5s/<old>/<new> replace first occurrence of old with new between lines 2 and 5, inclusive of both. adding g makes it global.
:1,$s/<old>/<new>/g replace all occurrences of old with new in the whole file.

copy/cut/paste commands:
yy copy current line and store it in buffer
<n>yy copy n lines starting with the current line into the buffer
dd cut current line and store in buffer
p paste line from buffer after current line
P paste line from buffer above current line

Misc commands:
u to undo the previous change
U undo all changes on the current line ([ctrl]+r is redo in vim)
:set number display line numbers for the session
:set nonumber
/pattern search pattern
n repeat search
~ to reverse case
J join following line with current line

Display the file contents:
paste file1 file2 ~display corresponding lines of the files side by side, tab separated
cat filename
cat -n filename ~display with line numbers

cat file1 file2 file3 ~display all content at the same time
cat file* ~display all files content
cat file? ~? matches exactly one character
cat file[13] ~display file1 or file3
cat file[1-3] ~display file1, file2 or file3
cat file[^13] ~display anything other than file1 or file3, exclusion

Copying files and dirs:
cp file1 newfile1 ~copy content of file1 to newfile1
cp -i file1 newfile1 ~confirmation before overwrite
cp LNTBLR LNTCHENNAI ~fails for directories; cp alone can't copy a dir
cp -r LNTBLR LNTCHENNAI ~copy recursively, creating new directories. if LNTCHENNAI already exists, the dir LNTBLR itself is copied inside it, not just its contents

Renaming files n dir:
mv file12 file21 ~renaming file12 to file21
mv file21 computer ~moving file21 to computer dir
mv LNTBLR LNTCHENNAI ~renaming BLR to CHENNAI. if LNTCHENNAI already exists, LNTBLR gets moved inside it.

Removing files:
rm file1
rm -i file1 ~interactive remove

Creating empty file:
touch computer/file10

File locating command:
find . -name "file*" -print ~search the current dir by name for the pattern file* and print each location. it finds files and dirs.
find . -name "file*" -type d ~same search, but dirs only
find . -name "file*" -type f ~same search, but regular files only
find . -name ".*" ~find hidden files starting with . and print their locations
find . -name "file*" -type f -exec rm -i {} \; ~for each file found, run rm -i with {} replaced by the file's path. ; terminates the -exec command; since ; normally acts as a command separator for the shell, we mask that behaviour with the \ escape char
find . -name "file*" -type f -print -exec cat {} \; -exec rm -i {} \;
-exec is not a command but an option (action) of the find command
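a runnable sketch of find with -exec; the scratch dir and file names here are made up for the demo, and plain rm is used instead of rm -i so nothing prompts:

```shell
# demo in a throwaway directory (names are hypothetical)
tmp=$(mktemp -d)
cd "$tmp"
touch file1 file2 notes.txt
mkdir filedir

# -type f restricts the match to regular files; {} is replaced by each
# path found, and the escaped \; terminates the -exec command
find . -name "file*" -type f -exec rm {} \;

ls   # filedir and notes.txt survive; file1 and file2 are gone
```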

find . -mtime -1 ~find files and dirs modified within the past 1 day
find . -mtime +1 ~find files and dirs modified more than 1 day ago
find . -mmin -20 ~find files and dirs modified in the past 20 mins
find /var/log -name "message*" -mtime +30 -exec rm -i {} \; ~find matching files older than 30 days and remove them

Word Count:
wc newfile ~number of lines, words and char in the file
wc -l newfile ~number of line only
wc -w newfile ~number of words
wc -c newfile ~number of chars

Pagination commands (display a large file one page at a time):
more filename ~[enter] scrolls by one line, [space] by one page, [q] to quit
less filename ~same as more but the [arrow] and [page up/down] keys work as well. less is better than more.
head filename ~displays first 10 lines
head -5 filename ~first 5 lines
tail filename ~displays last 10 lines
tail -n +710 filename ~displays from line 710 to the end, inclusive

Sorting contents of the file:
sort filename ~sort lines lexicographically (character by character)
sort -n filename ~sort based on numeric value
sort -n -r filename ~sort in reverse based on value
sort -k 2 filename ~sort based on the 2nd field (column) of each line, space is the field separator
sort -t "-" -k 2 filename ~sort based on the 2nd field, with "-" as the field separator
sort -k 2 filename -o newfile ~sort based on the 2nd field and store the result in newfile
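a quick sketch with a made-up name/salary file to see -n, -r, -k and -o in action:

```shell
tmp=$(mktemp -d)
printf 'amit 300\nravi 100\nsita 200\n' > "$tmp/emp"

sort -n -k 2 "$tmp/emp"                    # ascending by the numeric 2nd field
sort -n -r -k 2 "$tmp/emp" -o "$tmp/desc"  # descending, stored in desc
head -1 "$tmp/desc"                        # amit 300
```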

Extracting specific characters and fields:
cut -c 1,4,7,10 filename ~extract the 1st, 4th, 7th, 10th char of each line
cut -c 1-4,7-10 filename ~extract chars 1 to 4 and 7 to 10, both limits inclusive
cut -d " " -f 1,4 filename ~extract the 1st and 4th field with space as the separator

tr "abcd" "PQRS" < filename ~display the file with every a replaced by P, b by Q and so on.
tr "[a-z]" "[A-Z]" < filename ~display lower as upper
tr -d "[a-z]" < filename ~delete the chars in range a-z and display
tr -cd "[a-z]" < filename ~delete everything other than chars in range a-z; -c complements the set given to -d
tr -s "[a-z]" < filename ~squeeze each run of repeated a-z chars down to one
tr -s "[a-zA-Z0-9] \n" < filename ~squeeze repeated occurrences of any of these chars
uniq filename ~print each adjacent duplicate line only once (input should be sorted)
uniq -d filename ~print only the duplicated lines
uniq -u filename ~print only the unique (non-repeated) lines
uniq -c filename ~prefix each line with its occurrence count
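a small sketch of tr and uniq with made-up data (note uniq only collapses adjacent duplicates, so the input is pre-sorted):

```shell
tmp=$(mktemp -d)
printf 'amit\namit\nravi\nsita\nsita\nsita\n' > "$tmp/names"

uniq "$tmp/names"        # each name once
uniq -d "$tmp/names"     # names that repeat: amit, sita
uniq -u "$tmp/names"     # names appearing exactly once: ravi
uniq -c "$tmp/names"     # occurrence counts

echo "hello" | tr "[a-z]" "[A-Z]"   # HELLO
echo "aaabbbc" | tr -s "ab"         # abc (runs squeezed)
```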

File comparing commands:
cmp file1 file2 ~reports the first difference with its byte position and line number
cmp -s file1 file2 ~suppress the output of the command
echo $? ~returns the exit status of the last command. 0 if the files are the same, non-zero otherwise.
data must be in sorted order in the files for comm
comm file1 file2 ~3-column o/p: 1st the lines only in file1, 2nd the lines only in file2, 3rd the common data
comm -12 file1 file2 ~common lines only
diff file1 file2 ~detailed difference between the two. a-append, c-change, d-delete
to implement the changes suggested by diff:
diff file1 file2 > diff_file ~contains the o/p of the diff command
patch file1 diff_file ~change file1 with the changes in diff_file
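a runnable sketch of cmp/diff exit statuses on two made-up files (patch itself is not run here):

```shell
tmp=$(mktemp -d)
printf 'a\nb\n' > "$tmp/file1"
printf 'a\nc\n' > "$tmp/file2"

# cmp -s prints nothing; its exit status carries the answer
status=0
cmp -s "$tmp/file1" "$tmp/file2" || status=$?
echo "$status"        # 1: the files differ

diff "$tmp/file1" "$tmp/file2" > "$tmp/diff_file" || true
cat "$tmp/diff_file"  # 2c2 - change line 2 of file1 into line 2 of file2
```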

Joining files:
join file1 file2 ~join the two sorted files on the 1st field of each
join -1 3 -2 1 file1 file2 ~join on the 3rd field of file1 and the 1st field of file2; the join field is printed first, then the remaining fields of each matching line

Pattern locating commands:
GREP - global regular expression print
grep pattern filename ~returns lines containing pattern, case sensitive; use -i for case insensitive.
use -v to display lines which do not contain the pattern
use -n to display the line numbers as well
use -c to display the number of lines containing the pattern
use -o to display only the matching part of the line
use -w to search for the pattern as a whole word
use * in place of filename to search in all files
use -l to display only the filenames which contain the pattern, nothing more
use -A2 or -B3 to include 2 lines after the matching line or 3 lines before it
^pattern to match the pattern at the start of a line
pattern$ to match the pattern at the end of a line
6[09] pattern is 60 or 69
6[0-9] pattern is anything from 60 to 69
6[^09] pattern is 6 followed by anything other than 0 or 9
6[[:digit:]] pattern is 6 and then any digit
other than [:digit:], we have the upper, lower, alpha, alnum classes as well
[[:digit:]][[:digit:]][[:digit:]] to search for 3-digit occurrences
grep -E "[[:digit:]]{3}" can be used for this as well; E stands for extended grep support.
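a few of the above options and anchors exercised on a made-up data file:

```shell
tmp=$(mktemp -d)
printf 'score 60\nscore 75\ntotal 69\n' > "$tmp/data"

grep -n "score" "$tmp/data"            # matching lines with line numbers
grep -c "6[09]" "$tmp/data"            # 2 lines contain 60 or 69
grep -E "[[:digit:]]{2}" "$tmp/data"   # lines with two digits in a row
grep "^total" "$tmp/data"              # total only at the start of a line
```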

Searching for multiple patterns:
fgrep - fixed grep, doesn't support regular expressions
egrep - extended grep
grep with option -E

fgrep "pattern1
pattern3" filename ~one pattern per line; the quote stays open across lines

egrep "pattern1|pattern2|pattern3" filename ~patterns connected with pipe. the multi-line form above works as well
grep -E "pattern1|pattern2|pattern3" filename

Input and output redirection:
when we run any command in unix/linux, 3 files are always opened/available for that program.
for each file a unique file descriptor value is assigned; unix uses them internally for managing files

file opened      device      file descriptor
-----------      ------      ---------------
std ip file      keyboard    0
std op file      monitor     1
std error file   monitor     2

io redirection symbols:
< ip redirection
> op redirection
(these are redirection operators here, not the comparison operators from c)

cat < file1 > file2 ~take file1 as ip and write the op to file2, overwriting it. using the same file in both places will make it lose its contents
cat file1 >> file2 ~appends file1 to file2

cat > filename
file contents
[ctrl]+d ~to create filename with the typed contents

cat >> filename
append text
[ctrl]+d ~to append the typed contents to filename

cat nofile > errorfile ~this does NOT capture the error message; > only redirects std op (fd 1), so the error still appears on screen

to redirect error messages
cat nofile 2> errorfile or
cat nofile 2>> errorfile ~to save or append the errors or warnings to the errorfile
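a runnable sketch of the redirections above (nofile is deliberately missing; || true keeps the demo going after cat fails):

```shell
tmp=$(mktemp -d)
cd "$tmp"
echo "first line" > file1
cat < file1 > file2            # file2 becomes a copy of file1
echo "second line" >> file2    # append

cat nofile 2> errorfile || true  # cat fails, but its message lands in errorfile
cat errorfile                    # the "No such file or directory" complaint
wc -l < file2                    # 2
```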

Find number of files in a dir:
ls > filename ~to write names of files and dir in a file
wc -l filename ~to count the number of them

<command1> | <command2> | .. | <commandn>
the first command is executed and its op is fed as ip to the 2nd command

ls | wc -l ~combining them in a single command
ls -l ~display the long listing of all files and info
ls -l | grep "^d" ~to display only directories

who ~display all logged-in users' details
who | grep "amit" | tr -s " " | cut -d " " -f 1,3,4 ~list users, search for one user, squeeze multiple spaces to single, then using space as the delimiter take fields 1, 3 and 4

#extract lines containing pattern in the file. extract names and salary. salary in desc order
grep "pattern" filename | cut -d " " -f 2,4 | sort -k 2 -nr

#TEE command:
grep "pattern" filename | cut -d " " -f 2,4 | sort -k 2 -nr | tee filename ~save the output in the file and display it on screen. 2 o/p at the same time
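the whole pipeline on a made-up employee file; tee both shows the result and keeps a copy:

```shell
tmp=$(mktemp -d)
printf '101 amit  dev 9000\n102 ravi  qa  7000\n103 sita  dev 9500\n' > "$tmp/emp"

# filter, squeeze spaces, take name+salary, sort by salary desc, show and save
grep "dev" "$tmp/emp" | tr -s " " | cut -d " " -f 2,4 \
  | sort -k 2 -n -r | tee "$tmp/result"
# sita 9500
# amit 9000
```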

#File access permissions:
in the file details from "ls -l", the first char is the type of file.
- ordinary file
d dir file
l symbolic link file
c char device file
b block device file
p named pipe file
s socket file

next 9 chars are the file access permissions
then come the number of links,
owner name, grp name, filesize, date and time of last modification, filename

Types of permission:
r-read
w-write
x-execute (applicable to dirs, shell scripts and bin files)

Types of users:
u-user/owner, creator of the file
g-group, users belonging to the same group
o-others, users who belong to another group

Changing file access permission:
chmod command using the symbolic or octal method
granting access: +
revoking access: -

ls -l filename ~display permissions
chmod u+x,o+w,go+x filename ~user execute, others write, grp and others execute

Using octal method:
no permission: 0
chmod 764 filename
7 = 4+2+1 rwx
6 = 4+2   rw-
4 = 4     r--
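a runnable sketch of the two chmod styles on a throwaway file:

```shell
tmp=$(mktemp -d)
touch "$tmp/script.sh"

chmod 764 "$tmp/script.sh"      # octal: rwx rw- r--
ls -l "$tmp/script.sh"

chmod g-w "$tmp/script.sh"      # symbolic: take write away from the group
ls -l "$tmp/script.sh"          # now rwx r-- r--
```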

Default file access permission (FAP) is based on the umask value:
can differ per logged-in user
umask ~suppose the o/p is 0002 (ignore the first digit, it's for special permissions)
base fap      6 6 6 for files (it's 7 7 7 for dirs)
umask         0 0 2 the permissions we wish to deny
default fap   6 6 4 granted by default to all files created by the user (subtraction of the above 2 rows)
umask <value> ~to change the default value
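the subtraction can be watched live; the subshell keeps the session's umask untouched:

```shell
tmp=$(mktemp -d)
( cd "$tmp"
  umask 0022            # deny write for group and others
  touch f               # file: 666 - 022 -> 644
  mkdir d )             # dir:  777 - 022 -> 755
ls -ld "$tmp/f" "$tmp/d"
```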

Special permission:
set user id uid
set grp id gid
sticky bit

passwd ~to change password; the command is present in /usr/bin/
the password is not stored in /etc/passwd. only user details are stored there.
it's stored in the /etc/shadow file.

when a user executes the passwd command, they have r-x permission on it. executing passwd makes changes in this shadow file, on which the user doesn't have write permission. it's possible because of the rws permission on the passwd file for the owner (root). here s is the set-user-id bit: once it is set, any user executing the passwd program runs it with the privileges of the file's owner. we can use this "s" to grant such permission on other programs as well.

Setting group id:
by default the group will be the same as the user
groupadd grpname ~add a group
chgrp grpname dirname ~assigning the grp to the dir.
chmod g+s dirname ~now users share the file permissions of the group.
new files in the dir will get grpname as their default grp. can be changed by root only.

Sticky bit:
denoted by t. in a world-writable dir anyone can create files, and normally anyone can remove any file too. after applying t, anyone can still create but only the owner can remove a file. applied to shared folders like /tmp.
chmod o+t dirname

all processes are born from a parent process, and the init process with PID 1 is the parent of them all.
ps ~display processes owned by the user
ps -l ~long display
ps -A ~display all processes on the server
ps -u murthy ~display processes of a specific user
ps -t /dev/pts/0 ~display processes running under a terminal

when you run a program in foreground mode, you can't execute anything else till it completes.
so time-consuming processes are run in background mode by appending "&" to the command or program.
use [ctrl]+z to suspend the command; we can resume it later using 'fg %<job number>', where the job number can be found with the 'jobs' command, which lists suspended as well as bg processes. to resume it in background mode use 'bg %1'. but then we can't suspend it again as it's in bg mode.
use [ctrl]+c to halt it in fg mode. use 'kill <pid>' or 'kill %<job number>' to halt bg commands
'kill -9 <pid>' to kill any process which can't be terminated with plain kill. it's usually called the 'sure kill' or SIGKILL

Scheduling processes:
at <time to schedule> <enter>
echo “msg”
<other commands>
[ctrl] + d ~to schedule a process only once at later time
atrm <job number> ~terminate a scheduling job
batch < programname ~to schedule a process only once when load is less on cpu

crontab <enter>
<minute> <hour> <day of month> <month> <weekday 0-6> echo "msg"
~to schedule a process repeatedly at a regular interval. '*' makes it every minute, hour, etc. note that minute 20 does not mean every 20 minutes; it means when the minutes value is 20. multiple commands can be there in a job.
crontab -l ~list jobs
crontab -e ~edit the jobs
crontab -r ~to remove the jobs
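some hypothetical crontab entries to make the field order concrete (these are config lines you would put in via crontab -e, not commands to run; the paths are made up):

```shell
# min hour day month weekday   command
20   *    *   *     *          echo "runs when the minutes value is 20, every hour"
0    2    *   *     0          tar -cf /tmp/weekly.tar "$HOME"   # 02:00 every sunday
*/5  *    *   *     *          date >> /tmp/ticks                # every 5 minutes (a common cron extension)
```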

tty ~to get the terminal number of the logged-on user (teletypewriter)

Communication commands:
offline comm: mail - the other user need not be online
mail <username>
subject: <text>
[ctrl] + d

mail <username> < filename

mail ~displays mails all with numbers
q ~exit mail mode

online comm: write <uname> <enter>
[ctrl] + D to get out

mesg n ~Stop msgs from any users
mesg y ~start again

sed - Stream editor:
a non-interactive line editor
used to display, add, remove, change and substitute lines

p to print
s to substitute
d to delete
i to insert
a to append
c to change
sed 'p' <filename> ~displays every line twice. sed has 2 buffers internally: each line goes first to the pattern buffer and then to the std op device; then the action i.e. the 'p' print command runs, hence twice.

sed '2p' <filename> ~displays each line once, except the 2nd line which prints twice because of the '2p' command

sed -n '2p' <filename> ~-n disables the automatic print to std op, hence prints the 2nd line only.
sed -n '2,4p' <filename> ~prints the 2nd to 4th lines only.
sed -n '/sachin/p' <filename> ~displays lines containing sachin
sed -n '/sachin/,/dravid/p' <filename> ~prints from the line containing sachin to the line containing dravid.
sed -n '2,/sachin/p' <filename> ~displays from the 2nd line to the line containing sachin
sed -n '/sachin/,+3p' <filename> ~displays the line with sachin and the 3 lines after it.
sed -n -e '2p' -e '4p' <filename> ~display the 2nd and 4th lines
sed -n '/^3/p' <filename> ~display lines starting with 3
sed -n '/5$/p' <filename> ~display lines ending with 5
sed -n '1,4w <outfile>' <filename> ~store lines 1-4 in outfile

sed 's/5/x/' <file> ~replace 5 with x in each line, 1st occurrence only
sed -i 's/5/x/' <file> ~ -i makes the changes in the file itself
sed 's/5/x/g' <file> ~replace 5 with x in each line for all occurrences
sed 's/5/x/2' <file> ~replace the 2nd occurrence of 5 in each line
sed '2s/5/x/2' <file> ~replace the 2nd occurrence of 5 in the 2nd line only
sed '/sachin/s/5/x/g' <file> ~replace all occurrences of 5 with x in lines containing sachin
echo "Welcome to Unix" | sed 's/[[:upper:]]/[&]/' ~& stands for the matched text, so the first uppercase char gets wrapped in brackets

sed '4a <text>' <file> ~append text after the 4th line
sed '/<pattern>/c <text>' <file> ~change (replace) lines having the pattern with text
sed -i '2d' <file> ~delete the 2nd line permanently from the file
sed '2i <data>' <file> ~insert data before the 2nd line
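a runnable sketch on a made-up scores file; note the file itself stays untouched without -i:

```shell
tmp=$(mktemp -d)
printf 'sachin 100\ndravid 50\nsehwag 50\n' > "$tmp/scores"

sed -n '2p' "$tmp/scores"        # dravid 50
sed -n '/sachin/p' "$tmp/scores"
sed 's/50/55/' "$tmp/scores"     # first occurrence per line, to stdout only
sed '2d' "$tmp/scores"           # 2nd line dropped from the output
wc -l < "$tmp/scores"            # still 3 - no -i, so the file is unchanged
```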

AWK command:

used on files with valid data separated by a uniform field separator. awk can't save changes into the file itself; redirect the op to another file instead.

awk '{print $0}' <file> ~print all fields of every line
awk '{print $2,$4}' <file> ~2nd and 4th column
awk '{print $2,"earns",$4}' <file> ~inserting a literal in the op
awk '/<pattern>/{print $0}' <file> ~print lines with the pattern
awk '/<pattern1>/,/<pattern2>/' <file> ~print the range from pattern1 to pattern2
awk '{printf "%-4d\t%s\t%7.2f\n",$1,$2,$4}' <file> ~printf with the conversion chars / format specifiers used in c. %d for int, %s for string, %f for float
awk '$2=="sachin"' <file> ~print lines whose 2nd field is sachin
awk '$4 > <value> && $5 < <value>' <file> ~print if the 4th field is > and the 5th is < the given values
awk '$4 > <value> && $5 < <value> {print toupper($2),$3,$4}' <file> ~print the 2nd field in upper case plus the 3rd and 4th fields
awk '{print NR,$0}' <file> ~print all lines with line numbers
awk 'NR==2' <file> ~print the 2nd line
awk '{print $NR}' <file> ~prints the nth field of the nth line
awk '{print NF}' <file> ~prints the number of fields in each line
awk '{print $NF}' <file> ~prints the last field of each line
awk -F "-" '{print $2,$3}' <file> ~making the file compatible with awk by specifying its field separator.
awk -F"-" '{OFS="#"; print $1,$2}' <file> ~making # the field separator in the op
awk -F"-" '{OFS="#"; print $1,$2 > "<outfile>"}' <file> ~same, saving the op in outfile

Begin and end constructs in awk:
BEGIN - block executed before any input line is read
END - block executed after all input is processed

awk 'BEGIN {printf("\t<title>\n"); totalsalary=0}
{print $0; totalsalary=totalsalary+$4}
END {printf("\ntotalsalary is: %d\n", totalsalary)}' <file> ~display structured data in the desired format, c-style. we can copy this code from BEGIN to END into a file and use 'awk -f <codefile> <datafile>' to run the saved code

awk '{sum=sum+$4} END {print sum}' <file> ~print the sum of column 4
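a quick sketch over a made-up employee file pulling several of the above together:

```shell
tmp=$(mktemp -d)
printf '1 amit dev 9000\n2 ravi qa 7000\n3 sita dev 9500\n' > "$tmp/emp"

awk '{print $2, "earns", $4}' "$tmp/emp"
awk '$4 > 8000 {print toupper($2)}' "$tmp/emp"     # AMIT, SITA
awk '{sum = sum + $4} END {print sum}' "$tmp/emp"  # 25500
```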

Storage structure in the Unix fs:
when we create a file or dir, internally an i-node (index node) number is assigned, which unix uses to access the file. 'ls -ai' displays filenames with their inode numbers. a dir stores only the filenames and inode numbers of its entries. for each file there is an entry in the inode table holding: the type of file (dir, symbolic link, char dev, block dev), FAP, number of links, owner id, grp id, size, 3 timestamps (created, accessed, modified) and an array of data block addresses where the actual data is stored. the inode block holds the inode table.

types of link:
Hard link:
ln <file1> <file2> ~both filenames point to one inode number. the link count becomes +1 for both names, as seen with 'ls -li <file>'. if we edit one, the changes are seen in both as the source is the same. if we remove one name, only that dir entry is removed; the other entry still remains, so the contents are not lost. hard links can only link within the same fs.

Soft or symbolic link:
ln -s oldfile symoldfile ~creates a soft link; the inode numbers of the two files are different. the sym file shows all permissions granted. the size of the sym file is the length of the old file's name, not the content size of the old file. ls -li symfile gives the details. the link count is not changed. we can view and write through either name, but deleting the original file leaves the link dangling i.e. invalid. if we create a file again with the same name, the link is re-established: the link is to the NAME OF THE FILE, not to the inode. renaming the original file also breaks the link. soft links can link across filesystems.
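hard vs soft link behaviour, runnable in a scratch dir:

```shell
tmp=$(mktemp -d)
cd "$tmp"
echo "original data" > old

ln old hard       # hard link: a second name for the same inode
ln -s old soft    # soft link: its own inode, stores the NAME "old"
ls -li old hard soft

rm old
cat hard                                   # data survives via the other name
cat soft 2>/dev/null || echo "soft link is dangling"
```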

shell is a command interpreter. it also provides programming constructs with which you can implement decision-making code in a script.

shell script?
a shell script is a text file with a sequence of one or more valid unix commands, plus optional decision-making statements. it's a wrapper code around various command lines.

it's used to automate routine tasks like managing users, managing database backup/recovery, installing application software, deploying web apps, and managing or monitoring log files.

sh   bourne shell - the original unix shell
ksh  korn shell - most popular shell - default shell in IBM AIX os
bash bourne again shell - default shell in linux env
csh  c shell
tcsh TENEX c shell

sh, ksh, bash - syntax and control flow structures are similar
csh, tcsh - similar to the c language

BASH Scripting:
System default shell variables:
echo $SHELL ~display shell name
cat /etc/shells ~display available shells
echo $HOME, $USER, $LOGNAME, $PS1, $PS2, $TERM, $TMOUT, $MAIL, $ENV ~other standard variables; the 'set' command lists all variables and functions defined

Local and global variables:
name=amit ~local variable; no spaces around =, and it's case sensitive
echo $name ~to display it
sh <enter> ~enter a subshell whose parent is bash. variables defined in the parent are not available in the subshell. type exit to leave the subshell
export name ~in the parent shell; makes the variable available in its subshells.
unset name ~to remove a local variable. we can't undo an export for the current shell.

Unix never cares about extensions; the .sh is just for us.
sh <file> ~invoke sh and pass the file for line by line interpretation.
chmod u+x <file> ~grant execute permission
./<file> ~invoke the default shell to execute the file from the current dir

Interactive scripts:
use 'read <variable>' to read ip and store it in the variable.
'echo "msg"' for op and 'echo $variable' to print variables.
-n to suppress the newline char

<scriptname> alone won't work, as commands are looked up in $PATH and this file isn't there; it's in the pwd. if we include the pwd in $PATH then it will run.

Example on Quotes:
take from the final krishna file- welcome to unix
back quotes `<command>` and $(<command>) will execute the command and substitute its op. we can also store a command's op in a variable this way; printing that variable then shows the result of the command's execution

Real world example:
taking a file and backing it up in another folder whose name carries a bkp_timestamp

#!/bin/bash  #ensure the script runs in bash only
#using coding standards
#Script name, Purpose, Author, Created, Modified
echo -n "enter filename to backup: "
read filename
DIRNAME=BKP_`date '+%d_%b_%y_%H_%M'`
mkdir ~/BACKUP/$DIRNAME # ~ denotes the home dir
cp $filename ~/BACKUP/$DIRNAME/$filename.bkp
echo "$filename is backed up"

Bugs: user not entering any ip, ip being an invalid file, ip being a dir name - discussed at the end of day 5


command line arguments or positional parameters:
we can pass args by appending them after the name of the script when executing it.
$0 ~name of the script or program
basename $0 ~name of the script or program without the full path
$1 ~1st arg on the command line
$2 ~2nd arg on the command line
$n ~nth arg on the command line
$# ~number of arguments on the command line
$* ~list of all the arguments, not including the program name
$@ ~same list; differs from $* when quoted ("$@" keeps each arg a separate word)
$$ ~PID of the current process
$? ~exit status of the last command

earlier we couldn't access more than 9 arguments, as $ followed by a 2-digit number like $12 was not parsed. the way to get at them is curly braces, like ${12}; then we can access any number of arguments.
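a small made-up script exercising the positional parameters, including ${12}:

```shell
tmp=$(mktemp -d)
cat > "$tmp/args.sh" <<'EOF'
#!/bin/bash
echo "script: $(basename "$0")"
echo "first: $1"
echo "count: $#"
echo "all: $*"
echo "twelfth: ${12}"   # braces are required beyond $9
EOF
chmod u+x "$tmp/args.sh"
"$tmp/args.sh" a b c d e f g h i j k l
```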

Shift command:
shift removes the first argument and shifts the rest left. passing a value, i.e. 'shift 2', discards the next 2 arguments. it was used in the past when the shell didn't support more than 9 args.
Implementing the backup script using command line args:
the earlier interactive backup can also be scripted using the args method.

Combining variable with literal:
echo ${varname}literal
echo ${name}thumar

$? ~exit status of the last command
0 means success/true; a non-zero value (1-255) means failure/false. values above 128 usually mean the command was killed by a signal (128 + signal number). the command returns this status to the os.

grep "pattern" filename > outfile 2> errorfile ~matching lines go to outfile and error messages to errorfile. nothing is displayed on screen.

&& || operators:
using && between 2 commands, like grep && echo, means: if grep succeeds then execute the 2nd command as well; if the 1st fails, don't execute the 2nd.
using || in between means: execute the 2nd command only if the 1st fails.

Evaluating conditional statements:
used mainly for comparisons in scripts.
test <cond>   (conditions can be combined with -a / -o); [ <cond> ] is the equivalent bracket form

Integer comparison operator:
-eq equal to
-ne not eq to
-gt >
-ge >=
-lt <
-le <=

-a AND
-o OR

test $x -eq $y ; echo $?
op: 0 (when x and y are equal)

String comparison operators:
"str1" = "str2" ~true if both have the same contents
"str1" != "str2" ~true if they differ
-z "str" ~true if the string is empty
-n "str" ~true if it's not empty
"str1" > "str2" ~true if str1 sorts after str2; not based on length but on the char values.
use [[ "str1" > "str2" ]] as the double-bracket form handles this better than single brackets.

File testing operators:
-e <file> ~true if file exists e.g. '[ -e file1 ]; echo $?' returns 0
-f <file> ~true for regular file
-d <file> ~true for dir file
-L <file> ~true for symbolic link file
-r <file> ~true for read access
-w <file> ~true for write access
-x <file> ~true for execute access
-s <file> ~true for not empty
<file1> -ot <file2> ~true for file1 older than file2
<file1> -nt <file2> ~true for file1 newer than file2

Control flow structures:
if <cond>; then <statements>; fi
if <cond>; then <statements>; else <statements>; fi
if <cond>; then <statements>; elif <cond>; then <statements>; ... else <statements>; fi

take example from the presenters file.

Bugs from end of day 4:
use 'if [ $# -ne 1 ]; then echo "invalid argument"; exit; fi'
don't use the $n args directly; store them in variables and then use those
use 'if [ -e $filename -a -f $filename ]' to check the file exists: if so continue, else echo that the dir or file doesn't exist.
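the two bug fixes above combined into a small hypothetical script:

```shell
tmp=$(mktemp -d)
cat > "$tmp/check.sh" <<'EOF'
#!/bin/bash
if [ $# -ne 1 ]
then
  echo "invalid argument"
  exit 1
fi
filename=$1           # store the arg in a variable instead of using $1 everywhere
if [ -e "$filename" -a -f "$filename" ]
then
  echo "$filename exists"
else
  echo "file or dir does not exist"
  exit 2
fi
EOF
chmod u+x "$tmp/check.sh"
"$tmp/check.sh" "$tmp/check.sh"   # the script checks its own file: exists
```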
Performing arithmetic operations:
the expr command, or use $((..))
e.g. expr $x + $y returns 13
use \* for multiplication

echo $((x+y-z)) returns -13
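both arithmetic styles, runnable as-is (x=10, y=3, z=26 are assumed so the results match the notes):

```shell
x=10; y=3; z=26
expr $x + $y          # 13
expr $x \* $y         # 30; * has to be escaped for expr
echo $((x + y - z))   # -13
echo $((x * y))       # 30; no escaping needed inside $(( ))
```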

case statements:
case <var> in
value1) <statements> ;; ~closing brace only
value_n) <statements> ;; ~2 semicolons end each branch
*) echo "default op" ;;
esac ~don't forget to close with esac
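a runnable case example wrapped in a hypothetical function:

```shell
answer() {
  case "$1" in
    y|Y) echo "yes" ;;
    n|N) echo "no" ;;
    *)   echo "default op" ;;
  esac
}
answer y       # yes
answer N       # no
answer maybe   # default op
```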

Interactive loops:
while <cond>; do <statements>; done
can be used to keep a script running until the user chooses to quit.

until <cond>; do <statements>; done
used in situations like re-asking for a password until it's correct.

for <var> in <value1> <value2> … <value n>
do <state> done
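a while loop and a for loop, runnable as written:

```shell
i=1
while [ "$i" -le 3 ]
do
  echo "pass $i"
  i=$((i + 1))
done

for name in sachin dravid kumble
do
  echo "hello $name"
done
```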

Listing only dirs:
for flname in *
do
  if [ -d "$flname" ]; then echo "$flname"; fi
done

varname[index]=value e.g. x[0]=10
echo ${x[0]} ~use ${x[*]} to list all, ${#x[*]} for the total count, ${!x[*]} to list the indexes

team=(sach drav "virat coh" harbh) ~another way to create an array
echo ${team[*]:2} ~display from index 2 onward
echo ${team[*]:1:2} ~display 2 elements starting at index 1 (indexes 1 and 2)
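the array syntax above, runnable in bash:

```shell
team=(sach drav "virat coh" harbh)
echo "${team[0]}"       # sach
echo "${team[*]}"       # all four elements
echo "${#team[*]}"      # 4
echo "${!team[*]}"      # 0 1 2 3
echo "${team[*]:1:2}"   # drav virat coh
```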

typeset -A holiday ~declare an associative array (string indexes); plain indexed arrays use typeset -a

pr command to format a file before printing
e.g. pr --width=80 --length=30 -h "emp details" filename
lpr is the command to print the file on a printer

trap performs an action when it catches one or more interrupt signals
trap '<command or function name>' <signal name, e.g. SIGINT which is generated when the user hits [ctrl]+c>

Passing options and arguments to a script:
every command has multiple options, with or without arguments to those options. e.g. -o outfile is an option with an argument, while -r (reverse) is only an option.

getopts command:
take from author
used to make a script accept the valid options and arguments we define beforehand.

To remove empty files in the pwd:
for flname in *
do
  if [ ! -s "$flname" ]; then rm -i "$flname"; fi
done

