Table of Contents for
Learning Linux Shell Scripting


Learning Linux Shell Scripting by Ganesh Sanjiv Naik, published by Packt Publishing, 2015
  1. Cover
  2. Table of Contents
  3. Learning Linux Shell Scripting
  4. Learning Linux Shell Scripting
  5. Credits
  6. About the Author
  7. Acknowledgments
  8. About the Reviewers
  9. www.PacktPub.com
  10. Preface
  11. What you need for this book
  12. Who this book is for
  13. Conventions
  14. Reader feedback
  15. Customer support
  16. 1. Getting Started and Working with Shell Scripting
  17. Tasks done by shell
  18. Working in shell
  19. Learning basic Linux commands
  20. Our first script – Hello World
  21. Compiler and interpreter – difference in process
  22. When not to use scripts
  23. Various directories
  24. Working more effectively with shell – basic commands
  25. Working with permissions
  26. Summary
  27. 2. Drilling Deep into Process Management, Job Control, and Automation
  28. Monitoring processes using ps
  29. Process management
  30. Process monitoring tools – top, iostat, and vmstat
  31. Understanding "at"
  32. Understanding "crontab"
  33. Summary
  34. 3. Using Text Processing and Filters in Your Scripts
  35. IO redirection
  36. Pattern matching with the vi editor
  37. Pattern searching using grep
  38. Summary
  39. 4. Working with Commands
  40. Command substitution
  41. Command separators
  42. Logical operators
  43. Pipes
  44. Summary
  45. 5. Exploring Expressions and Variables
  46. Working with environment variables
  47. Working with read-only variables
  48. Working with command line arguments (special variables, set and shift, getopt)
  49. Understanding getopts
  50. Understanding default parameters
  51. Working with arrays
  52. Summary
  53. 6. Neat Tricks with Shell Scripting
  54. The here document and the << operator
  55. The here string and the <<< operator
  56. File handling
  57. Debugging
  58. Summary
  59. 7. Performing Arithmetic Operations in Shell Scripts
  60. Using the let command for arithmetic
  61. Using the expr command for arithmetic
  62. Binary, octal, and hex arithmetic operations
  63. A floating-point arithmetic
  64. Summary
  65. 8. Automating Decision Making in Scripts
  66. Understanding the test command
  67. Conditional constructs – if else
  68. Switching case
  69. Implementing simple menus with select
  70. Looping with the for command
  71. Exiting from the current loop iteration with the continue command
  72. Exiting from a loop with a break
  73. Working with the do while loop
  74. Using until
  75. Piping the output of a loop to a Linux command
  76. Running loops in the background
  77. The IFS and loops
  78. Summary
  79. 9. Working with Functions
  80. Passing arguments or parameters to functions
  81. Sharing the data by many functions
  82. Declaring local variables in functions
  83. Returning information from functions
  84. Running functions in the background
  85. Creating a library of functions
  86. Summary
  87. 10. Using Advanced Functionality in Scripts
  88. Using the trap command
  89. Ignoring signals
  90. Using traps in function
  91. Running scripts or processes even if the user logs out
  92. Creating dialog boxes with the dialog utility
  93. Summary
  94. 11. System Startup and Customizing a Linux System
  95. User initialization scripts
  96. Summary
  97. 12. Pattern Matching and Regular Expressions with sed and awk
  98. sed – noninteractive stream editor
  99. Using awk
  100. Summary
  101. Index

Chapter 3. Using Text Processing and Filters in Your Scripts

In the last chapter, you studied basic process management. You learned about the ps command and about job control using commands such as jobs, fg, bg, kill, and pkill, as well as process monitoring tools such as top, iostat, and vmstat.

In this chapter, we will cover the following topics:

  • Using more, less, head, and tail
  • Using diff, cut, paste, comm, and uniq
  • Working with grep
  • Understanding standard input, standard output, and standard error
  • Understanding various metacharacters and their usage

Text filtering tools

Normally, shell scripting involves report generation, which includes processing various text files and filtering their output to finally produce the desired results. Let's start by discussing two Linux commands, namely, more and less:

more: Sometimes a command produces output that is too large to view on one screen. In such cases, we can use the more command to view the output text one page at a time. Add "| more" after the command, as follows:

$ ll /dev | more

The | symbol is called a pipe. You will learn more about it in the coming chapters. While viewing output with more, pressing the spacebar moves forward one page at a time, and pressing Enter moves forward one line at a time.

less: The less command also displays text one screen at a time but, unlike more, it lets us move backward as well as forward through the text. This is a very useful text filtering tool.

The syntax of usage is as follows:

$ command | less
e.g. $ ll /proc | less

This command will show a long listing of the /proc directory. Suppose we want to check whether the cpuinfo file is present in that directory; just press the up or down arrow keys to scroll through the display. With the more command, you cannot scroll backward; you can only move forward. In less, pressing the Page Up and Page Down keys moves backward or forward one page at a time, which is very fast.

In addition to scrolling forward or backward, you can search for a pattern using / for a forward search and ? for a backward search. Press n to repeat the last search in the same direction, and N to repeat it in the opposite direction.
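
For example, assuming a log file such as /var/log/syslog exists on your system, you can browse it with the following command:

$ less /var/log/syslog

Inside less, typing /error searches forward for "error", typing ?warning searches backward for "warning", n repeats the last search in the same direction, N repeats it in the opposite direction, and q quits.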

Head and tail

For testing the next few commands, we will need a file with a sequence of numbers 1 to 100. For this, use the following command:

$ seq 100 > numbers.txt

The preceding command creates a file with the numbers 1 to 100 on separate lines. The following examples show the usage of the head command:

$ head numbers.txt        # will display the first 10 lines
$ head -n 3 numbers.txt   # will show the first 3 lines
$ tail -n +5 numbers.txt  # will show from line 5 onward; the head +5 form shown in some older texts is not accepted by modern versions of head

The following example shows the usage of the tail command:

$ tail numbers.txt        # will display the last 10 lines
$ tail -n 5 numbers.txt   # will show the last 5 lines
$ tail -n +15 numbers.txt # will show from line 15 onward; the older tail +15 form may not work with some implementations

To print lines 61 to 65 from numbers.txt into file log.txt, type the following:

$ head -n 65 numbers.txt | tail -n 5 > log.txt
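
If sed is available on your system, an equivalent one-step way to extract the same range is to print only lines 61 to 65:

$ sed -n '61,65p' numbers.txt > log.txt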

The diff command

The diff command is used to find differences between two files. Let's see a few examples to find out its usage.

The content of file1 is as follows:

I go for shopping on Saturday
I rest completely on Sunday
I use Facebook & Twitter for social networking

The content of file2 is as follows:

Today is Monday
I go for shopping on Saturday
I rest completely on Sunday
I use Facebook & Twitter for social networking

Then, type the diff command:

$ diff file1 file2

Output:

0a1
> Today is Monday

In the output, 0a1 tells us that line number 1 of file2 was added (a stands for add) after line 0 of file1, that is, at the very beginning.

Let's learn another example with line deletion.

The content of file1 is as follows:

Today is Monday
I go for shopping on Saturday
I rest completely on Sunday
I use Facebook & Twitter for social networking

The content of file2 is as follows:

Today is Monday
I go for shopping on Saturday
I rest completely on Sunday

Then, type the diff command:

$ diff file1 file2

The output is as follows:

4d3
< I use Facebook & Twitter for social networking

In the output, 4d3 tells us that line number 4 of file1 was deleted (d stands for delete), so that the two files match up to line 3 of file2. Similarly, a c (change) code in the output indicates changed lines, as shown in the sketch below.
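
As a quick sketch of the change case, suppose two files are identical except for their third line, which reads "I rest completely on Sunday" in the first file and "I rest completely on Monday" in the second; diff would then report the change as follows:

$ diff file1 file2
3c3
< I rest completely on Sunday
---
> I rest completely on Monday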

The cut command

The cut command is used to extract the specified columns or characters from text. Its commonly used options are as follows:

  • -c: Will specify the characters to extract
  • -d: Will specify the delimiter for fields
  • -f: Will specify the field numbers

The following are a few examples that show the usage of the cut command:

  • Using the next command, fields 1 and 3 from the /etc/passwd file will be displayed. The display will contain the login name and the user ID. We used the -d: option to specify that the fields or columns are separated by a colon (:):
    $ cut -d: -f1,3 /etc/passwd
    
  • Using this command, fields 1 to 5 from the /etc/passwd file will be displayed. The display will contain the login name, encrypted password, user ID, group ID, and user name:
    $ cut -d: -f1-5 /etc/passwd
    
  • This command will show characters 1 to 3 and 8 to 12 from the emp.lst file:
    $ cut -c1-3,8-12 /home/student/emp.lst
    
  • The output of the date command is sent as input to the cut command, and only the first three characters are printed on the screen, as follows:
    $ date | cut -c1-3
    Mon
    
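The cut command combines naturally with other filters. As a small sketch, assuming some accounts in /etc/passwd use /bin/bash as their login shell, the following prints just the login names of those accounts:

$ grep '/bin/bash$' /etc/passwd | cut -d: -f1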

The paste command

Using this utility, we can paste two files together horizontally: file_1 will become the first column, and file_2 will become the second column:

$ paste file_1 file_2
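
As a minimal sketch, assume file_1 contains the lines 1, 2, and 3, and file_2 contains one, two, and three. The -d option sets the character used to separate the pasted columns:

$ paste -d: file_1 file_2
1:one
2:two
3:three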

The join command

Consider two files, namely, one.txt and two.txt.

  • The content of one.txt is as follows:
    1 India
    2 UK
    3 Canada
    4 US
    5 Ireland
  • The content of two.txt is as follows:
    1 New Delhi
    2 London
    3 Toronto
    4 Washington
    5 Dublin

In this case, the common field is the serial number, which appears in both files. We can combine the two files with the following command:

$ join one.txt two.txt

The output will be as follows:

1 India New Delhi
2 UK London
3 Canada Toronto
4 US Washington
5 Ireland Dublin
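
Note that join expects both files to be sorted on the join field. If the files are not already sorted, you can sort them on the fly using bash process substitution:

$ join <(sort one.txt) <(sort two.txt)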

The uniq command

The following are a few examples showing the usage of the uniq command:

  • This command removes duplicate adjacent lines from the file:
    $ cat test
    aa
    aa
    cc
    cc
    bb
    bb
    yy
    zz
    $ uniq test
    

The output, with the duplicate adjacent lines removed from the test file, is shown as follows:

    aa
    cc
    bb
    yy
    zz
    
  • The next command prints only duplicate lines:
    $ uniq -d test
    

    Output:

    aa
    cc
    bb
    
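Because uniq compares only adjacent lines, it is usually combined with sort. A common idiom counts how many times each line occurs; the -c option prefixes every output line with its count:

$ sort test | uniq -c
      2 aa
      2 bb
      2 cc
      1 yy
      1 zz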

The comm command

The comm command shows the lines that are unique to file_1 and to file_2, along with the lines common to both. We can use various options while using the command in scripts:

$ cat file_1
Barack Obama
David Cameron
Narendra Modi

$ cat file_2
Barack Obama
Engela Merkel
Vladimir Putin

$ comm --nocheck-order file_1 file_2
      Barack Obama
  David Cameron
    Engela Merkel
  Narendra Modi
    Vladimir Putin

In the preceding example, we can see:

  • The first column shows unique lines in file_1
  • The second column shows unique lines in file_2
  • The last column shows the content common to both files

The output shows that the lines unique to file_1 are David Cameron and Narendra Modi. The lines unique to file_2 are Engela Merkel and Vladimir Putin. The line common to both files is Barack Obama, which is displayed in the third column.
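
The comm command also accepts the -1, -2, and -3 options to suppress the corresponding columns. For example, to print only the lines common to both files:

$ comm -12 file_1 file_2
Barack Obama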

The tr command

The tr command is a Linux utility for text processing, such as translating, deleting, or squeezing repeated characters. This command will translate the lowercase characters to uppercase:

$ tr '[a-z]' '[A-Z]' < filename

The next command will replace every | character with a ~ character:

$ tr '|' '~' < emp.lst

The following command will squeeze multiple spaces into a single space:

$ ls -l | tr -s " "

In this example, the -s option squeezes multiple contiguous occurrences of the character into a single character.

Additionally, the -d option can delete the specified characters from the input.
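
For example, the -d option is handy for stripping the carriage return characters from a text file created on Windows (a minimal sketch, assuming dosfile.txt exists):

$ tr -d '\r' < dosfile.txt > unixfile.txt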

The sort command

The sort command sorts the contents of a text file, line by line. Its commonly used options are as follows:

  • -n: Will sort as per the numeric value
  • -d: Will sort as per the dictionary order
  • -r: Will sort in the reverse order
  • -t: Will specify the delimiter for fields
  • -knum: Will sort on field number num; for example, $ sort -k4 sample.txt will sort according to the 4th field
  • +num: The obsolete equivalent of -k; note that +num skips num fields (so +3, not +4, corresponds to -k4), and modern versions of sort may reject this form

The following table summarizes a few examples of command usage:

    Sr    Example of command usage    Explanation
    1     sort sample.txt             Sorts the lines alphabetically
    2     sort -u sample.txt          Sorts and removes the duplicate entries
    3     sort -r sample.txt          Sorts in reverse order
    4     sort -n -k3 sample.txt      Sorts the 3rd field numerically
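
As a final sketch combining these options, the following command sorts /etc/passwd numerically on its third colon-separated field, that is, by user ID:

$ sort -t: -k3 -n /etc/passwd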