Table of Contents for Intermediate C Programming


Intermediate C Programming by Yung-Hsiang Lu. Published by CRC Press, 2015.
  Front Cover
  Contents
  List of Figures
  List of Tables
  Foreword
  Preface
  Author, Reviewers, and Artist
  Rules in Software Development
  Source Code
  I. Computer Storage: Memory and File
    1. Program Execution
    2. Stack Memory
    3. Prevent, Detect, and Remove Bugs
    4. Pointers
    5. Writing and Testing Programs
    6. Strings
    7. Programming Problems and Debugging
    8. Heap Memory
    9. Programming Problems Using Heap Memory
    10. Reading and Writing Files
    11. Programming Problems Using File
  II. Recursion
    12. Recursion
    13. Recursive C Functions
    14. Integer Partition
    15. Programming Problems Using Recursion
  III. Structure
    16. Programmer-Defined Data Types
    17. Programming Problems Using Structure
    18. Linked Lists
    19. Programming Problems Using Linked List
    20. Binary Search Trees
    21. Parallel Programming Using Threads
  IV. Applications
    22. Finding the Exit of a Maze
    23. Image Processing
    24. Huffman Compression
  A. Linux
  B. Version Control
  C. Integrated Development Environments (IDE)
• n - 3 is an odd number. This means that there are t(n - (n - 3)) = t(3) ways to partition n with n - 3 as the first number. For example, if n = 10, there are t(3) ways to partition 10 with 7 as the first number. Note that t(3) = 1, because the only valid partition of 3 that starts with an even number is 3 = 2 + 1.
• n - 2 is an even number. We skip this case because s(n) is only concerned with the number of ways to partition n using an odd number as the first number.
• n - 1 is an odd number, so t(n - (n - 1)) = t(1) is included in the calculation of s(n). Note, however, that t(1) = 0.
• n is an even number. We skip this case because s(n) only concerns itself with partitions that begin with odd numbers.

Hence, when n is an even number:

s(n) = t(n - 1) + t(n - 3) + t(n - 5) + ... + t(3) + t(1)    (12.20)
Following this logic, when n is an odd number:

• n - 3 is an even number, and this case is discarded when computing s(n). For example, if n = 11, then n - 3 = 8, which is even. Since s(n) only concerns itself with partitions that begin with an odd number, we skip t(3).
• n - 2 is an odd number, leaving the remainder 2 to be partitioned. Thus we add t(2).
• n - 1 is an even number, and this case is discarded when computing s(n).
• n is an odd number, and it by itself is a valid partition for s(n). This means we add 1 to the end of the equation.

When n is an odd number, s(n) can be written as:

s(n) = t(n - 1) + t(n - 3) + t(n - 5) + ... + t(2) + 1    (12.21)
Combining these two halves together, we get:

s(n) = t(n - 1) + t(n - 3) + t(n - 5) + ... + t(1)        when n is even
s(n) = t(n - 1) + t(n - 3) + t(n - 5) + ... + t(2) + 1    when n is odd     (12.22)
Using similar reasoning again, t(n) can be written as follows:

t(n) = s(n - 2) + s(n - 4) + s(n - 6) + ... + s(4) + s(2) + 1    when n is even
t(n) = s(n - 2) + s(n - 4) + s(n - 6) + ... + s(3) + s(1)        when n is odd     (12.23)
Since a partition may start with either an odd number or an even number, f(n) = s(n) + t(n), and this is the answer to the question: the number of ways to partition n using alternating odd and even numbers. Section 12.4.2 explains how to find the number of partitions
using odd numbers only. The answer is expressed as h(n). A similar procedure can be used
to find the number of partitions using even numbers only. Let’s call it u(n). Of course, u(n)
is zero if n is odd.
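To make the formulas concrete, here is a minimal C sketch (not a listing from the book) that translates equations (12.22) and (12.23) into two mutually recursive functions and prints f(n) = s(n) + t(n) for small n. The loops simply iterate over the possible first numbers of a partition; the function and variable names are chosen only for this illustration.

#include <stdio.h>

/* s(n): alternating partitions of n whose first number is odd.
 * t(n): alternating partitions of n whose first number is even.
 * Direct translation of equations (12.22) and (12.23). */
static int t(int n);                             /* forward declaration */

static int s(int n)
{
    int count = (n % 2 == 1) ? 1 : 0;            /* the partition "n" by itself */
    for (int first = 1; first < n; first += 2)   /* odd first numbers */
        count += t(n - first);                   /* the rest must start with an even number */
    return count;
}

static int t(int n)
{
    int count = (n % 2 == 0) ? 1 : 0;            /* the partition "n" by itself */
    for (int first = 2; first < n; first += 2)   /* even first numbers */
        count += s(n - first);                   /* the rest must start with an odd number */
    return count;
}

int main(void)
{
    for (int n = 1; n <= 10; n++)
        printf("n = %2d: s(n) = %2d, t(n) = %2d, f(n) = %2d\n",
               n, s(n), t(n), s(n) + t(n));
    return 0;
}

For n = 3, for example, this prints s(3) = 2 and t(3) = 1, matching the three alternating partitions 3, 1 + 2, and 2 + 1.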
Is h(n) + u(n) the same as s(n) + t(n)? Why? I leave this question for you to answer. If
the answer is yes, prove it. If the answer is no, explain the reason.
12.4.5 Generalizing the Integer Partition Problem
This problem has many variations, for example:

• How many "2"s are used?
• How many "3"s are used? And so on.
• How many "+" symbols are used?
• How many numbers are used, and what is the general rule? (A brute-force sketch after this list reproduces these counts.)
  - When n is 1, one number is used.
  - When n is 2, three numbers are used.
  - When n is 3, eight numbers are used.
  - When n is 4, twenty numbers are used.
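For the last variation, the following brute-force C sketch (an illustration, not the book's code) reproduces the counts 1, 3, 8, and 20 by enumerating every partition of n, with the order of the numbers being significant, and adding up how many numbers each partition uses. The name numbers_used is made up for this sketch.

#include <stdio.h>

/* numbers_used(n, chosen): "chosen" numbers have already been picked for the
 * current partition; enumerate every way to partition the remaining n and
 * return the total count of numbers used over all completed partitions. */
static int numbers_used(int n, int chosen)
{
    if (n == 0)
        return chosen;                           /* one finished partition */
    int total = 0;
    for (int first = 1; first <= n; first++)     /* pick the next number */
        total += numbers_used(n - first, chosen + 1);
    return total;
}

int main(void)
{
    for (int n = 1; n <= 4; n++)
        printf("n = %d: %d numbers are used\n", n, numbers_used(n, 0));
    return 0;
}

Running it prints 1, 3, 8, and 20 for n = 1 through 4, agreeing with the list above.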
12.4.6 How Not to Solve the Integer Partition Problem
Sometimes people try to solve these types of problems in the following way:
1. Manually count the answers for the first several values of n.
2. Observe the relationships and write a formula that satisfies these relationships.
3. Claim this formula is the answer.
This approach is logically flawed. For any finite number of pairs (x_1, y_1), (x_2, y_2), (x_3, y_3), ..., (x_k, y_k), there is always a polynomial y = a_k x^k + a_{k-1} x^{k-1} + ... + a_1 x + a_0 that passes through these points. That does not mean this polynomial is the correct formula. In fact, the previous examples show that the answers are not polynomials.
There is another explanation for why “conclusion by observation” is logically flawed. Do
you have a favorite television program that is broadcast daily? By observation, this program
is on air every day. Can you claim that this program will be on air forever? Of course not.
The program may stop after a few seasons or a few years. Observation of finite instances
is not a valid way to derive a general rule. Even after a thousand observations, you cannot guarantee that the pattern will hold the next time.
The equations in (12.8), (12.9), (12.10), and (12.12) are not derived from observation of
finite cases. The equations are correct for any positive integer n. In some cases n must be
greater than some specific value, for example, n > 1 in (12.10). The equations are general
and the derivations from these equations are logically sound. When you solve this type of
problem, please remember that observation is insufficient.
Recursive formulas are actually reasonably straightforward with some practice. The key is realizing that without recursion, the problem may be genuinely difficult. The appeal of recursion is that you can assume you already have the answers to the smaller cases. Therefore, if you can write f(n) in terms of f applied to values smaller than n, and if you can handle the trivial cases such as f(0) and f(1), then you have the entire solution.
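As a small illustration of this pattern (a sketch, not a listing from the book), consider counting all partitions of n when the order of the numbers matters: the trivial case is n = 1, and every other case is written purely in terms of answers to smaller arguments.

#include <stdio.h>

/* count(n): how many partitions does n have when the order of the numbers
 * matters?  Trivial case: n == 1.  Every other case assumes the answers for
 * smaller arguments are already correct.  A sketch for illustration only. */
static int count(int n)
{
    if (n == 1)
        return 1;                                /* the only partition is "1" */
    int ways = 1;                                /* the partition "n" by itself */
    for (int first = 1; first < n; first++)      /* choose the first number */
        ways += count(n - first);                /* partition the remainder */
    return ways;
}

int main(void)
{
    for (int n = 1; n <= 6; n++)
        printf("count(%d) = %d\n", n, count(n));
    return 0;
}

The output doubles at every step (1, 2, 4, 8, 16, 32), which is exactly the kind of non-polynomial growth that defeats the "fit a formula to a few observations" approach criticized above.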