Table of Contents for
Mastering Assembly Programming


Mastering Assembly Programming by Alexey Lyashko, published by Packt Publishing, 2017
  1. Mastering Assembly Programming
  2. Title Page
  3. Copyright
  4. Mastering Assembly Programming
  5. Credits
  6. About the Author
  7. About the Reviewer
  8. www.PacktPub.com
  9. Why subscribe?
  10. Customer Feedback
  11. Table of Contents
  12. Preface
  13. What this book covers
  14. What you need for this book
  15. Who this book is for
  16. Conventions
  17. Reader feedback
  18. Customer support
  19. Downloading the example code
  20. Errata
  21. Piracy
  22. Questions
  23. Intel Architecture
  24. Processor registers
  25. General purpose registers
  26. Accumulators
  27. Counter
  28. Stack pointer
  29. Source and destination indices
  30. Base pointer
  31. Instruction pointer
  32. Floating point registers
  33. XMM registers
  34. Segment registers and memory organization
  35. Real mode
  36. Protected mode - segmentation
  37. Protected mode - paging
  38. Long mode - paging
  39. Control registers
  40. Debug registers
  41. Debug address registers DR0 - DR3
  42. Debug control register (DR7)
  43. Debug status register (DR6)
  44. The EFlags register
  45. Bit #0 - carry flag
  46. Bit #2 - parity flag
  47. Bit #4 - adjust flag
  48. Bit #6 - zero flag
  49. Bit #7 - sign flag
  50. Bit #8 - trap flag
  51. Bit #9 - interrupt enable flag
  52. Bit #10 - direction flag
  53. Bit #11 - overflow flag
  54. Remaining bits
  55. Summary
  56. Setting Up a Development Environment
  57. Microsoft Macro Assembler
  58. Installing Microsoft Visual Studio 2017 Community
  59. Setting up the Assembly project
  60. GNU Assembler (GAS)
  61. Installing GAS
  62. Step 1 - installing GAS
  63. Step 2 - let's test
  64. Flat Assembler
  65. Installing the Flat Assembler
  66. The first FASM program
  67. Windows
  68. Linux
  69. Summary
  70. Intel Instruction Set Architecture (ISA)
  71. Assembly source template
  72. The Windows Assembly template (32-bit)
  73. The Linux Assembly template (32-bit)
  74. Data types and their definitions
  75. A debugger
  76. The instruction set summary
  77. General purpose instructions
  78. Data transfer instructions
  79. Binary Arithmetic Instructions
  80. Decimal arithmetic instructions
  81. Logical instructions
  82. Shift and rotate instructions
  83. Bit and byte instructions
  84. Execution flow transfer instructions
  85. String instructions
  86. ENTER/LEAVE
  87. Flag control instructions
  88. Miscellaneous instructions
  89. FPU instructions
  90. Extensions
  91. AES-NI
  92. SSE
  93. Example program
  94. Summary
  95. Memory Addressing Modes
  96. Addressing code
  97. Sequential addressing
  98. Direct addressing
  99. Indirect addressing
  100. RIP based addressing
  101. Addressing data
  102. Sequential addressing
  103. Direct addressing
  104. Scale, index, base, and displacement
  105. RIP addressing
  106. Far pointers
  107. Summary
  108. Parallel Data Processing
  109. SSE
  110. Registers
  111. Revisions
  112. Biorhythm calculator
  113. The idea
  114. The algorithm
  115. Data section
  116. The code
  117. Standard header
  118. The main() function
  119. Data preparation steps
  120. Calculation loop
  121. Adjustment of sine input values
  122. Computing sine
  123. Exponentiation
  124. Factorials
  125. AVX-512
  126. Summary
  127. Macro Instructions
  128. What are macro instructions?
  129. How it works
  130. Macro instructions with parameters
  131. Variadic macro instructions
  132. An introduction to calling conventions
  133. cdecl (32-bit)
  134. stdcall (32-bit)
  135. Microsoft x64 (64-bit)
  136. AMD64 (64-bit)
  137. A note on Flat Assembler's macro capabilities
  138. Macro instructions in MASM and GAS
  139. Microsoft Macro Assembler
  140. The GNU Assembler
  141. Other assembler directives (FASM Specific)
  142. The conditional assembly
  143. Repeat directives
  144. Inclusion directives
  145. The include directive
  146. File directive
  147. Summary
  148. Data Structures
  149. Arrays
  150. Simple byte arrays
  151. Arrays of words, double words, and quad words
  152. Structures
  153. Addressing structure members
  154. Arrays of structures
  155. Arrays of pointers to structures
  156. Linked lists
  157. Special cases of linked lists
  158. Stack
  159. Queue and deque
  160. Priority queues
  161. Cyclic linked list
  162. Summary for special cases of linked lists
  163. Trees
  164. A practical example
  165. Example - trivial cryptographic virtual machine
  166. Virtual machine architecture
  167. Adding support for a virtual processor to the Flat Assembler
  168. Virtual code
  169. The virtual processor
  170. Searching the tree
  171. The loop
  172. Tree balancing
  173. Sparse matrices
  174. Graphs
  175. Summary
  176. Mixing Modules Written in Assembly and Those Written in High-Level Languages
  177. Crypto Core
  178. Portability
  179. Specifying the output format
  180. Conditional declaration of code and data sections
  181. Exporting symbols
  182. Core procedures
  183. Encryption/decryption
  184. Setting the encryption/decryption parameters
  185. f_set_data_pointer
  186. f_set_data_length
  187. GetPointers()
  188. Interfacing with C/C++
  189. Static linking - Visual Studio 2017
  190. Static linking - GCC
  191. Dynamic linking
  192. Assembly and managed code
  193. Native structure versus managed structure
  194. Importing from DLL/SO and function pointers
  195. Summary
  196. Operating System Interface
  197. The rings
  198. System call
  199. System call hardware interface
  200. Direct system calls
  201. Indirect system calls
  202. Using libraries
  203. Windows
  204. Linking against object and/or library files
  205. Object file
  206. Producing the executable
  207. Importing procedures from DLL
  208. Linux
  209. Linking against object and/or library files
  210. Object file
  211. Producing the executable
  212. Dynamic linking of ELF
  213. The code
  214. Summary
  215. Patching Legacy Code
  216. The executable
  217. The issue
  218. PE files
  219. Headers
  220. Imports
  221. Gathering information
  222. Locating calls to gets()
  223. Preparing for the patch
  224. Importing fgets()
  225. Patching calls
  226. Shim code
  227. Applying the patch
  228. A complex scenario
  229. Preparing the patch
  230. Adjusting file headers
  231. Appending a new section
  232. Fixing the call instruction
  233. ELF executables
  234. LD_PRELOAD
  235. A shared object
  236. Summary
  237. Oh, Almost Forgot
  238. Protecting the code
  239. The original code
  240. The call
  241. The call obfuscation macro
  242. A bit of kernel space
  243. LKM structure
  244. LKM source
  245. .init.text
  246. .exit.text
  247. .rodata.str1.1
  248. .modinfo
  249. .gnu.linkonce.this_module
  250. __versions
  251. Testing the LKM
  252. Summary

Sparse matrices

Sparse matrices are rarely discussed, if at all, due to the relative complexity of their implementation and maintenance; nevertheless, they can be a very convenient and useful instrument in certain cases. Conceptually, sparse matrices are very close to arrays, but they are much more memory-efficient when the data itself is sparse, which in turn allows much larger amounts of data to be processed.

Let's take astrophotography as an example. For those of us not familiar with the subject, amateur astrophotography means plugging a digital camera into a telescope, selecting a region of the night sky, and taking pictures. However, since the pictures are taken at night without a flashlight or any other aid (it would be silly to try to light celestial objects with a flashlight anyway), one has to take dozens of pictures of the same object and then stack the images together using a specific algorithm. In this case, there are two major problems:

  • Noise reduction
  • Image alignment

Lacking professional equipment (meaning a large telescope with a cooled CCD or CMOS sensor), one faces the problem of noise: the longer the exposure, the more noise in the final image. Of course, there are numerous noise reduction algorithms, but sometimes a real celestial object may mistakenly be treated as noise and removed by such an algorithm. Therefore, it is a good idea to process each image and detect potential celestial objects. If a certain "light", which might otherwise be considered noise, is present in at least 80% of the images (it is hard to believe that any noise would survive that long without changing, unless we are talking about dead pixels), then its area needs different treatment.

However, in order to process an image, we need to decide how to store the result. We may, of course, use an array of structures describing each and every pixel, but that would be too expensive in terms of the memory required for such an operation. On the other hand, even if we take a picture of a densely populated area of the night sky, the area occupied by celestial objects is significantly smaller than the "empty" space. Instead, we may divide an image into smaller regions, analyze certain characteristics of those regions, and only take into consideration those that seem to be populated. The following figure presents the idea:

The figure (which shows the Messier 82 object, also known as the Cigar Galaxy) is divided into 396 smaller regions (a matrix of 22 x 18 regions, 15 x 15 pixels each). Each region may be described by its luminosity, noise ratio, and many other aspects, including its location in the figure, meaning that it may occupy quite a sizeable amount of memory. Storing this data in two-dimensional arrays for more than 30 images simultaneously may result in megabytes of mostly meaningless data. As the image shows, there are only two regions of interest, which together make up about 0.5% of the image (which fits the definition of sparse data perfectly), meaning that if we choose to use arrays, we waste 99.5% of the memory used.

Utilizing sparse matrices, we may reduce memory usage to the minimum required to store the important data. In this particular case, we would have a linked list of 22 column header nodes, 18 row header nodes, and only 2 data nodes. The following is a very rough example of such an arrangement:

The preceding example is very rough; in reality, the implementation would contain a few other links. For example, an empty column header node would have its down pointer point to itself, and an empty row header node would have its right pointer point to itself. Likewise, the last data node in a row would have its right pointer point to the row header node, and the last data node in a column would have its down pointer point to the column header node.