Table of Contents for
Python Web Penetration Testing Cookbook


Python Web Penetration Testing Cookbook by Dave Mound. Published by Packt Publishing, 2015
  1. Cover
  2. Table of Contents
  3. Python Web Penetration Testing Cookbook
  5. Credits
  6. About the Authors
  7. About the Reviewers
  8. www.PacktPub.com
  9. Disclaimer
  10. Preface
  11. What you need for this book
  12. Who this book is for
  13. Sections
  14. Conventions
  15. Reader feedback
  16. Customer support
  17. 1. Gathering Open Source Intelligence
  18. Gathering information using the Shodan API
  19. Scripting a Google+ API search
  20. Downloading profile pictures using the Google+ API
  21. Harvesting additional results from the Google+ API using pagination
  22. Getting screenshots of websites with QtWebKit
  23. Screenshots based on a port list
  24. Spidering websites
  25. 2. Enumeration
  26. Performing a ping sweep with Scapy
  27. Scanning with Scapy
  28. Checking username validity
  29. Brute forcing usernames
  30. Enumerating files
  31. Brute forcing passwords
  32. Generating e-mail addresses from names
  33. Finding e-mail addresses from web pages
  34. Finding comments in source code
  35. 3. Vulnerability Identification
  36. Automated URL-based Directory Traversal
  37. Automated URL-based Cross-site scripting
  38. Automated parameter-based Cross-site scripting
  39. Automated fuzzing
  40. jQuery checking
  41. Header-based Cross-site scripting
  42. Shellshock checking
  43. 4. SQL Injection
  44. Checking jitter
  45. Identifying URL-based SQLi
  46. Exploiting Boolean SQLi
  47. Exploiting Blind SQL Injection
  48. Encoding payloads
  49. 5. Web Header Manipulation
  50. Testing HTTP methods
  51. Fingerprinting servers through HTTP headers
  52. Testing for insecure headers
  53. Brute forcing login through the Authorization header
  54. Testing for clickjacking vulnerabilities
  55. Identifying alternative sites by spoofing user agents
  56. Testing for insecure cookie flags
  57. Session fixation through a cookie injection
  58. 6. Image Analysis and Manipulation
  59. Hiding a message using LSB steganography
  60. Extracting messages hidden in LSB
  61. Hiding text in images
  62. Extracting text from images
  63. Enabling command and control using steganography
  64. 7. Encryption and Encoding
  65. Generating an MD5 hash
  66. Generating an SHA 1/128/256 hash
  67. Implementing SHA and MD5 hashes together
  68. Implementing SHA in a real-world scenario
  69. Generating a Bcrypt hash
  70. Cracking an MD5 hash
  71. Encoding with Base64
  72. Encoding with ROT13
  73. Cracking a substitution cipher
  74. Cracking the Atbash cipher
  75. Attacking one-time pad reuse
  76. Predicting a linear congruential generator
  77. Identifying hashes
  78. 8. Payloads and Shells
  79. Extracting data through HTTP requests
  80. Creating an HTTP C2
  81. Creating an FTP C2
  82. Creating a Twitter C2
  83. Creating a simple Netcat shell
  84. 9. Reporting
  85. Converting Nmap XML to CSV
  86. Extracting links from a URL to Maltego
  87. Extracting e-mails to Maltego
  88. Parsing Sslscan into CSV
  89. Generating graphs using plot.ly
  90. Index

Checking jitter

The only difficult thing about performing time-based SQL Injections is that plague of gamers everywhere: lag. A human can easily sit down and account for lag mentally, taking a string of returned values, sensibly going over the output, and working out that cgris should be chris. For a machine, this is much harder; therefore, we should attempt to reduce the impact of delay.

We will be creating a script that makes multiple requests to a server, records the response times, and returns an average. This baseline can then be used to account for fluctuations in response times, known as jitter, during time-based attacks.
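Strictly speaking, jitter is the variation in response times rather than the average itself. If you also want to quantify that spread, Python 3's statistics module can do it. This is a standalone sketch with made-up timings, not code from the book:

```python
import statistics

# Hypothetical response times, in seconds, from repeated requests
times = [0.30, 0.32, 0.29, 0.45, 0.31]

average = statistics.mean(times)
jitter = statistics.pstdev(times)  # population standard deviation

print(average)  # the baseline response time
print(jitter)   # spread around that baseline, i.e. the jitter
```

A large standard deviation relative to the delay your payload introduces means you will need a longer sleep time to get a reliable signal.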

How to do it…

Identify the URL you wish to test and provide it to the script as a command-line argument through sys.argv:

import requests
import sys

url = sys.argv[1]

i = 0
values = []

while i < 100:
  r = requests.get(url)
  values.append(r.elapsed.total_seconds())
  i = i + 1

average = sum(values) / float(len(values))
print("Average response time for " + url + " is " + str(average))

The following screenshot is an example of the output produced when using this script:


How it works…

We import the libraries we require for this script, as with every other script we've written in this book so far. We set the counter i to zero and create an empty list for the times we are about to generate:

while i < 100:
  r = requests.get(url)
  values.append(r.elapsed.total_seconds())
  i = i + 1

Using the counter i, we run 100 requests to the target URL and append the response time of each request to the list we created earlier. r.elapsed is a timedelta object, not a number, and therefore must have .total_seconds() called on it in order to get a usable value for our later average. We then add one to the counter for this loop so that the script ends appropriately:
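As a quick illustration of why .total_seconds() is needed, here is how a timedelta behaves on its own (a standalone snippet, independent of the recipe):

```python
from datetime import timedelta

# requests stores the request duration as a timedelta object
d = timedelta(seconds=1, microseconds=250000)

# total_seconds() converts it into a single float we can average
print(d.total_seconds())  # 1.25
```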

average = sum(values) / float(len(values))
print("Average response time for " + url + " is " + str(average))

Once the loop is complete, we calculate the average of the 100 requests by adding up the values in the list with sum and dividing the total by the number of values in the list with len.
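The averaging step in isolation, using made-up timings rather than live responses:

```python
# Hypothetical response times in seconds
values = [0.21, 0.34, 0.29, 0.40]

average = sum(values) / float(len(values))
print(average)  # approximately 0.31
```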

We then return a basic output for ease of understanding.

There's more…

This is a very basic way of performing this action and really only works as a standalone script to prove a point. To use it as part of another script, we would do the following:

import requests
import sys

target = sys.argv[1]

def averagetimer(url):
  i = 0
  values = []

  while i < 100:
    r = requests.get(url)
    values.append(r.elapsed.total_seconds())
    i = i + 1

  average = sum(values) / float(len(values))
  return average

print(averagetimer(target))
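To show how the baseline might feed into a time-based check, here is a small, hypothetical helper (the function name and threshold are illustrative, not from the book) that flags a response as suspicious when its time exceeds the baseline by more than a chosen margin:

```python
def looks_delayed(response_time, baseline, margin=2.0):
    """Return True if response_time exceeds the measured baseline
    by more than margin seconds (e.g. an injected SLEEP(5) firing)."""
    return response_time > baseline + margin

# With a baseline of ~0.3s, a 5.4s response suggests the delay ran
print(looks_delayed(5.4, 0.3))   # True
print(looks_delayed(0.45, 0.3))  # False
```

Choosing the margin is where the jitter measurement pays off: the margin should comfortably exceed the normal fluctuation you observed, or slow-but-innocent responses will produce false positives.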