Table of Contents for
Python Web Penetration Testing Cookbook


Python Web Penetration Testing Cookbook by Dave Mound, published by Packt Publishing, 2015
  1. Cover
  2. Table of Contents
  3. Python Web Penetration Testing Cookbook
  4. Python Web Penetration Testing Cookbook
  5. Credits
  6. About the Authors
  7. About the Reviewers
  8. www.PacktPub.com
  9. Disclaimer
  10. Preface
  11. What you need for this book
  12. Who this book is for
  13. Sections
  14. Conventions
  15. Reader feedback
  16. Customer support
  17. 1. Gathering Open Source Intelligence
  18. Gathering information using the Shodan API
  19. Scripting a Google+ API search
  20. Downloading profile pictures using the Google+ API
  21. Harvesting additional results from the Google+ API using pagination
  22. Getting screenshots of websites with QtWebKit
  23. Screenshots based on a port list
  24. Spidering websites
  25. 2. Enumeration
  26. Performing a ping sweep with Scapy
  27. Scanning with Scapy
  28. Checking username validity
  29. Brute forcing usernames
  30. Enumerating files
  31. Brute forcing passwords
  32. Generating e-mail addresses from names
  33. Finding e-mail addresses from web pages
  34. Finding comments in source code
  35. 3. Vulnerability Identification
  36. Automated URL-based Directory Traversal
  37. Automated URL-based Cross-site scripting
  38. Automated parameter-based Cross-site scripting
  39. Automated fuzzing
  40. jQuery checking
  41. Header-based Cross-site scripting
  42. Shellshock checking
  43. 4. SQL Injection
  44. Checking jitter
  45. Identifying URL-based SQLi
  46. Exploiting Boolean SQLi
  47. Exploiting Blind SQL Injection
  48. Encoding payloads
  49. 5. Web Header Manipulation
  50. Testing HTTP methods
  51. Fingerprinting servers through HTTP headers
  52. Testing for insecure headers
  53. Brute forcing login through the Authorization header
  54. Testing for clickjacking vulnerabilities
  55. Identifying alternative sites by spoofing user agents
  56. Testing for insecure cookie flags
  57. Session fixation through a cookie injection
  58. 6. Image Analysis and Manipulation
  59. Hiding a message using LSB steganography
  60. Extracting messages hidden in LSB
  61. Hiding text in images
  62. Extracting text from images
  63. Enabling command and control using steganography
  64. 7. Encryption and Encoding
  65. Generating an MD5 hash
  66. Generating an SHA 1/128/256 hash
  67. Implementing SHA and MD5 hashes together
  68. Implementing SHA in a real-world scenario
  69. Generating a Bcrypt hash
  70. Cracking an MD5 hash
  71. Encoding with Base64
  72. Encoding with ROT13
  73. Cracking a substitution cipher
  74. Cracking the Atbash cipher
  75. Attacking one-time pad reuse
  76. Predicting a linear congruential generator
  77. Identifying hashes
  78. 8. Payloads and Shells
  79. Extracting data through HTTP requests
  80. Creating an HTTP C2
  81. Creating an FTP C2
  82. Creating a Twitter C2
  83. Creating a simple Netcat shell
  84. 9. Reporting
  85. Converting Nmap XML to CSV
  86. Extracting links from a URL to Maltego
  87. Extracting e-mails to Maltego
  88. Parsing Sslscan into CSV
  89. Generating graphs using plot.ly
  90. Index

Enumerating files

When enumerating a web application, you will want to determine what pages exist. A common technique for this is spidering, which works by visiting a website and then following every link within that page and any subsequent pages on the same site. However, for certain sites, such as wikis, this method may result in the deletion of data if a link performs an edit or delete function when it is accessed. This recipe instead takes a list of commonly found web page filenames and checks whether each one exists.

Getting ready

For this recipe, you will need to create a list of commonly found page names. Penetration testing distributions, such as Kali Linux, come with word lists for various brute forcing tools, and these can be used instead of generating your own.
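
If you do not have a suitable word list to hand, a minimal one can be created with a few lines of Python. The following sketch is only an illustration: the page names are arbitrary examples, and the filelist.txt filename simply matches the list used in the example run later in this recipe. On Kali Linux, ready-made lists can typically be found under /usr/share/wordlists.

# write a small example word list of common page names, one per line
common_pages = ["index", "login", "logout", "admin", "setup",
                "about", "contact", "search", "upload", "config"]

outfile = open("filelist.txt", "w")
for page in common_pages:
    outfile.write(page + "\n")
outfile.close()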

How to do it…

The following script takes a list of possible filenames and tests whether the corresponding pages exist within a website:

# brute force file names
import sys
import urllib2

if len(sys.argv) != 4:
    print "usage: %s url wordlist fileextension\n" % (sys.argv[0])
    sys.exit(0)

base_url = str(sys.argv[1])
wordlist = str(sys.argv[2])
extension = str(sys.argv[3])
filelist = open(wordlist,'r')
foundfiles = []

for file in filelist:
  # build the URL from the base URL, the candidate filename, and the extension
  file=file.strip("\n")
  extension=extension.rstrip()
  url=base_url+file+"."+str(extension.strip("."))
  try:
    # a 200 response code indicates that the page exists
    request = urllib2.urlopen(url)
    if(request.getcode()==200):
      foundfiles.append(file+"."+extension.strip("."))
    request.close()
  except urllib2.HTTPError, e:
    pass

if len(foundfiles)>0:
  print "The following files exist:\n"
  for filename in foundfiles:
    print filename+"\n"
else:
  print "No files found\n"

The following output shows what could be returned when run against Damn Vulnerable Web App (DVWA) using a list of commonly found web pages:

python filebrute.py http://192.168.68.137/dvwa/ filelist.txt .php
The following files exist:

index.php

about.php

login.php

security.php

logout.php

setup.php

instructions.php

phpinfo.php

How it works…

After importing the necessary modules and validating the number of arguments, the script opens the list of filenames to check in read-only mode, which is indicated by the r argument in the call to open:

filelist = open(wordlist,'r')

When the script enters the loop through the list of filenames, any newline character is stripped from the filename, as it would otherwise corrupt the URL built to check for the file's existence. If the provided extension begins with a ., that is also stripped, so the extension can be supplied either with or without the leading dot, for example, .php or php:

  file=file.strip("\n")
  extension=extension.rstrip()
  url=base_url+file+"."+str(extension.strip("."))
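
As a quick illustration of the dot handling (not part of the recipe itself), the following snippet shows that both forms of the extension produce the same URL; the base URL and filename are taken from the earlier example run:

base_url = "http://192.168.68.137/dvwa/"
file = "index"

for extension in [".php", "php"]:
    url = base_url + file + "." + extension.strip(".")
    print url   # both iterations print http://192.168.68.137/dvwa/index.php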

The main action of the script then determines whether a web page with the given filename exists by checking for an HTTP 200 response code, and catches the errors raised for nonexistent pages:

  try:
    request = urllib2.urlopen(url)
    if(request.getcode()==200):
      foundfiles.append(file+"."+extension.strip("."))
    request.close()
  except urllib2.HTTPError, e:
    pass
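
Note that the except block silently swallows every HTTPError, so a page that exists but returns a different status code, such as 403 Forbidden, will not be reported. As a rough sketch of one possible variation (not part of the original recipe), the error's code attribute could be inspected so that such pages are recorded too:

  except urllib2.HTTPError, e:
    # a 403 response suggests the file exists but access to it is denied
    if e.code == 403:
      foundfiles.append(file+"."+extension.strip(".")+" (403 Forbidden)")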