Very frequently, it is of interest to scan a website and extract information from specific tags. This basic mechanism can be used to trawl the web in search of useful bits of information. At other times you might need a list of <IMG> tags and their SRC attributes, or <A> tags and their corresponding HREF attributes. The possibilities are endless.
The obvious way to grab the contents of the target website is with file_get_contents(). The problem with this approach is that we end up having to do a massive amount of string manipulation, most likely making inordinate use of the dreaded regular expression. In order to avoid all of this, we simply take advantage of PHP's built-in DOMDocument class. We create a DOMDocument instance, setting it to UTF-8. We don't care about whitespace, and we use the handy loadHTMLFile() method to load the contents of the website into the object:

public function getContent($url)
{
    if (!$this->content) {
        if (stripos($url, 'http') !== 0) {
            $url = 'http://' . $url;
        }
        $this->content = new DOMDocument('1.0', 'utf-8');
        $this->content->preserveWhiteSpace = FALSE;
        // @ used to suppress warnings generated from
        // improperly configured web pages
        @$this->content->loadHTMLFile($url);
    }
    return $this->content;
}

Note that we precede the call to the loadHTMLFile() method with an @. This is not done to obscure bad coding, as was so often the case in PHP 5! Rather, the @ suppresses the warnings generated when the parser encounters poorly written HTML. Presumably, we could capture those warnings and log them, possibly giving our Hoover class a diagnostic capability as well.
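As a sketch of that diagnostic idea, getContent() could route the parser warnings into a buffer using PHP's libxml error functions rather than silencing them with @. This is a minimal variant, not the implementation shown above, and the $this->errors property is a hypothetical addition to the class:

public function getContent($url)
{
    if (!$this->content) {
        if (stripos($url, 'http') !== 0) {
            $url = 'http://' . $url;
        }
        $this->content = new DOMDocument('1.0', 'utf-8');
        $this->content->preserveWhiteSpace = FALSE;
        // divert libxml warnings into an internal buffer
        $oldSetting = libxml_use_internal_errors(TRUE);
        $this->content->loadHTMLFile($url);
        // record the messages; $this->errors is a hypothetical property
        foreach (libxml_get_errors() as $error) {
            $this->errors[] = trim($error->message);
        }
        libxml_clear_errors();
        libxml_use_internal_errors($oldSetting);
    }
    return $this->content;
}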
Next, we define a method, getTags(), to extract the tags of interest; it uses DOMDocument's getElementsByTagName() method for this purpose. If we wish to extract all tags, we can supply * as an argument:

public function getTags($url, $tag)
{
    $count = 0;
    $result = array();
    $elements = $this->getContent($url)
                     ->getElementsByTagName($tag);
    foreach ($elements as $node) {
        $result[$count]['value'] = trim(preg_replace('/\s+/', ' ', $node->nodeValue));
        if ($node->hasAttributes()) {
            foreach ($node->attributes as $name => $attr) {
                $result[$count]['attributes'][$name] = $attr->value;
            }
        }
        $count++;
    }
    return $result;
}
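As a quick, hypothetical illustration of the structure getTags() returns (the URL and tag name are only examples, and the autoloader setup shown later is assumed):

$vac = new Application\Web\Hoover();
// each entry holds the tag's text in 'value' and, when present,
// an 'attributes' sub-array keyed by attribute name
foreach ($vac->getTags('http://oreilly.com/', 'img') as $img) {
    echo $img['attributes']['src'] ?? '(no src)', PHP_EOL;
}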
We also define a method to extract attribute values, getAttribute(). You'll notice that there is a parameter for the DNS domain. We've added this in order to keep the scan within the same domain (if you're building a web tree, for example):

public function getAttribute($url, $attr, $domain = NULL)
{
    $result = array();
    $elements = $this->getContent($url)
                     ->getElementsByTagName('*');
    foreach ($elements as $node) {
        if ($node->hasAttribute($attr)) {
            $value = $node->getAttribute($attr);
            if ($domain) {
                if (stripos($value, $domain) !== FALSE) {
                    $result[] = trim($value);
                }
            } else {
                $result[] = trim($value);
            }
        }
    }
    return $result;
}

In order to use the new Hoover class, initialize the autoloader (described previously) and create an instance of the Hoover class. You can then run the Hoover::getTags() method to produce an array of tags from the URL you specify as an argument.
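The full example below exercises getTags(); getAttribute() works the same way. Here is a minimal, hypothetical call that gathers every HREF value on the page, with the third argument keeping the results within the oreilly.com domain:

$vac = new Application\Web\Hoover();
// the third argument restricts matches to the given DNS domain
$links = $vac->getAttribute('http://oreilly.com/', 'href', 'oreilly.com');
var_dump($links);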
Here is a block of code from chap_01_vacuuming_website.php that uses the Hoover class to scan the O'Reilly website for <A> tags:
<?php
// modify as needed
define('DEFAULT_URL', 'http://oreilly.com/');
define('DEFAULT_TAG', 'a');
require __DIR__ . '/../Application/Autoload/Loader.php';
Application\Autoload\Loader::init(__DIR__ . '/..');
// get "vacuum" class
$vac = new Application\Web\Hoover();
// NOTE: the PHP 7 null coalesce operator is used
$url = strip_tags($_GET['url'] ?? DEFAULT_URL);
$tag = strip_tags($_GET['tag'] ?? DEFAULT_TAG);
echo 'Dump of Tags: ' . PHP_EOL;
var_dump($vac->getTags($url, $tag));

The output will look something like this:
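(The exact listing varies with the page at the time of the scan; the entry below is illustrative only, not captured output.)

[0]=>
array(2) {
  ["value"]=>
  string(4) "Home"
  ["attributes"]=>
  array(1) {
    ["href"]=>
    string(18) "http://oreilly.com"
  }
}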

For more information on DOM, see the PHP reference page at http://php.net/manual/en/class.domdocument.php.