Improving HTML scraper efficiency with pcntl_fork()

With the help of two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork.

If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know I am not I/O or CPU bound, just limited by the linear nature of my scraping functions.

Using code I've cobbled together from multiple sources, I have this working test:

<?php
libxml_use_internal_errors(true);
ini_set('max_execution_time', 0); 
ini_set('max_input_time', 0); 
set_time_limit(0);

$hrefArray = array("http://slashdot.org", "http://slashdot.org", "http://slashdot.org", "http://slashdot.org");

function doDomStuff($singleHref,$childPid) {
    $html = new DOMDocument();
    $html->loadHtmlFile($singleHref);

    $xPath = new DOMXPath($html);

    $domQuery = '//div[@id="slogan"]/h2';
    $domReturn = $xPath->query($domQuery);

    foreach($domReturn as $return) {
        $slogan = $return->nodeValue;
        echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
    }
}

$pids = array();
foreach ($hrefArray as $singleHref) {
    $pid = pcntl_fork();

    if ($pid == -1) {
        die("Couldn't fork, error!");
    } elseif ($pid > 0) {
        // We are the parent
        $pids[] = $pid;
    } else {
        // We are the child
        $childPid = posix_getpid();
        doDomStuff($singleHref,$childPid);
        exit(0);
    }
}

foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}

// Clear the libxml buffer so it doesn't fill up
libxml_clear_errors();

Which raises the following questions:

1) My hrefArray contains 4 URLs - if the array were to contain, say, 1,000 product URLs, would this code spawn 1,000 child processes? If so, what is the best way to limit the number of processes to, say, 10, and split the workload so that (with 1,000 URLs as the example) each child handles 100 products (10 x 100)?

2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feed them off to child processes to do the processing - spreading the load across 10 child workers.

My brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):

<?php
libxml_use_internal_errors(true);
ini_set('max_execution_time', 0); 
ini_set('max_input_time', 0); 
set_time_limit(0);
$maxChildWorkers = 10;

$html = new DOMDocument();
$html->loadHtmlFile('http://xxxx');
$xPath = new DOMXPath($html);

$domQuery = '//div[@id="productDetail"]/a';
$domReturn = $xPath->query($domQuery);

$hrefsArray = array();
foreach ($domReturn as $node) {
    $hrefsArray[] = $node->getAttribute('href');
}

function doDomStuff($singleHref) {
    // Do stuff here with each product
}

// To figure out: Split href array into $maxChildWorkers # of chunks: workArray1, workArray2 ... workArray10.
$pids = array();
foreach ($workArray(1,2,3 ... 10) as $singleHref) {
    $pid = pcntl_fork();

    if ($pid == -1) {
        die("Couldn't fork, error!");
    } elseif ($pid > 0) {
        // We are the parent
        $pids[] = $pid;
    } else {
        // We are the child
        $childPid = posix_getpid();
        doDomStuff($singleHref);
        exit(0);
    }
}


foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}

// Clear the libxml buffer so it doesn't fill up
libxml_clear_errors();

But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it off to the child processes. Currently everything I've tried causes loops in the child processes, i.e. my hrefsArray gets built in the master and again in each subsequent child process.
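
For illustration, here is a rough sketch of the shape I think I am after - build the list once in the parent, split it with array_chunk, and hand each forked child its own chunk (the per-chunk loop inside each child is my guess at how the work would be spread):

<?php
// Sketch only: the parent builds $hrefsArray once, then forks at most
// $maxChildWorkers children, each of which works through one chunk.
$maxChildWorkers = 10;

// ... build $hrefsArray here, in the parent only, via DOMDocument/DOMXPath ...
$hrefsArray = array(/* product URLs gathered by the parent */);

// Split into at most $maxChildWorkers roughly equal chunks
$chunkSize = max(1, (int) ceil(count($hrefsArray) / $maxChildWorkers));
$workChunks = array_chunk($hrefsArray, $chunkSize);

$pids = array();
foreach ($workChunks as $chunk) {
    $pid = pcntl_fork();

    if ($pid == -1) {
        die("Couldn't fork, error!");
    } elseif ($pid > 0) {
        // We are the parent
        $pids[] = $pid;
    } else {
        // We are the child: work through this chunk only, then exit
        foreach ($chunk as $singleHref) {
            doDomStuff($singleHref);
        }
        exit(0);
    }
}

foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}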

I am sure I am going about this totally wrong, so I would greatly appreciate just a general nudge in the right direction.

Cellar answered 18/5, 2010 at 23:48

It seems like I suggest this daily, but have you looked at Gearman? There's even a well documented PECL class.

Gearman is a work queue system. You'd create workers that connect and listen for jobs, and clients that connect and send jobs. The client can either wait for the requested job to be completed, or fire it and forget. If you want, workers can even send back status updates and how far through the job they are.

In other words, you get the benefits of multiple processes or threads, without having to worry about processes and threads. The clients and workers can even be on different machines.
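
For a rough idea, here is a minimal sketch using the PECL GearmanWorker/GearmanClient classes (the server address, the "scrape_product" job name, and the worker body are placeholders of my own, not something from your code):

<?php
// worker.php (sketch): registers a "scrape_product" function and then
// blocks, processing jobs as the Gearman server hands them out.
function scrapeProduct(GearmanJob $job) {
    $url = $job->workload();            // the URL the client sent
    // ... fetch $url, parse it with DOMDocument, write to the database ...
}

$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);  // default gearmand host/port
$worker->addFunction('scrape_product', 'scrapeProduct');
while ($worker->work());

<?php
// client.php (sketch): queue one background ("fire and forget") job per URL.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
foreach ($hrefsArray as $singleHref) {
    $client->doBackground('scrape_product', $singleHref);
}

Run as many copies of the worker script as you want concurrent scrapers; gearmand takes care of handing each one the next job.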

Hydrokinetic answered 4/6, 2010 at 3:46

Introduction

pcntl_fork() is not the only way to improve the performance of an HTML scraper. While it might be a good idea to use a message queue as Charles suggested, you still need a fast, effective way to pull those requests in your workers.

Solution 1

Use curl_multi_init. cURL is actually faster, and using multi cURL gives you parallel processing.

From the PHP docs:

curl_multi_init - Allows the processing of multiple cURL handles in parallel.

So instead of using $html->loadHtmlFile('http://xxxx'); to load the files one after another, you can use curl_multi_init to load multiple URLs at the same time.
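
For example, a bare-bones sketch of the idea (the URLs here are placeholders; in practice they would be your product links):

<?php
// Sketch: fetch all URLs concurrently with curl_multi, then parse each
// response with DOMDocument exactly as before.
$urls = array("http://example.com/page1", "http://example.com/page2");

$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Drive all transfers until every handle has finished
$running = null;
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);             // avoid busy-waiting
} while ($running > 0);

foreach ($handles as $url => $ch) {
    $html = curl_multi_getcontent($ch); // raw HTML as a string
    // ... $dom = new DOMDocument(); $dom->loadHTML($html); DOMXPath as usual ...
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);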

Here are some Interesting Implementations

Solution 2

You can use the pthreads extension to get multi-threading in PHP.

Example

// Number of threads you want
$threads = 10;

// Thread storage
$ts = array();

// Your list of URLs (range() is just for the demo)
$urls = range(1, 50);

// Group Urls
$urlsGroup = array_chunk($urls, floor(count($urls) / $threads));

printf("%s:PROCESS  #load\n", date("g:i:s"));

$name = range("A", "Z");
$i = 0;
foreach ( $urlsGroup as $group ) {
    $ts[] = new AsyncScraper($group, $name[$i ++]);
}

printf("%s:PROCESS  #join\n", date("g:i:s"));

// wait for all Threads to complete
foreach ( $ts as $t ) {
    $t->join();
}

printf("%s:PROCESS  #finish\n", date("g:i:s"));

Output

9:18:00:PROCESS  #load
9:18:00:START  #5592     A
9:18:00:START  #9620     B
9:18:00:START  #11684    C
9:18:00:START  #11156    D
9:18:00:START  #11216    E
9:18:00:START  #11568    F
9:18:00:START  #2920     G
9:18:00:START  #10296    H
9:18:00:START  #11696    I
9:18:00:PROCESS  #join
9:18:00:START  #6692     J
9:18:01:END  #9620       B
9:18:01:END  #11216      E
9:18:01:END  #10296      H
9:18:02:END  #2920       G
9:18:02:END  #11696      I
9:18:04:END  #5592       A
9:18:04:END  #11568      F
9:18:04:END  #6692       J
9:18:05:END  #11684      C
9:18:05:END  #11156      D
9:18:05:PROCESS  #finish

Class Used

class AsyncScraper extends Thread {

    public function __construct(array $urls, $name) {
        $this->urls = $urls;
        $this->name = $name;
        $this->start();
    }

    public function run() {
        printf("%s:START  #%lu \t %s \n", date("g:i:s"), $this->getThreadId(), $this->name);
        if ($this->urls) {
            // Load with CURL
            // Parse with DOM
            // Do some work

            sleep(mt_rand(1, 5));
        }
        printf("%s:END  #%lu \t %s \n", date("g:i:s"), $this->getThreadId(), $this->name);
    }
}
Exhalation answered 28/3, 2013 at 20:21