PHP Recursion Using Arrays

Problem Description

I have a function (listarUrls()) that returns/scans all the URLs it finds on a web page. For each URL the function returns, I need to list/scan all the URLs of that page in turn, as many times as the user requests. That is:

        . If the user asks for 1 iteration of the URL www.a.com, return:
            -$arry[0] www.1.com
            -$arry[1] www.2.com
            -... and so on for all the URLs found on www.a.com

        . If the user asks for 2 iterations of the URL www.a.com, return:
            -$arry[0] www.1.com
                -$arry[0][0] www.1-1.com
                -$arry[0][1] www.1-2.com
                -... and so on for all the URLs found on www.1.com
            -$arry[1] www.2.com
                -$arry[1][0] www.2-1.com
                -$arry[1][1] www.2-2.com
                -... and so on for all the URLs found on www.2.com
            -...

        . If the user asks for 3 iterations of the URL www.a.com, return:
            -$arry[0] www.1.com
                -$arry[0][0] www.1-1.com
                    -$arry[0][0][0] www.1-1-1.com
                    -$arry[0][0][1] www.1-1-2.com
                    -... and so on for all the URLs found on www.1-1.com
                -$arry[0][1] www.1-2.com
                    -$arry[0][1][0] www.1-2-1.com
                    -$arry[0][1][1] www.1-2-2.com
                    -... and so on for all the URLs found on www.1-2.com
            -$arry[1] www.2.com
                -$arry[1][0] www.2-1.com
                    -$arry[1][0][0] www.2-1-1.com
                    -$arry[1][0][1] www.2-1-2.com
                    -... and so on for all the URLs found on www.2-1.com
                -$arry[1][1] www.2-2.com
                    -$arry[1][1][0] www.2-2-1.com
                    -$arry[1][1][1] www.2-2-2.com
                    -... and so on for all the URLs found on www.2-2.com
        -...

Can anyone shed some light on this topic?

Tags: php, arrays, recursion

Solution


This is web scraping, with an option to indicate the depth of the investigation.

We can have a function definition like the following:

function scrapeURLs($url,$steps,&$visited_urls = []);

$url is the current URL we are scraping. $steps is the step of the investigation we are on; if $steps == 1 at any point in our recursive function, we stop scraping further. $visited_urls makes sure we don't visit the same URL twice.
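As a side note, the by-reference default parameter is what lets $visited_urls accumulate state across all recursive calls instead of being reset in each one. A minimal sketch of that pattern (with hypothetical names, independent of the scraper):

```php
<?php
// Hypothetical demo: a by-reference array parameter with a default value
// is shared across the whole recursion, so every call sees what the
// earlier calls already marked as visited.
function visit(string $node, int $depth, array &$seen = []): array {
    $result = [];
    if ($depth < 1 || isset($seen[$node])) {
        return $result;
    }
    $seen[$node] = true; // recorded once, visible to all other calls
    // pretend each node links to exactly two children
    foreach (["$node/a", "$node/b"] as $child) {
        $result[$child] = $depth === 1 ? [] : visit($child, $depth - 1, $seen);
    }
    return $result;
}

$seen = [];
$tree = visit('root', 2, $seen);
print_r(array_keys($seen)); // $seen now holds root, root/a and root/b
```

Without the `&`, each recursive call would receive its own copy of the array and duplicate URLs could be revisited.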

Snippet:

<?php
ini_set('max_execution_time','500');
libxml_use_internal_errors(true); // not recommended but fine for debugging. Make sure HTML of the URL follows DOMDocument requirements
function scrapeURLs($url,$steps,&$visited_urls = []){
    $result = [];   
    if(preg_match('/^http(s)?:\/\/.+/',$url) === 0){ // not a proper absolute URL: stop here (relative URLs would need to be resolved against the current page before this check)
        return $result;
    }

    $dom = new DOMDocument();
    $dom->loadHTMLFile($url);

    // get all script tags
    foreach($dom->getElementsByTagName('script') as $script_tag){
        $script_url = $script_tag->getAttribute('src');
        if($script_url !== '' && !isset($visited_urls[$script_url])){ // skip inline <script> tags, which have no src
            $visited_urls[$script_url] = true;
            $result[$script_url] = $steps === 1 ? [] : scrapeURLs($script_url,$steps - 1,$visited_urls); // stop or recurse further
        }
    }

    // get all anchor tags
    foreach($dom->getElementsByTagName('a') as $anchor_tag){
        $anchor_url = $anchor_tag->getAttribute('href');
        if($anchor_url !== '' && !isset($visited_urls[$anchor_url])){ // skip anchors without an href
            $visited_urls[$anchor_url] = true;
            $result[$anchor_url] = $steps === 1 ? [] : scrapeURLs($anchor_url,$steps - 1,$visited_urls); // stop or recurse further
        }
    }

    /* Likewise, you can capture several other URLs like CSS stylesheets, image URLs etc*/

    return $result;
}

print_r(scrapeURLs('http://yoursite.com/',2));
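The regex in the snippet rejects relative URLs, as its inline comment notes. One way to handle them is to resolve each link against the page it was found on before that check. Below is a minimal sketch; resolveUrl is a hypothetical helper not part of the original answer, and it assumes simple absolute-path, protocol-relative, and plain relative links (no ../ traversal):

```php
<?php
// Hypothetical helper: turn a relative link into an absolute URL,
// resolved against the base URL of the page it was scraped from.
function resolveUrl(string $base, string $href): string {
    if (preg_match('/^http(s)?:\/\//', $href)) {
        return $href;                               // already absolute
    }
    $parts  = parse_url($base);
    $origin = $parts['scheme'] . '://' . $parts['host'];
    if (strpos($href, '//') === 0) {
        return $parts['scheme'] . ':' . $href;      // protocol-relative link
    }
    if (strpos($href, '/') === 0) {
        return $origin . $href;                     // absolute path on same host
    }
    // plain relative path: resolve against the directory of the base URL
    $dir = isset($parts['path']) ? rtrim(dirname($parts['path']), '/') : '';
    return $origin . $dir . '/' . $href;
}

echo resolveUrl('http://yoursite.com/blog/post', '/about'), "\n";  // http://yoursite.com/about
echo resolveUrl('http://yoursite.com/blog/post', 'img.png'), "\n"; // http://yoursite.com/blog/img.png
```

Inside the scraper, you would call something like `resolveUrl($url, $anchor_url)` before the `preg_match` guard so that relative links are scraped instead of dropped.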
