Web scraping and promises

Problem description

I'm using cheerio and Node for web scraping, and I have a question about promises. I can scrape the list of articles from a page, but each item in that list also links to its own single page, which I need to scrape as well. I'll show you my code so you can suggest a better solution.

import rp from 'request-promise'
import cheerio from 'cheerio'
import conn from './connection'

const flexJob = `https://www.flexjobs.com`
const flexJobCategory = ['account-management', 'bilingual']

class WebScraping {

    // list of jobs scraped from the listing page (e.g. page 2)
    results = [] // one entry per job: [title, '', pageLink, '', shortDescription, place, jobType, 'PageContent::: ']
    contentPage = [] // content scraped from each job's own page

    scrapeWeb(link) {
        let fullLink = `${link}/jobs/${flexJobCategory[1]}?page=2`
        const options = {
            uri: fullLink,
            transform(body) {
                return cheerio.load(body)
            }
        }
        rp(options)
            .then(($) => {
                console.log(fullLink)
                $('.featured-job').each((index, value) => {

                    //html nodes
                    let shortDescription = value.children[1].children[1].children[3].children[1].children[1].children[0].data
                    let link = value.children[1].children[1].children[1].children[1].children[1].children[0].attribs.href
                    let pageLink = flexJob + '' + link
                    let title = value.children[1].children[1].children[1].children[1].children[1].children[0].children[0].data
                    let place = value.children[1].children[1].children[1].children[1].children[3].children[1].data
                    let jobType = value.children[1].children[1].children[1].children[1].children[3].children[0].children[0].data
                    this.results.push([title, '', pageLink.replace(/\s/g, ''), '', shortDescription.replace(/\n/g, ''), place, jobType, 'PageContent::: '])
                })
            })
            .then(() => {
                this.results.forEach(element => {
                    console.log('link: ', element[2])
                    this.scrapePage(element[2])
                });
            })
            .then(() => {
                console.log('print content page', this.contentPage)
            })
            .then(() => {
                //this.insertIntoDB()
                console.log('insert into db')
            })
            .catch((err) => {
                console.log(err)
            })

    }

    /**
     * Scrapes the detail page of a single job from the results list
     * @param {String} pageLink 
     */
    scrapePage(pageLink) {
        let $this = this
        //console.log('We are in ScrapePage' + pageLink + ': number' + count)
        //this.results[count].push('Hello' + count)
        let content = ''
        const options = {
            uri: pageLink,
            transform(body) {
                return cheerio.load(body)
            }
        }
        rp(options)
            .then(($) => {
                //this.contentPage.push('Hello' + ' : ');
                console.log('Heloo')
            })
            .catch((err) => {
                console.log(err)
            })
    }
    /**
     * Inserts the scraped results into the database
     */
    insertIntoDB() {
        conn.connect((err) => {
            var sql = "INSERT INTO contact (title, department, link, salary, short_description, location, job_type, page_detail) VALUES ?"
            var values = this.results
            conn.query(sql, [values], function (err) {
                if (err) throw err
                conn.end()
            })
        })
    }
}
let webScraping = new WebScraping()
let scrapeList =  webScraping.scrapeWeb(flexJob)

So, in the 'scrapeWeb' method, in the second '.then' I call the 'scrapePage' method; however, the third '.then' executes before the promises inside 'scrapePage' have resolved.

Tags: javascript, node.js, asynchronous, web-scraping, promise

Solution


You have a race condition problem.
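To see why, consider a stripped-down sketch of the same chain (fetchPage below is a hypothetical stand-in for your rp call): forEach fires off the child requests but never hands their promises back to the chain, so the next .then runs immediately.

// Minimal sketch of the race condition; fetchPage is a hypothetical async helper
const fetchPage = url => new Promise(resolve => setTimeout(() => resolve(`body of ${url}`), 100))

Promise.resolve(['/a', '/b'])
    .then(links => {
        // these promises are "fire and forget": nothing in the chain waits for them
        links.forEach(link => fetchPage(link).then(body => console.log('scraped', body)))
    })
    .then(() => {
        // runs right away, before any fetchPage has resolved
        console.log('print content page')
    })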

The first adjustment you need is for scrapePage to return a Promise.

scrapePage(pageLink) {
    const options = {
        uri: pageLink,
        transform(body) {
            return cheerio.load(body)
        }
    }
    // hand the pending request back to the caller so it can be chained or awaited
    return rp(options);
}
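If you want the resolved value to be something you can later collect into contentPage, return the chained .then as well; a sketch, where the $('title').text() call is only a placeholder for whatever you actually extract from the page:

scrapePage(pageLink) {
    const options = {
        uri: pageLink,
        transform(body) {
            return cheerio.load(body)
        }
    }
    // returning the whole chain makes the caller resolve with the extracted value
    return rp(options).then(($) => $('title').text()) // placeholder extraction
}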

Then, in the second '.then', you need to kick off the scrape of every child page, for example:

.then(() => {
    // element[2] holds the page link built in the first .then
    return Promise.all(this.results.map(element => this.scrapePage(element[2])));
})

This wraps the scraping of all the child pages into a single Promise, and the code only flows on once every one of them has resolved.
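Putting both adjustments together, the tail of scrapeWeb could look roughly like this (a sketch assuming scrapePage resolves with the extracted page content, as in the variant above):

rp(options)
    .then(($) => {
        // ... build this.results exactly as before ...
    })
    .then(() => {
        // start every child page scrape and wait for all of them
        return Promise.all(this.results.map(element => this.scrapePage(element[2])))
    })
    .then((pages) => {
        // only reached after every child request has resolved
        this.contentPage = pages
        console.log('print content page', this.contentPage)
    })
    .then(() => {
        //this.insertIntoDB()
        console.log('insert into db')
    })
    .catch((err) => {
        console.log(err)
    })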

