arrays - How to pass an array of URLs as an argument to a function
Question
I want to pass the array of URLs returned by my first function to my second function, but I'm not sure how to do this.
require 'open-uri'
require 'nokogiri'
require 'byebug'

def fetch_recipe_urls
  base_url = 'https://cooking.nytimes.com'
  easy_recipe_url = 'https://cooking.nytimes.com/search?q=easy'
  easy_searchpage = Nokogiri::HTML(open(easy_recipe_url))
  recipes = easy_searchpage.search('//article[@class="card recipe-card"]/@data-url')
  recipes_url_array = recipes.map do |recipe|
    uri = URI.parse(recipe.text)
    uri.scheme = "http"
    uri.host = "cooking.nytimes.com"
    uri.query = nil
    uri.to_s
  end
end

def scraper(url)
  html_file = open(url).read
  html_doc = Nokogiri::HTML(html_file)
  recipes = Array.new
  recipe = {
    title: html_doc.css('h1.recipe-title').text.strip,
    time: html_doc.css('span.recipe-yield-value').text.split("servings")[1],
    steps: html_doc.css('ol.recipe-steps').text.split.join(" "),
    ingredients: html_doc.css('ul.recipe-ingredients').text.split.join(" ")
  }
  recipes << recipe
end
Solution
Since fetch_recipe_urls returns an array, you can iterate over it and call scraper on each URL:
def scraper(url)
  # URI.open works on Ruby 2.5+; on Ruby 3.0+ Kernel#open no longer accepts URLs
  html_file = URI.open(url).read
  html_doc = Nokogiri::HTML(html_file)
  {
    title: html_doc.css('h1.recipe-title').text.strip,
    time: html_doc.css('span.recipe-yield-value').text.split("servings")[1],
    steps: html_doc.css('ol.recipe-steps').text.split.join(" "),
    ingredients: html_doc.css('ul.recipe-ingredients').text.split.join(" ")
  }
end

fetch_recipe_urls.map { |url| scraper(url) }
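The general pattern here is independent of scraping: when one method returns an array, you can either map over it with a method that handles one element, or write a method that accepts the whole array. A minimal sketch with a stubbed-out scraper (the names fetch_urls / describe are just placeholders, and no network access is needed):

```ruby
def fetch_urls
  ['https://example.com/a', 'https://example.com/b']
end

# Option 1: a method that handles one item; the caller maps over the array.
def describe(url)
  { url: url, path: url.split('/').last }
end

results = fetch_urls.map { |url| describe(url) }

# Option 2: a method that accepts the whole array and maps internally.
def describe_all(urls)
  urls.map { |url| describe(url) }
end

describe_all(fetch_urls) == results  # => true, same result either way
```

Option 1 keeps each method single-purpose, which is why the answer below maps over fetch_recipe_urls from the caller.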
But I would actually structure the code like this:
BASE_URL = 'https://cooking.nytimes.com'

def fetch_recipe_urls
  page = Nokogiri::HTML(URI.open(BASE_URL + '/search?q=easy'))
  recipes = page.search('//article[@class="card recipe-card"]/@data-url')
  # data-url values are root-relative paths, so prefix them with the host
  recipes.map { |recipe_node| BASE_URL + URI.parse(recipe_node.text).to_s }
end

def scrape(url)
  html_doc = Nokogiri::HTML(URI.open(url).read)
  {
    title: html_doc.css('h1.recipe-title').text.strip,
    time: html_doc.css('span.recipe-yield-value').text.split("servings")[1],
    steps: html_doc.css('ol.recipe-steps').text.split.join(" "),
    ingredients: html_doc.css('ul.recipe-ingredients').text.split.join(" ")
  }
end

fetch_recipe_urls.map { |url| scrape(url) }
You could also call scrape / scraper inside fetch_recipe_urls, but I'd stick with single-responsibility methods. An even better idea would be to make this object-oriented and build a Scraper class and a CookingRecipe class, which would be more idiomatic.