Why can't I scrape all the pages?

Problem description

On the page https://www.jogossantacasa.pt/web/Placard/placard I am trying to scrape the links under Futebol. Collecting the links works, but the loop afterwards only scrapes one page completely. Thanks, everyone.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class Main {

    static List<String> links = new ArrayList<>();
    static List<String> ligas = new ArrayList<>();
    static String url = "https://www.jogossantacasa.pt"; // main link

    public static void main(String[] args) {
        Document doc;
        // Collect the link of every competition listed before the Ténis section
        try {
            doc = Jsoup.connect(url + "/web/Placard/placard").get();
            Elements a = doc.getElementsByClass("width9");
            boolean beforeTenis = true;
            for (Element ele : a) {
                Elements items = ele.select("li");
                for (Element d : items) {
                    String text = d.select("a").text();

                    if (text.contains("Ténis")) beforeTenis = false; // stop collecting once Ténis starts
                    if (beforeTenis && !text.contains("Futebol")) {
                        links.add(d.select("a").attr("href"));
                        ligas.add(text);
                    }
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        // Here I try to scrape each country page; the problem is that only one page is scraped
        for (int i = 0; i < links.size(); i++) {
            String urlEach = url + links.get(i);
            try {
                Document docEach = Jsoup.connect(urlEach).get();
                System.out.println(docEach.toString());
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

Tags: java, web-scraping, jsoup

Solution

The first page (/web/Placard/eventos?id=23316) is large, over 3MB. By default Jsoup downloads only the first 1MB of a response body. To get past this limit, set a higher maxBodySize when connecting, or pass 0 to disable the limit entirely.

docEach = Jsoup.connect(urlEach).maxBodySize(10*1024*1024).get(); // 10MB
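Alternatively, passing 0 removes the cap altogether. A minimal sketch of configuring the connection this way (the timeout value is an assumption, added because larger downloads may also exceed Jsoup's default timeout):

```java
import org.jsoup.Connection;
import org.jsoup.Jsoup;

public class MaxBodySizeDemo {
    public static void main(String[] args) {
        // maxBodySize(0) disables the 1MB default cap entirely;
        // timeout is raised since multi-megabyte pages take longer to fetch
        Connection conn = Jsoup
                .connect("https://www.jogossantacasa.pt/web/Placard/placard")
                .maxBodySize(0)
                .timeout(30_000); // 30s, an assumed value

        // Inspect the configured request before executing it
        System.out.println(conn.request().maxBodySize()); // prints 0
        // conn.get() would then download the full document
    }
}
```

Disabling the limit is convenient while debugging, but on untrusted sites a generous explicit cap (like the 10MB above) is the safer choice, since it bounds memory use.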
