How to send crawler4j data to CrawlerManager?

Problem description

I am working on a project where users can search certain websites and look for pictures that carry a unique identifier.

public class ImageCrawler extends WebCrawler {

private static final Pattern filters = Pattern.compile(
        ".*(\\.(css|js|mid|mp2|mp3|mp4|wav|avi|mov|mpeg|ram|m4v|pdf" +
                "|rm|smil|wmv|swf|wma|zip|rar|gz))$");

private static final Pattern imgPatterns = Pattern.compile(".*(\\.(bmp|gif|jpe?g|png|tiff?))$");

public ImageCrawler() {
}

@Override
public boolean shouldVisit(Page referringPage, WebURL url) {
    String href = url.getURL().toLowerCase();
    if (filters.matcher(href).matches()) {
        return false;
    }

    if (imgPatterns.matcher(href).matches()) {
        return true;
    }

    return false;
}

@Override
public void visit(Page page) {
    String url = page.getWebURL().getURL();

    byte[] imageBytes = page.getContentData();
    String imageBase64 = Base64.getEncoder().encodeToString(imageBytes);
    try {
        SecurityContextHolder.getContext().setAuthentication(new UsernamePasswordAuthenticationToken(urlScan.getOwner(), null));
        DecodePictureResponse decodePictureResponse = decodePictureService.decodePicture(imageBase64);
        URLScanResult urlScanResult = new URLScanResult();
        urlScanResult.setPicture(pictureRepository.findByUuid(decodePictureResponse.getPictureDTO().getUuid()).get());
        urlScanResult.setIntegrity(decodePictureResponse.isIntegrity());
        urlScanResult.setPictureUrl(url);
        urlScanResult.setUrlScan(urlScan);
        urlScan.getResults().add(urlScanResult);
        urlScanRepository.save(urlScan);
    } catch (ResourceNotFoundException ex) {
        //Picture is not in our database
    }
}
}

The crawlers run independently of each other. The ImageCrawlerManager class, which is a singleton, runs the crawlers.

public class ImageCrawlerManager {

private static ImageCrawlerManager instance = null;


private ImageCrawlerManager(){
}

public static synchronized ImageCrawlerManager getInstance()
{
    if (instance == null)
    {
        instance = new ImageCrawlerManager();
    }
    return instance;
}

@Transactional(propagation=Propagation.REQUIRED)
@PersistenceContext(type = PersistenceContextType.EXTENDED)
public void startCrawler(URLScan urlScan, DecodePictureService decodePictureService, URLScanRepository urlScanRepository, PictureRepository pictureRepository){

    try {
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("/tmp");
        config.setIncludeBinaryContentInCrawling(true);

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);

        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);
        controller.addSeed(urlScan.getUrl());

        controller.start(ImageCrawler.class, 1);
        urlScan.setStatus(URLScanStatus.FINISHED);
        urlScanRepository.save(urlScan);
    } catch (Exception e) {
        e.printStackTrace();
        urlScan.setStatus(URLScanStatus.FAILED);
        urlScan.setFailedReason(e.getMessage());
        urlScanRepository.save(urlScan);
    }
}
}

How can I send each image's data to the manager that decodes it, identify the initiator of the search, and save the result to the database? With the code above I can run multiple crawlers and save their results. Unfortunately, when I run two crawlers simultaneously, both sets of results get stored, but they are all linked to the crawler that was started first.

Tags: spring, asynchronous, crawler4j

Solution


Instead of using a singleton to manage the results of your web crawl, you should inject your database services into your WebCrawler instances.

crawler4j supports a custom CrawlController.WebCrawlerFactory (see here for reference), which can be used with Spring to inject your database services into each ImageCrawler instance.
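The idea can be sketched with a stand-in factory interface that mirrors the single `newInstance()` method of crawler4j's CrawlController.WebCrawlerFactory. The `DecodePictureService` name comes from the question, but the wiring below is an illustrative simulation under those assumptions, not crawler4j's actual classes:

```java
import java.util.ArrayList;
import java.util.List;

public class FactoryInjectionSketch {

    // Stand-in for crawler4j's CrawlController.WebCrawlerFactory<T>:
    // the controller calls newInstance() once per crawler thread.
    interface WebCrawlerFactory<T> {
        T newInstance() throws Exception;
    }

    // Stand-in for the question's DecodePictureService.
    interface DecodePictureService {
        String decodePicture(String imageBase64);
    }

    // The crawler receives its collaborators through the constructor
    // instead of reaching out to a singleton manager.
    static class ImageCrawler {
        private final DecodePictureService decodePictureService;
        private final List<String> savedResults;

        ImageCrawler(DecodePictureService decodePictureService, List<String> savedResults) {
            this.decodePictureService = decodePictureService;
            this.savedResults = savedResults;
        }

        void visit(String imageBase64) {
            // Each crawler instance uses the service it was constructed with.
            savedResults.add(decodePictureService.decodePicture(imageBase64));
        }
    }

    public static void main(String[] args) throws Exception {
        DecodePictureService service = base64 -> "decoded:" + base64;
        List<String> results = new ArrayList<>();

        // The factory closes over the Spring-managed beans, so every
        // crawler the controller spawns gets the same injected services.
        WebCrawlerFactory<ImageCrawler> factory =
                () -> new ImageCrawler(service, results);

        // Simulate the controller creating two crawler instances.
        factory.newInstance().visit("aaa");
        factory.newInstance().visit("bbb");

        System.out.println(results); // [decoded:aaa, decoded:bbb]
    }
}
```

With real crawler4j, the factory would be passed to `controller.start(factory, numberOfCrawlers)` instead of the `ImageCrawler.class` overload used in the question, so the controller no longer needs a no-arg constructor.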

Every crawler thread should then be responsible for the whole process you described (for example, by using some dedicated services):

decode this image, identify the initiator of the search, and save the results to the database

Set up this way, your database will be the single source of truth, and you won't have to deal with synchronizing crawler state across instances or user sessions.
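To illustrate why this removes the cross-linking described in the question, the sketch below runs two crawls concurrently, each constructed with its own scan context. The `UrlScan` class here is a simplified, hypothetical stand-in for the question's URLScan entity:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class PerCrawlContextSketch {

    // Simplified stand-in for the question's URLScan entity.
    static class UrlScan {
        final String seedUrl;
        final Queue<String> results = new ConcurrentLinkedQueue<>();
        UrlScan(String seedUrl) { this.seedUrl = seedUrl; }
    }

    // Each crawler is constructed with the scan it works for, so its
    // results can never attach to another crawl running at the same time.
    static class ImageCrawler implements Runnable {
        private final UrlScan scan;
        ImageCrawler(UrlScan scan) { this.scan = scan; }

        @Override
        public void run() {
            for (int i = 0; i < 3; i++) {
                scan.results.add(scan.seedUrl + "/img" + i);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        UrlScan scanA = new UrlScan("http://a.example");
        UrlScan scanB = new UrlScan("http://b.example");

        // Two crawls running concurrently, each bound to its own scan.
        Thread t1 = new Thread(new ImageCrawler(scanA));
        Thread t2 = new Thread(new ImageCrawler(scanB));
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(scanA.results.size()); // 3
        System.out.println(scanB.results.size()); // 3
    }
}
```

Contrast this with the singleton manager in the question, where both concurrent runs ended up writing against whichever `urlScan` the shared state happened to hold first.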

