Xodus produces a huge file when moving from key-value (Environment) to relational (Entity)

Question

I originally created a key-value database with the Xodus Environment API, which produced a fairly small database of about 2 GB:

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;

import jetbrains.exodus.bindings.StringBinding;
import jetbrains.exodus.env.Environment;
import jetbrains.exodus.env.Environments;
import jetbrains.exodus.env.Store;
import jetbrains.exodus.env.StoreConfig;
import jetbrains.exodus.env.Transaction;

public static void main(String[] args) throws Exception {

    if (args.length != 2) {
        throw new Exception("Argument missing. Current number of arguments: " + args.length);
    }

    long offset = Long.parseLong(args[0]);
    long chunksize = Long.parseLong(args[1]);

    Path pathBabelNet = Paths.get("/mypath/BabelNet-API-3.7/config");
    BabelNetLexicalizationDataSource dataSource = new BabelNetLexicalizationDataSource(pathBabelNet);
    Map<String, List<String>> data = dataSource.getDataChunk(offset, chunksize);

    Environment env = Environments.newInstance(".myAppData");
    final Transaction txn = env.beginTransaction();
    Store store = env.openStore("xodus-lexicalizations", StoreConfig.WITHOUT_DUPLICATES, txn);

    for (Map.Entry<String, List<String>> entry : data.entrySet()) {
        String key = entry.getKey();
        String value = entry.getValue().get(0);

        store.put(txn, StringBinding.stringToEntry(key), StringBinding.stringToEntry(value));
    }

    txn.commit();
    env.close();
}

I run this in chunks using a shell script:

#!/bin/bash

START_TIME=$SECONDS

chunksize=50000

for ((offset=0; offset<165622128;))
do
    echo $offset;
    java -Xmx10g -jar /path/to/jar.jar $offset $chunksize
    offset=$((offset+(chunksize*12)))
done

ELAPSED_TIME=$(($SECONDS - $START_TIME))

echo $ELAPSED_TIME;

Now I changed it to the relational (entity) version:

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;

import jetbrains.exodus.entitystore.Entity;
import jetbrains.exodus.entitystore.PersistentEntityStore;
import jetbrains.exodus.entitystore.PersistentEntityStores;
import jetbrains.exodus.entitystore.StoreTransaction;

public static void main(String[] args) throws Exception {

    if (args.length != 2) {
        throw new Exception("Argument missing. Current number of arguments: " + args.length);
    }

    long offset = Long.parseLong(args[0]);
    long chunksize = Long.parseLong(args[1]);

    Path pathBabelNet = Paths.get("/mypath/BabelNet-API-3.7/config");
    BabelNetLexicalizationDataSource dataSource = new BabelNetLexicalizationDataSource(pathBabelNet);
    Map<String, List<String>> data = dataSource.getDataChunk(offset, chunksize);

    PersistentEntityStore store = PersistentEntityStores.newInstance("lexicalizations-test");
    final StoreTransaction txn = store.beginTransaction();

    for (Map.Entry<String, List<String>> entry : data.entrySet()) {
        String key = entry.getKey();
        String value = entry.getValue().get(0);

        Entity synsetID = txn.newEntity("SynsetID");
        synsetID.setProperty("synsetID", key);

        Entity lexicalization = txn.newEntity("Lexicalization");
        lexicalization.setProperty("lexicalization", value);

        lexicalization.addLink("synsetID", synsetID);
        synsetID.addLink("lexicalization", lexicalization);

        txn.flush();
    }

    txn.commit();
}

This created a file of more than 17 GB before the process stopped because it ran out of memory. I know the database has to be bigger, since it also has to store the links and so on, but ten times bigger? What am I doing wrong?

Tags: java, xodus

Solution


For some reason, removing the txn.flush() call fixed everything. The database is now only 5.5 GB.
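For reference, a minimal sketch of the corrected write loop, with the per-entry txn.flush() removed so that the whole chunk is written in a single commit. The data-loading code from the question is replaced here by a small hard-coded map purely for illustration:

```java
import java.util.Map;

import jetbrains.exodus.entitystore.Entity;
import jetbrains.exodus.entitystore.PersistentEntityStore;
import jetbrains.exodus.entitystore.PersistentEntityStores;
import jetbrains.exodus.entitystore.StoreTransaction;

public class EntityWriteSketch {
    public static void main(String[] args) {
        // Illustrative stand-in for the chunk returned by the data source.
        Map<String, String> data = Map.of(
                "bn:00000001n", "dog",
                "bn:00000002n", "cat");

        PersistentEntityStore store = PersistentEntityStores.newInstance("lexicalizations-test");
        StoreTransaction txn = store.beginTransaction();

        for (Map.Entry<String, String> entry : data.entrySet()) {
            Entity synsetID = txn.newEntity("SynsetID");
            synsetID.setProperty("synsetID", entry.getKey());

            Entity lexicalization = txn.newEntity("Lexicalization");
            lexicalization.setProperty("lexicalization", entry.getValue());

            lexicalization.addLink("synsetID", synsetID);
            synsetID.addLink("lexicalization", lexicalization);
            // No txn.flush() here: all changes go out in the single commit below.
        }

        txn.commit();
        store.close();
    }
}
```

A plausible explanation, since Xodus stores data in an append-only log: flushing after every entry presumably writes many intermediate versions of the internal structures, which bloat the files until the background garbage collector reclaims them, whereas committing once per chunk avoids most of that churn.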

