How to integrate XGBoost into Spark? (Python)

Problem description

I am trying to train a model with XGBoost on data stored in Hive. The data is too large to convert to a pandas DataFrame, so I have to use XGBoost with a Spark DataFrame. When I create the XGBoostEstimator, I get this error:

TypeError: 'JavaPackage' object is not callable
Exception AttributeError: "'NoneType' object has no attribute '_detach'" ignored

I have no experience with XGBoost for Spark, and none of the tutorials I tried online worked. I also tried converting to a pandas DataFrame, but the data is too large and I always get an OutOfMemoryException from the Java wrapper (I looked that up as well, and the usual suggestion of raising the executor memory did not work for me).
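For reference, the memory bump I tried looked roughly like this (a sketch; the values are illustrative, and since toPandas() collects everything onto the driver, driver memory matters at least as much as executor memory for that path):

from pyspark.sql import SparkSession

# Illustrative values only; this did not fix the OOM for me. Note that
# spark.driver.memory normally has to be set before the driver JVM starts
# (e.g. on the spark-submit command line), not from inside a running session.
spark = SparkSession.builder \
    .master("yarn") \
    .appName("pipeline") \
    .config("spark.executor.memory", "8g") \
    .config("spark.driver.memory", "8g") \
    .getOrCreate()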

The most recent tutorial I followed is:

https://towardsdatascience.com/pyspark-and-xgboost-integration-tested-on-the-kaggle-titanic-dataset-4e75a568bdb

After giving up on the XGBoost module, I started using sparkxgb.

import os
from pyspark.sql import SparkSession

def create_spark_session(username=None, app_name="pipeline"):
    if username is not None:
        os.environ['HADOOP_USER_NAME'] = username

    return SparkSession \
        .builder \
        .master("yarn") \
        .appName(app_name) \
        .config(...) \
        .config(...) \
        .getOrCreate()

spark = create_spark_session('shai', 'dna_pipeline')
# sparkxgb files (zipped Python wrapper for xgboost4j-spark)
spark.sparkContext.addPyFile('resources/sparkxgb.zip')

def train():
    train_df = spark.table('dna.offline_features_train_full')
    test_df = spark.table('dna.offline_features_test_full')

    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from sparkxgb import XGBoostEstimator

    vectorAssembler = VectorAssembler() \
        .setInputCols(train_df.columns) \
        .setOutputCol("features")
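    # NOTE: train_df.columns still includes the label column at this point,
    # so the label leaks into the feature vector; something like
    # [c for c in train_df.columns if c != "label"] is probably what's intended.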

    # This is where the program fails
    xgboost = XGBoostEstimator(
        featuresCol="features",
        labelCol="label",
        predictionCol="prediction"
    )

    # The assembler needs to run before xgboost so the "features" column exists
    pipeline = Pipeline().setStages([vectorAssembler, xgboost])
    pipeline.fit(train_df)

The full exception is:

Traceback (most recent call last):
  File "/home/elad/DNA/dna/dna/run.py", line 283, in <module>
    main()
  File "/home/elad/DNA/dna/dna/run.py", line 247, in main
    offline_model = train_model(True, home_dir=config['home_dir'], hdfs_client=client)
  File "/home/elad/DNA/dna/dna/run.py", line 222, in train_model
    model = train(offline_mode=offline, spark=spark)
  File "/home/elad/DNA/dna/dna/model/xgboost_train.py", line 285, in train
    predictionCol="prediction"
  File "/home/elad/.conda/envs/DNAenv/lib/python2.7/site-packages/pyspark/__init__.py", line 105, in wrapper
    return func(self, **kwargs)
  File "/tmp/spark-7781039b-6821-42be-96e0-ca4005107318/userFiles-70b3d1de-a78c-4fac-b252-2f99a6761b32/sparkxgb.zip/sparkxgb/xgboost.py", line 115, in __init__
  File "/home/elad/.conda/envs/DNAenv/lib/python2.7/site-packages/pyspark/ml/wrapper.py", line 63, in _new_java_obj
    return java_obj(*java_args)
TypeError: 'JavaPackage' object is not callable
Exception AttributeError: "'NoneType' object has no attribute '_detach'" in <bound method XGBoostEstimator.__del__ of XGBoostEstimator_4f54b37156fb0a113233> ignored

I don't know why this exception is raised, nor how to integrate sparkxgb into my code correctly.

Any help would be appreciated.

Thanks

Tags: python, apache-spark, pyspark, xgboost

Solution

After a day of debugging hell inside this module, the problem turned out to be nothing more than incorrectly submitted jars. The TypeError: 'JavaPackage' object is not callable is py4j's way of saying that the underlying xgboost4j Java class was never loaded, i.e. the jars are missing from the JVM classpath. I downloaded the jars locally and submitted them with pyspark:

PYSPARK_SUBMIT_ARGS=--jars resources/xgboost4j-0.72.jar,resources/xgboost4j-spark-0.72.jar

This fixed the problem.
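For anyone hitting the same wall, the jars can also be attached from inside Python, as long as it happens before the SparkSession is created. A sketch, reusing the jar paths from above; note that PYSPARK_SUBMIT_ARGS set this way has to end with pyspark-shell:

import os

# Must be set before the SparkContext/SparkSession exists,
# otherwise the jars never reach the JVM classpath.
os.environ['PYSPARK_SUBMIT_ARGS'] = (
    '--jars resources/xgboost4j-0.72.jar,resources/xgboost4j-spark-0.72.jar '
    'pyspark-shell'
)

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("yarn") \
    .appName("pipeline") \
    .getOrCreate()

The equivalent when going through spark-submit is simply passing the same two files via --jars on the command line.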

