docker - Installing pyspark in a Dockerfile
Problem description
I have the following Dockerfile:
FROM python:3.7
RUN apt-get update
RUN apt-get install default-jdk -y
COPY requirements.txt ./
RUN pip install -r requirements.txt
I use it in a GitLab CI pipeline, where it has been working fine.
Recently, however, it stopped working. I haven't updated my requirements.txt file, so could it be that default-jdk has changed?
How should I update my Dockerfile so that it installs pyspark correctly again?
Edit
An example of the error:
/usr/local/lib/python3.7/site-packages/pyspark/rdd.py:824: in collect
    port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
/usr/local/lib/python3.7/site-packages/py4j/java_gateway.py:1160: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1291'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x7f6490c2a350>
target_id = 'z:org.apache.spark.api.python.PythonRDD', name = 'collectAndServe'
def get_return_value(answer, gateway_client, target_id=None, name=None):
    """Converts an answer received from the Java gateway into a Python object.
    For example, string representation of integers are converted to Python
    integer, string representation of objects are converted to JavaObject
    instances, etc.
    :param answer: the string returned by the Java gateway
    :param gateway_client: the gateway client used to communicate with the Java
        Gateway. Only necessary if the answer is a reference (e.g., object,
        list, map)
    :param target_id: the name of the object from which the answer comes from
        (e.g., *object1* in `object1.hello()`). Optional.
    :param name: the name of the member from which the answer comes from
        (e.g., *hello* in `object1.hello()`). Optional.
    """
    if is_error(answer)[0]:
        if len(answer) > 1:
            type = answer[1]
            value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
            if answer[1] == REFERENCE_TYPE:
                raise Py4JJavaError(
                    "An error occurred while calling {0}{1}{2}.\n".
>                   format(target_id, ".", name), value)
E py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
E : java.lang.IllegalArgumentException
E at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
E at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
E at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
E at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
E at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
E at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
E at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
E at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
E at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
E at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
E at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
E at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
E at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
E at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
E at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
E at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
E at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
E at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
E at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
E at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
E at scala.collection.immutable.List.foreach(List.scala:381)
E at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
E at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
E at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
E at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
E at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
E at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
E at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
E at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
E at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
E at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
E at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:153)
E at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E at py4j.Gateway.invoke(Gateway.java:282)
E at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E at py4j.commands.CallCommand.execute(CallCommand.java:79)
E at py4j.GatewayConnection.run(GatewayConnection.java:214)
E at java.base/java.lang.Thread.run(Thread.java:834)
/usr/local/lib/python3.7/site-packages/py4j/protocol.py:320: Py4JJavaError
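The java.lang.IllegalArgumentException thrown from org.apache.xbean.asm5.ClassReader is the classic symptom of running Spark 2.x on a JDK newer than 8: the ASM 5 bytecode reader bundled with Spark cannot parse Java 9+ class files. The likely trigger here is that the python:3.7 image moved from Debian Stretch to Debian Buster, where default-jdk installs OpenJDK 11 rather than OpenJDK 8, so the same Dockerfile started failing without any change to requirements.txt. To confirm which JDK a build actually picks up, a temporary debug line can be added to the Dockerfile (illustrative only, not part of the original file):

# Hypothetical debug step: print the JDK version in the build log.
# Place after the apt-get install line and remove once diagnosed.
RUN java -version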
Solution
Changing the base image to python:3.7-stretch worked for me.
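For reference, here is a minimal sketch of the updated Dockerfile, assuming requirements.txt still pins a 2.x release of pyspark (the file itself is not shown in the question):

# python:3.7-stretch stays on Debian Stretch, whose default-jdk
# resolves to OpenJDK 8, the newest Java that Spark 2.x supports.
FROM python:3.7-stretch
RUN apt-get update
RUN apt-get install default-jdk -y
COPY requirements.txt ./
RUN pip install -r requirements.txt

An alternative, if a newer base image is preferred, would be to upgrade the pinned pyspark to 3.x, since Spark 3.0 added official Java 11 support; either way, the key is to keep the installed JDK and the pyspark version compatible.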