python - Efficiently serialize/deserialize a SparseDataFrame
Problem description
Has anyone found a way to efficiently serialize/deserialize a pandas SparseDataFrame?
import pandas as pd
import scipy
from scipy import sparse
dfs = pd.SparseDataFrame(scipy.sparse.random(1000, 1000).toarray())
# just for testing; scipy.sparse.random defaults to density=0.01,
# so roughly 10,000 of the 1,000,000 cells are non-zero
Pickle is not the answer
It is ridiculously slow.
import pickle, time
start = time.time()
# serialization
msg = list(pickle.dumps(dfs, protocol=pickle.HIGHEST_PROTOCOL))
# deserialization
dfs = pickle.loads(bytes(msg))
stop = time.time()
stop - start
# 0.4420337677001953
# This is with Python 3.5 so it's using cPickle
For comparison, msgpack is faster on the dense version:
df = dfs.to_dense()
start = time.time()
# serialization
msg = list(df.to_msgpack(compress='zlib'))
# deserialization
df = pd.read_msgpack(bytes(msg))
stop = time.time()
stop - start
# 0.09514737129211426
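Speed aside, it may also be worth comparing payload sizes; a quick sketch (sizes not measured here):
# sketch: compare the serialized payload sizes in bytes
len(pickle.dumps(dfs, protocol=pickle.HIGHEST_PROTOCOL))
len(df.to_msgpack(compress='zlib'))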
Msgpack
Msgpack would be the answer, but I can't find an implementation for SparseDataFrame (related).
# serialization
dfs.to_msgpack(compress='zlib')
# Returns: NotImplementedError: msgpack sparse frame is not implemented
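A possible stopgap, sketched here rather than taken from the question, is to round-trip through the dense representation and re-sparsify afterwards; it only pays off while the dense frame still fits in memory:
# workaround sketch, assuming the old pandas API (the SparseDataFrame era)
# where DataFrame.to_msgpack() and DataFrame.to_sparse() still exist
msg = dfs.to_dense().to_msgpack(compress='zlib')
dfs2 = pd.read_msgpack(msg).to_sparse()  # NaN is the default fill value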
Coordinate format
Msgpack of the coordinate format via scipy.sparse.coo_matrix seems worth considering, but the conversion to scipy.sparse.coo_matrix is slow:
import msgpack
from scipy.sparse import coo_matrix
start = time.time()
# serialization
columns = dfs.columns
shape = dfs.shape
start_to_coo = time.time()
dfc = dfs.to_coo()
stop_to_coo = time.time()
start_comprehension = time.time()
row = [x.item() for x in dfc.row]
col = [x.item() for x in dfc.col]
data = [x.item() for x in dfc.data]
stop_comprehension = time.time()
start_packing = time.time()
msg = list(msgpack.packb({'columns':list(columns), 'shape':shape, 'row':row, 'col':col, 'data':data}))
stop_packing = time.time()
# deserialization
start_unpacking = time.time()
d = msgpack.unpackb(bytes(msg))  # avoid shadowing the builtin `dict`
stop_unpacking = time.time()
columns = d[b'columns']
index = range(d[b'shape'][0])
dfc = coo_matrix((d[b'data'], (d[b'row'], d[b'col'])), shape=tuple(d[b'shape']))
stop = time.time()
print('total: ' + str(stop - start))
print(' to_coo: ' + str(stop_to_coo - start_to_coo))
print(' comprehension: ' + str(stop_comprehension - start_comprehension))
print(' packing: ' + str(stop_packing - start_packing))
print(' unpacking: ' + str(stop_unpacking - start_unpacking))
#total: 0.2799222469329834
# to_coo: 0.22925591468811035
# comprehension & cast: 0.02356100082397461 (msgpack does not support all numpy formats)
# packing: 0.004893064498901367
# unpacking: 0.001984834671020508
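As an aside, the per-element comprehension can probably be avoided: numpy's ndarray.tolist() converts an array to built-in Python scalars in a single call, which msgpack accepts. A sketch:
# sketch: tolist() yields native Python ints/floats in one call,
# replacing the per-element .item() comprehensions above
row = dfc.row.tolist()
col = dfc.col.tolist()
data = dfc.data.tolist()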
From there, it seems one has to go back through a dense format:
start = time.time()
dfs = pd.SparseDataFrame(dfc.toarray())
stop = time.time()
stop - start
# 2.8947737216949463
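One way around the dense detour would be to rebuild the sparse frame straight from the COO matrix (a sketch, assuming pandas >= 0.20, where the SparseDataFrame constructor accepts a scipy.sparse matrix); whether it is actually faster would need measuring:
# sketch, assuming pandas >= 0.20: build directly from the scipy COO matrix,
# reattaching the deserialized index and columns
dfs = pd.SparseDataFrame(dfc, index=index, columns=columns)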
Solution
The time overhead comes from the string handling in dumps and loads.
Using dumps/loads:
def pickle_dumps():
msg = list(pickle.dumps(dfs, protocol=pickle.HIGHEST_PROTOCOL))
pickle.loads(bytes(msg))
%timeit pickle_dumps()
# 212 ms ± 2.19 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
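To confirm that the bytes/list conversions, rather than pickling itself, dominate that 212 ms, the conversions can be timed in isolation (sketch, no numbers measured here):
payload = pickle.dumps(dfs, protocol=pickle.HIGHEST_PROTOCOL)
%timeit list(payload)         # bytes -> list of ints
%timeit bytes(list(payload))  # ...and back to bytes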
Using dump/load:
def pickle_file():
with open('dump.pickle', 'wb') as f:
pickle.dump(dfs, f, protocol=pickle.HIGHEST_PROTOCOL)
with open('dump.pickle', 'rb') as f:
return pickle.load(f)
%timeit pickle_file()
# 82.7 ms ± 1.25 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Even shorter, using the pandas built-ins:
def to_pickle():
dfs.to_pickle('./dump.pickle')
pd.read_pickle('./dump.pickle')
%timeit to_pickle()
# 86.8 ms ± 1.54 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
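If the serialized bytes must stay in memory (e.g. to hand to a message queue) rather than go to disk, a file-like io.BytesIO buffer should give the same dump/load fast path without the list/bytes detour; a minimal sketch:
import io

def pickle_buffer():
    # dump/load against an in-memory buffer; buf.getvalue() would yield
    # the raw bytes to ship, with no list-of-ints conversion involved
    buf = io.BytesIO()
    pickle.dump(dfs, buf, protocol=pickle.HIGHEST_PROTOCOL)
    buf.seek(0)
    return pickle.load(buf)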