In dask, I get the error: "ValueError: The columns in the computed data do not match the columns in the provided metadata".
This makes no sense to me, since I did provide the correct metadata; it is just not ordered the way it appears in the dict.
Here is a minimal working example:
from datetime import date
import pandas as pd
import numpy as np
from dask import delayed
import dask.dataframe as dsk
# Making example data
values = pd.DataFrame({'date' : [date(2020,1,1), date(2020,1,1), date(2020,1,2), date(2020,1,2)], 'id' : [1,2,1,2], 'A': [4,5,2,2], 'B':[7,3,6,1]})
def get_dates():
    return pd.DataFrame({'date' : [date(2020,1,1), date(2020,1,1), date(2020,1,2), date(2020,1,2)]})
def append_values(df):
    df2 = pd.merge(df, values, on = 'date', how = 'left')
    return df2
t0 = pd.DataFrame({'date' : [date(2020,1,1), date(2020,1,1), date(2020,1,2), date(2020,1,2)]})
t1 = delayed(t0)
t2 = dsk.from_delayed(t1)
t = t2.map_partitions(append_values, meta = {'A' : 'f8', 'B': 'i8', 'id' : 'i8', 'date' : 'object'}, enforce_metadata = False)
# Applying a grouped function.
def func(x,y):
    return pd.DataFrame({'summ' : [np.mean(x) + np.mean(y)], 'difference' : [int(np.floor(np.mean(x) - np.mean(y)))]})
# Everything works when I compute the dataframe before doing the apply. But I want to distribute the apply, so I don't like this option.
res = t.compute().groupby(['date']).apply(lambda df: func(df['A'], df['B']))
# This fails as the meta is out of order. But the meta is in a dict and is hence not supposed to be ordered anyway!
res = t.groupby(['date']).apply(lambda df: func(df['A'], df['B'])).compute()
What am I doing wrong, and how can I fix it? One workaround is to compute before the grouped operation, but that is not feasible in my real use case (far too much data to hold in RAM).
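For context while I was debugging (a sketch, not part of the original example): on Python 3.7+ dicts preserve insertion order, and dask appears to take the meta dict in that order, while pandas' left merge emits the left frame's columns first, followed by the right frame's non-key columns. The `left`/`right` frames below are hypothetical stand-ins mirroring the shapes in my code:

```python
import pandas as pd

# Hypothetical stand-ins for the frames merged in append_values above.
left = pd.DataFrame({'date': [1, 1, 2]})
right = pd.DataFrame({'date': [1, 2], 'id': [1, 2], 'A': [4.0, 2.0], 'B': [7, 6]})

# A left merge keeps the left frame's columns first, then the right
# frame's non-key columns in their original order.
merged = pd.merge(left, right, on='date', how='left')
print(list(merged.columns))  # -> ['date', 'id', 'A', 'B']
```

If that is right, the meta dict would need to list the columns in that same order rather than the `{'A', 'B', 'id', 'date'}` order I used.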
There is another question that may be related, but I don't think it is: ValueError: The columns in the computed data do not match the columns in the provided metadata. That one seems to be about dask's csv parsing.
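One more note: a way to sidestep the dict-ordering question entirely might be to pass meta as a zero-length DataFrame, which pins the column names, order, and dtypes explicitly. A minimal sketch, using the dtypes from my example:

```python
import pandas as pd

# Zero-length frame: unambiguous column names, order, and dtypes for meta.
meta = pd.DataFrame({'date': pd.Series(dtype='object'),
                     'id': pd.Series(dtype='int64'),
                     'A': pd.Series(dtype='float64'),
                     'B': pd.Series(dtype='int64')})
print(list(meta.columns), len(meta))
```

This could then be passed as `meta=meta` to `map_partitions` in place of the dict.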