I have a defaultdict with tuples as keys and lists as values (ddict in the code below). I want to find the min and max of the values for given sets of keys. The keys come in a numpy array (indA below): its first axis indexes blocks of keys, and each block is a 2x2 grid of key pairs (note that numpy absorbs the tuples as a last axis, so indA actually has shape (2, 2, 2, 2)). For each block I want to look up the value list of every key pair in it and take the min and max over all of those values combined, and I need to do this for every block of the array.
from collections import defaultdict
from operator import itemgetter
import numpy as np

ddict = defaultdict(list, {(1.0, 1.0): [1, 2, 3, 4], (1.0, 2.5): [2, 3, 4, 5],
                           (1.0, 3.75): [], (1.5, 1.0): [8, 9, 10],
                           (1.5, 2.5): [2, 6, 8, 19, 1, 31], (1.5, 3.75): [4]})

indA = np.array([[[(1.0, 1.0), (1.0, 3.75)], [(1.5, 1.0), (1.5, 3.75)]],
                 [[(1.0, 2.5), (1.5, 1.0)], [(1.5, 2.5), (1.5, 3.75)]]])

mins = min(ddict, key=itemgetter(*[tuple(i) for b in indA for i in b.flatten()]))
maxs = max(ddict, key=itemgetter(*[tuple(i) for b in indA for i in b.flatten()]))
With the above code I was trying to get the equivalent of

min1 = min([1, 2, 3, 4, 8, 9, 10, 4])
min2 = min([2, 3, 4, 5, 8, 9, 10, 2, 6, 8, 19, 1, 31, 4])

and

max1 = max([1, 2, 3, 4, 8, 9, 10, 4])
max2 = max([2, 3, 4, 5, 8, 9, 10, 2, 6, 8, 19, 1, 31, 4])

i.e. min1/max1 come from the value lists of the keys in indA[0], and min2/max2 from those of indA[1].
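For clarity, here is a plain nested-loop version that produces exactly that output. This is only my reference for the behaviour I want (the reshape to pairs is just my way of walking a block), not the vectorised solution I am after:

# Reference loop: for each block of indA, gather the ddict lists of all
# its key pairs and take the min and max over the combined values.
for n, block in enumerate(indA):            # block has shape (2, 2, 2)
    vals = []
    for key in block.reshape(-1, 2):        # each row is one (x, y) key pair
        vals.extend(ddict[tuple(key)])      # empty lists contribute nothing
    print(n, min(vals), max(vals))          # prints: 0 1 10  /  1 1 31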
In short, I want the min and max for every block of the numpy array. Why is my code not working? It raises

TypeError: tuple indices must be integers or slices, not tuple

Is there a workaround?
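In case it is relevant, I can reproduce the same TypeError in isolation, which makes me suspect the key function itself: min(ddict, key=...) iterates over the dictionary's keys (which are tuples), and an itemgetter built from tuples then tries to index each key tuple with another tuple:

from operator import itemgetter

key = (1.0, 1.0)              # what min()/max() actually pass to the key function
itemgetter((1.0, 3.75))(key)  # TypeError: tuple indices must be integers or slices, not tuple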