WIP Native Dask implementation for area interpolation (Do not merge) #187
darribas wants to merge 2 commits into pysal:main
Conversation
> so what about just returning
> so i say we go ahead and stop here and move back to memory
That sounds good. I'd just be very clear in the documentation that these approaches will not work with larger-than-memory data (which the released version for categoricals does). In reality, I expect most use cases to be fine with that, and it is also true that what we bring into memory does not have geometries, so it is significantly cheaper in terms of memory. But we should make it clear in the documentation.
This is my first attempt at implementing area interpolation fully in Dask (as opposed to running the single-core logic inside the Dask scheduler). The main motivation is to obtain correct estimates for intensive/extensive variables, which are currently erroneous with the chunk-by-chunk code that does work for categorical variables (as discussed in #185).
A couple of thoughts on what I learned:
`tobler/area_weighted/area_interpolate.py`, lines 316 to 321 at `ce6fcb9`
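(For reference, the construction those lines point at is, roughly, the row-standardisation of the full source-by-target table of shared areas; what follows is a minimal paraphrase, assuming the `scipy.sparse` representation the single-core code uses, not the verbatim snippet.)

```python
import numpy as np
from scipy.sparse import csr_matrix, diags

# Toy stand-in for the full (n_source, n_target) table of shared areas;
# the single-core code builds this for *all* source/target pairs at once.
table = csr_matrix(np.array([[2.0, 0.0], [1.0, 3.0]]))

# Row-standardisation needs each source's total over all targets, which is
# why the whole table has to exist before work is split across workers.
den = np.asarray(table.sum(axis=1)).flatten()
den = den + (den == 0)                  # guard against empty rows
weights = diags(1.0 / den).dot(table)   # each row now sums to 1
```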
This means we need to build that table of weights before we split things up for each worker to process independently. Now, the weights can be built in parallel (this is essentially about filling in different parts of a matrix which are not all related to each other). This is what I attempt in this PR with `id_area_table` (which is copied from the single-core implementation, works in chunks, and is added to the existing Dask computation graph). This returns a three-column Dask dataframe with source ID, target ID and shared area, where only non-zero values are stored (i.e., you never have a row in this table with a value of 0 in the third column). As @knaaptime suggests, this is not a million miles away from the new graph implementation. `id_area_table`, which builds the cross-over table, can be run in parallel with minimal inter-worker communication; it is performant and returns what is expected:

https://github.com/darribas/tobler/blob/5ce79e71a89fc4ede93ec22f6eb84b1acf884e4a/tobler/area_weighted/area_interpolate_dask.py#L346-L360
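(To make the output concrete, here is a minimal sketch of what computing one chunk of that cross-over table could look like with geopandas; names like `chunk_area_table`, `src_id` and `tgt_id` are hypothetical, not the PR's actual code.)

```python
import geopandas

def chunk_area_table(src_chunk, tgt_chunk):
    """Hypothetical per-chunk computation of the three-column table:
    one row per overlapping source/target pair, non-zero areas only."""
    inter = geopandas.overlay(
        src_chunk.reset_index(names="src_id"),
        tgt_chunk.reset_index(names="tgt_id"),
        how="intersection",   # keeps only pairs that actually intersect
    )
    inter["shared_area"] = inter.geometry.area
    return inter[["src_id", "tgt_id", "shared_area"]]
```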
The logic of the code is as follows:
1. Build the cross-over table of shared areas with `id_area_table`.
2. Aggregate shared areas by source ID (extensive case) or target ID (intensive case) to obtain denominators, and join them back onto the table to derive the `weights`.
3. Multiply the source values by the `weights`.
4. Group by target ID and sum to obtain the final estimates.

Now the issue with the logic above is that, although 1., 3., and potentially 4., are fast and very performant on Dask, 2. involves a significant amount of inter-worker communication and, since it is not done on sorted indices (that I can see), it will be slow.
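(To illustrate, a minimal sketch of steps 2–4 as Dask dataframe operations; names are hypothetical: `area_table` stands for the output of `id_area_table`, `src_values` for a frame with source IDs and one value column. The groupby-plus-merge in step 2 is where the shuffle happens.)

```python
import dask.dataframe as dd

def interpolate_extensive_dask(area_table: dd.DataFrame, src_values: dd.DataFrame) -> dd.Series:
    # Step 2: per-source denominators need a global groupby followed by a
    # merge back onto the table -- both shuffle data between workers, and
    # neither runs on a sorted index here.
    den = area_table.groupby("src_id")["shared_area"].sum().to_frame("den").reset_index()
    t = area_table.merge(den, on="src_id")
    t["w"] = t["shared_area"] / t["den"]
    # Step 3: attach source values and scale by the weights.
    t = t.merge(src_values, on="src_id")
    t["est"] = t["value"] * t["w"]
    # Step 4: a groupby-sum per target, a comparatively cheap tree reduction.
    return t.groupby("tgt_id")["est"].sum()
```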
This is where I stopped. A couple of further thoughts:
- The logic above maps directly onto an in-memory `pandas.DataFrame` or `polars.DataFrame`, so if we can live with in-memory computation only, I think this logic would speed up our current implementation.

Very happy to explore further options and discuss alternative views!
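(As a point of comparison, a sketch of the same pipeline in plain pandas, again with hypothetical names; in memory the denominator is a cheap `transform`, with no shuffle to worry about.)

```python
import pandas as pd

def interpolate_extensive(area_table: pd.DataFrame, src_values: pd.DataFrame) -> pd.Series:
    """Hypothetical in-memory version of the same pipeline.

    area_table: columns src_id, tgt_id, shared_area (non-zero rows only).
    src_values: columns src_id, value.
    """
    t = area_table.merge(src_values, on="src_id")
    # Per-source denominator as a vectorised transform, no inter-worker cost.
    t["w"] = t["shared_area"] / t.groupby("src_id")["shared_area"].transform("sum")
    return (t["value"] * t["w"]).groupby(t["tgt_id"]).sum()
```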