Streaming: Processing Unlimited Frames On-Disk¶
A key feature of trackpy is the ability to process an unlimited number of frames.
For feature-finding, this is straightforward: a frame is loaded, features are located, the locations are saved to disk, and the memory is cleared for the next frame. For linking, the problem is more challenging, but trackpy handles all this complexity for you, using as little memory as possible throughout.
When data sets become large, beginner-friendly file formats like CSV or Excel become impractical. We recommend using the HDF5 file format, which trackpy can read and write out of the box. (HDF5 is widely used; you can be sure it will be around for many, many years to come.)
If you have some other format in mind, see the end of this tutorial, where we explain how to extend trackpy's interface to support other formats.
PyTables¶
You need pytables, which is normally included with the Anaconda distribution. If you find that you don't have it, you can easily install it using conda. Type this command into a Terminal or Command Prompt:
conda install pytables
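Before proceeding, you can confirm that PyTables is importable from Python. This short check is a convenience sketch, not part of the trackpy tutorial itself; it only probes for the `tables` package (PyTables' import name) without fully importing it:

```python
import importlib.util

# PyTables is imported under the name "tables"; find_spec returns None if absent.
have_pytables = importlib.util.find_spec("tables") is not None
print("PyTables installed:", have_pytables)
```

If this prints `False`, run the conda command above (or `pip install tables`) before continuing.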
Locate Features, Streaming Results into an HDF5 File¶
import trackpy as tp
import pims
@pims.pipeline
def gray(image):
    return image[:, :, 1]
images = gray(pims.open('../sample_data/bulk_water/*.png'))
images = images[:10] # We'll take just the first 10 frames for demo purposes.
# For this demo, we'll first remove the file if it already exists.
!rm -f data.h5
We can use locate inside a loop:
with tp.PandasHDFStore('data.h5') as s:  # This opens an HDF5 file. Data will be stored and retrieved by frame number.
    for image in images:
        features = tp.locate(image, 11, invert=True)  # Find the features in a given frame.
        s.put(features)  # Save the features to the file before continuing to the next frame.
or, equivalently, we can use batch, which accepts the storage file as output.
with tp.PandasHDFStore('data.h5') as s:
    tp.batch(images, 11, invert=True, output=s)
Frame 9: 573 features
We can get the data for a given frame:
with tp.PandasHDFStore('data.h5') as s:
    frame_2_results = s.get(2)

frame_2_results.head()  # Display the first few rows.
|   | y | x | mass | size | ecc | signal | raw_mass | ep | frame |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 5.509524 | 497.075000 | 72.537360 | 2.870416 | 0.029523 | 2.158850 | 10115.0 | 0.768330 | 2 |
| 1 | 5.652962 | 295.981547 | 266.747504 | 2.322843 | 0.247683 | 16.407260 | 10670.0 | 0.078022 | 2 |
| 2 | 6.350493 | 68.049288 | 236.523604 | 2.351310 | 0.044731 | 10.362480 | 10878.0 | 0.058368 | 2 |
| 3 | 6.405941 | 336.590347 | 209.322095 | 1.996594 | 0.127966 | 15.111950 | 10551.0 | 0.096638 | 2 |
| 4 | 6.899098 | 432.617521 | 363.723046 | 2.855660 | 0.466993 | 14.334764 | 10838.0 | 0.061340 | 2 |
Or dump all the data, if your machine has enough memory to hold it:
with tp.PandasHDFStore('data.h5') as s:
    all_results = s.dump()

all_results.head()  # Display the first few rows.
|   | y | x | mass | size | ecc | signal | raw_mass | ep | frame |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 4.750000 | 103.668564 | 192.862485 | 2.106615 | 0.066390 | 10.808405 | 10714.0 | 0.073666 | 0 |
| 1 | 5.249231 | 585.779487 | 164.659302 | 2.962674 | 0.078936 | 4.222033 | 10702.0 | 0.075116 | 0 |
| 2 | 5.785986 | 294.792544 | 244.624615 | 2.244542 | 0.219217 | 15.874846 | 10686.0 | 0.077141 | 0 |
| 3 | 5.869369 | 338.173423 | 187.458282 | 2.046201 | 0.185333 | 13.088304 | 10554.0 | 0.099201 | 0 |
| 4 | 6.746377 | 310.584169 | 151.486558 | 3.103294 | 0.053342 | 4.475355 | 10403.0 | 0.147430 | 0 |
You can dump the first N frames using s.dump(N).
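Conceptually, dumping just concatenates the per-frame tables into one big table. The following sketch mimics this with plain pandas; the dict of DataFrames and all the numbers in it are made up for illustration, and the `dump` helper here is not trackpy's implementation:

```python
import pandas as pd

# Hypothetical per-frame feature tables, keyed by frame number.
frames = {
    0: pd.DataFrame({'y': [4.75, 5.25], 'x': [103.7, 585.8], 'frame': [0, 0]}),
    1: pd.DataFrame({'y': [5.10], 'x': [211.3], 'frame': [1]}),
}

def dump(store, n=None):
    """Concatenate the first n frames (all frames if n is None)."""
    keys = sorted(store)[:n]
    return pd.concat([store[k] for k in keys], ignore_index=True)

all_results = dump(frames)        # all frames -> 3 rows
first_frame_only = dump(frames, 1)  # first frame only -> 2 rows
```

The memory cost scales with how many frames you concatenate, which is why dumping only the first N frames can be useful on large files.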
Link Trajectories, Streaming From and Updating the HDF5 File¶
with tp.PandasHDFStore('data.h5') as s:
    for linked in tp.link_df_iter(s, 3, neighbor_strategy='KDTree'):
        s.put(linked)
Frame 9: 573 trajectories present.
The original data is overwritten.
with tp.PandasHDFStore('data.h5') as s:
    frame_2_results = s.get(2)

frame_2_results.head()  # Display the first few rows.
|   | y | x | mass | size | ecc | signal | raw_mass | ep | frame | particle |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 5.509524 | 497.075000 | 72.537360 | 2.870416 | 0.029523 | 2.158850 | 10115.0 | 0.768330 | 2 | 535 |
| 1 | 5.652962 | 295.981547 | 266.747504 | 2.322843 | 0.247683 | 16.407260 | 10670.0 | 0.078022 | 2 | 2 |
| 2 | 6.350493 | 68.049288 | 236.523604 | 2.351310 | 0.044731 | 10.362480 | 10878.0 | 0.058368 | 2 | 8 |
| 3 | 6.405941 | 336.590347 | 209.322095 | 1.996594 | 0.127966 | 15.111950 | 10551.0 | 0.096638 | 2 | 3 |
| 4 | 6.899098 | 432.617521 | 363.723046 | 2.855660 | 0.466993 | 14.334764 | 10838.0 | 0.061340 | 2 | 6 |
Framewise Data Interfaces¶
Built-in interfaces¶
There are three different interfaces. You can use them interchangeably; they offer different performance advantages.

- PandasHDFStore -- fastest for a small (~100) number of frames
- PandasHDFStoreBig -- fastest for a medium or large number of frames
- PandasHDFStoreSingleNode -- optimizes HDF queries that access multiple frames (advanced)
Writing your own interface¶
Trackpy implements a generic interface that could be used to store and retrieve particle tracking data in any file format. We hope that it can make it easier for researchers who use different file formats to exchange data. Any in-house format could be accessed using the same simple interface demonstrated above.
At present, the interface is implemented only for HDF5 files. To extend it to any format, write a class subclassing trackpy.FramewiseData. This custom class must implement the methods put, get, close, and __iter__ and the properties max_frame and t_column. Refer to the built-in classes in framewise_data.py for examples to work from.
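As an illustration of the required methods and properties, here is a minimal in-memory sketch. The class name and dict backing are invented for this example; a real implementation would subclass trackpy.FramewiseData and persist each frame's DataFrame to your file format instead of a dict:

```python
import pandas as pd

class DictFramewiseData:
    """Sketch of the framewise-data interface, backed by an in-memory dict."""

    def __init__(self):
        self._frames = {}

    @property
    def t_column(self):
        # Name of the column that holds the frame number.
        return 'frame'

    @property
    def max_frame(self):
        return max(self._frames) if self._frames else None

    def put(self, df):
        # Store one frame's features, keyed by its frame number.
        frame_no = int(df[self.t_column].iloc[0])
        self._frames[frame_no] = df

    def get(self, frame_no):
        return self._frames[frame_no]

    def close(self):
        self._frames.clear()

    def __iter__(self):
        # Yield per-frame DataFrames in ascending frame order.
        for frame_no in sorted(self._frames):
            yield self._frames[frame_no]

store = DictFramewiseData()
store.put(pd.DataFrame({'x': [1.0, 2.0], 'y': [3.0, 4.0], 'frame': [0, 0]}))
store.put(pd.DataFrame({'x': [1.1], 'y': [3.1], 'frame': [1]}))
print(store.max_frame)  # 1
```

Because iteration yields frames in order, an object like this can be passed directly to a frame-by-frame consumer such as the linking loop shown earlier.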