joblib.dump

joblib.dump(value, filename, compress=0, cache_size=100)

Fast persistence of an arbitrary Python object into a file, with dedicated storage for numpy arrays.

Parameters:

value: any Python object :

The object to store to disk

filename: string :

The name of the file in which it is to be stored

compress: integer from 0 to 9, optional :

Optional compression level for the data. 0 is no compression. Higher means more compression, but also slower read and write times. Using a value of 3 is often a good compromise. See the notes for more details.

cache_size: positive number, optional :

Fixes the order of magnitude (in megabytes) of the cache used for in-memory compression. Note that this is just an order of magnitude estimate and that for big arrays, the code will go over this value at dump and at load time.

Returns:

filenames: list of strings :

The list of file names in which the data is stored. If compress is 0, each array is stored in a separate file.
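A minimal usage sketch, assuming joblib and numpy are installed; the object, file name, and keys below are illustrative, not part of the API:

```python
import os
import tempfile

import numpy as np
import joblib

# An arbitrary Python object containing a numpy array.
data = {"weights": np.arange(10), "name": "model"}
path = os.path.join(tempfile.mkdtemp(), "model.pkl")

# dump returns the list of file names written to disk.
filenames = joblib.dump(data, path)

# load restores the original object.
restored = joblib.load(path)
assert (restored["weights"] == data["weights"]).all()
assert restored["name"] == data["name"]
```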

See also

joblib.load
corresponding loader

Notes

Memmapping on load cannot be used for compressed files. Thus using compression can significantly slow down loading. In addition, compressed files take extra memory during dump and load.
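To illustrate the compression trade-off, the sketch below dumps the same array with compress=0 and compress=3 and compares file sizes; the file names are illustrative, and the array of zeros is deliberately chosen to compress well:

```python
import os
import tempfile

import numpy as np
import joblib

# A highly compressible array: compression should shrink it dramatically.
arr = np.zeros(100_000)
base = tempfile.mkdtemp()

raw_path = os.path.join(base, "raw.pkl")
packed_path = os.path.join(base, "packed.pkl")

joblib.dump(arr, raw_path, compress=0)   # no compression, memmappable on load
joblib.dump(arr, packed_path, compress=3)  # the suggested compromise level

print(os.path.getsize(raw_path), os.path.getsize(packed_path))
```

For real-world data the size reduction will be smaller, and the read/write slowdown noted above still applies.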