Lzip is a Python wrapper for lzlib to encode and decode Lzip archives.

This package is compatible with arbitrary byte sequences but provides features to facilitate interoperability with Numpy's frombuffer and tobytes functions. Decoding and encoding can be performed in chunks, enabling the decompression, processing and compression of files that do not fit in RAM. URLs can be used as well to download, decompress and process the chunks of a remote Lzip archive in one go.

Install the package with pip:

```
pip3 install lzip
```

Compress an in-memory buffer and write it to a file:

```python
import lzip

lzip.compress_to_file("/path/to/output.lz", b"data to compress")
```

Compress multiple chunks and write the result to a single file (useful to avoid large in-memory buffers):

```python
import lzip

with lzip.FileEncoder("/path/to/output.lz") as encoder:
    encoder.compress(b"data to compress")
```

Use FileEncoder without context management (with):

```python
import lzip

encoder = lzip.FileEncoder("/path/to/output.lz")
encoder.compress(b"data to compress")
encoder.close()
```

Compress a Numpy array and write the result to a file:

```python
import lzip
import numpy

values = numpy.arange(100)  # the array contents here are illustrative
lzip.compress_to_file("/path/to/output.lz", values.tobytes())
```

Lzip can use different compression levels.

Read and decompress a file to an in-memory buffer:

```python
import lzip

buffer = lzip.decompress_file("/path/to/input.lz")
```

Read and decompress a file one chunk at a time (useful for large files):

```python
import lzip

for chunk in lzip.decompress_file_iter("/path/to/input.lz"):
    # chunk is a bytes object
    ...
```

Read and decompress a file one chunk at a time, and ensure that each chunk contains a number of bytes that is a multiple of word_size (useful to parse numpy arrays with a known dtype):

```python
import lzip
import numpy

# the word_size and dtype values are illustrative; they must be consistent
for chunk in lzip.decompress_file_iter("/path/to/input.lz", word_size=4):
    values = numpy.frombuffer(chunk, dtype="<u4")
```

Lzip can also decompress data from an in-memory buffer.

Decompress a remote archive:

```python
import lzip

# option 1: decompress the whole file at once
buffer = lzip.decompress_url("")

# option 2: iterate over the decompressed file in small chunks
for chunk in lzip.decompress_url_iter(""):
    # chunk is a bytes object
    ...
```

The present package contains two libraries. lzip deals with high-level operations (opening and closing files, downloading remote data, changing default arguments) whereas lzip_extension focuses on efficiently compressing and decompressing in-memory byte buffers. The latter should only be used in advanced scenarios where fine buffer control is required.

lzip.FileEncoder:

```python
class FileEncoder:
    def __init__(self, path, level=6, member_size=(1 << 51)):
        """Encode sequential byte buffers and write the compressed bytes to a file

        - path is the output file name; it must be a path-like object such as
          a string or a pathlib path
        - level must be either an integer in [0, 9] or a tuple
          (dictionary_size, match_length); 0 is the fastest compression level,
          9 is the slowest (see the level table for the mapping between integer
          levels, dictionary sizes and match lengths)
        - member_size can be used to change the compressed file's maximum
          member size (see the Lzip manual for details on the tradeoffs
          incurred by this value)
        """

    def compress(self, buffer):
        """Encode a buffer and write the compressed bytes into the file

        - buffer must be a bytes-like object, such as bytes or a bytearray
        """

    def close(self):
        """Flush the encoder contents and close the file

        compress must not be called after calling close
        Failing to call close results in a corrupted encoded file
        """
```

FileEncoder can be used as a context manager (with FileEncoder(...) as encoder). close is called automatically in this case.
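The word_size behaviour described above (every yielded chunk holds a whole number of fixed-size words, with leftover bytes carried over to the next chunk) can be sketched in plain Python. `realign` below is a hypothetical helper written for illustration, not part of the lzip API; the library performs this alignment internally when word_size is passed to its `*_iter` functions:

```python
def realign(chunks, word_size):
    """Re-chunk an iterable of byte buffers so that every yielded chunk's
    length is a multiple of word_size; bytes that do not complete a word
    are held back and prepended to the next chunk."""
    remainder = b""
    for chunk in chunks:
        buffer = remainder + chunk
        # largest prefix whose length is a multiple of word_size
        cut = len(buffer) - (len(buffer) % word_size)
        remainder = buffer[cut:]
        if cut > 0:
            yield buffer[:cut]
    if remainder:
        # trailing bytes that do not form a whole word
        raise ValueError(f"{len(remainder)} trailing byte(s) do not fit a word")

# 4-byte words split across uneven chunks are re-assembled before yielding
aligned = list(realign([b"\x00\x01\x02", b"\x03\x04\x05\x06\x07"], 4))
```

Each aligned chunk can then be passed directly to numpy.frombuffer with a 4-byte dtype without risking a partial trailing element.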