IronPython.Modules Copy the latest data from the memory buffer. This won't always contain data, because compressed data is only written after a block is filled. Add data to the input buffer. This manipulates the position of the stream to make it appear to the BZip2 stream that nothing has actually changed. The data to append to the buffer. Try to convert IList(Of byte) to byte[] without copying, if possible. Throw TypeError with a specified message if object isn't callable. Convert object to ushort, throwing ValueError on overflow. Interface for "file-like objects" that implement the protocol needed by load() and friends. This enables the creation of thin wrappers that make fast .NET types and slow Python types look the same. Interface for "file-like objects" that implement the protocol needed by dump() and friends. This enables the creation of thin wrappers that make fast .NET types and slow Python types look the same. Call the appropriate reduce method for obj and pickle the object using the resulting data. Use the first available of copy_reg.dispatch_table[type(obj)], obj.__reduce_ex__, and obj.__reduce__. Pickle the result of a reduce function. Only context, obj, func, and reduceCallable are required; all other arguments may be null. Write value in pickle decimalnl_short format. Write value in pickle float8 format. Write value in pickle uint1 format. Write value in pickle uint2 format. Write value in pickle int4 format. Write value in pickle decimalnl_short format. Write value in pickle decimalnl_short format. Write value in pickle decimalnl_long format. Write value in pickle unicodestringnl format. Write value in pickle unicodestring4 format. Write value in pickle stringnl_noescape_pair format. Return true if value is appropriate for formatting in pickle uint1 format. Return true if value is appropriate for formatting in pickle uint2 format. Return true if value is appropriate for formatting in pickle int4 format. 
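The reduce-protocol lookup order described above (copy_reg.dispatch_table, then obj.__reduce_ex__, then obj.__reduce__) can be observed from pure Python; a minimal sketch using CPython's copyreg (the Python 3 name for copy_reg) and pickle modules, with a hypothetical Point class for illustration:

```python
import copyreg
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

# Register a reduce function; the pickler consults copyreg.dispatch_table
# before falling back to obj.__reduce_ex__ / obj.__reduce__.
def reduce_point(p):
    return (Point, (p.x, p.y))   # (callable, args) pair

copyreg.pickle(Point, reduce_point)

restored = pickle.loads(pickle.dumps(Point(1, 2)))
print(restored.x, restored.y)  # 1 2
```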
Emit a series of opcodes that will append all items indexed by iter to the object at the top of the stack. Use APPENDS if possible, but append no more than BatchSize items at a time. Emit a series of opcodes that will set all (key, value) pairs indexed by iter in the object at the top of the stack. Use SETITEMS if possible, but append no more than BatchSize items at a time. Find the module for obj and ensure that obj is reachable in that module by the given name. Throw PicklingError if any of the following are true: - The module couldn't be determined. - The module couldn't be loaded. - The given name doesn't exist in the module. - The given name is a different object than obj. Otherwise, return the name of the module. To determine which module obj lives in, obj.__module__ is used if available. The module named by obj.__module__ is loaded if needed. If obj has no __module__ attribute, then each loaded module is searched. If a loaded module has an attribute with the given name, and that attribute is the same object as obj, then that module is used. Interpret everything from markIndex to the top of the stack as a sequence of key, value, key, value, etc. Set dict[key] = value for each. Pop everything from markIndex up when done. Used to check the type to see if we can do a comparison. Returns true if we can or false if we should return NotImplemented. May throw if the type's really wrong. Helper function for doing the comparisons. time has no __cmp__ method Base class used for iterator wrappers. 
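The reachability rule above (the object must be findable in its module under the given name, and be the same object) is exactly why some objects are unpicklable; a quick CPython illustration:

```python
import pickle

def top_level():          # reachable as an attribute of its module: picklable
    return 42

def make_local():
    def inner():          # only reachable through a closure: not reachable
        return 42
    return inner

ok = pickle.loads(pickle.dumps(top_level))

failed = False
try:
    pickle.dumps(make_local())
except (pickle.PicklingError, AttributeError):
    # CPython raises one of these depending on version/pickler implementation
    failed = True
print(ok() == 42, failed)  # True True
```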
Error function on real values Complementary error function on real values: erfc(x) = 1 - erf(x) Gamma function on real values Natural log of absolute value of Gamma function Provides helper functions which need to be called from generated code to implement various portions of modules. Checks whether the specific permissions, provided by the mode parameter, are available for the provided path. Permissions can be: F_OK: Check to see if the file exists R_OK | W_OK | X_OK: Check for the specific permissions. Only W_OK is respected. A single instance of the environment dictionary is shared between multiple runtimes because the underlying process environment is itself shared. lstat(path) -> stat result Like stat(path), but do not follow symbolic links. spawns a new process. If mode is nt.P_WAIT then the call blocks until the process exits and the return value is the exit code. Otherwise the call returns a handle to the process. The caller must then call nt.waitpid(pid, options) to free the handle and get the exit code of the process. Failure to call nt.waitpid will result in a handle leak. spawns a new process. If mode is nt.P_WAIT then the call blocks until the process exits and the return value is the exit code. Otherwise the call returns a handle to the process. The caller must then call nt.waitpid(pid, options) to free the handle and get the exit code of the process. Failure to call nt.waitpid will result in a handle leak. spawns a new process. If mode is nt.P_WAIT then the call blocks until the process exits and the return value is the exit code. Otherwise the call returns a handle to the process. The caller must then call nt.waitpid(pid, options) to free the handle and get the exit code of the process. Failure to call nt.waitpid will result in a handle leak. spawns a new process. If mode is nt.P_WAIT then the call blocks until the process exits and the return value is the exit code. Otherwise the call returns a handle to the process. 
The caller must then call nt.waitpid(pid, options) to free the handle and get the exit code of the process. Failure to call nt.waitpid will result in a handle leak. Copy elements from a Python mapping of dict environment variables to a StringDictionary. Convert a sequence of args to a string suitable for using to spawn a process. Python regular expression module. Compiled reg-ex pattern Preparses a regular expression text returning a ParsedRegex class that can be used for further regular expressions. Implements a resource-based meta_path importer as described in PEP 302. Instantiates a new meta_path importer using an embedded ZIP resource file. Process a sequence of objects that are compatible with ObjectToSocket(). Return two things as out params: an in-order List of sockets that correspond to the original objects in the passed-in sequence, and a mapping of these socket objects to their original objects. The socketToOriginal mapping is generated because the CPython select module supports passing to select either file descriptor numbers or an object with a fileno() method. We try to be faithful to what was originally requested when we return. Return the System.Net.Sockets.Socket object that corresponds to the passed-in object. obj can be a System.Net.Sockets.Socket, a PythonSocket.SocketObj, a long integer (representing a socket handle), or a Python object with a fileno() method (whose result is used to look up an existing PythonSocket.SocketObj, which is in turn converted to a Socket). Stops execution of Python or other .NET code on the main thread. If the thread is blocked in native code the thread will be interrupted after it returns back to Python or other .NET code. Provides a dictionary storage implementation whose storage is local to the thread. Represents the date components that we found while parsing the date. Used for zeroing out values which have different defaults from CPython. Currently we only know that we need to do this for the year. 
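The fileno()-based lookup described above mirrors what CPython's select module accepts; a small sketch showing that select() works both with socket objects and with any object exposing a fileno() method (the wrapper class here is illustrative):

```python
import select
import socket

a, b = socket.socketpair()
b.send(b"ping")   # make one end readable

class FileNoWrapper:
    # any object with a fileno() method is acceptable to select()
    def __init__(self, sock):
        self._sock = sock
    def fileno(self):
        return self._sock.fileno()

readable, _, _ = select.select([FileNoWrapper(a)], [], [], 1.0)
print(len(readable))  # 1

a.close()
b.close()
```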
Samples on how to subtype built-in types from C# an int variable for demonstration purposes an int variable for demonstration purposes BytesIO([initializer]) -> object Create a buffered I/O implementation using an in-memory bytes buffer, ready for reading and writing. close() -> None. Disable all I/O operations. True if the file is closed. getvalue() -> bytes. Retrieve the entire contents of the BytesIO object. Remove all 'b's from mode string to simplify parsing Read and decode the next chunk from the buffered reader. Returns true if EOF was not reached. Places decoded string in _decodedChars. Convert string or bytes into bytes Convert most bytearray-like objects into IList of byte Creates an optimized encoding mapping that can be consumed by an optimized version of charmap_encode. Encodes the input string with the specified optimized encoding map. Decodes the input string using the provided string mapping. Optimized encoding mapping that can be consumed by charmap_encode. Walks the queue calling back to the specified delegate for each populated index in the queue. Returns the dialects from the code context. Provides support for interop with native code from Python code. 
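The BytesIO behavior summarized above (an in-memory buffer ready for reading and writing, getvalue(), and close() disabling all I/O) matches CPython's io.BytesIO; a brief sketch:

```python
import io

buf = io.BytesIO(b"hello")            # optional initializer
buf.seek(0, io.SEEK_END)
buf.write(b" world")                  # buffer supports both read and write
contents = buf.getvalue()             # entire contents: b'hello world'

buf.close()                           # close() -> None; disables all I/O
was_closed = buf.closed               # True
try:
    buf.read()
    read_after_close_ok = True
except ValueError:                    # "I/O operation on closed file"
    read_after_close_ok = False

print(contents, was_closed, read_after_close_ok)
```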
The meta class for ctypes array instances. Converts an object into a function call parameter. Base class for all ctypes interop types. Creates a new CFuncPtr object from a tuple. The 1st element of the tuple is the ordinal or function name. The second is an object with a _handle property. The _handle property is the handle of the module from which the function will be loaded. Creates a new CFuncPtr which calls a COM method. Creates a new CFuncPtr with the specified address. Creates a new CFuncPtr with the specified address. We need to keep alive any methods which have arguments for the duration of the call. Otherwise they could be collected on the finalizer thread before we come back. Creates a method for calling with the specified signature. The returned method has a signature of the form: (IntPtr funcAddress, arg0, arg1, ..., object[] constantPool) where IntPtr is the address of the function to be called. The argument types are based upon the types that the ArgumentMarshaller requires. Base class for marshalling arguments from the user provided value to the call stub. This class provides the logic for creating the call stub and calling it. Emits the IL to get the argument for the call stub generated into a dynamic method. Gets the expression used to provide the argument. This is the expression from an incoming DynamicMetaObject. Gets an expression which keeps alive the argument for the duration of the call. Returns null if a keep alive is not necessary. Provides marshalling of primitive values when the function type has no type information or when the user has provided us with an explicit cdata instance. 
Provides marshalling for when the function type provides argument information. Provides marshalling for when the user provides a native argument object (usually gotten by byref or pointer) and the function type has no type information. The meta class for ctypes function pointer instances. Converts an object into a function call parameter. Fields are created when a Structure is defined and provide introspection of the structure. Called for fields which have been limited to a range of bits. Given the value for the full type this extracts the individual bits. Called for fields which have been limited to a range of bits. Sets the specified value into the bits for the field. Common functionality that all of the meta classes provide which is part of our implementation. This is used to implement the serialization/deserialization of values into/out of memory, emit the marshalling logic for call stubs, and provide common information (size/alignment) for the types. Gets the native size of the type Gets the required alignment for the type Deserializes the value of this type from the given address at the given offset. Any new objects which are created will keep the provided MemoryHolder alive. raw determines if the cdata is returned or if the primitive value is returned. This is only applicable for subtypes of simple cdata types. Serializes the provided value into the specified address at the given offset. Gets the .NET type which is used when calling or returning the value from native code. Gets the .NET type which the native type is converted into when going to Python code. This is usually int, BigInt, double, object, or a CData type. Emits marshalling of an object from Python to native code. This produces the native type from the Python type. Emits marshalling from native code to Python code. This produces the Python type from the native type. This is used for return values and parameters to Python callable objects that are passed back out to native code. 
Returns a string which describes the type. Used for _buffer_info implementation which only exists for testing purposes. The meta class for ctypes pointers. Converts an object into a function call parameter. Access an instance at the specified address The meta class for ctypes simple data types. These include primitives like ints, floats, etc... char/wchar pointers, and untyped pointers. Converts an object into a function call parameter. Helper function for reading char/wchar's. This is used for reading from arrays and pointers to avoid creating lots of 1-char strings. The enum used for tracking the various ctypes primitive types. 'c' 'b' 'B' 'h' 'H' 'i' 'I' 'l' 'L' 'f' 'd', 'g' 'q' 'Q' 'O' 'P' 'z' 'Z' 'u' '?' 'v' 'X' Meta class for structures. Validates _fields_ on creation, provides factory methods for creating instances from addresses and translating to parameters. Converts an object into a function call parameter. Structures just return themselves. If our size/alignment hasn't been initialized then grabs the size/alignment from all of our base classes. If later new _fields_ are added we'll be initialized and these values will be replaced. Base class for data structures. Subclasses can define _fields_ which specifies the in memory layout of the values. Instances can then be created with the initial values provided as the array. The values can then be accessed from the instance by field name. The value can also be passed to a foreign C API and the type can be used in other structures.

class MyStructure(Structure):
    _fields_ = [('a', c_int), ('b', c_int)]
MyStructure(1, 2).a
MyStructure()

class MyOtherStructure(Structure):
    _fields_ = [('c', MyStructure), ('b', c_int)]
MyOtherStructure((1, 2), 3)
MyOtherStructure(MyStructure(1, 2), 3)

The meta class for ctypes unions. Converts an object into a function call parameter. Gets a function which casts the specified memory. 
Because this is used only w/ Python API we use a delegate as the return type instead of an actual address. Implementation of our cast function. data is marshalled as a void* so it ends up as an address. obj and type are marshalled as an object so we need to unmarshal them. Returns a new type which represents a pointer given the existing type. Converts an address acquired from PyObj_FromPtr or that has been marshaled as type 'O' back into an object. Converts an object into an opaque address which can be handed out to managed code. Decreases the ref count on an object which has been increased with Py_INCREF. Increases the ref count on an object ensuring that it will not be collected. returns address of C instance internal buffer. It is the caller's responsibility to ensure that the provided instance will stay alive if memory in the resulting address is to be used later. Gets the required alignment of the given type. Gets the required alignment of an object. Returns a pointer instance for the given CData Gets the ModuleBuilder used to generate our unsafe call stubs into. Given a specific size returns a .NET type of the equivalent size that we can use when marshalling these values across calls. Shared helper between struct and union for getting field info and validating it. Verifies that the provided bit field settings are valid for this type. Shared helper to get the _fields_ list for struct/union and validate it. Helper function for translating from memset to NT's FillMemory API. Helper function for translating from memset to NT's FillMemory API. Emits the marshalling code to create a CData object for reverse marshalling. Wrapper class for emitting locals/variables during marshalling code gen. A wrapper around allocated memory to ensure it gets released and isn't accessed when it could be finalized. Creates a new MemoryHolder and allocates a buffer of the specified size. 
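The pointer, cast, and alignment helpers described above have public counterparts in CPython's ctypes module; a small sketch using the standard ctypes API (not the internal helpers named here):

```python
import ctypes

x = ctypes.c_int(0x11223344)

# ctypes.pointer returns a pointer instance for the given cdata;
# ctypes.POINTER(type) returns a new type representing a pointer to type.
p = ctypes.pointer(x)
print(p.contents.value)                # 287454020

# cast reinterprets the same memory as another pointer type
pb = ctypes.cast(p, ctypes.POINTER(ctypes.c_ubyte))
first_byte = pb[0]                     # 0x44 on little-endian machines

# size/alignment introspection for a type
print(ctypes.sizeof(ctypes.c_int))     # 4
print(ctypes.alignment(ctypes.c_int))
```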
Creates a new MemoryHolder at the specified address which is not tracked by us and we will never free. Creates a new MemoryHolder at the specified address which will keep alive the parent memory holder. Gets the address of the held memory. The caller should ensure the MemoryHolder is always alive as long as the address will continue to be accessed. Gets a list of objects which need to be kept alive for this MemoryHolder to remain valid. Used to track the lifetime of objects when one memory region depends upon another memory region. For example if you have an array of objects that each have an element which has its own lifetime the array needs to keep the individual elements alive. The keys used here match CPython's keys as tested by CPython's test_ctypes. Typically they are a string which is the array index, "ffffffff" when from_buffer is used, or when it's a simple type there's just a string instead of the full dictionary - we store that under the key "str". Copies the data in data into this MemoryHolder. Copies memory from one location to another keeping the associated memory holders alive during the operation. Native functions used for exposing ctypes functionality. Allocates memory that's zero-filled Helper function for implementing memset. Could be more efficient if we could P/Invoke or call some otherwise native code to do this. Returns a new callable object with the provided initial set of arguments bound to it. Calling the new function then appends the additional user provided arguments. Creates a new partial object with the provided positional arguments. Creates a new partial object with the provided positional and keyword arguments. Gets the function which will be called Gets the initially provided positional arguments. Gets the initially provided keyword arguments or None. Gets or sets the dictionary used for storing extra attributes on the partial object. Calls func with the previously provided arguments and more positional arguments. 
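The partial object described above behaves like CPython's functools.partial: stored positionals come first, call-time arguments are appended, and func/args/keywords are exposed as attributes. A quick sketch:

```python
from functools import partial

def power(base, exp, *, label=""):
    return f"{label}{base ** exp}"

# bind base=2 and a keyword argument up front
square = partial(power, 2, label="2^n = ")

print(square.func is power)   # True  -> the function which will be called
print(square.args)            # (2,)  -> initially provided positional args
print(square.keywords)        # {'label': '2^n = '}

# call-time arguments are appended: this calls power(2, 10, label="2^n = ")
print(square(10))             # 2^n = 1024
```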
Calls func with the previously provided arguments and more positional arguments and keyword arguments. Operator method to set arbitrary members on the partial object. Operator method to get additional arbitrary members defined on the partial object. Operator method to delete arbitrary members defined in the partial object. Populates the given directory w/ the locale information from the given CultureInfo. Generator based on the .NET Core implementation of System.Random handleToSocket allows us to translate from Python's idea of a socket resource (file descriptor numbers) to .NET's idea of a socket resource (System.Net.Socket objects). In particular, this allows the select module to convert file numbers (as returned by fileno()) and convert them to Socket objects so that it can do something useful with them. Return the internal System.Net.Sockets.Socket socket object associated with the given handle (as returned by GetHandle()), or null if no corresponding socket exists. This is primarily intended to be used by other modules (such as select) that implement networking primitives. User code should not normally need to call this function. Create a Python socket object from an existing .NET socket object (like one returned from Socket.Accept()) Perform initialization common to all constructors Convert an object to a 32-bit integer. This adds two features to Converter.ToInt32: 1. Sign is ignored. For example, 0xffff0000 converts to 4294901760, where Convert.ToInt32 would throw because 0xffff0000 is less than zero. 2. Overflow exceptions are thrown. Converter.ToInt32 throws TypeError if x is an integer, but is bigger than 32 bits. Instead, we throw OverflowException. Convert an object to a 16-bit integer. This adds two features to Converter.ToInt16: 1. Sign is ignored. For example, 0xff00 converts to 65280, where Convert.ToInt16 would throw because signed 0xff00 is -256. 2. Overflow exceptions are thrown. 
Converter.ToInt16 throws TypeError if x is an integer, but is bigger than 16 bits. Instead, we throw OverflowException. Return a standard socket exception (socket.error) whose message and error code come from a SocketException This will eventually be enhanced to generate the correct error type (error, herror, gaierror) based on the error code. Convert an IPv6 address byte array to a string in standard colon-hex notation. The .NET IPAddress.ToString() method uses dotted-quad for the last 32 bits, which differs from the normal Python implementation (but is allowed by the IETF); this method returns the standard (no dotted-quad) colon-hex form. Handle conversion of "" to INADDR_ANY and "<broadcast>" to INADDR_BROADCAST. Otherwise returns host unchanged. Return the IP address associated with host, with optional address family checking. host may be either a name or an IP address (in string form). If family is non-null, a gaierror will be thrown if the host's address family is not the same as the specified family. gaierror is also raised if the hostname cannot be converted to an IP address (e.g. through a name lookup failure). Return the IP address associated with host, with optional address family checking. host may be either a name or an IP address (in string form). If family is non-null, a gaierror will be thrown if the host's address family is not the same as the specified family. gaierror is also raised if the hostname cannot be converted to an IP address (e.g. through a name lookup failure). Return fqdn, but with its domain removed if it's on the same domain as the local machine. Convert a (host, port) tuple [IPv4] (host, port, flowinfo, scopeid) tuple [IPv6] to its corresponding IPEndPoint. Throws gaierror if host is not a valid address. 
Throws ArgumentTypeException if any of the following are true: - address does not have exactly two elements - address[0] is not a string - address[1] is not an int Convert an IPEndPoint to its corresponding (host, port) [IPv4] or (host, port, flowinfo, scopeid) [IPv6] tuple. Throws SocketException if the address family is other than IPv4 or IPv6. BER encoding of an integer value is the number of bytes required to represent the integer followed by the bytes Enum which specifies the format type for a compiled struct Struct used to store the format and the number of times it should be repeated. Duplicates a subprocess handle which was created for piping. This is only called when we're duplicating the handle to make it inheritable to the child process. In CPython the parent handle is always reliably garbage collected. Because we know this handle is not going to be used we close the handle being duplicated. Wrapper provided for backwards compatibility. Special hash function because IStructuralEquatable.GetHashCode is not allowed to throw. Special equals because none of the special cases in Ops.Equals are applicable here, and the reference equality check breaks some tests. gets the object or throws a reference exception Special equality function because IStructuralEquatable.Equals is not allowed to throw. gets the object or throws a reference exception Special equality function because IStructuralEquatable.Equals is not allowed to throw. Returns the underlying .NET RegistryKey zip_searchorder defines how we search for a module in the Zip archive: we first search for a package __init__, then for non-package .pyc, .pyo and .py entries. The .pyc and .pyo entries are swapped by initzipimport() if we run in optimized mode. Also, '/' is replaced by SEP there. Given a path to a Zip file and a toc_entry, return the (uncompressed) data as a new reference. Return the code object for the module named by 'fullname' from the Zip archive as a new reference. 
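The struct-format machinery described above (a compiled format plus a repeat count for each code) corresponds to CPython's struct module; a brief sketch:

```python
import struct

# '<' little-endian; '2i' is the int format code repeated twice;
# '4s' is a fixed-length 4-byte string field
fmt = struct.Struct("<2i4s")

packed = fmt.pack(1, -2, b"abcd")
print(fmt.size)            # 12 (4 + 4 + 4 bytes)
print(fmt.unpack(packed))  # (1, -2, b'abcd')
```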
Given a path to a Zip archive, build a dict, mapping file names (local to the archive, using SEP as a separator) to toc entries. A toc_entry is a tuple:
(__file__,    # value to use for __file__, available for all files
 compress,    # compression kind; 0 for uncompressed
 data_size,   # size of compressed data on disk
 file_size,   # size of decompressed data
 file_offset, # offset of file header from start of archive
 time,        # mod time of file (in dos format)
 date,        # mod date of file (in dos format)
 crc,         # crc checksum of the data
)
Directories can be recognized by the trailing SEP in the name, data_size and file_offset are 0. Given a (sub)modulename, write the potential file path in the archive (without extension) to the path buffer. Determines the type of module we have (package or module, or not found). Provides a StreamContentProvider for a stream of content backed by a file on disk. Delivers the remaining bits, left-aligned, in a byte. This is valid only if NumRemainingBits is less than 8; in other words it is valid only after a call to Flush(). Reset the BitWriter. This is useful when the BitWriter writes into a MemoryStream, and is used by a BZip2Compressor, which itself is re-used for multiple distinct data blocks. Write some number of bits from the given value, into the output. The nbits value should be a max of 25, for safety. For performance reasons, this method does not check! Write a full 8-bit byte into the output. Write four 8-bit bytes into the output. Write all available byte-aligned bytes. This method writes no new output, but flushes any accumulated bits. At completion, the accumulator may contain up to 7 bits. This is necessary when re-assembling output from N independent compressors, one for each of N blocks. The output of any particular compressor will in general have some fragment of a byte remaining. This fragment needs to be accumulated into the parent BZip2OutputStream. Writes all available bytes, and emits padding for the final byte as necessary. 
This must be the last method invoked on an instance of BitWriter. Knuth's increments seem to work better than Incerpi-Sedgewick here. Possibly because the number of elems to sort is usually small, typically <= 20. BZip2Compressor writes its compressed data out via a BitWriter. This is necessary because BZip2 does byte shredding. The number of uncompressed bytes being held in the buffer. I am thinking this may be useful in a Stream that uses this compressor class. In the Close() method on the stream it could check this value to see if anything has been written at all. You may think the stream could easily track the number of bytes it wrote, which would eliminate the need for this. But, there is the case where the stream writes a complete block, and it is full, and then writes no more. In that case the stream may want to check. Accept new bytes into the compressor data buffer This method does the first-level (cheap) run-length encoding, and stores the encoded data into the rle block. Process one input byte into the block. To "process" the byte means to do the run-length encoding. There are 3 possible return values: 0 - the byte was not written, in other words, not encoded into the block. This happens when the byte b would require the start of a new run, and the block has no more room for new runs. 1 - the byte was written, and the block is not full. 2 - the byte was written, and the block is full. 0 if the byte was not written, non-zero if written. Append one run to the output block. This compressor does run-length-encoding before BWT and so on. This method simply appends a run to the output block. The append always succeeds. The return value indicates whether the block is full: false (not full) implies that at least one additional run could be processed. true if the block is now full; otherwise false. Compress the data that has been placed (Run-length-encoded) into the block. The compressed data goes into the CompressedBytes array. Side effects: 1. Fills the CompressedBytes array. 2. Sets the AvailableBytesOut property. This is the most hammered method of this class.
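The first-level (cheap) run-length encoding mentioned above is part of the BZip2 format: a run of four or more identical bytes is emitted as four copies of the byte followed by a count byte. A simplified Python sketch of that scheme (an illustration of the encoding rule, not IronPython's implementation):

```python
def rle_encode(data: bytes) -> bytes:
    # BZip2-style first-level RLE: a run of 4..259 identical bytes is
    # emitted as 4 copies of the byte followed by (run_length - 4).
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and run < 259 and data[i + run] == data[i]:
            run += 1
        if run >= 4:
            out.extend(data[i:i + 1] * 4)
            out.append(run - 4)       # count byte fits in 0..255
        else:
            out.extend(data[i:i + run])
        i += run
    return bytes(out)

print(rle_encode(b"aaaaaab"))  # b'aaaa\x02b'
```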

This is the version using unrolled loops.

Method "mainQSort3", file "blocksort.c", BZip2 1.0.2 Array instance identical to sfmap, both are used only temporarily and independently, so we do not need to allocate additional memory. A read-only decorator stream that performs BZip2 decompression on Read. Compressor State Create a BZip2InputStream, wrapping it around the given input Stream. The input stream will be closed when the BZip2InputStream is closed. The stream from which to read compressed data Create a BZip2InputStream with the given stream, and specifying whether to leave the wrapped stream open when the BZip2InputStream is closed. The stream from which to read compressed data Whether to leave the input stream open, when the BZip2InputStream closes. This example reads a bzip2-compressed file, decompresses it, and writes the decompressed data into a newly created file.

var fname = "logfile.log.bz2";
using (var fs = File.OpenRead(fname))
{
    using (var decompressor = new Ionic.BZip2.BZip2InputStream(fs))
    {
        var outFname = fname + ".decompressed";
        using (var output = File.Create(outFname))
        {
            byte[] buffer = new byte[2048];
            int n;
            while ((n = decompressor.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, n);
            }
        }
    }
}

Read data from the stream. To decompress a BZip2 data stream, create a BZip2InputStream, providing a stream that reads compressed data. Then call Read() on that BZip2InputStream, and the data read will be decompressed as you read. A BZip2InputStream can be used only for Read(), not for Write(). The buffer into which the read data should be placed. the offset within that data array to put the first byte read. the number of bytes to read. the number of bytes actually read Read a single byte from the stream. the byte read from the stream, or -1 if EOF Indicates whether the stream can be read. The return value depends on whether the captive stream supports reading. Indicates whether the stream supports Seek operations. Always returns false. Indicates whether the stream can be written. 
The return value depends on whether the captive stream supports writing. Flush the stream. Reading this property always throws an exception. The position of the stream pointer. Setting this property always throws an exception. Reading will return the total number of uncompressed bytes read in. Calling this method always throws an exception. this is irrelevant, since it will always throw! this is irrelevant, since it will always throw! irrelevant! Calling this method always throws an exception. this is irrelevant, since it will always throw! Calling this method always throws an exception. this parameter is never used this parameter is never used this parameter is never used Dispose the stream. indicates whether the Dispose method was invoked by user code. Read n bits from input, right justifying the result. For example, if you read 1 bit, the result is either 0 or 1. The number of bits to read, always between 1 and 32. Called by createHuffmanDecodingTables() exclusively. Called by recvDecodingTables() exclusively. Freq table collected to save a pass over the data during decompression. Initializes the tt array. This method is called when the required length of the array is known. I don't initialize it at construction time to avoid unnecessary memory allocation when compressing small files. Dump the current state of the decompressor, to restore it in case of an error. This allows the decompressor to be essentially "rewound" and retried when more data arrives. This is only used by IronPython. The current state. Restore the internal compressor state if an error occurred. The old state. A write-only decorator stream that compresses data as it is written using the BZip2 algorithm. Constructs a new BZip2OutputStream, that sends its compressed output to the given output stream. The destination stream, to which compressed output will be sent. This example reads a file, then compresses it with bzip2, and writes the compressed data into a newly created file. 
var fname = "logfile.log"; using (var fs = File.OpenRead(fname)) { var outFname = fname + ".bz2"; using (var output = File.Create(outFname)) { using (var compressor = new Ionic.BZip2.BZip2OutputStream(output)) { byte[] buffer = new byte[2048]; int n; while ((n = fs.Read(buffer, 0, buffer.Length)) > 0) { compressor.Write(buffer, 0, n); } } } } Constructs a new BZip2OutputStream with the specified blocksize. the destination stream. The blockSize in units of 100000 bytes. The valid range is 1..9. Constructs a new BZip2OutputStream. the destination stream. whether to leave the captive stream open upon closing this stream. Constructs a new BZip2OutputStream with the specified blocksize, and explicitly specifies whether to leave the wrapped stream open. the destination stream. The blockSize in units of 100000 bytes. The valid range is 1..9. whether to leave the captive stream open upon closing this stream. Flush the stream. The blocksize parameter specified at construction time. Write data to the stream. Use the BZip2OutputStream to compress data while writing: create a BZip2OutputStream with a writable output stream. Then call Write() on that BZip2OutputStream, providing uncompressed data as input. The data sent to the output stream will be the compressed form of the input data. A BZip2OutputStream can be used only for Write(), not for Read(). The buffer holding data to write to the stream. the offset within that data array to find the first byte to write. the number of bytes to write. Indicates whether the stream can be read. The return value is always false. Indicates whether the stream supports Seek operations. Always returns false. Indicates whether the stream can be written. The return value should always be true, unless and until the object is disposed and closed. Reading this property always throws an exception. The position of the stream pointer. Setting this property always throws an exception. Reading will return the total number of uncompressed bytes written through.
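The write-side counterpart (feed uncompressed chunks in, collect compressed output) can likewise be sketched with CPython's incremental BZ2Compressor; the chunk size and data are illustrative:

```python
import bz2

# Feed uncompressed chunks to a BZ2Compressor, mirroring repeated
# Write() calls; flush() emits the final compressed block.
data = b"logline\n" * 500
comp = bz2.BZ2Compressor(9)                   # like blockSize = 9 (900k blocks)
parts = [comp.compress(data[i:i + 128]) for i in range(0, len(data), 128)]
parts.append(comp.flush())
compressed = b"".join(parts)

assert bz2.decompress(compressed) == data
```

Note that, as the docs above say for BZip2OutputStream, compressed output is emitted only as blocks fill, so many compress() calls return empty bytes until flush().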
Calling this method always throws an exception. this is irrelevant, since it will always throw! this is irrelevant, since it will always throw! irrelevant! Calling this method always throws an exception. this is irrelevant, since it will always throw! Calling this method always throws an exception. this parameter is never used this parameter is never used this parameter is never used never returns anything; always throws A write-only decorator stream that compresses data as it is written using the BZip2 algorithm. This stream compresses by block using multiple threads. This class performs BZIP2 compression through writing. For more information on the BZIP2 algorithm, see . This class is similar to BZip2OutputStream, except that this implementation uses an approach that employs multiple worker threads to perform the compression. On a multi-cpu or multi-core computer, the performance of this class can be significantly higher than the single-threaded BZip2OutputStream, particularly for larger streams. How large? Anything over 10MB is a good candidate for parallel compression. The tradeoff is that this class uses more memory and more CPU than the vanilla BZip2OutputStream. Also, for small files, the ParallelBZip2OutputStream can be much slower than the vanilla BZip2OutputStream, because of the overhead associated with using the thread pool. Constructs a new ParallelBZip2OutputStream that sends its compressed output to the given output stream. The destination stream, to which compressed output will be sent. This example reads a file, compresses it with bzip2, and writes the compressed data into a newly created file.
var fname = "logfile.log"; using (var fs = File.OpenRead(fname)) { var outFname = fname + ".bz2"; using (var output = File.Create(outFname)) { using (var compressor = new Ionic.BZip2.ParallelBZip2OutputStream(output)) { byte[] buffer = new byte[2048]; int n; while ((n = fs.Read(buffer, 0, buffer.Length)) > 0) { compressor.Write(buffer, 0, n); } } } } Constructs a new ParallelBZip2OutputStream with specified blocksize. the destination stream. The blockSize in units of 100000 bytes. The valid range is 1..9. Constructs a new ParallelBZip2OutputStream. the destination stream. whether to leave the captive stream open upon closing this stream. Constructs a new ParallelBZip2OutputStream with specified blocksize, and explicitly specifies whether to leave the wrapped stream open. the destination stream. The blockSize in units of 100000 bytes. The valid range is 1..9. whether to leave the captive stream open upon closing this stream. The maximum number of concurrent compression worker threads to use. This property sets an upper limit on the number of concurrent worker threads to employ for compression. The implementation of this stream employs multiple threads from the .NET thread pool, via ThreadPool.QueueUserWorkItem(), to compress the incoming data by block. As each block of data is compressed, this stream re-orders the compressed blocks and writes them to the output stream. A higher number of workers enables a higher degree of parallelism, which tends to increase the speed of compression on multi-cpu computers. On the other hand, a higher number of buffer pairs also implies a larger memory consumption, more active worker threads, and a higher cpu utilization for any compression. This property enables the application to limit its memory consumption and CPU utilization behavior depending on requirements. By default, DotNetZip allocates 4 workers per CPU core, subject to the upper limit specified in this property. 
For example, suppose the application sets this property to 16. Then, on a machine with 2 cores, DotNetZip will use 4 * 2 = 8 workers; that number does not exceed the upper limit specified by this property. On a machine with 4 cores, DotNetZip will use 16 workers; again, the limit does not apply. On a machine with 8 cores, DotNetZip will use only 16 workers, because of the limit. For each compression "worker thread" that occurs in parallel, there is up to 2MB of memory allocated for buffering and processing. The actual number depends on the property. CPU utilization will also go up with additional workers, because a larger number of buffer pairs allows a larger number of background threads to compress in parallel. If you find that parallel compression is consuming too much memory or CPU, you can adjust this value downward. The default value is 16. Different values may deliver better or worse results, depending on your priorities and the dynamic performance characteristics of your storage and compute resources. The application can set this value at any time, but it is effective only before the first call to Write(), which is when the buffers are allocated. Flush the stream. The blocksize parameter specified at construction time. Write data to the stream. Use the ParallelBZip2OutputStream to compress data while writing: create a ParallelBZip2OutputStream with a writable output stream. Then call Write() on that ParallelBZip2OutputStream, providing uncompressed data as input. The data sent to the output stream will be the compressed form of the input data. A ParallelBZip2OutputStream can be used only for Write(), not for Read(). The buffer holding data to write to the stream. the offset within that data array to find the first byte to write. the number of bytes to write. Indicates whether the stream can be read. The return value is always false. Indicates whether the stream supports Seek operations. Always returns false.
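The compress-by-block-on-worker-threads strategy described above can be sketched in Python. This is not the DotNetZip implementation, just a minimal illustration: it relies on bzip2's multi-stream property (independently compressed streams concatenate into a valid file), and the helper name, block size, and worker count are all hypothetical:

```python
import bz2
from concurrent.futures import ThreadPoolExecutor

def parallel_bz2_compress(data, block_size=100_000, max_workers=4):
    """Compress fixed-size blocks concurrently, then emit them in order.
    Concatenated bzip2 streams decompress back-to-back."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() preserves input order, mirroring the re-ordering step
        # the stream performs before writing compressed blocks out.
        return b"".join(pool.map(bz2.compress, blocks))

payload = b"x" * 250_000
out = parallel_bz2_compress(payload)
assert bz2.decompress(out) == payload
```

As the docs note, the overhead of dispatching to a pool means this only pays off for inputs large enough to fill several blocks.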
Indicates whether the stream can be written. The return value depends on whether the captive stream supports writing. Reading this property always throws an exception. The position of the stream pointer. Setting this property always throws an exception. Reading will return the total number of uncompressed bytes written through. The total number of bytes written out by the stream. This value is meaningful only after a call to Close(). Calling this method always throws an exception. this is irrelevant, since it will always throw! this is irrelevant, since it will always throw! irrelevant! Calling this method always throws an exception. this is irrelevant, since it will always throw! Calling this method always throws an exception. this parameter is never used this parameter is never used this parameter is never used never returns anything; always throws Returns the "random" number at a specific index. the index the random number Computes a CRC-32. The CRC-32 algorithm is parameterized - you can set the polynomial and enable or disable bit reversal. This can be used for GZIP, BZip2, or ZIP. This type is used internally by DotNetZip; it is generally not used directly by applications wishing to create, read, or manipulate zip archive files. Indicates the total number of bytes applied to the CRC. Indicates the current CRC for all blocks slurped in. Returns the CRC32 for the specified stream. The stream over which to calculate the CRC32 the CRC32 calculation Returns the CRC32 for the specified stream, and writes the input into the output stream. The stream over which to calculate the CRC32 The stream into which to deflate the input the CRC32 calculation Get the CRC32 for the given (word, byte) combo. This is a computation defined by PKWARE for PKZIP 2.0 (weak) encryption. The word to start with. The byte to combine it with. The CRC-ized result. Update the value for the running CRC32 using the given block of bytes. This is useful when using the CRC32() class in a Stream.
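The running-CRC update pattern described above (call the update method once per block, carrying the CRC forward) has a direct equivalent in Python's zlib.crc32, which accepts the running value as its second argument and uses the same 0xEDB88320 polynomial as GZIP/ZIP:

```python
import zlib

# Incremental CRC-32: pass the running value back in for each block,
# as the block-update method above does per slurped block.
data = b"The quick brown fox jumps over the lazy dog"
crc = 0
for i in range(0, len(data), 8):              # 8-byte blocks
    crc = zlib.crc32(data[i:i + 8], crc)

assert crc == zlib.crc32(data)                # same as one-shot CRC
```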
block of bytes to slurp starting point in the block how many bytes within the block to slurp Process one byte in the CRC. the byte to include in the CRC. Process a run of N identical bytes into the CRC. This method serves as an optimization for updating the CRC when a run of identical bytes is found. Rather than passing in a buffer of length n, containing all identical bytes b, this method accepts the byte value and the length of the (virtual) buffer - the length of the run. the byte to include in the CRC. the number of times that byte should be repeated. Combines the given CRC32 value with the current running total. This is useful when using a divide-and-conquer approach to calculating a CRC. Multiple threads can each calculate a CRC32 on a segment of the data, and then combine the individual CRC32 values at the end. the crc value to be combined with this one the length of data the CRC value was calculated on Create an instance of the CRC32 class using the default settings: no bit reversal, and a polynomial of 0xEDB88320. Create an instance of the CRC32 class, specifying whether to reverse data bits or not. specify true if the instance should reverse data bits. In the CRC-32 used by BZip2, the bits are reversed. Therefore, if you want a CRC32 with compatibility with BZip2, you should pass true here. In the CRC-32 used by GZIP and PKZIP, the bits are not reversed; therefore, if you want a CRC32 with compatibility with those, you should pass false. Create an instance of the CRC32 class, specifying the polynomial and whether to reverse data bits or not. The polynomial to use for the CRC, expressed in the reversed (LSB) format: the highest-order bit in the polynomial value is the coefficient of the 0th power; the second-highest-order bit is the coefficient of the first power, and so on. Expressed this way, the polynomial for the CRC-32 used in IEEE 802.3 is 0xEDB88320. specify true if the instance should reverse data bits.
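The reversed-polynomial (LSB-first) convention described above can be made concrete with a minimal table-driven CRC-32 in Python, checked against zlib. This is a sketch of the standard reflected algorithm, not DotNetZip's code:

```python
import zlib

POLY = 0xEDB88320  # reversed (LSB-first) form of the IEEE 802.3 polynomial

def make_table():
    # One entry per byte value: 8 shift-and-conditionally-xor steps,
    # shifting right because the polynomial is in reversed form.
    table = []
    for n in range(256):
        c = n
        for _ in range(8):
            c = (c >> 1) ^ POLY if c & 1 else c >> 1
        table.append(c)
    return table

TABLE = make_table()

def crc32(data, crc=0):
    crc ^= 0xFFFFFFFF                          # standard pre-inversion
    for b in data:
        crc = TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF                    # standard post-inversion

assert crc32(b"123456789") == 0xCBF43926       # well-known CRC-32 check value
assert crc32(b"hello") == zlib.crc32(b"hello")
```

The BZip2 variant mentioned above differs only in bit ordering: it processes bits MSB-first with the un-reversed polynomial, which is what the reverseBits flag selects.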
In the CRC-32 used by BZip2, the bits are reversed. Therefore, if you want a CRC32 with compatibility with BZip2, you should pass true here for the reverseBits parameter. In the CRC-32 used by GZIP and PKZIP, the bits are not reversed; therefore, if you want a CRC32 with compatibility with those, you should pass false for the reverseBits parameter. Reset the CRC-32 class - clear the CRC "remainder register." Use this when employing a single instance of this class to compute multiple, distinct CRCs on multiple, distinct data blocks. A Stream that calculates a CRC32 (a checksum) on all bytes read, or on all bytes written. This class can be used to verify the CRC of a ZipEntry when reading from a stream, or to calculate a CRC when writing to a stream. The stream should be used either to read or to write, but not both. If you intermix reads and writes, the results are not defined. This class is intended primarily for internal use by the DotNetZip library. The default constructor. Instances returned from this constructor will leave the underlying stream open upon Close(). The stream uses the default CRC32 algorithm, which implies a polynomial of 0xEDB88320. The underlying stream The constructor allows the caller to specify how to handle the underlying stream at close. The stream uses the default CRC32 algorithm, which implies a polynomial of 0xEDB88320. The underlying stream true to leave the underlying stream open upon close of the CrcCalculatorStream; false otherwise. A constructor allowing the specification of the length of the stream to read. The stream uses the default CRC32 algorithm, which implies a polynomial of 0xEDB88320. Instances returned from this constructor will leave the underlying stream open upon Close(). The underlying stream The length of the stream to slurp A constructor allowing the specification of the length of the stream to read, as well as whether to keep the underlying stream open upon Close().
The stream uses the default CRC32 algorithm, which implies a polynomial of 0xEDB88320. The underlying stream The length of the stream to slurp true to leave the underlying stream open upon close of the CrcCalculatorStream; false otherwise. A constructor allowing the specification of the length of the stream to read, as well as whether to keep the underlying stream open upon Close(), and the CRC32 instance to use. The stream uses the specified CRC32 instance, which allows the application to specify how the CRC gets calculated. The underlying stream The length of the stream to slurp true to leave the underlying stream open upon close of the CrcCalculatorStream; false otherwise. the CRC32 instance to use to calculate the CRC32 Gets the total number of bytes run through the CRC32 calculator. This is either the total number of bytes read, or the total number of bytes written, depending on the direction of this stream. Provides the current CRC for all blocks slurped in. The running total of the CRC is kept as data is written or read through the stream. Read this property after all reads or writes to get an accurate CRC for the entire stream. Indicates whether the underlying stream will be left open when the CrcCalculatorStream is Closed. Set this at any point before calling Close(). Read from the stream the buffer to read the offset at which to start the number of bytes to read the number of bytes actually read Write to the stream. the buffer from which to write the offset at which to start writing the number of bytes to write Indicates whether the stream supports reading. Indicates whether the stream supports seeking. Always returns false. Indicates whether the stream supports writing. Flush the stream. Returns the length of the underlying stream. The getter for this property returns the total bytes read. If you use the setter, it will throw an exception. Seeking is not supported on this stream; this method always throws an exception. This method always throws an exception.
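The decorator-stream idea behind CrcCalculatorStream (wrap a stream, update a running CRC on every read, track the byte count) can be sketched in Python. The CrcReader class name is hypothetical, not part of any library; it covers only the read side:

```python
import io
import zlib

class CrcReader:
    """Minimal read-only analogue of CrcCalculatorStream: updates a
    running CRC-32 on every read and counts the bytes seen."""
    def __init__(self, stream):
        self._stream = stream
        self.crc = 0              # running CRC-32 of everything read so far
        self.total_bytes = 0      # analogue of TotalBytesSlurped

    def read(self, n=-1):
        chunk = self._stream.read(n)
        self.crc = zlib.crc32(chunk, self.crc)
        self.total_bytes += len(chunk)
        return chunk

src = io.BytesIO(b"payload bytes")
r = CrcReader(src)
while r.read(4):                  # drain the stream in 4-byte reads
    pass

assert r.crc == zlib.crc32(b"payload bytes")
assert r.total_bytes == 13
```

As with CrcCalculatorStream, the CRC property is only the CRC of the whole input once the stream has been read to the end.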