When overridden in a derived class, decodes a sequence of bytes starting at the specified byte pointer into a set of characters that are stored starting at the specified character pointer.
The actual number of characters written at the location indicated by the chars parameter.
To calculate the exact array size that Encoding.GetChars(Byte[]) requires to store the resulting characters, the application should use Encoding.GetCharCount(Byte[]). To calculate the maximum array size, the application should use Encoding.GetMaxCharCount(int). The Encoding.GetCharCount(Byte[]) method generally allows allocation of less memory, while the Encoding.GetMaxCharCount(int) method generally executes faster.
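The trade-off between the two sizing methods can be sketched as follows (a minimal illustration using a hypothetical UTF-8 input string):

```csharp
using System;
using System.Text;

// Hypothetical input: UTF-8 bytes for "café" (the 'é' occupies two bytes).
byte[] bytes = Encoding.UTF8.GetBytes("café");

// Exact size: requires an extra pass over the data, but wastes no memory.
int exact = Encoding.UTF8.GetCharCount(bytes);

// Worst-case size: computed from the byte count alone, so it is faster,
// but it may allocate more than is actually needed.
int max = Encoding.UTF8.GetMaxCharCount(bytes.Length);

char[] chars = new char[exact];
int written = Encoding.UTF8.GetChars(bytes, 0, bytes.Length, chars, 0);
Console.WriteLine(new string(chars, 0, written)); // café
```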
Encoding.GetChars(Byte*, int, Char*, int) gets characters from an input byte sequence. Encoding.GetChars(Byte*, int, Char*, int) differs from Decoder.GetChars(Byte[], int, int, Char[], int) because System.Text.Encoding expects discrete conversions, while System.Text.Decoder is designed for multiple passes on a single input stream.
If the data to be converted is available only in sequential blocks (such as data read from a stream) or if the amount of data is so large that it needs to be divided into smaller blocks, the application should use the System.Text.Decoder or the System.Text.Encoder object provided by the Encoding.GetDecoder or the Encoding.GetEncoder method, respectively, of a derived class.
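The Decoder's ability to carry state across sequential blocks can be sketched as follows. The chunk boundary here is a hypothetical split chosen to fall in the middle of a multibyte sequence:

```csharp
using System;
using System.Text;

// Hypothetical scenario: the input arrives in two chunks, split in the
// middle of the two-byte UTF-8 sequence for 'é' (0xC3 0xA9).
byte[] all = Encoding.UTF8.GetBytes("café");
byte[] chunk1 = new byte[4]; Array.Copy(all, 0, chunk1, 0, 4);
byte[] chunk2 = new byte[1]; Array.Copy(all, 4, chunk2, 0, 1);

// A Decoder carries the trailing partial sequence over to the next call.
Decoder decoder = Encoding.UTF8.GetDecoder();
char[] buffer = new char[8];
var sb = new StringBuilder();

int n1 = decoder.GetChars(chunk1, 0, chunk1.Length, buffer, 0);
sb.Append(buffer, 0, n1); // "caf" — the partial 'é' is held back
int n2 = decoder.GetChars(chunk2, 0, chunk2.Length, buffer, 0);
sb.Append(buffer, 0, n2); // 'é' completes on the second call

Console.WriteLine(sb.ToString()); // café
```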
Note: This method is intended to operate on Unicode characters, not on arbitrary binary data, such as byte arrays. If your application needs to encode arbitrary binary data into text, it should use a scheme such as Base64 encoding, which is implemented by methods such as Convert.ToBase64CharArray(Byte[], int, int, Char[], int).

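A minimal sketch of the Base64 route for binary data (the byte values here are arbitrary illustrative data):

```csharp
using System;

// Arbitrary binary data should be converted to text with Base64,
// not decoded through a character encoding.
byte[] binary = { 0x00, 0xFF, 0x10, 0x80 };

// Base64 produces 4 output chars per 3 input bytes, rounded up.
char[] encoded = new char[8];
int len = Convert.ToBase64CharArray(binary, 0, binary.Length, encoded, 0);
string text = new string(encoded, 0, len);

// The text form round-trips back to the original bytes.
byte[] roundTrip = Convert.FromBase64String(text);
```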
The Encoding.GetCharCount(Byte[]) method determines how many characters result from decoding a sequence of bytes, and the Encoding.GetChars(Byte[]) method performs the actual decoding. The Encoding.GetChars(Byte[]) method expects discrete conversions, in contrast to the Decoder.GetChars(Byte[], int, int, Char[], int) method, which handles multiple passes on a single input stream.
Several versions of Encoding.GetCharCount(Byte[]) and Encoding.GetChars(Byte[]) are supported. The following are some programming considerations for use of these methods:
The application might need to decode multiple input bytes from a code page and process the bytes using multiple calls. In this case, your application probably needs to maintain state between calls, because byte sequences can be interrupted when processed in batches. (For example, part of an ISO-2022 shift sequence may end one Encoding.GetChars(Byte*, int, Char*, int) call and continue at the beginning of the next Encoding.GetChars(Byte*, int, Char*, int) call. Encoding.GetChars(Byte*, int, Char*, int) will call the fallback for those incomplete sequences, but System.Text.Decoder will remember those sequences for the next call.)
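The difference in fallback behavior for an incomplete sequence can be sketched as follows. The truncated byte array is a hypothetical example ending on a dangling UTF-8 lead byte:

```csharp
using System;
using System.Text;

// Hypothetical illustration: the same truncated UTF-8 bytes (missing the
// second byte of 'é') decoded two ways.
byte[] truncated = { (byte)'c', (byte)'a', (byte)'f', 0xC3 }; // 0xC3 starts 'é'

// Encoding treats each call as a complete conversion: the dangling lead
// byte goes through the fallback and becomes U+FFFD.
string viaEncoding = Encoding.UTF8.GetString(truncated);

// A Decoder remembers the dangling byte and waits for the next call.
Decoder decoder = Encoding.UTF8.GetDecoder();
char[] buffer = new char[8];
int n = decoder.GetChars(truncated, 0, truncated.Length, buffer, 0);
string viaDecoder = new string(buffer, 0, n);

Console.WriteLine(viaEncoding); // "caf" plus the replacement character
Console.WriteLine(viaDecoder);  // "caf" — nothing emitted for the partial byte yet
```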
If the application handles string outputs, the Encoding.GetString(Byte[]) method is recommended. Because this method must determine the length of the result and allocate a buffer, it is slightly slower, but the resulting String is generally preferable to a raw character array.
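A minimal sketch of the string-output route, using a hypothetical UTF-8 input:

```csharp
using System;
using System.Text;

// GetString sizes and allocates the result itself, so no char
// buffer management is needed.
byte[] bytes = Encoding.UTF8.GetBytes("café");
string s = Encoding.UTF8.GetString(bytes);
Console.WriteLine(s); // café
```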
The byte pointer version of Encoding.GetChars(Byte*, int, Char*, int) allows some fast techniques, particularly with multiple calls to large buffers. Bear in mind, however, that this overload requires unsafe code, because it takes pointers.
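A sketch of the pointer overload is shown below. This requires compiling with unsafe code enabled; the input string is a hypothetical example:

```csharp
using System;
using System.Text;

// The byte and char arrays are pinned with `fixed` and passed by pointer.
byte[] bytes = Encoding.UTF8.GetBytes("café");
char[] chars = new char[Encoding.UTF8.GetMaxCharCount(bytes.Length)];
int written;

unsafe
{
    fixed (byte* pBytes = bytes)
    fixed (char* pChars = chars)
    {
        written = Encoding.UTF8.GetChars(pBytes, bytes.Length, pChars, chars.Length);
    }
}

Console.WriteLine(new string(chars, 0, written)); // café
```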
If your application must convert a large amount of data, it should reuse the output buffer. In this case, the Encoding.GetChars(Byte[], int, int, Char[], int) version that supports output character buffers is the best choice.
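A sketch of the buffer-reuse pattern, assuming (hypothetically) that chunk boundaries never split a multibyte sequence — otherwise a Decoder is required, as described above:

```csharp
using System;
using System.Text;

// One char buffer, sized with GetMaxCharCount for the largest expected
// chunk, is reused across every conversion instead of being reallocated.
Encoding enc = Encoding.UTF8;
byte[][] chunks = { enc.GetBytes("hello "), enc.GetBytes("world") };

const int MaxChunkBytes = 16;
char[] buffer = new char[enc.GetMaxCharCount(MaxChunkBytes)]; // allocated once
var sb = new StringBuilder();

foreach (byte[] chunk in chunks)
{
    int written = enc.GetChars(chunk, 0, chunk.Length, buffer, 0);
    sb.Append(buffer, 0, written);
}
Console.WriteLine(sb); // hello world
```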
Consider using the Decoder.Convert method instead of Encoding.GetCharCount(Byte[]). Decoder.Convert converts as much data as fits in the output buffer and reports how much of the input it consumed; it throws an exception only when the output buffer is too small to hold any converted data. For continuous decoding of a stream, this method is often the best choice.
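The streaming loop with Decoder.Convert can be sketched as follows. The deliberately undersized output buffer is a hypothetical choice to force multiple iterations:

```csharp
using System;
using System.Text;

// Decoder.Convert fills the output buffer as far as it can and reports,
// via out parameters, how much input it consumed and whether it finished.
byte[] bytes = Encoding.UTF8.GetBytes("café");
Decoder decoder = Encoding.UTF8.GetDecoder();

char[] buffer = new char[2]; // deliberately smaller than the full output
var sb = new StringBuilder();
int offset = 0;
bool completed = false;

while (!completed)
{
    decoder.Convert(bytes, offset, bytes.Length - offset,
                    buffer, 0, buffer.Length,
                    flush: true,
                    out int bytesUsed, out int charsUsed, out completed);
    sb.Append(buffer, 0, charsUsed);
    offset += bytesUsed;
}
Console.WriteLine(sb); // café
```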