- bytes
- The byte array containing the sequence of bytes to decode.
- byteIndex
- The index of the first byte to decode.
- byteCount
- The number of bytes to decode.
- chars
- The character array to contain the resulting set of characters.
- charIndex
- The index at which to start writing the resulting set of characters.
The actual number of characters written into chars.
To calculate the exact array size required by UTF7Encoding.GetChars(Byte[], Int32, Int32, Char[], Int32) to store the resulting characters, use UTF7Encoding.GetCharCount(Byte[], Int32, Int32). To calculate the maximum array size, the application should use UTF7Encoding.GetMaxCharCount(Int32). The UTF7Encoding.GetCharCount(Byte[], Int32, Int32) method generally allows the allocation of less memory, while the UTF7Encoding.GetMaxCharCount(Int32) method generally executes faster.
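For example, the following sketch (the input string is illustrative) uses UTF7Encoding.GetCharCount to size the destination array exactly before calling UTF7Encoding.GetChars:

```csharp
using System;
using System.Text;

class GetCharsExample
{
    static void Main()
    {
        UTF7Encoding utf7 = new UTF7Encoding();

        // Encode an illustrative string so there are UTF-7 bytes to decode.
        byte[] bytes = utf7.GetBytes("Pi \u03A0");

        // Size the destination array exactly with GetCharCount,
        // then decode into it with GetChars.
        char[] chars = new char[utf7.GetCharCount(bytes, 0, bytes.Length)];
        int charsDecoded = utf7.GetChars(bytes, 0, bytes.Length, chars, 0);

        Console.WriteLine(new string(chars, 0, charsDecoded)); // Pi Π
    }
}
```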
Data to be converted, such as data read from a stream, might be available only in sequential blocks. In this case, or if the amount of data is so large that it needs to be divided into smaller blocks, the application should use the System.Text.Decoder or System.Text.Encoder object provided by the UTF7Encoding.GetDecoder or UTF7Encoding.GetEncoder method, respectively, as in the sketch below.
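A minimal sketch of block-wise decoding with the Decoder returned by UTF7Encoding.GetDecoder; the block size and input are illustrative. Unlike UTF7Encoding.GetChars, the Decoder preserves state, such as an open shift sequence, across calls:

```csharp
using System;
using System.Text;

class DecoderExample
{
    static void Main()
    {
        UTF7Encoding utf7 = new UTF7Encoding();
        byte[] bytes = utf7.GetBytes("Sequential block example: \u03A0");

        // The Decoder carries partial shift sequences from one
        // block to the next, so the bytes can arrive in pieces.
        Decoder decoder = utf7.GetDecoder();
        int blockSize = 4; // illustrative block size

        for (int i = 0; i < bytes.Length; i += blockSize)
        {
            int count = Math.Min(blockSize, bytes.Length - i);
            char[] chars = new char[decoder.GetCharCount(bytes, i, count)];
            int written = decoder.GetChars(bytes, i, count, chars, 0);
            Console.Write(chars, 0, written);
        }
        Console.WriteLine();
    }
}
```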
System.Text.UTF7Encoding does not provide error detection. When invalid bytes are encountered, System.Text.UTF7Encoding generally emits them unchanged. If a byte value is greater than 0x7F, it is zero-extended into a Unicode character, the result is stored in the chars array, and any shift sequence is terminated. For example, if the byte to decode is 0x81, the resulting character is U+0081. For security reasons, it is recommended that applications use System.Text.UTF8Encoding, System.Text.UnicodeEncoding, or System.Text.UTF32Encoding and enable error detection.
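The following sketch illustrates the zero-extension behavior described above; the byte values are illustrative:

```csharp
using System;
using System.Text;

class InvalidByteExample
{
    static void Main()
    {
        UTF7Encoding utf7 = new UTF7Encoding();

        // 0x81 is not a valid UTF-7 byte; per the behavior described
        // above, it is zero-extended to the character U+0081.
        byte[] bytes = { 0x41, 0x81 }; // 'A' followed by an invalid byte

        char[] chars = new char[utf7.GetCharCount(bytes, 0, bytes.Length)];
        int count = utf7.GetChars(bytes, 0, bytes.Length, chars, 0);

        Console.WriteLine("U+{0:X4}", (int)chars[count - 1]); // U+0081
    }
}
```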