Recognize Unicode escape sequences in string literals

Supported in Synergy .NET on Windows

The .UNICODE preprocessor directive provides a way to create strings with embedded Unicode (UTF-16) characters by enabling recognition of Unicode escape sequences in string literals for the remainder of the source file. The .NOUNICODE directive turns .UNICODE off.

The presence or absence of .UNICODE determines whether escape sequences are recognized or interpreted literally. When .UNICODE is active, a “\u” followed by a four-digit hexadecimal number in a string literal is replaced by the corresponding UTF-16 character, and the literal itself is composed of UTF-16 characters. For example, the following string literal embeds the 16-bit Unicode (UTF-16) character called the double prime (″), whose value is 2033 hex:

strvar = "This is a Unicode \u2033 string literal"
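In a full source file, the directive precedes the literals it affects. A minimal sketch of how this might look in a Synergy .NET routine follows; the variable declaration and the console output are illustrative, not part of the original example:

.UNICODE                ; recognize \u escapes from this point on

main
record
    strvar      ,string
proc
    ; \u2033 is replaced by the double prime character (U+2033)
    strvar = "This is a Unicode \u2033 string literal"
    System.Console.WriteLine(strvar)
endmain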

Literals that contain Unicode escape sequences are intended to be assigned to .NET string types (System.String).

If .UNICODE is not active, “\u” is not recognized as an escape sequence, and the string literal is created verbatim, including the “\u” and any digits that follow.
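The difference between the two states can be sketched as follows (the variable names are illustrative):

.UNICODE
str1 = "A \u2033 mark"      ; escape recognized: the literal embeds U+2033
.NOUNICODE
str2 = "A \u2033 mark"      ; escape not recognized: the literal contains
                            ; the characters \, u, 2, 0, 3, 3 verbatim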