fltk/utf.h File Reference

#include "FL_API.h"
#include <stdlib.h>

Functions

const char * utf8back (const char *, const char *start, const char *end)
int utf8bytes (unsigned ucs)
unsigned utf8decode (const char *, const char *end, int *len)
int utf8encode (unsigned, char *)
unsigned utf8froma (char *, unsigned, const char *, unsigned)
unsigned utf8frommb (char *, unsigned, const char *, unsigned)
unsigned utf8fromwc (char *, unsigned, const wchar_t *, unsigned)
const char * utf8fwd (const char *, const char *start, const char *end)
int utf8locale ()
int utf8test (const char *, unsigned)
unsigned utf8toa (const char *, unsigned, char *, unsigned)
unsigned utf8tomb (const char *, unsigned, char *, unsigned)
unsigned utf8towc (const char *, unsigned, wchar_t *, unsigned)

Detailed Description

Functions to manipulate UTF-8 strings and convert from/to legacy encodings. These functions are not in the fltk namespace.


Function Documentation

const char* utf8back ( const char *  p,
const char *  start,
const char *  end 
)

Move p backward until it points to the start of a UTF-8 character. If it already points at the start of one then it is returned unchanged. Any UTF-8 errors are treated as though each byte of the error is an individual character.

start is the start of the string and is used to limit the backwards search for the start of a UTF-8 character.

end is the end of the string and is assumed to be a break between characters. It is assumed to be greater than p.

If you wish to decrement a UTF-8 pointer, pass p-1 to this.

int utf8bytes ( unsigned  ucs)

Returns number of bytes that utf8encode() will use to encode the character ucs.
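As a sketch, the rule follows the standard UTF-8 length boundaries (my_utf8bytes is illustrative, not FLTK's code; treating values above 0x10ffff as U+FFFD, to match utf8encode(), is an assumption):

```c
/* Sketch of the length rule (my_utf8bytes is a hypothetical
   re-implementation; the out-of-range case is an assumption). */
static int my_utf8bytes(unsigned ucs) {
    if (ucs < 0x80)      return 1;  /* ASCII: single byte */
    if (ucs < 0x800)     return 2;
    if (ucs < 0x10000)   return 3;
    if (ucs <= 0x10ffff) return 4;
    return 3;                       /* illegal: encoded as 0xFFFD */
}
```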

unsigned utf8decode ( const char *  p,
const char *  end,
int *  len 
)

Decode a single UTF-8 encoded character starting at p. The resulting Unicode value (in the range 0-0x10ffff) is returned, and len is set to the number of bytes in the UTF-8 encoding (adding len to p will point at the next character).

If p points at an illegal UTF-8 encoding, including one that would go past end, or one where a code uses more bytes than necessary, then *(unsigned char*)p is translated as though it is in the Microsoft CP1252 character set and len is set to 1. Treating errors this way allows this function to decode almost any ISO-8859-1 or CP1252 text that has been mistakenly placed where UTF-8 is expected, and has proven very useful.

If you want errors to be converted to error characters (as the standards recommend), adding a test to see if the length is unexpectedly 1 will work:

    if (*p & 0x80) { // what should be a multibyte encoding
      code = utf8decode(p,end,&len);
      if (len<2) code = 0xFFFD; // Turn errors into REPLACEMENT CHARACTER
    } else { // handle the 1-byte utf8 encoding:
      code = *p;
      len = 1;
    }

Direct testing for the 1-byte case (as shown above) will also speed up the scanning of strings where the majority of characters are ASCII.

int utf8encode ( unsigned  ucs,
char *  buf 
)

Write the UTF-8 encoding of ucs into buf and return the number of bytes written. Up to 4 bytes may be written. If you know that ucs is less than 0x10000 then at most 3 bytes will be written. If you wish to speed this up, remember that anything less than 0x80 is written as a single byte.

If ucs is greater than 0x10ffff it is an illegal character according to RFC 3629. Such values are converted as though they are 0xFFFD (REPLACEMENT CHARACTER).

RFC 3629 also says many other values for ucs are illegal (in the range 0xd800 to 0xdfff, or ending with 0xfffe or 0xffff). However I encode these as though they are legal, so that utf8encode/utf8decode will be the identity for all codes between 0 and 0x10ffff.
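A minimal sketch of the bit layout described above (my_utf8encode is a hypothetical re-implementation for illustration, not FLTK's source):

```c
/* Sketch of UTF-8 encoding: the leading byte carries the length in
   its high bits, continuation bytes carry 6 payload bits each. */
static int my_utf8encode(unsigned ucs, char *buf) {
    if (ucs > 0x10ffff) ucs = 0xFFFD;  /* illegal per RFC 3629: use REPLACEMENT CHARACTER */
    if (ucs < 0x80) {                  /* ASCII: single byte */
        buf[0] = (char)ucs;
        return 1;
    }
    if (ucs < 0x800) {
        buf[0] = (char)(0xc0 | (ucs >> 6));
        buf[1] = (char)(0x80 | (ucs & 0x3f));
        return 2;
    }
    if (ucs < 0x10000) {
        buf[0] = (char)(0xe0 | (ucs >> 12));
        buf[1] = (char)(0x80 | ((ucs >> 6) & 0x3f));
        buf[2] = (char)(0x80 | (ucs & 0x3f));
        return 3;
    }
    buf[0] = (char)(0xf0 | (ucs >> 18));
    buf[1] = (char)(0x80 | ((ucs >> 12) & 0x3f));
    buf[2] = (char)(0x80 | ((ucs >> 6) & 0x3f));
    buf[3] = (char)(0x80 | (ucs & 0x3f));
    return 4;
}
```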

unsigned utf8froma ( char *  dst,
unsigned  dstlen,
const char *  src,
unsigned  srclen 
)

Convert an ISO-8859-1 (i.e. a normal C string) byte stream to UTF-8.

It is possible this should convert Microsoft's CP1252 to UTF-8 instead. This would translate the codes in the range 0x80-0x9f to different characters. Currently it does not do this.

Up to dstlen bytes are written to dst, including a null terminator. The return value is the number of bytes that would be written, not counting the null terminator. If it is greater than or equal to dstlen, mallocing a new array of size return+1 will give you the space needed for the entire string. If dstlen is zero then nothing is written and the call just measures the storage space needed.

srclen is the number of bytes in src to convert.

If the return value equals srclen then this indicates that no conversion is necessary, as only ASCII characters are in the string.
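The calling convention can be sketched with a hypothetical stand-in (my_utf8froma is illustrative only, not FLTK's code; writing only whole characters that fit, plus a null, is an assumption about the truncation behaviour):

```c
/* Stand-in with the same calling convention as utf8froma().
   ISO-8859-1 bytes below 0x80 copy through; the rest become two
   UTF-8 bytes.  The full size needed is always returned, even
   when dst is too small to hold it. */
static unsigned my_utf8froma(char *dst, unsigned dstlen,
                             const char *src, unsigned srclen) {
    unsigned needed = 0, written = 0, i;
    for (i = 0; i < srclen; i++) {
        unsigned char c = (unsigned char)src[i];
        unsigned n = (c < 0x80) ? 1 : 2;      /* bytes this char needs */
        if (dstlen && written + n < dstlen) { /* keep room for the null */
            if (n == 1) {
                dst[written] = (char)c;
            } else {
                dst[written]     = (char)(0xc0 | (c >> 6));
                dst[written + 1] = (char)(0x80 | (c & 0x3f));
            }
            written += n;
        }
        needed += n;
    }
    if (dstlen) dst[written] = 0;
    return needed;
}
```

Called once with dstlen of zero it just measures; mallocing return+1 bytes and calling it again converts the whole string, which is the pattern all the conversion functions here document.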

unsigned utf8frommb ( char *  dst,
unsigned  dstlen,
const char *  src,
unsigned  srclen 
)

Convert a filename from the locale-specific multibyte encoding used by Windows to UTF-8 as used by FLTK.

Up to dstlen bytes are written to dst, including a null terminator. The return value is the number of bytes that would be written, not counting the null terminator. If it is greater than or equal to dstlen, mallocing a new array of size return+1 will give you the space needed for the entire string. If dstlen is zero then nothing is written and the call just measures the storage space needed.

On Unix or on Windows when a UTF-8 locale is in effect, this does not change the data. It is copied and truncated as necessary to the destination buffer and srclen is always returned. You may also want to check if utf8test() returns non-zero, so that the filesystem can store filenames in UTF-8 encoding regardless of the locale.

unsigned utf8fromwc ( char *  dst,
unsigned  dstlen,
const wchar_t *  src,
unsigned  srclen 
)

Turn "wide characters" as returned by some system calls (especially on Windows) into UTF-8.

Up to dstlen bytes are written to dst, including a null terminator. The return value is the number of bytes that would be written, not counting the null terminator. If it is greater than or equal to dstlen, mallocing a new array of size return+1 will give you the space needed for the entire string. If dstlen is zero then nothing is written and the call just measures the storage space needed.

srclen is the number of words in src to convert. On Windows this is not necessarily the number of characters, due to there possibly being "surrogate pairs" in the UTF-16 encoding used. On Unix wchar_t is 32 bits and each location is a character.

On Unix if a src word is greater than 0x10ffff then this is an illegal character according to RFC 3629. These are converted as though they are 0xFFFD (REPLACEMENT CHARACTER). Characters in the range 0xd800 to 0xdfff, or ending with 0xfffe or 0xffff are also illegal according to RFC 3629. However I encode these as though they are legal, so that utf8towc will return the original data.

On Windows "surrogate pairs" are converted to a single character and UTF-8 encoded (as 4 bytes). Mismatched halves of surrogate pairs are converted as though they are individual characters.
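The combination described above can be sketched as follows (combine_surrogates is a hypothetical helper, not part of fltk/utf.h; it assumes hi is in 0xd800..0xdbff and lo is in 0xdc00..0xdfff):

```c
/* A UTF-16 surrogate pair carries 10 payload bits in each half;
   together they encode a code point in 0x10000..0x10ffff. */
static unsigned combine_surrogates(unsigned hi, unsigned lo) {
    return 0x10000 + ((hi - 0xd800) << 10) + (lo - 0xdc00);
}
```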

const char* utf8fwd ( const char *  p,
const char *  start,
const char *  end 
)

Move p forward until it points to the start of a UTF-8 character. If it already points at the start of one then it is returned unchanged. Any UTF-8 errors are treated as though each byte of the error is an individual character.

start is the start of the string and is used to limit the search for the start of a UTF-8 character.

end is the end of the string and is assumed to be a break between characters. It is assumed to be greater than p.

This function is for moving a pointer that was jumped to the middle of a string, such as when doing a binary search for a position. You should use either this or utf8back() depending on which direction your algorithm can handle the pointer moving. Do not use this to scan strings; use utf8decode() instead.
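The basic byte classification such pointer-adjusting functions rely on is that a UTF-8 continuation byte has its top two bits equal to 10 (a sketch; is_utf8_continuation is not part of this header, and the real functions' handling of malformed sequences is more involved):

```c
/* A byte is a UTF-8 continuation byte iff it matches 10xxxxxx;
   a byte that does not match can start a character. */
static int is_utf8_continuation(char c) {
    return ((unsigned char)c & 0xc0) == 0x80;
}
```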

int utf8locale ( void  )

Return true if the "locale" seems to indicate that UTF-8 encoding is used. If true, utf8tomb() and utf8frommb() don't do anything useful.

It is highly recommended that you change your system so this does return true. On Windows this is done by setting the "codepage" to CP_UTF8. On Unix this is done by setting $LC_CTYPE to a string containing the letters "utf" or "UTF" in it, or by deleting all $LC* and $LANG environment variables. In the future it is likely that all non-Asian Unix systems will return true, due to the compatibility of UTF-8 with ISO-8859-1.

int utf8test ( const char *  src,
unsigned  srclen 
)

Examines the first srclen bytes in src and returns a verdict on whether it is UTF-8 or not.

  • Returns 0 if there are any illegal UTF-8 sequences, using the same rules as utf8decode(). Note that some UCS values considered illegal by RFC 3629, such as 0xffff, are considered legal by this.
  • Returns 1 if there are only single-byte characters (i.e. no bytes have the high bit set). This is legal UTF-8, but also indicates plain ASCII. It also returns 1 if srclen is zero.
  • Returns 2 if there are only characters less than 0x800.
  • Returns 3 if there are only characters less than 0x10000.
  • Returns 4 if there are characters in the 0x10000 to 0x10ffff range.

Because there are many illegal sequences in UTF-8, it is almost impossible for a string in another encoding to be confused with UTF-8. This is very useful for transitioning Unix to UTF-8 filenames: you can simply test each filename with this to decide if it is UTF-8 or in the locale encoding. My hope is that if this is done we will be able to cleanly transition to a locale-less encoding.

unsigned utf8toa ( const char *  src,
unsigned  srclen,
char *  dst,
unsigned  dstlen 
)

Convert a UTF-8 sequence into an array of 1-byte characters.

If the UTF-8 decodes to a character greater than 0xff then it is replaced with '?'.

Errors in the UTF-8 are converted as individual bytes, same as utf8decode() does. This allows ISO-8859-1 text mistakenly identified as UTF-8 to be printed correctly (and possibly CP1252 on Windows).

src points at the UTF-8, and srclen is the number of bytes to convert.

Up to dstlen bytes are written to dst, including a null terminator. The return value is the number of bytes that would be written, not counting the null terminator. If it is greater than or equal to dstlen, mallocing a new array of size return+1 will give you the space needed for the entire string. If dstlen is zero then nothing is written and the call just measures the storage space needed.

unsigned utf8tomb ( const char *  src,
unsigned  srclen,
char *  dst,
unsigned  dstlen 
)

Convert the UTF-8 used by FLTK to the locale-specific encoding used for filenames (and sometimes used for data in files). Unfortunately, due to stupid design, you will have to do this as needed for filenames. This is a bug on both Unix and Windows.

Up to dstlen bytes are written to dst, including a null terminator. The return value is the number of bytes that would be written, not counting the null terminator. If it is greater than or equal to dstlen, mallocing a new array of size return+1 will give you the space needed for the entire string. If dstlen is zero then nothing is written and the call just measures the storage space needed.

If utf8locale() returns true then this does not change the data. It is copied and truncated as necessary to the destination buffer and srclen is always returned.

unsigned utf8towc ( const char *  src,
unsigned  srclen,
wchar_t *  dst,
unsigned  dstlen 
)

Convert a UTF-8 sequence into an array of wchar_t. These are used by some system calls, especially on Windows.

src points at the UTF-8, and srclen is the number of bytes to convert.

dst points at an array to write, and dstlen is the number of locations in this array. At most dstlen-1 words will be written there, plus a 0 terminating word. Thus this function will never overwrite the buffer and will always return a zero-terminated string. If dstlen is zero then dst can be null and no data is written, but the length is returned.

The return value is the number of words that would be written to dst if it were long enough, not counting the terminating zero. If the return value is greater than or equal to dstlen it indicates truncation; you can then allocate a new array of size return+1 and call this again.

Errors in the UTF-8 are converted as though each byte in the erroneous string is in the Microsoft CP1252 encoding. This allows ISO-8859-1 text mistakenly identified as UTF-8 to be printed correctly.

Notice that sizeof(wchar_t) is 2 on Windows and is 4 on Linux and most other systems. Where wchar_t is 16 bits, Unicode characters in the range 0x10000 to 0x10ffff are converted to "surrogate pairs" which take two words each (this is called UTF-16 encoding). If wchar_t is 32 bits this rather nasty problem is avoided.
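The split into a surrogate pair is the inverse of the combination described under utf8fromwc(); as a sketch (split_surrogates is a hypothetical helper, not part of fltk/utf.h, and assumes ucs is in 0x10000..0x10ffff):

```c
/* Splitting a code point above 0xffff into a UTF-16 surrogate
   pair, as happens when wchar_t is 16 bits: subtract 0x10000,
   then spread the remaining 20 bits over the two halves. */
static void split_surrogates(unsigned ucs, unsigned short out[2]) {
    ucs -= 0x10000;                                    /* 20 bits remain */
    out[0] = (unsigned short)(0xd800 + (ucs >> 10));   /* high surrogate */
    out[1] = (unsigned short)(0xdc00 + (ucs & 0x3ff)); /* low surrogate */
}
```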