I promised to finish the series on Unicode and UTF-8, so here is the final instalment, better late than never. Before reading this article I suggest that you read Part 1 and Part 2, which cover some important background. As usual, I’m trying to avoid simply repeating the huge wealth of information already published on this topic, but hopefully this article will provide a few additional details which may assist with understanding. I’m also leaving out a lot of detail and not taking a “rigorous” approach in my explanations, so I’d be grateful to know whether readers find it useful.
Reminder on code points: The Unicode encoding scheme assigns each character a unique integer in the range 0 to 1,114,111; each integer is called a code point.
The “TF” in UTF-8 stands for Transformation Format so, in essence, you can think of UTF-8 as a “recipe” for converting (transforming) a Unicode code point value into a sequence of 1 to 4 byte-sized chunks. Converting the smallest code points (00 to 7F) to UTF-8 generates 1 byte, whilst the highest code point values (10000 to 10FFFF) generate 4 bytes.
For example, the Arabic letter ش (“sheen”) is allocated the Unicode code point value 0634 (hex) and its representation in UTF-8 is the two-byte sequence D8 B4 (hex). In the remainder of this article I will use examples from the Unicode encoding for Arabic, which is split into 4 blocks within the Basic Multilingual Plane.
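If you want to see those two bytes in action, here is a minimal sketch in C (assuming a terminal set to UTF-8) that writes them directly to the output:

```c
#include <stdio.h>

int main(void)
{
    /* The two UTF-8 bytes for the code point 0634 ("sheen"). */
    unsigned char sheen[] = { 0xD8, 0xB4 };
    fwrite(sheen, 1, sizeof sheen, stdout); /* displays ش on a UTF-8 terminal */
    putchar('\n');
    return 0;
}
```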
Aside: refresher on hexadecimal: In technical literature discussing computer storage of numbers you will likely come across binary, octal and hexadecimal number systems. Consider the decimal number 251, which can be written as 251 = 2 × 10² + 5 × 10¹ + 1 × 10⁰ = 200 + 50 + 1. Here we are breaking 251 down into powers of 10: 10², 10¹ and 10⁰. We call 10 the base. However, we can use other bases including 2 (binary), 8 (octal) and 16 (hex). Note: x⁰ = 1 for any value of x not equal to 0.
Starting with binary (base 2) we can write 251 as

| 2⁷ | 2⁶ | 2⁵ | 2⁴ | 2³ | 2² | 2¹ | 2⁰ |
|----|----|----|----|----|----|----|----|
| 1  | 1  | 1  | 1  | 1  | 0  | 1  | 1  |

so that 251 = 11111011 in binary.
If we use 8 as the base (called octal), 251 can be written as

251 = 3 × 8² + 7 × 8¹ + 3 × 8⁰ = 3 × 64 + 7 × 8 + 3 × 1

so that 251 = 373 in octal.
If we use 16 as the base (called hexadecimal), 251 can be written as 15 × 16¹ + 11 × 16⁰. Ah, but writing 251 as “1511” in hex is very confusing and problematic. Consequently, for numbers between 10 and 15 we choose to represent them in hex as follows:
- A=10
- B=11
- C=12
- D=13
- E=14
- F=15
Consequently, 251, written in hex, is represented as F × 16¹ + B × 16⁰, so that 251 = FB in hex. Each byte can be represented by a pair of hex digits.
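As a quick check on the arithmetic above, C’s printf can display the same number in octal and hex directly (a minimal sketch):

```c
#include <stdio.h>

int main(void)
{
    /* 251 rendered in the bases discussed above. */
    printf("decimal: %d\n", 251); /* 251 */
    printf("octal:   %o\n", 251); /* 373 */
    printf("hex:     %X\n", 251); /* FB  */
    return 0;
}
```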
So where do we start?
To convert code points into UTF-8 byte sequences, the code points are divided up into the following ranges, and the UTF-8 conversion pattern shown in the table below is used to map each code point value into a series of bytes.
| Code point range | Code point binary sequences | UTF-8 bytes |
|---|---|---|
| 00 to 7F | 0xxxxxxx | 0xxxxxxx |
| 0080 to 07FF | 00000yyy yyxxxxxx | 110yyyyy 10xxxxxx |
| 0800 to FFFF | zzzzyyyy yyxxxxxx | 1110zzzz 10yyyyyy 10xxxxxx |
| 010000 to 10FFFF | 000wwwzz zzzzyyyy yyxxxxxx | 11110www 10zzzzzz 10yyyyyy 10xxxxxx |
Source: Wikipedia
Just a small point, but you’ll note that the code points in the table have a number of leading zeros, for example 0080. Recalling that a byte is a pair of hex digits, the leading zeros help to indicate the number of bytes being used to represent the numbers. For example, 0080 is two bytes (00 and 80), and you’ll see that in the second column, where the code point is written out in its binary representation.
A note on storage of integers: An extremely important topic, but not one I’m going to address in detail, is the storage of different integer types on various computer platforms: issues include the lengths of integer storage units and endianness. The interested reader can start with these articles on Wikipedia:
- Integer (computer science)
- Short integer
- Endianness
For simplicity, I will assume that the code point range 0080 to 07FF is stored in a 16-bit storage unit called an unsigned short integer.
The code point range 010000 to 10FFFF contains code points that need a maximum of 21 bits of storage (100001111111111111111 for 10FFFF) but in practice they would usually be stored in a 32-bit unsigned integer.
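If you want to check what your own platform uses, a tiny sketch with sizeof will report the storage sizes (the exact values are platform-dependent):

```c
#include <stdio.h>

int main(void)
{
    /* On most modern platforms these print 2 and 4 bytes respectively. */
    printf("unsigned short: %zu bytes\n", sizeof(unsigned short));
    printf("unsigned int:   %zu bytes\n", sizeof(unsigned int));
    return 0;
}
```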
Let’s walk through the process for the Arabic letter ش (“sheen”), which is allocated the Unicode code point 0634 (hex). Looking at our table, 0634 is in the range 0080 to 07FF, so we need to transform 0634 into 2 UTF-8 bytes.
Tip for Windows users: The calculator utility shipped with Windows will generate bit patterns for you from decimal, hex and octal numbers.
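If you are not on Windows, a small C helper can do the same job; this sketch prints the 16-bit pattern of any unsigned short:

```c
#include <stdio.h>

/* Print the 16 bits of v, most significant bit first. */
void print_bits16(unsigned short v)
{
    for (int i = 15; i >= 0; i--)
        putchar(((v >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    print_bits16(0x0634); /* prints 0000011000110100 */
    return 0;
}
```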
Looking back at the table, we note that the UTF-8 bytes are constructed from ranges of bits contained in our code points. For example, referring to the code point range 0080 to 07FF, the first UTF-8 byte 110yyyyy contains the bit range yyyyy from our code point. Recalling our (simplifying) assumption that we are storing numbers 0080 to 07FF in 16-bit integers, the first step is to write 0634 (hex) as a pattern of bits, which is the 16-bit pattern 0000011000110100.
Our task is to “extract” the bit patterns yyyyy and xxxxxx, so we place the appropriate bit pattern from the table next to our code point value:
| 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | Code point 0634 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | y | y | y | y | y | x | x | x | x | x | x | Bit pattern from the table |
By doing this we can quickly see that

- yyyyy = 11000
- xxxxxx = 110100
The UTF-8 conversion “template” for this code point value yields two separate bytes according to the pattern 110yyyyy 10xxxxxx. Hence we can write the UTF-8 bytes as 11011000 10110100 which, in hex notation, is D8 B4.
So, to transform the code point value 0634 into UTF-8 we have to generate 2 bytes by isolating the individual bit patterns of our code point value and using those bit patterns to construct two individual UTF-8 bytes. And the same general principle applies whether we need to create 2, 3 or 4 UTF-8 bytes for a particular code point: just follow the appropriate conversion pattern in the table. Of course, the conversion is trivial for 00 to 7F and is just the value of the code point itself.
How do we do this programmatically?
In C this is achieved by “bit masking” and “bit shifting”, which are fast, low-level operations. One simple algorithm, sketched in code just after this list, could be:
- Apply a bit mask to the code point to isolate the bits of interest.
- If required, apply a right shift operator (>>) to “shuffle” the bit pattern to the right.
- Add the appropriate quantity to give the UTF-8 value.
- Store the result in a byte.
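To make the steps concrete before we dig into the details, here is a sketch applying them to 0634 to build the first UTF-8 byte (the mask 0x07C0 and the additive value 0xC0 are derived in the subsections below):

```c
#include <stdio.h>

int main(void)
{
    unsigned short p = 0x0634; /* code point for "sheen" */

    unsigned short masked  = p & 0x07C0;           /* 1. isolate yyyyy        */
    unsigned short shifted = masked >> 6;          /* 2. shuffle to the right */
    unsigned short value   = shifted + 0xC0;       /* 3. add the UTF-8 prefix */
    unsigned char  byte1   = (unsigned char)value; /* 4. store in a byte      */

    printf("%X\n", byte1); /* prints D8 */
    return 0;
}
```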
Bit masking
Bit masking uses the binary AND operator (&) which has the following properties:
- 1 & 1 = 1
- 1 & 0 = 0
- 0 & 1 = 0
- 0 & 0 = 0
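In C the & operator applies this rule to every pair of bits in its operands at once, for example:

```c
#include <stdio.h>

int main(void)
{
    /* Each bit of the result is the AND of the corresponding bits:
       11111111 & 00001111 = 00001111 */
    printf("%X\n", 0xFF & 0x0F); /* prints F */
    return 0;
}
```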
We can use this property of the & operator to isolate individual bit patterns in a number by using a suitable bit mask which zeros out all but the bits we want to keep. From our table, code point values in the range 0080 to 07FF have a general 16-bit pattern represented as 00000yyy yyxxxxxx.
We want to extract the two series of bit patterns, yyyyy and xxxxxx, from our code point value so that we can use them to create two separate UTF-8 bytes:

- UTF-8 byte 1 = 110yyyyy
- UTF-8 byte 2 = 10xxxxxx
Isolating yyyyy
To isolate yyyyy we can use the following bit mask with the & operator.
The masking value is the 16-bit pattern 0000011111000000 = 0x07C0 (a hex number in C notation). Applying it to the generic bit pattern:
| 0 | 0 | 0 | 0 | 0 | y | y | y | y | y | x | x | x | x | x | x | Generic bit pattern |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| & | & | & | & | & | & | & | & | & | & | & | & | & | & | & | & | Binary AND operator |
| 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Bit mask |
| 0 | 0 | 0 | 0 | 0 | y | y | y | y | y | 0 | 0 | 0 | 0 | 0 | 0 | Result of operation |
Note that the result of the masking operation for yyyyy leaves this bit pattern “stranded” in the middle of the number. So, we need to “shuffle” yyyyy along to the right by 6 places. To do this in C we use the right shift operator >>.
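A one-line check of the mask-and-shift for our example code point:

```c
#include <stdio.h>

int main(void)
{
    unsigned short p = 0x0634;
    /* Mask out yyyyy and shuffle it 6 places to the right. */
    printf("%X\n", (p & 0x07C0) >> 6); /* prints 18, i.e. binary 11000 */
    return 0;
}
```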
Isolating xxxxxx
To isolate xxxxxx we can use the following bit mask with the & operator.
The masking value is the 16-bit pattern 0000000000111111 = 0x003F (a hex number in C notation). Applying it to the generic bit pattern:
| 0 | 0 | 0 | 0 | 0 | y | y | y | y | y | x | x | x | x | x | x | Generic bit pattern |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| & | & | & | & | & | & | & | & | & | & | & | & | & | & | & | & | Binary AND operator |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | Bit mask |
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | x | x | x | x | x | x | Result of operation |
The result of bit masking for xxxxxx leaves it at the right, so we do not need to shuffle via the right shift operator >>.
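And the equivalent check for xxxxxx, where no shift is required:

```c
#include <stdio.h>

int main(void)
{
    unsigned short p = 0x0634;
    /* xxxxxx already sits in the lowest six bits. */
    printf("%X\n", p & 0x003F); /* prints 34, i.e. binary 110100 */
    return 0;
}
```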
Noting that

110yyyyy = 11000000 + 000yyyyy = 0xC0 + 000yyyyy

and that

10xxxxxx = 10000000 + 00xxxxxx = 0x80 + 00xxxxxx

we can summarize the process of transforming a code point between 0080 and 07FF into 2 bytes of UTF-8 data with a short snippet of C code.
```c
unsigned char arabic_utf_byte1;
unsigned char arabic_utf_byte2;
unsigned short p; /* our code point between 0080 and 07FF */

arabic_utf_byte1 = (unsigned char)(((p & 0x07C0) >> 6) + 0xC0);
arabic_utf_byte2 = (unsigned char)((p & 0x003F) + 0x80);
```
Which takes a lot less space than the explanation!
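To try the snippet out, wrap it in a small program and feed it our example code point; it reproduces the two bytes we derived by hand:

```c
#include <stdio.h>

int main(void)
{
    unsigned short p = 0x0634; /* "sheen" */

    unsigned char arabic_utf_byte1 = (unsigned char)(((p & 0x07C0) >> 6) + 0xC0);
    unsigned char arabic_utf_byte2 = (unsigned char)((p & 0x003F) + 0x80);

    printf("%X %X\n", arabic_utf_byte1, arabic_utf_byte2); /* prints D8 B4 */
    return 0;
}
```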
Other Arabic code point ranges
We have laboriously worked through the UTF-8 conversion process for code points which span the range 0080 to 07FF, a range which includes the “core” Arabic character code point range of 0600 to 06FF and the Arabic Supplement code point range of 0750 to 077F.
There are two further ranges we need to explore:
- Arabic Presentation Forms-A: FB50 to FDFF
- Arabic Presentation Forms-B: FE70 to FEFF
Looking back to our table, these two Arabic presentation form ranges fall within 0800 to FFFF, so we need to generate 3 bytes to encode them into UTF-8. The principles follow the reasoning above, so I will not repeat that here but simply offer some sample C code. Note that there is no error checking whatsoever in this code; it is simply meant to be an illustrative example and certainly needs to be improved for any form of production use.
You can download the C source and a file “arabic.txt” which contains a sample of the output from the code below. I hope it is useful.
```c
#include <stdio.h>

void presentationforms(unsigned short min, unsigned short max, FILE *arabic);
void coreandsupplement(unsigned short min, unsigned short max, FILE *arabic);

int main(void)
{
    FILE *arabic = fopen("arabic.txt", "wb");
    coreandsupplement(0x600, 0x6FF, arabic);   /* core Arabic block     */
    coreandsupplement(0x750, 0x77F, arabic);   /* Arabic Supplement     */
    presentationforms(0xFB50, 0xFDFF, arabic); /* Presentation Forms-A  */
    presentationforms(0xFE70, 0xFEFF, arabic); /* Presentation Forms-B  */
    fclose(arabic);
    return 0;
}

/* Two-byte case: code points 0080 to 07FF. */
void coreandsupplement(unsigned short min, unsigned short max, FILE *arabic)
{
    unsigned char arabic_utf_byte1;
    unsigned char arabic_utf_byte2;
    unsigned short p;

    for (p = min; p <= max; p++) {
        arabic_utf_byte1 = (unsigned char)(((p & 0x07C0) >> 6) + 0xC0);
        arabic_utf_byte2 = (unsigned char)((p & 0x003F) + 0x80);
        fwrite(&arabic_utf_byte1, 1, 1, arabic);
        fwrite(&arabic_utf_byte2, 1, 1, arabic);
    }
}

/* Three-byte case: code points 0800 to FFFF. */
void presentationforms(unsigned short min, unsigned short max, FILE *arabic)
{
    unsigned char arabic_utf_byte1;
    unsigned char arabic_utf_byte2;
    unsigned char arabic_utf_byte3;
    unsigned short p;

    for (p = min; p <= max; p++) {
        arabic_utf_byte1 = (unsigned char)(((p & 0xF000) >> 12) + 0xE0);
        arabic_utf_byte2 = (unsigned char)(((p & 0x0FC0) >> 6) + 0x80);
        arabic_utf_byte3 = (unsigned char)((p & 0x003F) + 0x80);
        fwrite(&arabic_utf_byte1, 1, 1, arabic);
        fwrite(&arabic_utf_byte2, 1, 1, arabic);
        fwrite(&arabic_utf_byte3, 1, 1, arabic);
    }
}
```
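For completeness, here is my own sketch of how the same table-driven approach generalises to all four ranges in the conversion table, including the 4-byte case that the Arabic examples never need. As with the code above, there is no error checking (for instance, it does not reject the surrogate range D800 to DFFF), so treat it as an illustration only:

```c
#include <stdio.h>

/* Encode a single code point (0 to 10FFFF) into buf, following the
   conversion table above. Returns the number of UTF-8 bytes written. */
int encode_utf8(unsigned int cp, unsigned char *buf)
{
    if (cp <= 0x7F) {
        buf[0] = (unsigned char)cp;            /* 0xxxxxxx */
        return 1;
    }
    if (cp <= 0x7FF) {
        buf[0] = (unsigned char)(((cp & 0x07C0) >> 6) + 0xC0);
        buf[1] = (unsigned char)((cp & 0x003F) + 0x80);
        return 2;
    }
    if (cp <= 0xFFFF) {
        buf[0] = (unsigned char)(((cp & 0xF000) >> 12) + 0xE0);
        buf[1] = (unsigned char)(((cp & 0x0FC0) >> 6) + 0x80);
        buf[2] = (unsigned char)((cp & 0x003F) + 0x80);
        return 3;
    }
    /* 010000 to 10FFFF: 11110www 10zzzzzz 10yyyyyy 10xxxxxx */
    buf[0] = (unsigned char)(((cp & 0x1C0000) >> 18) + 0xF0);
    buf[1] = (unsigned char)(((cp & 0x03F000) >> 12) + 0x80);
    buf[2] = (unsigned char)(((cp & 0x000FC0) >> 6) + 0x80);
    buf[3] = (unsigned char)((cp & 0x00003F) + 0x80);
    return 4;
}

int main(void)
{
    unsigned char buf[4];
    int n = encode_utf8(0x0634, buf); /* "sheen" again */
    for (int i = 0; i < n; i++)
        printf("%02X ", buf[i]);      /* prints D8 B4 */
    printf("\n");
    return 0;
}
```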