What exactly does String.codePointAt do?

Short answer: it gives you the Unicode codepoint that starts at the specified index in the String, i.e. the “Unicode number” of the character at that position.
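For example, a minimal sketch (the class name and sample string are just for illustration; for a character inside the BMP the result is simply the numeric value of the char at that index):

```java
public class CodePointAtDemo {
    public static void main(String[] args) {
        String s = "Hello";
        // The codepoint starting at index 1 is 'e', i.e. U+0065
        System.out.println(s.codePointAt(1));        // 101
        System.out.println((char) s.codePointAt(1)); // e
    }
}
```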

Longer answer: Java was created at a time when 16 bits (i.e. one char) were enough to hold any Unicode character that existed (that range is now known as the Basic Multilingual Plane, or BMP). Later, Unicode was extended to include characters with codepoints ≥ 2¹⁶ (i.e. above U+FFFF). This means that a char could no longer hold all possible Unicode codepoints.

UTF-16 was the solution: it stores the “old” Unicode codepoints in 16 bits (i.e. exactly one char) and all the new ones in 32 bits (i.e. two char values). Those two 16-bit values are called a “surrogate pair”. Strictly speaking, a char now holds a “UTF-16 code unit” rather than “a Unicode character” as it used to.
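To make that concrete, here’s a small sketch (U+1D11E MUSICAL SYMBOL G CLEF is just one arbitrary example of a non-BMP codepoint, and the class name is only for illustration):

```java
public class SurrogatePairDemo {
    public static void main(String[] args) {
        // U+1D11E lies outside the BMP, so in UTF-16 it is stored as two code units
        String clef = "\uD834\uDD1E";

        System.out.println(clef.length());                          // 2 -> two char values (code units)
        System.out.println(clef.codePointCount(0, clef.length()));  // 1 -> but only one codepoint
        System.out.printf("U+%X%n", (int) clef.charAt(0));          // U+D834, the high surrogate
        System.out.printf("U+%X%n", clef.codePointAt(0));           // U+1D11E, the actual codepoint
    }
}
```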

All the “old” methods (which handle only char values) work just fine as long as you don’t use any of the “new” Unicode characters (or don’t really care about them). But if you do care about the new characters (or simply need complete Unicode support), you’ll need to use the “codepoint” variants, which support all possible Unicode codepoints.
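As a sketch of the difference, here is a char-based loop next to a codepoint-based one (the test string and class name are arbitrary; the string contains one non-BMP character):

```java
public class CodePointLoopDemo {
    public static void main(String[] args) {
        String text = "a\uD83D\uDE00b"; // "a😀b" with the emoji written as its surrogate pair

        // The char-based view: 4 code units, including the two surrogate halves
        for (int i = 0; i < text.length(); i++) {
            System.out.printf("char at %d: U+%04X%n", i, (int) text.charAt(i));
        }

        // The codepoint-based view: the 3 actual Unicode characters
        for (int i = 0; i < text.length(); ) {
            int cp = text.codePointAt(i);
            System.out.printf("codepoint at %d: U+%04X%n", i, cp);
            i += Character.charCount(cp); // advance by 1 or 2 char positions
        }

        // Since Java 8 there is also a stream of codepoints
        text.codePoints().forEach(cp -> System.out.printf("U+%04X%n", cp));
    }
}
```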

Note: A very well known example of Unicode characters that are not in the BMP (i.e. that only work correctly with the codepoint variants) are Emojis: even the simple Grinning Face 😀 (U+1F600) can’t be represented in a single char.
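A quick sketch of that emoji case (again, the class name is only for illustration):

```java
public class EmojiDemo {
    public static void main(String[] args) {
        String grin = "\uD83D\uDE00"; // 😀 U+1F600 written as its surrogate pair

        System.out.println(grin.length());                  // 2 -> one emoji, but two char values
        System.out.printf("U+%X%n", (int) grin.charAt(0));  // U+D83D, only the high surrogate
        System.out.printf("U+%X%n", grin.codePointAt(0));   // U+1F600, the real codepoint
    }
}
```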
