The “problem” is that a Swift Character does not directly correspond to a single Unicode
code point, but represents an “extended grapheme cluster”, which can consist of
multiple Unicode scalars. For example,
let c: Character = "🇺🇸" // REGIONAL INDICATOR SYMBOL LETTERS US
is actually a sequence of two Unicode scalars.
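You can verify this by inspecting the Unicode scalars of the character (a quick sketch; the variable name “flag” is only for illustration):
let flag: Character = "🇺🇸"
let scalars = String(flag).unicodeScalars
print(scalars.count) // 2
print(scalars.map { String($0.value, radix: 16, uppercase: true) }) // ["1F1FA", "1F1F8"]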
If you ignore this fact, you can retrieve the first Unicode scalar of the
character (compare How can I get the Unicode code point(s) of a Character?) and test its membership in a character set:
import Foundation

let c: Character = "5"
let s = String(c).unicodeScalars
let uni = s[s.startIndex]
let digits = NSCharacterSet.decimalDigitCharacterSet()
let isADigit = digits.longCharacterIsMember(uni.value)
This returns “true” for the characters “0” … “9”, but also for all other
Unicode scalars in the “decimal digit” category (Nd), for example:
let c1: Character = "৯" // BENGALI DIGIT NINE U+09EF
let c2: Character = "𝟙" // MATHEMATICAL DOUBLE-STRUCK DIGIT ONE U+1D7D9
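Checking the first scalar of these characters against the decimal digit set reports “true” as well (a brief sketch using CharacterSet, the modern Swift spelling of NSCharacterSet; variable names are only for illustration):
import Foundation

let nonASCIIDigits: [Character] = ["৯", "𝟙"]
for ch in nonASCIIDigits {
    let scalar = String(ch).unicodeScalars.first!
    print(CharacterSet.decimalDigits.contains(scalar)) // true
}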
If you only care about the (ASCII) digits “0” … “9”, then the easiest method is probably:
if c >= "0" && c <= "9" { }
or, using ranges:
if "0"..."9" ~= c { }
Update: As of Swift 5, you can check for ASCII digits with
if c.isASCII && c.isNumber { }
using the “Character properties” introduced in SE-0221.
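Applied to the example characters from above, only the plain ASCII digit passes this test (a quick sketch):
let five: Character = "5"
let bengaliNine: Character = "৯"
let doubleStruckOne: Character = "𝟙"

print(five.isASCII && five.isNumber) // true
print(bengaliNine.isASCII && bengaliNine.isNumber) // false: a number, but not ASCII
print(doubleStruckOne.isASCII && doubleStruckOne.isNumber) // false: a number, but not ASCII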
This also solves the problem with digits modified by the variation selector U+FE0F, such as the keycap emoji “1️⃣”. (Thanks to Lukas Kukacka for reporting this problem.)
let c: Character = "1️⃣"
print(Array(c.unicodeScalars)) // ["1", "\u{FE0F}", "\u{20E3}"]
print(c.isASCII && c.isNumber) // false
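For example, this makes it easy to extract only the ASCII digits from a mixed string; the Bengali digit and the keycap emoji are skipped (a small usage sketch, the sample string is arbitrary):
let mixed = "1️⃣ costs ৯ or 42"
let asciiDigitsOnly = String(mixed.filter { $0.isASCII && $0.isNumber })
print(asciiDigitsOnly) // "42"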