ASCII and Unicode are both standards that describe the digital representation of text, specifically the characters that make up text. However, the two standards are significantly different, and many of their differences reflect when each was created.

America Versus the Universe

The American Standard Code for Information Interchange (ASCII), unsurprisingly, caters to an American audience writing in English. It deals with the unaccented letters A-Z and a-z, plus a small number of punctuation symbols and control characters.
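
If you are curious what that repertoire actually looks like, you can list every printable ASCII character with a short shell loop. This is just a quick sketch, assuming a bash-like shell with seq available; it prints the characters from code 32 (space) through 126 (tilde):

    $ for i in $(seq 32 126); do printf "\\$(printf '%03o' "$i")"; done; echo
     !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~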

In particular, there is no way of representing loan words adopted from other languages, such as café, in ASCII without anglicizing them by substituting plain characters for the accented ones (e.g., cafe). Localized ASCII extensions were developed to cater to various languages' needs, but these efforts made interoperability awkward and clearly stretched ASCII's capabilities.

In contrast, the Universal Coded Character Set (Unicode) lies at the opposite end of the ambition scale. Unicode attempts to cater to as many of the world’s writing systems as possible, to the extent that it covers ancient languages and everyone’s favorite set of expressive symbols, emoji.

Character Set or Character Encoding?

In simple terms, a character set is a selection of characters (e.g., A-Z) whilst a character encoding is a mapping between a character set and a value that can be represented digitally (e.g., A=1, B=2).

The ASCII standard is effectively both: it defines the set of characters that it represents and a method of mapping each character to a numeric value.
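
You can see that mapping directly from the shell. For example, piping a letter into od (available on most Unix-like systems) dumps the numeric value that ASCII assigns to it; this small sketch shows that A maps to 65:

    $ printf 'A' | od -An -tu1
     65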

In contrast, the word Unicode is used in several different contexts to mean different things. You can think of it as an all-encompassing term, like ASCII, to refer to a character set and a number of encodings. But, because there are several encodings, the term Unicode is often used to refer to the overall set of characters, rather than how they are mapped.

Size

Due to its scope, Unicode represents far more characters than ASCII. Standard ASCII uses a 7-bit range to encode 128 distinct characters. Unicode, on the other hand, is so large that we need to use different terminology just to talk about it!

Unicode provides 1,111,998 code points that can be assigned to characters. A code point is roughly analogous to a space reserved for a character, but the situation is a lot more complicated than that when you start to delve into the details!
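
Each code point is identified by a number, conventionally written as U+ followed by hexadecimal digits. As a rough illustration, assuming bash 4.2 or later (whose printf builtin understands \u escapes) and a UTF-8 terminal, you can produce the character at code point U+20AC like so:

    $ printf '\u20AC\n'
    €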

A more useful comparison is how many scripts (or writing systems) are currently supported. Of course, ASCII only handles the English alphabet, essentially the Latin or Roman script. Unicode 13.0, released in 2020, goes a lot further: it supports a total of 154 scripts.

Storage

ASCII’s 7-bit range means that each character is stored in a single 8-bit byte; the spare bit is unused in standard ASCII. This makes size calculations trivial: the length of text, in characters, is the file's size in bytes.

You can confirm this with the following sequence of bash commands. First, we create a file containing 12 characters of text:

    $ echo -n 'Hello, world' > foo

To check that the text is in the ASCII encoding, we can use the file command:

    $ file foo
    foo: ASCII text, with no line terminators

Finally, to get the exact number of bytes the file occupies, we use the stat command (the -f%z format shown here is BSD/macOS syntax; with GNU stat, use -c%s instead):

    $ stat -f%z foo
    12

Since the Unicode standard deals with a far greater range of characters, a Unicode file naturally takes up more storage space. Exactly how much depends on the encoding.

Repeating the same set of commands from before, using a character that cannot be represented in ASCII, gives the following:

    $ echo -n '€' > foo
    $ file foo
    foo: UTF-8 Unicode text, with no line terminators
    $ stat -f%z foo
    3

That single character occupies 3 bytes in a Unicode file. Note that the file ended up UTF-8 encoded because the shell, running in a typical UTF-8 locale, wrote the character as UTF-8 bytes; ASCII simply cannot store the chosen character (€). UTF-8 is by far the most common character encoding for Unicode; UTF-16 and UTF-32 are two alternative encodings, but they are used far less.
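
You can see how the choice of encoding affects storage by converting the same character with iconv and counting the resulting bytes. This is a sketch that assumes iconv and wc are installed; the UTF-16BE and UTF-32BE names are used to avoid the byte-order mark that plain UTF-16 or UTF-32 output may include:

    $ printf '€' | iconv -f UTF-8 -t UTF-16BE | wc -c
    2
    $ printf '€' | iconv -f UTF-8 -t UTF-32BE | wc -c
    4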

UTF-8 is a variable-width encoding, which means it uses different amounts of storage for different code points. Each code point will occupy between one and four bytes, with the intent that more common characters require less space, providing a type of built-in compression. The disadvantage is that determining the length or size requirements of a given chunk of text becomes much more complicated.
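
The practical effect is that counting bytes no longer tells you how many characters you have. Assuming a UTF-8 locale, wc -c counts bytes while wc -m counts characters, so the two answers diverge as soon as a multi-byte character appears:

    $ printf 'café' | wc -c
    5
    $ printf 'café' | wc -m
    4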

ASCII Is Unicode, but Unicode Is Not ASCII

For backward compatibility, the first 128 Unicode code points represent the equivalent ASCII characters. Since UTF-8 encodes each of these characters with a single byte, any ASCII text is also a UTF-8 text. Unicode is a superset of ASCII.
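
One way to convince yourself of this is to compare the raw bytes of a purely ASCII string before and after converting it to UTF-8; the dumps should be identical. This sketch assumes iconv accepts the encoding name ASCII (US-ASCII also works on most systems):

    $ printf 'Hello' | od -An -tx1
     48 65 6c 6c 6f
    $ printf 'Hello' | iconv -f ASCII -t UTF-8 | od -An -tx1
     48 65 6c 6c 6f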

However, as shown above, many Unicode files cannot be used in an ASCII context. Any character that falls outside the ASCII range will be displayed in an unexpected manner, often as substituted characters that are completely different from those intended.
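
You can simulate that kind of corruption by deliberately telling a tool the wrong encoding. In the sketch below, which assumes an iconv that recognizes the WINDOWS-1252 encoding name, the three UTF-8 bytes of € are reinterpreted as Windows-1252 text, producing the classic garbled result:

    $ printf '€' | iconv -f WINDOWS-1252 -t UTF-8
    â‚¬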

Modern Usage

For most purposes, ASCII is largely considered a legacy standard. Even in situations that only require the Latin script, where full support for the complexities of Unicode is unnecessary, it is usually more convenient to use UTF-8 and take advantage of its ASCII compatibility.

In particular, web pages should be saved and transmitted using UTF-8, which is the default encoding for HTML5. This is in contrast to the earlier web, which dealt in ASCII by default until that was superseded by Latin-1 (ISO-8859-1).
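
If you want to check what a live site declares, the character set is usually carried in the HTTP Content-Type header. A quick check with curl might look like the following; the exact output depends entirely on the server you query, and the response line shown here is only typical of a site served as UTF-8:

    $ curl -sI https://example.com | grep -i 'content-type'
    Content-Type: text/html; charset=UTF-8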

A Standard That’s Changing

The last revision of ASCII took place in 1986.

In contrast, Unicode continues to be updated yearly. New scripts, characters, and, particularly, new emoji are regularly added. With only a small fraction of the available code points allocated so far, the full character set is likely to keep growing for the foreseeable future.

ASCII Versus Unicode

ASCII served its purpose for many decades, but Unicode has now effectively replaced it for all practical purposes other than legacy systems. Unicode is larger and, hence, more expressive. It represents a worldwide, collaborative effort and offers far greater flexibility, albeit at the expense of some complexity.