Monday, January 7, 2008

ASCII
American Standard Code for Information Interchange (ASCII), generally pronounced [ˈæski], is a character encoding based on the English alphabet. ASCII codes represent text in computers, communications equipment, and other devices that work with text. Most modern character encodings, which support many more characters, have a historical basis in ASCII.
Work on ASCII began in 1960. The first edition of the standard was published in 1963, a major revision in 1967, and the most recent update in 1986. It currently defines codes for 128 characters: 33 are non-printing, mostly obsolete control characters that affect how text is processed, and 95 are printable characters.
In 1968, U.S. President Lyndon B. Johnson mandated that all computers purchased by the United States federal government support ASCII, stating: "All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used."

ASCII control characters
ASCII reserves the first 32 codes (numbers 0–31 decimal) for control characters: codes originally intended not to carry printable information, but rather to control devices (such as printers) that make use of ASCII, or to provide meta-information about data streams such as those stored on magnetic tape. For example, character 10 represents the "line feed" function (which causes a printer to advance its paper), and character 8 represents "backspace".
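As a minimal sketch of how these code values are used in practice, the following C helper classifies a byte as one of the 33 non-printing codes (0 through 31, plus DEL at 127); the function name is mine, chosen only for illustration:

    /* Sketch: true for the 33 non-printing ASCII codes, i.e. the control
       characters 0-31 plus DEL (127). The name is illustrative, not standard. */
    int ascii_is_control(unsigned char c)
    {
        return c < 0x20 || c == 0x7F;
    }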
The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this left was sometimes intentional (where a character would be used slightly differently on a terminal link than on a data stream) and sometimes more accidental (such as what "delete" means).
Probably the most influential single device on the interpretation of these characters was the Teletype Corporation Model 33 series, a printing terminal with an available paper tape reader/punch option. Paper tape was a very popular medium for long-term program storage through the 1980s; it was lower in cost and in some ways less fragile than magnetic tape. In particular, the Teletype Model 33 machine assignments for codes 17 (Control-Q, DC1, also known as XON), 19 (Control-S, DC3, also known as XOFF), and 127 (DELete) became de facto standards. Because the keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant use of code 15 (Control-O, Shift In) interpreted as "delete previous character" was also adopted by many early timesharing systems but eventually faded out.
The use of Control-S (XOFF, an abbreviation for "transmit off") as a handshaking signal warning a sender to stop transmission because of impending overflow, and Control-Q (XON, "transmit on") to resume sending, persists to this day in many systems as a manual output control technique. On some systems Control-S retains its meaning but Control-Q is replaced by a second Control-S to resume output.
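A rough sketch of this mechanism in C follows; the macro and function names are mine, and the surrounding I/O layer is assumed rather than shown. A receiver only has to watch for the two byte values:

    #define ASCII_XON  0x11   /* DC1, Control-Q: resume transmission  */
    #define ASCII_XOFF 0x13   /* DC3, Control-S: suspend transmission */

    /* Returns 1 if the byte was a flow-control signal (and updates *paused);
       returns 0 so the caller can treat the byte as ordinary data. */
    static int handle_flow_control(unsigned char byte, int *paused)
    {
        if (byte == ASCII_XOFF) { *paused = 1; return 1; }
        if (byte == ASCII_XON)  { *paused = 0; return 1; }
        return 0;
    }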
Code 127 is officially named "delete" but the Teletype label was "rubout". Since the original standard gave no detailed interpretation for most control codes, interpretations of this code varied. The original Teletype meaning, and the intent of the standard, was to make it an ignored character, the same as NUL (all zeroes). This was specifically useful for paper tape, because punching the all-ones bit pattern on top of an existing mark would obliterate it. Tapes designed to be "hand edited" could even be produced with runs of extra NULs (blank tape) so that a block of characters could be "rubbed out" and then replacements put into the empty space.
As video terminals began to replace printing ones, the value of the "rubout" character was lost. DEC systems, for example, interpreted "Delete" to mean "remove the character before the cursor," and this interpretation also became common in Unix systems. Most other systems used "Backspace" for that meaning and used "Delete" as it was used on paper tape, to mean "remove the character after the cursor". That latter interpretation is the most common today.
Many more of the control codes have taken on meanings quite different from their original ones. The "escape" character (code 27), for example, was originally intended to allow sending other control characters as literals instead of invoking their meaning. This is the same sense of "escape" encountered in URL encodings, C language strings, and other systems where certain characters have a reserved meaning. Over time this meaning has been co-opted and has drifted. In modern use, an ESC sent to the terminal usually indicates the start of a command sequence, normally in the form of an ANSI escape code. An ESC sent from the terminal is most often used as an "out of band" character to terminate an operation, as in the TECO and vi text editors.
The inherent ambiguity of many control characters, combined with their historical usage, has also created problems when transferring "plain text" files between systems. The clearest example of this is the newline problem on various operating systems. On printing terminals there was no question that a line of text ended with both "Carriage Return" and "Linefeed": the first returns the printing carriage to the beginning of the line and the second advances to the next line without moving the carriage. However, requiring two characters to mark the end of a line introduced unnecessary complexity and questions as to how to interpret each character when encountered alone.
To simplify matters, plain text files on Unix systems use line feeds alone to separate lines. Similarly, older Macintosh systems, among others, use only carriage returns in plain text files. Various DEC operating systems used both characters to mark the end of a line, perhaps for compatibility with teletypes, and this de facto standard was copied in the CP/M operating system, then in MS-DOS, and eventually in Microsoft Windows.
The DEC operating systems, along with CP/M, tracked file length only in units of disk blocks and used Control-Z (SUB) to mark the end of the actual text in the file (also done for CP/M compatibility in some cases in MS-DOS, though MS-DOS has always recorded exact file lengths). Control-C (ETX, End of TeXt) might have made more sense, but was already in wide use as a program abort signal. Unix's use of Control-D (EOT, End of Transmission) appears on its face similar, but it is used only from the terminal and is never stored in a file.
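One practical consequence is that code exchanging "plain text" files between these systems often normalizes line endings on input. The sketch below is my own illustration, not taken from any particular system; it rewrites CR, LF, or CR+LF as a single LF, in place:

    #include <stddef.h>

    /* Normalize CR (0x0D), LF (0x0A), or CR+LF line endings to LF alone,
       editing the NUL-terminated buffer in place. */
    void normalize_newlines(char *s)
    {
        size_t w = 0;
        for (size_t r = 0; s[r] != '\0'; r++) {
            if (s[r] == '\r') {
                s[w++] = '\n';            /* emit a single LF */
                if (s[r + 1] == '\n')
                    r++;                  /* swallow the LF of a CR+LF pair */
            } else {
                s[w++] = s[r];
            }
        }
        s[w] = '\0';
    }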
While the codes mentioned above have retained some semblance of their original meanings, many of the codes originally intended for stream delimiters or for link control on a terminal have lost all meaning except their relation to a letter. Control-A is almost never used to mean "start of header" except on an ANSI magnetic tape. When connecting a terminal to a system, or asking the system to recognize that a logged-out terminal wants to log in, modern systems are much more likely to want a carriage return or an ESCape than Control-E (ENQuire, meaning "is there anybody out there?").

Notes on representing and typing the control characters:
^[a]  Printable Representation: the Unicode characters in the range U+2400 to U+2421 are reserved for representing control characters when it is necessary to print or display them rather than have them perform their intended function. Some browsers may not display these properly.
^[b]  Control key Sequence/caret notation, the traditional key sequences for inputting control characters. The caret (^) represents the "Control" or "Ctrl" key that must be held down while pressing the second key in the sequence. The caret-key representation is also used by some software to represent control characters.
^[c]  Character Escape Codes in the C programming language and many other languages influenced by it, such as Java and Perl.
^[d]  The Backspace character can also be entered by pressing the "Backspace", "Bksp", or ← key on some systems.
^[e]  The Delete character can also be entered by pressing the "Delete" or "Del" key. It can also be entered by pressing the "Backspace", "Bksp", or ← key on some systems.
^[f]  The '\e' escape sequence is not part of ISO C and many other language specifications. However, it is understood by several compilers.
^[g]  The Escape character can also be entered by pressing the "Escape" or "Esc" key on some systems.
^[h]  The Carriage Return character can also be entered by pressing the "Return", "Ret", "Enter", or ↵ key on most systems.
^[i]  The ambiguity surrounding Backspace comes from mismatches between the intent of the human or software transmitting the Backspace and the interpretation by the software receiving it. If the transmitter expects Backspace to erase the previous character and the receiver expects Delete to be used to erase the previous character, many receivers will echo the Backspace as "^H", just as they would echo any other uninterpreted control character. (A similar mismatch in the other direction may yield Delete displayed as "^?".) "^H" persists in messages today as a deliberate humorous device — for example, "there's a sucker^H^H^H^H^H^Hpotential customer born every minute". A less common variant of this involves the use of "^W", which in some user interfaces means "delete previous word". The example sentence would therefore also work as "there's a sucker^W potential customer born every minute".
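The "^H" and "^?" forms above are just the caret notation of note [b]: the displayed character is the control code with bit 0x40 flipped. A minimal sketch in C (the function name is mine):

    #include <stdio.h>

    /* Echo a byte the way many terminals and editors do: control characters
       are shown in caret notation, e.g. 0x08 -> "^H" and DEL (0x7F) -> "^?". */
    void echo_visibly(unsigned char c)
    {
        if (c < 0x20 || c == 0x7F)
            printf("^%c", c ^ 0x40);   /* flip bit 0x40 to get the printable form */
        else
            putchar(c);
    }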

Non-whitespace control characters
RFC 2822 refers to NO-WS-CTL, the non-whitespace control characters: control characters other than carriage return, line feed, and the whitespace characters, namely decimal 1–8, 11–12, 14–31, and 127.
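Stated as code, the NO-WS-CTL ranges are a simple test; this is only a sketch and the function name is mine:

    /* True for RFC 2822 NO-WS-CTL: the US-ASCII control characters other than
       NUL and the whitespace controls HT (9), LF (10), and CR (13). */
    int is_no_ws_ctl(unsigned char c)
    {
        return (c >= 1  && c <= 8)  ||   /* SOH..BS */
               (c == 11 || c == 12) ||   /* VT, FF  */
               (c >= 14 && c <= 31) ||   /* SO..US  */
               (c == 127);               /* DEL     */
    }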

ASCII printable characters
Code 32, the "space" character, denotes the space between words, as produced by the large space-bar of a keyboard. Codes 33 to 126, known as the printable characters, represent letters, digits, punctuation marks, and a few miscellaneous symbols.
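As a one-line sketch (the function name is mine), the printable range, together with the space character, is a single comparison:

    /* ASCII space and printable characters: SP (32) through '~' (126). */
    int ascii_is_printable(unsigned char c)
    {
        return c >= 0x20 && c <= 0x7E;
    }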
Seven-bit ASCII provided seven "national" characters and, if the combined hardware and software permitted, could use overstrikes to simulate some additional international characters: in such a scenario a backspace could precede a grave accent (which the American and British standards, but only those standards, also call "opening single quotation mark"), a backtick, or a breath mark (inverted vel).

Structural features

The digits 0–9 are represented with their values in binary prefixed with 0011 (this means that converting BCD to ASCII is simply a matter of taking each BCD nibble separately and prefixing 0011 to it).
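For example (a sketch; the function name is mine), a BCD nibble becomes its ASCII digit character by prefixing the bits 0011, that is, OR-ing with 0x30:

    /* Convert one BCD nibble (assumed 0-9) to its ASCII digit character:
       0011 0000 | nibble, so 5 becomes 0x35, the character '5'. */
    char bcd_nibble_to_ascii(unsigned int nibble)
    {
        return (char)(0x30 | (nibble & 0x0Fu));
    }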
Lowercase and uppercase letters only differ in bit pattern by a single bit, simplifying case conversion to a range test (to avoid converting characters that are not letters) and a single bitwise operation. Fast case conversion is important because it is often used in case-ignoring search algorithms.
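A sketch of the technique (the function names are mine): 'A'-'Z' occupy 0x41-0x5A and 'a'-'z' occupy 0x61-0x7A, so the two cases differ only in bit 0x20:

    /* Range test first, so digits and punctuation pass through unchanged,
       then a single bitwise operation on bit 0x20 converts the case. */
    char ascii_to_lower(char c)
    {
        return (c >= 'A' && c <= 'Z') ? (char)(c | 0x20) : c;
    }

    char ascii_to_upper(char c)
    {
        return (c >= 'a' && c <= 'z') ? (char)(c & ~0x20) : c;
    }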
In contrast with EBCDIC, the lowercase and uppercase letters each occupy 26 consecutive positions.

Aliases
RFC 1345 (published in June 1992) and the IANA registry of character sets (ongoing) both recognize the following case-insensitive aliases for ASCII as suitable for use on the Internet:

ANSI_X3.4-1968 (canonical name)
ANSI_X3.4-1986
ASCII (with ASCII-7 and ASCII-8 variants)
US-ASCII (preferred MIME name)
us
ISO646-US
ISO_646.irv:1991
iso-ir-6
IBM367
cp367
csASCII

Of these, only the aliases "US-ASCII" and "ASCII" have achieved widespread use. One often finds them in the optional "charset" parameter in the Content-Type header of some MIME messages, in the equivalent "meta" element of some HTML documents, and in the encoding declaration part of the prolog of some XML documents.
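For illustration, such declarations typically look like the following; the documents themselves are hypothetical, and only the charset and encoding values matter here:

    Content-Type: text/plain; charset=US-ASCII
    <meta http-equiv="Content-Type" content="text/html; charset=US-ASCII">
    <?xml version="1.0" encoding="US-ASCII"?>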

Variants
As computer technology spread throughout the world, different standards bodies and corporations developed many variations of ASCII in order to facilitate the expression of non-English languages that used Roman-based alphabets. One could class some of these variations as "ASCII extensions", although some misapply that term to cover all variants, including those that do not preserve ASCII's character map in the 7-bit range.
The PETSCII Code used by Commodore International for their 8-bit systems is probably unique among post-1970 codes in being based on ASCII-1963 instead of the far more common ASCII-1967.

Incompatibility vs interoperability
ISO 646 (1972), the first attempt to remedy the pro-English-language bias, created compatibility problems, since it remained a 7-bit character-set. It made no additional codes available, so it reassigned some in language-specific variants. It thus became impossible to know what character a code represented without knowing which variant to work with, and text-processing systems could generally cope with only one variant anyway.
Eventually, improved technology brought out-of-band means to represent the information formerly encoded in the eighth bit of each byte, freeing this bit to add 128 additional character codes for new assignments.
For example, IBM developed 8-bit code pages, such as code page 437, which replaced the control-characters with graphic symbols such as smiley faces, and mapped additional graphic characters to the upper 128 positions. Operating systems such as DOS supported these code-pages, and manufacturers of IBM PCs supported them in hardware. Digital Equipment Corporation developed the Multinational Character Set (DEC-MCS) for use in the popular VT220 terminal.
Eight-bit standards such as ISO/IEC 8859 (derived from the DEC-MCS) and Mac OS Roman developed as true extensions of ASCII, leaving the original character-mapping intact and just adding additional values above the 7-bit range.
This enabled the representation of a broader range of languages, but these standards continued to suffer from incompatibilities and limitations. Still, ISO-8859-1, its variant Windows-1252 (often mislabeled as ISO-8859-1 even by Microsoft software) and original 7-bit ASCII remain the most common character encodings in use today.

Unicode
Unicode and the ISO/IEC 10646 Universal Character Set (UCS) have a much wider array of characters, and their various encoding forms have begun to supplant ISO/IEC 8859 and ASCII rapidly in many environments. While ASCII basically uses 7-bit codes, Unicode and the UCS use relatively abstract "code points": non-negative integer numbers that map, using different encoding forms and schemes, to sequences of one or more 8-bit bytes. To permit backward compatibility, Unicode and the UCS assign the first 128 code points to the same characters as ASCII, and the first 256 code points to the same characters as ISO 8859-1 (Latin 1). One can therefore think of ASCII as a 7-bit encoding scheme for a very small subset of Unicode and of the UCS.
The popular UTF-8 encoding form prescribes the use of one to four 8-bit code values for each code point, and coincides exactly with ASCII for the code values below 128. In other words, every properly encoded ASCII file is also a valid UTF-8 file. Other encoding forms such as UTF-16 resemble ASCII in how they represent the first 128 characters of Unicode, but use 16 or 32 bits per character, so they require conversion for compatibility.
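A small sketch of why this works (the function name is mine): every byte value below 0x80 is simultaneously an ASCII character and a complete one-byte UTF-8 sequence for the same code point, so data passing this test needs no conversion at all:

    #include <stddef.h>

    /* Returns 1 if every byte is below 0x80, i.e. the buffer is plain ASCII
       and therefore already valid, byte-identical UTF-8. */
    int is_pure_ascii(const unsigned char *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            if (buf[i] > 0x7F)
                return 0;
        return 1;
    }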

Trivia

The abbreviation ASCIIZ or ASCIZ refers to a null-terminated ASCII string (also known as a C string).
Asteroid 3568 ASCII is named after the character encoding.

See also

American National Standards Institute (ANSI)
ASCII art
ASCII games
ASCII Ribbon Campaign
Binary
Bob Bemer
Control character
Extended Binary Coded Decimal Interchange Code (EBCDIC)
Latin characters in Unicode
Text file
Unicode

ASCII variants
(where all ASCII printable characters are identical to ASCII)
Extended ASCII
Indian Script Code for Information Interchange (ISCII)
ISO 8859
Mac Roman
UTF-8
Vietnamese Standard Code for Information Interchange (VISCII)
Windows code pages
