In some installations, telnet is not 8-bit clean by default. In order to be able to send Unicode keystrokes to the remote host, you need to set telnet into "outbinary" mode. There are two ways to do this:
$ telnet -L <host>

and

$ telnet
telnet> set outbinary
telnet> open <host>
The communications program C-Kermit http://www.columbia.edu/kermit/ckermit.html (an interactive tool for connection setup, telnet and file transfer, with support for TCP/IP and serial lines) understands, in versions 7.0 or newer, the file and transfer encodings UTF-8 and UCS-2 as well as the terminal encoding UTF-8, and converts between these encodings and many others. These features are documented in http://www.columbia.edu/kermit/ckermit2.html#x6.6.
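For example, a telnet session that uses UTF-8 on the terminal and for file transfers might be set up roughly like this. This is only a sketch: the command names follow my reading of the C-Kermit 7.0 documentation cited above and may differ slightly in your version, and the host name is a placeholder.

$ kermit
C-Kermit> set terminal character-set utf8
C-Kermit> set file character-set utf8
C-Kermit> telnet <host>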
Netscape 4.05 or newer can display HTML documents in UTF-8 encoding. All a document needs is the following line between the <head> and </head> tags:
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
Netscape 4.05 or newer can also display HTML and text files in UCS-2 encoding with byte-order mark.
http://www.netscape.com/computing/download/
Mozilla milestone M16 has much better internationalization than Netscape 4. It can display HTML documents in UTF-8 encoding with support for more languages. Alas, there is a cosmetic problem with CJK fonts: some glyphs can be bigger than the line's height, thus overlapping the previous or next line.
http://www.mozilla.org/
lynx-2.8 has an options screen (key 'O') which lets you set the display character set. When running in an xterm or on the Linux console in UTF-8 mode, set this to "UNICODE UTF-8". For this setting to take effect in the current browser session, confirm it via the "Accept Changes" field; for it to persist in future browser sessions, additionally enable the "Save options to disk" field before confirming.
Now, again, all a document needs is the following line between the <head> and </head> tags:
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
When you are viewing text files in UTF-8 encoding, you also need to pass the command-line option "-assume_local_charset=UTF-8" (affects only file:/... URLs) or "-assume_charset=UTF-8" (affects all URLs). In lynx-2.8.2 you can alternatively, in the options screen (key 'O'), change the assumed document character set to "utf-8".
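For example (a sketch; the file name and URL are placeholders):

$ lynx -assume_local_charset=UTF-8 file:/usr/local/share/doc/utf-8-demo.txt
$ lynx -assume_charset=UTF-8 http://www.example.org/utf8-page.html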
There is also an option in the options screen to set the "preferred document character set", but it has no effect, at least with file:/... URLs and with http://... URLs served by apache-1.3.0.
There is a spacing and line-breaking problem, however. (Look at the Russian section of x-utf8.html, or at utf-8-demo.txt.)
Also, in lynx-2.8.2, configured with --enable-prettysrc, the nice colour scheme does not work correctly any more when the display character set has been set to "UNICODE UTF-8". This is fixed by a simple patch lynx282.diff.
The Lynx developers say: "For any serious use of UTF-8 screen output with lynx, compiling with slang lib and -DSLANG_MBCS_HACK is still recommended."
General home page: http://lynx.browser.org/, http://www.slcc.edu/lynx/
Latest stable release: ftp://ftp.gnu.org/pub/gnu/lynx/lynx-2.8.2.tar.gz, http://lynx.isc.org/
Newer development snapshots: http://lynx.isc.org/current/, ftp://lynx.isc.org/current/
w3m by Akinori Ito http://ei5nazha.yz.yamagata-u.ac.jp/~aito/w3m/eng/ is a text mode browser for HTML pages and plain-text files. Its layout of HTML tables, enumerations etc. is much prettier than lynx's. w3m can also be used as a high-quality HTML to plain text converter.
w3m 0.1.10 has command line options for the three major Japanese encodings, but can also be used for UTF-8 encoded files. Without command line options, you often have to press Ctrl-L to refresh the display, and line breaking in Cyrillic and CJK paragraphs is not good.
To fix this, Hironori Sakamoto has a patch http://www2u.biglobe.ne.jp/~hsaka/w3m/ which adds UTF-8 as a display encoding.
Some test pages for browsers can be found at the pages of Alan Wood http://www.hclrss.demon.co.uk/unicode/#links and James Kass http://home.att.net/~jameskass/.
yudit by Gáspár Sinai http://czyborra.com/yudit/ is a first-class Unicode text editor for the X Window System. It supports simultaneous processing of many languages, input methods, and conversion to and from local character standards. It has facilities for entering text in all languages with only an English keyboard, using keyboard configuration maps.
It can be compiled in three versions: Xlib GUI, KDE GUI, or Motif GUI.
Customization is very easy. Typically you will first customize your font. From the font menu I chose "Unicode". Then, since the command "xlsfonts '*-*-iso10646-1'" still showed some ambiguity, I chose a font size of 13 (to match Markus Kuhn's 13-pixel fixed font).
Next, you will customize your input method. The input methods "Straight", "Unicode" and "SGML" are most remarkable. For details about the other built-in input methods, look in /usr/local/share/yudit/data/.
To make a setting the default for future sessions, edit your $HOME/.yuditrc file.
The general editor functionality is limited to editing, cut&paste and search&replace. No undo.
yudit can display text using a TrueType font; see section "TrueType fonts" above. The Bitstream Cyberbit gives good results. For yudit to find the font, symlink it to /usr/local/share/yudit/data/cyberbit.ttf.
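For example, if the Cyberbit font file happens to live under /usr/local/share/fonts (an arbitrary location chosen for this sketch):

$ ln -s /usr/local/share/fonts/Cyberbit.ttf /usr/local/share/yudit/data/cyberbit.ttf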
vim (as of version 6.0b) has good support for UTF-8: when started in an UTF-8 locale, it assumes UTF-8 encoding for the console and for the text files being edited. It also supports double-wide (CJK) characters and combining characters, and therefore fits perfectly into an UTF-8 enabled xterm.
Installation: Download from http://www.vim.org/. After unpacking the four parts, edit src/Makefile to include the --with-features=big option. This will turn on the features FEAT_MBYTE, FEAT_RIGHTLEFT, FEAT_LANGMAP. Then do "make" and "make install".
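A sketch of the whole procedure, assuming the sources unpack into a directory named vim60b (the directory name is illustrative only):

$ cd vim60b/src
# enable the --with-features=big option in Makefile as described above,
# or pass it to configure directly:
$ ./configure --with-features=big
$ make
$ make install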
First of all, you should read the section "International Character Set Support" (node "International") in the Emacs manual. In particular, note that you need to start Emacs using the command

$ emacs -fn fontset-standard

so that it will use a font set comprising a lot of international characters.
In the short term, there are two packages for using UTF-8 in Emacs. Neither of them requires recompiling Emacs.
You can use either of these packages, or both together. The advantages of the emacs-utf "unicode-utf8" encoding are: it loads faster, and it deals better with combining characters (important for Thai). The advantage of the Mule-UCS / oc-unicode "utf-8" encoding is: it can apply to a process buffer (such as M-x shell), not only to loading and saving of files; and it respects the widths of characters better (important for Ethiopian). However, it is less reliable: After heavy editing of a file, I have seen some Unicode characters replaced with U+FFFD after the file was saved.
To install the emacs-utf package, compile the program "utf2mule" and install it somewhere in your $PATH, also install unicode.el, muleuni-1.el, unicode-char.el somewhere. Then add the lines

(setq load-path (cons "/home/user/somewhere/emacs" load-path))
(if (not (string-match "XEmacs" emacs-version))
    (progn
      (require 'unicode)
      ;(setq unicode-data-path "..../UnicodeData-3.0.0.txt")
      (if (eq window-system 'x)
          (progn
            (setq fontset12
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-12-*-*-*-*-*-fontset-standard"))
            (setq fontset13
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-13-*-*-*-*-*-fontset-standard"))
            (setq fontset14
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-14-*-*-*-*-*-fontset-standard"))
            (setq fontset15
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-15-*-*-*-*-*-fontset-standard"))
            (setq fontset16
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-16-*-*-*-*-*-fontset-standard"))
            (setq fontset18
                  (create-fontset-from-fontset-spec
                   "-misc-fixed-medium-r-normal-*-18-*-*-*-*-*-fontset-standard"))
            ; (set-default-font fontset15)
            ))))

to your $HOME/.emacs file. To activate any of the font sets, use the Mule menu item "Set Font/FontSet" or Shift-down-mouse-1. Currently the font sets with height 15 and 13 have the best Unicode coverage, due to Markus Kuhn's 9x15 and 6x13 fonts. To designate a font set as the initial font set for the first frame at startup, uncomment the set-default-font line in the code snippet above.
To install the oc-unicode package, execute the command

$ emacs -batch -l oc-comp.el

and install the resulting file un-define.elc, as well as oc-unicode.el, oc-charsets.el, oc-tools.el, somewhere. Then add the lines

(setq load-path (cons "/home/user/somewhere/emacs" load-path))
(if (not (string-match "XEmacs" emacs-version))
    (progn
      (require 'oc-unicode)
      ;(setq unicode-data-path "..../UnicodeData-3.0.0.txt")
      (if (eq window-system 'x)
          (progn
            (setq fontset12
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-12-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-12-*-iso10646-*"))
            (setq fontset13
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-13-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-13-*-iso10646-*"))
            (setq fontset14
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-14-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-14-*-iso10646-*"))
            (setq fontset15
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-15-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-15-*-iso10646-*"))
            (setq fontset16
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-16-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-16-*-iso10646-*"))
            (setq fontset18
                  (oc-create-fontset
                   "-misc-fixed-medium-r-normal-*-18-*-*-*-*-*-fontset-standard"
                   "-misc-fixed-medium-r-normal-ja-18-*-iso10646-*"))
            ; (set-default-font fontset15)
            ))))

to your $HOME/.emacs file. You can choose your appropriate font set as with the emacs-utf package.
In order to open an UTF-8 encoded file, you will type

M-x universal-coding-system-argument unicode-utf8 RET
M-x find-file filename RET

or

C-x RET c unicode-utf8 RET
C-x C-f filename RET

(or utf-8 instead of unicode-utf8, if you prefer oc-unicode/Mule-UCS).

In order to start a shell buffer with UTF-8 I/O, you will type

M-x universal-coding-system-argument utf-8 RET
M-x shell RET

(This works with oc-unicode/Mule-UCS only.)
Note that all this works with Emacs in windowing mode only, not in terminal mode.
Richard Stallman plans to add integrated UTF-8 support to Emacs in the long term, as does the XEmacs developers group.
(This section is written by Gilbert Baumann.)
Here is how to teach XEmacs (20.4 configured with MULE) the UTF-8 encoding. Unfortunately you need its sources to be able to patch it.
First you need these files provided by Tomohiko Morioka:
http://turnbull.sk.tsukuba.ac.jp/Tools/XEmacs/xemacs-21.0-b55-emc-b55-ucs.diff and http://turnbull.sk.tsukuba.ac.jp/Tools/XEmacs/xemacs-ucs-conv-0.1.tar.gz
The .diff is a diff against the C sources. The tarball is elisp code, which provides lots of code tables to map to and from Unicode. As the name of the diff file suggests, it is against XEmacs-21; I needed to help `patch' a bit. The most notable difference from my XEmacs-20.4 sources is that file-coding.[ch] was called mule-coding.[ch].
For those unfamiliar with the XEmacs-MULE stuff (as I am), a quick guide:
What we call an encoding is called by MULE a `coding-system'. The most important commands are:
M-x set-file-coding-system
M-x set-buffer-process-coding-system   [comint buffers]
and the variable `file-coding-system-alist', which guides `find-file' to guess the encoding used. After stuff was running, the very first thing I did was to hook up automatic encoding detection: the code looks into the special mode line introduced by -*- somewhere in the first 600 bytes of the file about to be opened; if there is a field "Encoding: xyz;" and the xyz encoding ("coding system" in Emacs speak) exists, it chooses that. So now you could do e.g.
;;; -*- Mode: Lisp; Syntax: Common-Lisp; Package: CLEX; Encoding: utf-8; -*-
and XEmacs goes into utf-8 mode here.
After everything was running I defined λ (Greek lambda) as a macro:

(defmacro λ (x) `(lambda . ,x))
With XFree86-4.0.1, xedit is able to edit UTF-8 files if you set the locale accordingly (see above), and add the line "Xedit*international: true" to your $HOME/.Xdefaults file.
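For example (a sketch; xrdb -merge merely makes the new resource effective without restarting the X session):

$ echo 'Xedit*international: true' >> $HOME/.Xdefaults
$ xrdb -merge $HOME/.Xdefaults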
As of version 6.1.2, aXe supports only 8-bit locales. If you add the line "Axe*international: true" to your $HOME/.Xdefaults file, it will simply dump core.
mined98 is a small text editor by Michiel Huisjes, Achim Müller and Thomas Wolff http://www.inf.fu-berlin.de/~wolff/mined.html. It lets you edit UTF-8 or 8-bit encoded files, in an UTF-8 or 8-bit xterm. It also has powerful capabilities for entering Unicode characters.
mined lets you edit both 8-bit encoded and UTF-8 encoded files. By default it uses an autodetection heuristic. If you don't want to rely on heuristics, pass the command-line option -u when editing an UTF-8 file, or +u when editing an 8-bit encoded file. You can change the interpretation at any time from within the editor: it displays the encoding ("L:h" for 8-bit, "U:h" for UTF-8) in the menu line. Click on the first of these characters to change it.
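For example (the file names are placeholders):

$ mined -u notes.utf8      # treat the file as UTF-8
$ mined +u notes.latin1    # treat the file as 8-bit encoded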
mined knows about double-width and combining characters and displays them correctly.
mined also has very nice pull-down menus. Alas, the "Home", "End", "Delete" keys do not work.
MIME: RFC 2279 defines UTF-8 as a MIME charset, which can be transported under the 8bit, quoted-printable and base64 encodings. The older MIME UTF-7 proposal (RFC 2152) is considered to be deprecated and should not be used any further.
Mail clients released after January 1, 1999, should be capable of sending and displaying UTF-8 encoded mails, otherwise they are considered deficient. But these mails have to carry the MIME labels

Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Simply piping an UTF-8 file into "mail" without caring about the MIME labels will not work.
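If you need to send such a mail from a script without a MIME-aware mail client, you can add the labels yourself and hand the message to sendmail. A sketch only (the recipient address, subject and file name are placeholders):

$ ( echo "To: someone@example.org"
    echo "Subject: UTF-8 test"
    echo "MIME-Version: 1.0"
    echo "Content-Type: text/plain; charset=UTF-8"
    echo "Content-Transfer-Encoding: 8bit"
    echo ""
    cat message.utf8
  ) | /usr/sbin/sendmail -t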
Mail client implementors should take a look at http://www.imc.org/imc-intl/ and http://www.imc.org/mail-i18n.html.
Now about the individual mail clients (or "mail user agents"):
The situation for an unpatched pine version 4.10 is as follows.
Pine does not do character set conversions. But it allows you to view UTF-8 mails in an UTF-8 text window (Linux console or xterm).
Normally, Pine will warn about different character sets each time you view an UTF-8 encoded mail. To get rid of this warning, choose S (setup), then C (config), then change the value of "character-set" to UTF-8. This option will not do anything, except to reduce the warnings, as Pine has no built-in knowledge of UTF-8.
Also note that Pine's notion of Unicode characters is pretty limited: It will display Latin and Greek characters, but not other kinds of Unicode characters.
A patch by Robert Brady http://www.ents.susu.soton.ac.uk/~robert/pine-utf8-0.1.diff adds UTF-8 support to Pine. With this patch, it decodes and prints headers and bodies properly. The patch depends on the GNOME libunicode http://cvs.gnome.org/lxr/source/libunicode/.
However, alignment remains broken in many places; replying to a mail does not cause the character set to be converted as appropriate; and the editor, pico, cannot deal with multibyte characters.
kmail (as of KDE 1.0) does not support UTF-8 mails at all.
Netscape Communicator's Messenger can send and display mails in UTF-8 encoding, but it needs a little bit of manual user intervention.
To send an UTF-8 encoded mail: After opening the "Compose" window, but before starting to compose the message, select from the menu "View -> Character Set -> Unicode (UTF-8)". Then compose the message and send it.
When you receive an UTF-8 encoded mail, Netscape unfortunately does not display it in UTF-8 right away, and does not even give a visual clue that the mail was encoded in UTF-8. You have to manually select from the menu "View -> Character Set -> Unicode (UTF-8)".
For displaying UTF-8 mails, Netscape uses different fonts. You can adjust your font settings in the "Edit -> Preferences -> Fonts" dialog; choose the "Unicode" font category.
mutt-1.0, as available from http://www.mutt.org/, contains only rudimentary UTF-8 support. For full UTF-8 support, there are patches by Edmund Grimley Evans at http://www.rano.demon.co.uk/mutt.html.
exmh 2.1.2 with Tk 8.4a1 can recognize and correctly display UTF-8 mails (without CJK characters) if you add the following lines to your $HOME/.Xdefaults file:

!
! Exmh
!
exmh.mimeUCharsets: utf-8
exmh.mime_utf-8_registry: iso10646
exmh.mime_utf-8_encoding: 1
exmh.mime_utf-8_plain_families: fixed
exmh.mime_utf-8_fixed_families: fixed
exmh.mime_utf-8_proportional_families: fixed
exmh.mime_utf-8_title_families: fixed
groff 1.16, the GNU implementation of the traditional Unix text processing system troff/nroff, can output UTF-8 formatted text. Simply use `groff -Tutf8' instead of `groff -Tlatin1' or `groff -Tascii'.
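For example, to format a manual page for an UTF-8 xterm (ls.1 stands for any troff source file):

$ groff -Tutf8 -man ls.1 | less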
The teTeX 0.9 (and newer) distribution contains an Unicode adaptation of TeX, called Omega (http://www.gutenberg.eu.org/omega/, ftp://ftp.ens.fr/pub/tex/yannis/omega). Together with the unicode.tex file contained in utf8-tex-0.1.tar.gz it enables you to use UTF-8 encoded sources as input for TeX. About a thousand Unicode characters are currently supported.
All that changes is that you run `omega' (instead of `tex') or `lambda' (instead of `latex'), and insert the following lines at the head of your source input.
\ocp\TexUTF=inutf8
\InputTranslation currentfile \TexUTF
\input unicode
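A typical run then looks like this (a sketch, assuming a LaTeX document mydoc.tex that starts with the three lines above):

$ lambda mydoc.tex            # instead of: latex mydoc.tex
$ dvips mydoc.dvi -o mydoc.ps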
Other possibly related links: http://www.dante.de/projekte/nts/NTS-FAQ.html, ftp://ftp.dante.de/pub/tex/language/chinese/CJK/.
PostgreSQL 6.4 or newer can be built with the configuration option --with-mb=UNICODE.
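A sketch of the corresponding build, run from the directory of the PostgreSQL source tree that contains the configure script:

$ ./configure --with-mb=UNICODE
$ make
$ make install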
With http://www.flash.net/~marknu/less/less-358.tar.gz you can browse UTF-8 encoded text files in an UTF-8 xterm or console. Make sure that the environment variable LESSCHARSET is not set (or is set to utf-8). If you also have a LESSKEY environment variable set, also make sure that the file it points to does not define LESSCHARSET. If necessary, regenerate this file using the `lesskey' command, or unset the LESSKEY environment variable.
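For example (a sketch; utf-8-demo.txt is the demo file mentioned earlier):

$ unset LESSCHARSET LESSKEY    # or: LESSCHARSET=utf-8; export LESSCHARSET
$ less utf-8-demo.txt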
lv-4.21 by Tomio Narita http://www.mt.cs.keio.ac.jp/person/narita/lv/ is a file viewer with builtin character set converters. To view UTF-8 files in an UTF-8 console, use "lv -Au8". But it can also be used to view files in other CJK encodings in an UTF-8 console.
There is a small glitch: lv turns off xterm's cursor and doesn't turn it on again.
Get the GNU textutils-2.0 and apply the patch textutils-2.0.diff. Then run configure, and add "#define HAVE_MBRTOWC 1", "#define HAVE_FGETWC 1", "#define HAVE_FPUTWC 1" to config.h. In src/Makefile, modify CFLAGS and LDFLAGS so that they include the directories where libutf8 is installed. Then rebuild.
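A sketch of the whole procedure (the archive name, patch level and the libutf8 install prefix /usr/local are assumptions):

$ tar xzf textutils-2.0.tar.gz
$ cd textutils-2.0
$ patch -p1 < ../textutils-2.0.diff    # or -p0, depending on the diff
$ ./configure
# add the three #define lines mentioned above to config.h, and in
# src/Makefile add e.g. -I/usr/local/include to CFLAGS and
# -L/usr/local/lib to LDFLAGS
$ make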
Get the util-linux-2.9y package and configure it. Then define ENABLE_WIDECHAR in defines.h, and change the "#if 0" to "#if 1" in lib/widechar.h. In text-utils/Makefile, modify CFLAGS and LDFLAGS so that they include the directories where libutf8 is installed. Then rebuild.
figlet 2.2 has an option for UTF-8 input: "figlet -C utf8"
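For example, in an UTF-8 xterm (assuming the default figlet font contains the accented characters used):

$ echo 'Grüße' | figlet -C utf8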
The Li18nux list of commands and utilities that ought to be made interoperable with UTF-8 is as follows. Useful information needs to get added here; I just didn't get around to it yet :-)
As of glibc-2.2, regular expressions work for 8-bit characters only. In an UTF-8 locale, regular expressions that contain non-ASCII characters, or that expect to match a single multibyte character with ".", will not work. This affects all commands and utilities listed below.
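A quick way to observe the limitation (a sketch; any UTF-8 locale will do):

$ LANG=de_DE.UTF-8; export LANG
$ printf 'ä\n' | grep '^.$'    # "." ought to match the single character ä,
                               # but with glibc-2.2 regex it does not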
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of at-3.1.8: The two uses of isalnum in at.c are invalid and should be replaced with a use of quotearg.c or an exclude list of the (fixed) list of shell metacharacters. The two uses of %8s in at.c and atd.c are invalid and should become arbitrary length.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0u: OK.
As of fileutils-4.0u: OK.
As of fileutils-4.0u: OK.
As of sh-utils-2.0i: OK.
As of textutils-2.0e: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0u: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
As of fileutils-4.0u: The conv=lcase, conv=ucase options don't work correctly.
No info available yet.
As of fileutils-4.0u: OK.
As of diffutils-2.7 (1994): diff is not locale aware; the --side-by-side mode therefore doesn't compute column width correctly, not even in ISO-8859-1 locales.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of fileutils-4.0u: OK.
As of sh-utils-2.0i: OK.
As of sh-utils-2.0i: OK.
No info available yet.
As of sh-utils-2.0i: The operators "match", "substr", "index", "length" don't work correctly.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
As of findutils-4.1.5: The "-ok" option is not internationalized; a patch has been submitted to the maintainer. The "-iregex" does not work correctly; this needs a fix in function find/parser.c:insert_regex.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
gzip-1.3 is UTF-8 capable, but it uses only English messages in the ASCII charset. Proper internationalization would require: use gettext; call setlocale; in function check_ofname (file gzip.c), use the function rpmatch from GNU text/sh/fileutils instead of asking for "y" or "n". The use of strlen in gzip.c:852 is wrong; it needs to use the function mbswidth.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No complete info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0u: OK.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0y: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0u: OK.
As of fileutils-4.0u: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0u: OK.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of fileutils-4.0u: OK.
As of fileutils-4.0u: OK.
No info available yet.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
As of sh-utils-2.0i: The string "<undef>" should not be translated; this needs a fix in function stty.c:visible.
No info available yet.
As of textutils-2.0e: OK.
No info available yet.
No info available yet.
No info available yet.
As of tar-1.13.17: OK, if user and group names are always ASCII.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of fileutils-4.0u: OK.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
No info available yet.
As of textutils-2.0e: wc cannot count characters; a patch has been submitted to the maintainer.
As of sh-utils-2.0i: OK.
No info available yet.
No info available yet.
As of findutils-4.1.5: The program uses strstr; a patch has been submitted to the maintainer.
No info available yet.
No info available yet.
Owen Taylor is currently developing a library for rendering multilingual text, called pango. http://www.labs.redhat.com/~otaylor/pango/, http://www.pango.org/.