From: moonhkt on 4 Oct 2009 22:49

Hi All,

I have a UTF-8 file containing Traditional Chinese, Simplified Chinese,
Japanese, and Korean characters. Since my database cannot support UTF-8
characters, we need to convert the file to the ISO8859-1 character set
so that we can import and export the data. This process is like garbage
in, garbage out.

How can I use iconv to force a UTF-8 to ISO8859-1 conversion? Or is
there another tool to convert to ISO8859-1?

In Windows, a UTF-8 editor can save into the ISO8859-1 character set,
but after conversion the characters look like garbage.

e.g. In AIX:
iconv -f UTF-8 -t ISO8859-1 utf_file_01.text > abc.txt
All CJK characters are missing from the abc.txt file.
From: Ben Bacarisse on 4 Oct 2009 23:08

moonhkt <moonhkt(a)gmail.com> writes:

> I have a UTF-8 file containing Traditional Chinese, Simplified
> Chinese, Japanese, and Korean characters. Since my database cannot
> support UTF-8 characters, we need to convert the file to the
> ISO8859-1 character set so that we can import and export the data.
> This process is like garbage in, garbage out.
>
> How can I use iconv to force a UTF-8 to ISO8859-1 conversion? Or is
> there another tool to convert to ISO8859-1?
>
> In Windows, a UTF-8 editor can save into the ISO8859-1 character set,
> but after conversion the characters look like garbage.

The editor must use the same character encoding as the file. Editing an
ISO 8859-1 encoded file as if it were UTF-8 will not work.

> e.g. In AIX:
> iconv -f UTF-8 -t ISO8859-1 utf_file_01.text > abc.txt
> All CJK characters are missing from the abc.txt file.

This may be the best you can hope for. ISO 8859-1 simply can't
represent the Chinese, Japanese and Korean characters, so iconv just
omits them.

-- Ben.
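[Editor's note: a minimal sketch of the iconv invocations being
discussed, assuming a shell on AIX or Linux. Behaviour for unmappable
characters varies by implementation (some drop them, as the original
poster saw; glibc iconv stops with an error), and the //TRANSLIT suffix
is specific to glibc/GNU libiconv and may not exist in the AIX iconv.

# Plain conversion: CJK characters have no ISO8859-1 equivalent, so
# depending on the implementation they are dropped or cause an error.
iconv -f UTF-8 -t ISO8859-1 utf_file_01.text > abc.txt

# On many implementations, -c explicitly discards characters that
# cannot be converted, so the rest of the file still comes through.
iconv -c -f UTF-8 -t ISO8859-1 utf_file_01.text > abc.txt

# glibc/GNU libiconv only: //TRANSLIT approximates unmappable
# characters (typically as '?') instead of dropping them.
iconv -f UTF-8 -t ISO8859-1//TRANSLIT utf_file_01.text > abc.txt
]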