mirror of
https://github.com/python/cpython.git
synced 2025-07-24 11:44:31 +00:00

which unfortunately means the errors from the bytes type change somewhat:
bytes([300]) still raises a ValueError, but bytes([10**100]) now raises a
TypeError (either that, or bytes(1.0) also raises a ValueError --
PyNumber_AsSsize_t() can only raise one type of exception.)

Merged revisions 51188-51433 via svnmerge from
svn+ssh://pythondev@svn.python.org/python/trunk

........
r51189 | kurt.kaiser | 2006-08-10 19:11:09 +0200 (Thu, 10 Aug 2006) | 4 lines

Retrieval of previous shell command was not always preserving indentation
(since 1.2a1). Patch 1528468 Tal Einat.
........
r51190 | guido.van.rossum | 2006-08-10 19:41:07 +0200 (Thu, 10 Aug 2006) | 3 lines

Chris McDonough's patch to defend against certain DoS attacks on
FieldStorage. SF bug #1112549.
........
r51191 | guido.van.rossum | 2006-08-10 19:42:50 +0200 (Thu, 10 Aug 2006) | 2 lines

News item for SF bug 1112549.
........
r51192 | guido.van.rossum | 2006-08-10 20:09:25 +0200 (Thu, 10 Aug 2006) | 2 lines

Fix title -- it's rc1, not beta3.
........
r51194 | martin.v.loewis | 2006-08-10 21:04:00 +0200 (Thu, 10 Aug 2006) | 3 lines

Update dangling references to the 3.2 database to mention that this
is UCD 4.1 now.
........
r51195 | tim.peters | 2006-08-11 00:45:34 +0200 (Fri, 11 Aug 2006) | 6 lines

Followup to bug #1069160. PyThreadState_SetAsyncExc(): internal
correctness changes wrt refcount safety and deadlock avoidance. Also
added a basic test case (relying on ctypes) and repaired the docs.
........
r51196 | tim.peters | 2006-08-11 00:48:45 +0200 (Fri, 11 Aug 2006) | 2 lines

Whitespace normalization.
........
r51197 | tim.peters | 2006-08-11 01:22:13 +0200 (Fri, 11 Aug 2006) | 5 lines

Whitespace normalization broke test_cgi, because a line of quoted test
data relied on preserving a single trailing blank. Changed the string
from raw to regular, and forced in the trailing blank via an explicit
\x20 escape.
........
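The bytes() range checking described above can be probed directly. The exception type for the huge value was exactly the open question in this note, so the sketch below only records which exception comes out rather than asserting one (modern CPython may not match the py3k-branch behavior described here):

```python
def bytes_error(value):
    """Return the name of the exception bytes([value]) raises, or None."""
    try:
        bytes([value])
    except (ValueError, TypeError) as exc:
        return type(exc).__name__
    return None

print(bytes_error(300))       # out of range(0, 256)
print(bytes_error(10**100))   # does not fit a C Py_ssize_t either
print(bytes_error(65))        # in range: no error
```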
r51198 | tim.peters | 2006-08-11 02:49:01 +0200 (Fri, 11 Aug 2006) | 10 lines

test_PyThreadState_SetAsyncExc(): This is failing on some 64-bit boxes.
I have no idea what the ctypes docs mean by "integers", and
blind-guessing here that it intended to mean the signed C "int" type,
in which case perhaps I can repair this by feeding the thread id
argument to type ctypes.c_long().

Also made the worker thread daemonic, so it doesn't hang Python
shutdown if the test continues to fail.
........
r51199 | tim.peters | 2006-08-11 05:49:10 +0200 (Fri, 11 Aug 2006) | 6 lines

force_test_exit(): This has been completely ineffective at stopping
test_signal from hanging forever on the Tru64 buildbot. That could be
because there's no such thing as signal.SIGALARM. Changed to the
idiotic (but standard) signal.SIGALRM instead, and added some more
debug output.
........
r51202 | neal.norwitz | 2006-08-11 08:09:41 +0200 (Fri, 11 Aug 2006) | 6 lines

Fix the failures on cygwin (2006-08-10 fixed the actual locking issue).
The first hunk changes the colon to an ! like other Windows variants.
We need to always wait on the child so the lock gets released and no
other tests fail. This is the try/finally in the second hunk.
........
r51205 | georg.brandl | 2006-08-11 09:15:38 +0200 (Fri, 11 Aug 2006) | 3 lines

Add Chris McDonough (latest cgi.py patch)
........
r51206 | georg.brandl | 2006-08-11 09:26:10 +0200 (Fri, 11 Aug 2006) | 3 lines

logging's atexit hook now runs even if the rest of the module has
already been cleaned up.
........
r51212 | thomas.wouters | 2006-08-11 17:02:39 +0200 (Fri, 11 Aug 2006) | 4 lines

Add ignore of *.pyc and *.pyo to Lib/xml/etree/.
........
r51215 | thomas.heller | 2006-08-11 21:55:35 +0200 (Fri, 11 Aug 2006) | 7 lines

When a ctypes C callback function is called, zero out the result
storage before converting the result to C data. See the comment in
the code for details.

Provide a better context for errors when the conversion of a callback
function's result fails.
........
r51218 | neal.norwitz | 2006-08-12 03:43:40 +0200 (Sat, 12 Aug 2006) | 6 lines

Klocwork made another run and found a bunch more problems. This is the
first batch of fixes that should be easy to verify based on context.

This fixes problem numbers: 220 (ast), 323-324 (symtable), 321-322
(structseq), 215 (array), 210 (hotshot), 182 (codecs), 209 (etree).
........
r51219 | neal.norwitz | 2006-08-12 03:45:47 +0200 (Sat, 12 Aug 2006) | 9 lines

Even though _Py_Mangle() isn't truly public, anyone can call it and
there was no verification that privateobj was a PyString. If it wasn't
a string, this could have allowed a NULL pointer to creep in below and
crash.

I wonder if this should be PyString_CheckExact? Must identifiers be
strings or can they be subclasses?

Klocwork #275
........
r51220 | neal.norwitz | 2006-08-12 03:46:42 +0200 (Sat, 12 Aug 2006) | 5 lines

It's highly unlikely, though possible for PyEval_Get*() to return
NULLs. So be safe and do an XINCREF.

Klocwork # 221-222.
........
r51221 | neal.norwitz | 2006-08-12 03:47:59 +0200 (Sat, 12 Aug 2006) | 7 lines

This code is actually not used unless WITHOUT_COMPLEX is defined.
However, there was no error checking that PyFloat_FromDouble returned
a valid pointer. I believe this change is correct as it seemed to
follow other code in the area.

Klocwork # 292.
........
r51222 | neal.norwitz | 2006-08-12 03:49:12 +0200 (Sat, 12 Aug 2006) | 5 lines

Handle NULL nodes while parsing. I'm not entirely sure this is
correct. There might be something else that needs to be done to setup
the error.

Klocwork #295.
........
r51223 | neal.norwitz | 2006-08-12 03:50:38 +0200 (Sat, 12 Aug 2006) | 6 lines

If _stat_float_times is false, we will try to INCREF ival which could
be NULL. Return early in that case. The caller checks for
PyErr_Occurred so this should be ok.

Klocwork #297
........
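r51198 above blind-guesses that the thread id argument to PyThreadState_SetAsyncExc should travel as a signed C long, and that guess became the standard ctypes recipe. A minimal sketch of the call (the thread id here is deliberately bogus so the call matches no thread; real use would pass a live `threading.Thread.ident` instead):

```python
import ctypes

# PyThreadState_SetAsyncExc(id, exc) schedules `exc` to be raised in the
# thread whose id is `id`, returning how many thread states were modified.
# Following r51198, the id is wrapped in ctypes.c_long for 64-bit safety.
set_async_exc = ctypes.pythonapi.PyThreadState_SetAsyncExc
set_async_exc.restype = ctypes.c_int

# A deliberately bogus thread id matches no thread, so nothing is modified:
modified = set_async_exc(ctypes.c_long(-1), ctypes.py_object(KeyboardInterrupt))
print(modified)
```

Note that `ctypes.pythonapi` is a `PyDLL`, so the call is made with the GIL held, which this API requires.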
r51224 | neal.norwitz | 2006-08-12 03:51:12 +0200 (Sat, 12 Aug 2006) | 3 lines

Move the assert which checks for a NULL pointer first.

Klocwork #274.
........
r51225 | neal.norwitz | 2006-08-12 03:53:28 +0200 (Sat, 12 Aug 2006) | 5 lines

Try to handle a malloc failure. I'm not entirely sure this is correct.
There might be something else we need to do to handle the exception.

Klocwork # 212-213
........
r51226 | neal.norwitz | 2006-08-12 03:57:47 +0200 (Sat, 12 Aug 2006) | 6 lines

I'm not sure why this code allocates this string for the error message.
I think it would be better to always use snprintf and have the format
limit the size of the name appropriately (like %.200s).

Klocwork #340
........
r51227 | neal.norwitz | 2006-08-12 04:06:34 +0200 (Sat, 12 Aug 2006) | 3 lines

Check returned pointer is valid.

Klocwork #233
........
r51228 | neal.norwitz | 2006-08-12 04:12:30 +0200 (Sat, 12 Aug 2006) | 1 line

Whoops, how did that get in there. :-) Revert all the parts of 51227
that were not supposed to go in. Only Modules/_ctypes/cfields.c was
supposed to be changed.
........
r51229 | neal.norwitz | 2006-08-12 04:33:36 +0200 (Sat, 12 Aug 2006) | 4 lines

Don't deref v if it's NULL.

Klocwork #214
........
r51230 | neal.norwitz | 2006-08-12 05:16:54 +0200 (Sat, 12 Aug 2006) | 5 lines

Check return of PyMem_MALLOC (garbage) is non-NULL.
Check seq in both portions of if/else.

Klocwork #289-290.
........
r51231 | neal.norwitz | 2006-08-12 05:17:41 +0200 (Sat, 12 Aug 2006) | 4 lines

PyModule_GetDict() can fail, produce fatal errors if this happens on
startup.

Klocwork #298-299.
........
r51232 | neal.norwitz | 2006-08-12 05:18:50 +0200 (Sat, 12 Aug 2006) | 5 lines

Verify verdat which is returned from malloc is not NULL.
Ensure we don't pass NULL to free.

Klocwork #306 (at least the first part, checking malloc)
........
r51233 | tim.peters | 2006-08-12 06:42:47 +0200 (Sat, 12 Aug 2006) | 35 lines

test_signal: Signal handling on the Tru64 buildbot appears to be
utterly insane. Plug some theoretical insecurities in the test script:

- Verify that the SIGALRM handler was actually installed.

- Don't call alarm() before the handler is installed.

- Move everything that can fail inside the try/finally, so the test
  cleans up after itself more often.

- Try sending all the expected signals in force_test_exit(), not just
  SIGALRM. Since that was fixed to actually send SIGALRM (instead of
  invisibly dying with an AttributeError), we've seen that sending
  SIGALRM alone does not stop this from hanging.

- Move the "kill the child" business into the finally clause, so the
  child doesn't survive test failure to send SIGALRM to other tests
  later (there are also baffling SIGALRM-related failures in
  test_socket).

- Cancel the alarm in the finally clause -- if the test dies early, we
  again don't want SIGALRM showing up to confuse a later test.

Alas, this still relies on timing luck wrt the spawned script that
sends the test signals, but it's hard to see how waiting for seconds
can so often be so unlucky.

test_threadedsignals: curiously, this test never fails on Tru64, but
doesn't normally signal SIGALRM. Anyway, fixed an obvious (but
probably inconsequential) logic error.
........
r51234 | tim.peters | 2006-08-12 07:17:41 +0200 (Sat, 12 Aug 2006) | 8 lines

Ah, fudge. One of the prints here actually "shouldn't be" protected
by "if verbose:", which caused the test to fail on all non-Windows
boxes.

Note that I deliberately didn't convert this to unittest yet, because
I expect it would be even harder to debug this on Tru64 after
conversion.
........
r51235 | georg.brandl | 2006-08-12 10:32:02 +0200 (Sat, 12 Aug 2006) | 3 lines

Repair logging test spew caused by rev. 51206.
........
r51236 | neal.norwitz | 2006-08-12 19:03:09 +0200 (Sat, 12 Aug 2006) | 8 lines

Patch #1538606, Patch to fix __index__() clipping.

I modified this patch some by fixing style, some error checking, and
adding XXX comments. This patch requires review and some changes are
to be expected. I'm checking in now to get the greatest possible
review and establish a baseline for moving forward. I don't want this
to hold up release if possible.
........
r51238 | neal.norwitz | 2006-08-12 20:44:06 +0200 (Sat, 12 Aug 2006) | 10 lines

Fix a couple of bugs exposed by the new __index__ code. The 64-bit
buildbots were failing due to inappropriate clipping of numbers larger
than 2**31 with new-style classes. (typeobject.c) In reviewing the
code for classic classes, there were 2 problems. Any negative value
could be returned. Always return -1 if there was an error. Also make
the checks similar with the new-style classes. I believe this is
correct for 32 and 64 bit boxes, including Windows64.

Add a test of classic classes too.
........
r51240 | neal.norwitz | 2006-08-13 02:20:49 +0200 (Sun, 13 Aug 2006) | 1 line

SF bug #1539336, distutils example code missing
........
r51245 | neal.norwitz | 2006-08-13 20:10:10 +0200 (Sun, 13 Aug 2006) | 6 lines

Move/copy assert for tstate != NULL before first use.
Verify that PyEval_Get{Globals,Locals} returned valid pointers.

Klocwork 231-232
........
r51246 | neal.norwitz | 2006-08-13 20:10:28 +0200 (Sun, 13 Aug 2006) | 5 lines

Handle a whole lot of failures from PyString_FromInternedString().

Should fix most of Klocwork 234-272.
........
r51247 | neal.norwitz | 2006-08-13 20:10:47 +0200 (Sun, 13 Aug 2006) | 8 lines

cpathname could be NULL if it was longer than MAXPATHLEN. Don't try
to write the .pyc to NULL.

Check results of PyList_GetItem() and PyModule_GetDict() are not NULL.

Klocwork 282, 283, 285
........
r51248 | neal.norwitz | 2006-08-13 20:11:08 +0200 (Sun, 13 Aug 2006) | 6 lines

Fix segfault when doing string formatting on subclasses of long if
__oct__, __hex__ don't return a string.

Klocwork 308
........
r51250 | neal.norwitz | 2006-08-13 20:11:27 +0200 (Sun, 13 Aug 2006) | 5 lines

Check return result of PyModule_GetDict().

Fix a bunch of refleaks in the init of the module. This would only be
found when running python -v.
........
r51251 | neal.norwitz | 2006-08-13 20:11:43 +0200 (Sun, 13 Aug 2006) | 5 lines

Handle malloc and fopen failures more gracefully.

Klocwork 180-181
........
r51252 | neal.norwitz | 2006-08-13 20:12:03 +0200 (Sun, 13 Aug 2006) | 7 lines

It's very unlikely, though possible that source is not a string.
Verify that PyString_AsString() returns a valid pointer. (The problem
can arise when zlib.decompress doesn't return a string.)

Klocwork 346
........
r51253 | neal.norwitz | 2006-08-13 20:12:26 +0200 (Sun, 13 Aug 2006) | 5 lines

Handle failures from lookup.

Klocwork 341-342
........
r51254 | neal.norwitz | 2006-08-13 20:12:45 +0200 (Sun, 13 Aug 2006) | 6 lines

Handle failure from PyModule_GetDict() (Klocwork 208).

Fix a bunch of refleaks in the init of the module. This would only be
found when running python -v.
........
r51255 | neal.norwitz | 2006-08-13 20:13:02 +0200 (Sun, 13 Aug 2006) | 4 lines

Really address the issue of where to place the assert for leftblock.
(Followup of Klocwork 274)
........
r51256 | neal.norwitz | 2006-08-13 20:13:36 +0200 (Sun, 13 Aug 2006) | 4 lines

Handle malloc failure.

Klocwork 281
........
r51258 | neal.norwitz | 2006-08-13 20:40:39 +0200 (Sun, 13 Aug 2006) | 4 lines

Handle alloca failures.

Klocwork 225-228
........
r51259 | neal.norwitz | 2006-08-13 20:41:15 +0200 (Sun, 13 Aug 2006) | 1 line

Get rid of compiler warning
........
r51261 | neal.norwitz | 2006-08-14 02:51:15 +0200 (Mon, 14 Aug 2006) | 1 line

Ignore pgen.exe and kill_python.exe for cygwin
........
r51262 | neal.norwitz | 2006-08-14 02:59:03 +0200 (Mon, 14 Aug 2006) | 4 lines

Can't return NULL from a void function. If there is a memory error,
about the best we can do is call PyErr_WriteUnraisable and go on. We
won't be able to do the call below either, so verify delstr is valid.
........
r51263 | neal.norwitz | 2006-08-14 03:49:54 +0200 (Mon, 14 Aug 2006) | 1 line

Update purify doc some.
........
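The `__index__()` clipping work in r51236 and r51238 above concerns the protocol that lets arbitrary objects act as sequence indexes; the fix ensured large values propagate exactly instead of being silently clipped to 32 bits. The protocol itself is easy to demonstrate (the `Offset` class is a hypothetical example, not from the patches):

```python
import operator

class Offset:
    """Hypothetical index-like object implementing the __index__ protocol."""
    def __init__(self, value):
        self.value = value

    def __index__(self):
        # Must return the exact integer -- silently clipping large values
        # is precisely the bug class the patches above chase.
        return self.value

letters = ["a", "b", "c", "d"]
assert letters[Offset(2)] == "c"       # sequence indexing uses __index__
assert operator.index(Offset(7)) == 7  # explicit lossless conversion
```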
r51264 | thomas.heller | 2006-08-14 09:13:05 +0200 (Mon, 14 Aug 2006) | 2 lines

Remove unused, buggy test function. Fixes klockwork issue #207.
........
r51265 | thomas.heller | 2006-08-14 09:14:09 +0200 (Mon, 14 Aug 2006) | 2 lines

Check for NULL return value from new_CArgObject(). Fixes klockwork
issues #183, #184, #185.
........
r51266 | thomas.heller | 2006-08-14 09:50:14 +0200 (Mon, 14 Aug 2006) | 2 lines

Check for NULL return value of GenericCData_new(). Fixes klockwork
issues #188, #189.
........
r51274 | thomas.heller | 2006-08-14 12:02:24 +0200 (Mon, 14 Aug 2006) | 2 lines

Revert the change that tries to zero out a closure's result storage
area because the size is unknown in source/callproc.c.
........
r51276 | marc-andre.lemburg | 2006-08-14 12:55:19 +0200 (Mon, 14 Aug 2006) | 11 lines

Slightly revised version of patch #1538956:

Replace UnicodeDecodeErrors raised during == and != compares of
Unicode and other objects with a new UnicodeWarning.

All other comparisons continue to raise exceptions. Exceptions other
than UnicodeDecodeErrors are also left untouched.
........
r51277 | thomas.heller | 2006-08-14 13:17:48 +0200 (Mon, 14 Aug 2006) | 13 lines

Apply the patch #1532975 plus ideas from the patch #1533481.

ctypes instances no longer have the internal and undocumented
'_as_parameter_' attribute which was used to adapt them to foreign
function calls; this mechanism is replaced by a function pointer in
the type's stgdict.

In the 'from_param' class methods, try the _as_parameter_ attribute if
other conversions are not possible. This makes the documented
_as_parameter_ mechanism work as intended.

Change the ctypes version number to 1.0.1.
........
r51278 | marc-andre.lemburg | 2006-08-14 13:44:34 +0200 (Mon, 14 Aug 2006) | 3 lines

Readd NEWS items that were accidentally removed by r51276.
........
r51279 | georg.brandl | 2006-08-14 14:36:06 +0200 (Mon, 14 Aug 2006) | 3 lines

Improve markup in PyUnicode_RichCompare.
........
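r51276 above softened Python 2's mixed unicode/str equality compares from UnicodeDecodeError to a UnicodeWarning. For context, Python 3 later resolved the same ambiguity differently: bytes and str simply never compare equal, and `==`/`!=` neither raise nor warn by default, while ordering comparisons still fail loudly:

```python
# Python 3 behavior: equality across bytes/str is well-defined (unequal)...
assert (b"abc" == "abc") is False
assert (b"abc" != "abc") is True

# ...but ordering comparisons between the two types raise TypeError:
try:
    b"abc" < "abc"
    ordered = True
except TypeError:
    ordered = False
assert ordered is False
```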
r51280 | marc-andre.lemburg | 2006-08-14 14:57:27 +0200 (Mon, 14 Aug 2006) | 3 lines

Correct an accidentally removed previous patch.
........
r51281 | thomas.heller | 2006-08-14 18:17:41 +0200 (Mon, 14 Aug 2006) | 3 lines

Patch #1536908: Add support for AMD64 / OpenBSD. Remove the
-no-stack-protector compiler flag for OpenBSD as it has been reported
to be unneeded.
........
r51282 | thomas.heller | 2006-08-14 18:20:04 +0200 (Mon, 14 Aug 2006) | 1 line

News item for rev 51281.
........
r51283 | georg.brandl | 2006-08-14 22:25:39 +0200 (Mon, 14 Aug 2006) | 3 lines

Fix refleak introduced in rev. 51248.
........
r51284 | georg.brandl | 2006-08-14 23:34:08 +0200 (Mon, 14 Aug 2006) | 5 lines

Make tabnanny recognize IndentationErrors raised by tokenize.
Add a test to test_inspect to make sure indented source is recognized
correctly. (fixes #1224621)
........
r51285 | georg.brandl | 2006-08-14 23:42:55 +0200 (Mon, 14 Aug 2006) | 3 lines

Patch #1535500: fix segfault in BZ2File.writelines and make sure it
raises the correct exceptions.
........
r51287 | georg.brandl | 2006-08-14 23:45:32 +0200 (Mon, 14 Aug 2006) | 3 lines

Add an additional test: BZ2File write methods should raise IOError
when file is read-only.
........
r51289 | georg.brandl | 2006-08-14 23:55:28 +0200 (Mon, 14 Aug 2006) | 3 lines

Patch #1536071: trace.py should now find the full module name of a
file correctly even on Windows.
........
r51290 | georg.brandl | 2006-08-15 00:01:24 +0200 (Tue, 15 Aug 2006) | 3 lines

Cookie.py shouldn't "bogusly" use string._idmap.
........
r51291 | georg.brandl | 2006-08-15 00:10:24 +0200 (Tue, 15 Aug 2006) | 3 lines

Patch #1511317: don't crash on invalid hostname info
........
r51292 | tim.peters | 2006-08-15 02:25:04 +0200 (Tue, 15 Aug 2006) | 2 lines

Whitespace normalization.
........
r51293 | neal.norwitz | 2006-08-15 06:14:57 +0200 (Tue, 15 Aug 2006) | 3 lines

Georg fixed one of my bugs, so I'll repay him with 2 NEWS entries.
Now we're even. :-)
........
r51295 | neal.norwitz | 2006-08-15 06:58:28 +0200 (Tue, 15 Aug 2006) | 8 lines

Fix the test for SocketServer so it should pass on cygwin and not fail
sporadically on other platforms. This is really a band-aid that
doesn't fix the underlying issue in SocketServer. It's not clear if
it's worth it to fix SocketServer, however, I opened a bug to track
it: http://python.org/sf/1540386
........
r51296 | neal.norwitz | 2006-08-15 06:59:30 +0200 (Tue, 15 Aug 2006) | 3 lines

Update the docstring to use a version a little newer than 1999. This
was taken from a Debian patch. Should we update the version for each
release?
........
r51298 | neal.norwitz | 2006-08-15 08:29:03 +0200 (Tue, 15 Aug 2006) | 2 lines

Subclasses of int/long are allowed to define an __index__.
........
r51300 | thomas.heller | 2006-08-15 15:07:21 +0200 (Tue, 15 Aug 2006) | 1 line

Check for NULL return value from new_CArgObject calls.
........
r51303 | kurt.kaiser | 2006-08-16 05:15:26 +0200 (Wed, 16 Aug 2006) | 2 lines

The 'with' statement is now a Code Context block opener
........
r51304 | anthony.baxter | 2006-08-16 05:42:26 +0200 (Wed, 16 Aug 2006) | 1 line

preparing for 2.5c1
........
r51305 | anthony.baxter | 2006-08-16 05:58:37 +0200 (Wed, 16 Aug 2006) | 1 line

preparing for 2.5c1 - no, really this time
........
r51306 | kurt.kaiser | 2006-08-16 07:01:42 +0200 (Wed, 16 Aug 2006) | 9 lines

Patch #1540892: site.py Quitter() class attempts to close sys.stdin
before raising SystemExit, allowing IDLE to honor quit() and exit().

M Lib/site.py
M Lib/idlelib/PyShell.py
M Lib/idlelib/CREDITS.txt
M Lib/idlelib/NEWS.txt
M Misc/NEWS
........
r51307 | ka-ping.yee | 2006-08-16 09:02:50 +0200 (Wed, 16 Aug 2006) | 6 lines

Update code and tests to support the 'bytes_le' attribute (for
little-endian byte order on Windows), and to work around clocks with
low resolution yielding duplicate UUIDs.

Anthony Baxter has approved this change.
........
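The `bytes_le` attribute added in r51307 above exposes the little-endian layout Windows uses for GUIDs: the first three UUID fields (time_low, time_mid, time_hi_version) are byte-swapped relative to the big-endian `bytes` form, and the final eight bytes are unchanged:

```python
import uuid

u = uuid.UUID("12345678-1234-5678-1234-567812345678")

# bytes is big-endian (network order):
assert u.bytes[:4] == b"\x12\x34\x56\x78"

# bytes_le reverses each of the first three fields...
assert u.bytes_le[:4] == u.bytes[:4][::-1]    # time_low (4 bytes)
assert u.bytes_le[4:6] == u.bytes[4:6][::-1]  # time_mid (2 bytes)
assert u.bytes_le[6:8] == u.bytes[6:8][::-1]  # time_hi_version (2 bytes)

# ...and leaves the final eight bytes identical:
assert u.bytes_le[8:] == u.bytes[8:]
```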
r51308 | kurt.kaiser | 2006-08-16 09:04:17 +0200 (Wed, 16 Aug 2006) | 2 lines

Get quit() and exit() to work cleanly when not using subprocess.
........
r51309 | marc-andre.lemburg | 2006-08-16 10:13:26 +0200 (Wed, 16 Aug 2006) | 2 lines

Revert to having static version numbers again.
........
r51310 | martin.v.loewis | 2006-08-16 14:55:10 +0200 (Wed, 16 Aug 2006) | 2 lines

Build _hashlib on Windows. Build OpenSSL with masm assembler code.
Fixes #1535502.
........
r51311 | thomas.heller | 2006-08-16 15:03:11 +0200 (Wed, 16 Aug 2006) | 6 lines

Add commented assert statements to check that the result of
PyObject_stgdict() and PyType_stgdict() calls are non-NULL before
dereferencing the result. Hopefully this fixes what klocwork is
complaining about.

Fix a few other nits as well.
........
r51312 | anthony.baxter | 2006-08-16 15:08:25 +0200 (Wed, 16 Aug 2006) | 1 line

news entry for 51307
........
r51313 | andrew.kuchling | 2006-08-16 15:22:20 +0200 (Wed, 16 Aug 2006) | 1 line

Add UnicodeWarning
........
r51314 | andrew.kuchling | 2006-08-16 15:41:52 +0200 (Wed, 16 Aug 2006) | 1 line

Bump document version to 1.0; remove pystone paragraph
........
r51315 | andrew.kuchling | 2006-08-16 15:51:32 +0200 (Wed, 16 Aug 2006) | 1 line

Link to docs; remove an XXX comment
........
r51316 | martin.v.loewis | 2006-08-16 15:58:51 +0200 (Wed, 16 Aug 2006) | 1 line

Make cl build step compile-only (/c). Remove libs from source list.
........
r51317 | thomas.heller | 2006-08-16 16:07:44 +0200 (Wed, 16 Aug 2006) | 5 lines

The __repr__ method of a NULL py_object does no longer raise an
exception. Remove a stray '?' character from the exception text when
the value is retrieved of such an object.

Includes tests.
........
r51318 | andrew.kuchling | 2006-08-16 16:18:23 +0200 (Wed, 16 Aug 2006) | 1 line

Update bug/patch counts
........
r51319 | andrew.kuchling | 2006-08-16 16:21:14 +0200 (Wed, 16 Aug 2006) | 1 line

Wording/typo fixes
........
r51320 | thomas.heller | 2006-08-16 17:10:12 +0200 (Wed, 16 Aug 2006) | 9 lines

Remove the special casing of Py_None when converting the return value
of the Python part of a callback function to C. If it cannot be
converted, call PyErr_WriteUnraisable with the exception we got.

Before, arbitrary data has been passed to the calling C code in this
case.

(I'm not really sure the NEWS entry is understandable, but I cannot
find better words)
........
r51321 | marc-andre.lemburg | 2006-08-16 18:11:01 +0200 (Wed, 16 Aug 2006) | 2 lines

Add NEWS item mentioning the reverted distutils version number patch.
........
r51322 | fredrik.lundh | 2006-08-16 18:47:07 +0200 (Wed, 16 Aug 2006) | 5 lines

SF#1534630

ignore data that arrives before the opening start tag
........
r51324 | andrew.kuchling | 2006-08-16 19:11:18 +0200 (Wed, 16 Aug 2006) | 1 line

Grammar fix
........
r51328 | thomas.heller | 2006-08-16 20:02:11 +0200 (Wed, 16 Aug 2006) | 12 lines

Tutorial: Clarify somewhat how parameters are passed to functions
(especially explain what integer means).

Correct the table - Python integers and longs can both be used.

Further clarification to the table comparing ctypes types, Python
types, and C types.

Reference: Replace integer by C ``int`` where it makes sense.
........
r51329 | kurt.kaiser | 2006-08-16 23:45:59 +0200 (Wed, 16 Aug 2006) | 8 lines

File menu hotkeys: there were three 'p' assignments. Reassign the
'Save Copy As' and 'Print' hotkeys to 'y' and 't'. Change the Shell
menu hotkey from 's' to 'l'.

M Bindings.py
M PyShell.py
M NEWS.txt
........
r51330 | neil.schemenauer | 2006-08-17 01:38:05 +0200 (Thu, 17 Aug 2006) | 3 lines

Fix a bug in the ``compiler`` package that caused invalid code to be
generated for generator expressions.
........
r51342 | martin.v.loewis | 2006-08-17 21:19:32 +0200 (Thu, 17 Aug 2006) | 3 lines

Merge 51340 and 51341 from 2.5 branch:
Leave tk build directory to restore original path.
Invoke debug mk1mf.pl after running Configure.
........
r51354 | martin.v.loewis | 2006-08-18 05:47:18 +0200 (Fri, 18 Aug 2006) | 3 lines

Bug #1541863: uuid.uuid1 failed to generate unique identifiers
on systems with low clock resolution.
........
r51355 | neal.norwitz | 2006-08-18 05:57:54 +0200 (Fri, 18 Aug 2006) | 1 line

Add template for 2.6 on HEAD
........
r51356 | neal.norwitz | 2006-08-18 06:01:38 +0200 (Fri, 18 Aug 2006) | 1 line

More post-release wibble
........
r51357 | neal.norwitz | 2006-08-18 06:58:33 +0200 (Fri, 18 Aug 2006) | 1 line

Try to get Windows bots working again
........
r51358 | neal.norwitz | 2006-08-18 07:10:00 +0200 (Fri, 18 Aug 2006) | 1 line

Try to get Windows bots working again. Take 2
........
r51359 | neal.norwitz | 2006-08-18 07:39:20 +0200 (Fri, 18 Aug 2006) | 1 line

Try to get Unix bots install working again.
........
r51360 | neal.norwitz | 2006-08-18 07:41:46 +0200 (Fri, 18 Aug 2006) | 1 line

Set version to 2.6a0, seems more consistent.
........
r51362 | neal.norwitz | 2006-08-18 08:14:52 +0200 (Fri, 18 Aug 2006) | 1 line

More version wibble
........
r51364 | georg.brandl | 2006-08-18 09:27:59 +0200 (Fri, 18 Aug 2006) | 4 lines

Bug #1541682: Fix example in the "Refcount details" API docs.
Additionally, remove a faulty example showing PySequence_SetItem
applied to a newly created list object and add notes that this isn't
a good idea.
........
r51366 | anthony.baxter | 2006-08-18 09:29:02 +0200 (Fri, 18 Aug 2006) | 3 lines

Updating IDLE's version number to match Python's (as per python-dev
discussion).
........
r51367 | anthony.baxter | 2006-08-18 09:30:07 +0200 (Fri, 18 Aug 2006) | 1 line

RPM specfile updates
........
r51368 | georg.brandl | 2006-08-18 09:35:47 +0200 (Fri, 18 Aug 2006) | 2 lines

Typo in tp_clear docs.
........
r51378 | andrew.kuchling | 2006-08-18 15:57:13 +0200 (Fri, 18 Aug 2006) | 1 line

Minor edits
........
r51379 | thomas.heller | 2006-08-18 16:38:46 +0200 (Fri, 18 Aug 2006) | 6 lines

Add asserts to check for 'impossible' NULL values, with comments. In
one place where I'm not 1000% sure about the non-NULL, raise a
RuntimeError for safety.

This should fix the klocwork issues that Neal sent me. If so, it
should be applied to the release25-maint branch also.
........
r51400 | neal.norwitz | 2006-08-19 06:22:33 +0200 (Sat, 19 Aug 2006) | 5 lines

Move initialization of interned strings to before allocating the
object so we don't leak op. (Fixes an earlier patch to this code)

Klockwork #350
........
r51401 | neal.norwitz | 2006-08-19 06:23:04 +0200 (Sat, 19 Aug 2006) | 4 lines

Move assert to after NULL check, otherwise we deref NULL in the
assert.

Klocwork #307
........
r51402 | neal.norwitz | 2006-08-19 06:25:29 +0200 (Sat, 19 Aug 2006) | 2 lines

SF #1542693: Remove semi-colon at end of PyImport_ImportModuleEx macro
........
r51403 | neal.norwitz | 2006-08-19 06:28:55 +0200 (Sat, 19 Aug 2006) | 6 lines

Move initialization to after the asserts for non-NULL values.

Klocwork 286-287.

(I'm not backporting this, but if someone wants to, feel free.)
........
r51404 | neal.norwitz | 2006-08-19 06:52:03 +0200 (Sat, 19 Aug 2006) | 6 lines

Handle PyString_FromInternedString() failing (unlikely, but possible).

Klocwork #325

(I'm not backporting this, but if someone wants to, feel free.)
........
r51416 | georg.brandl | 2006-08-20 15:15:39 +0200 (Sun, 20 Aug 2006) | 2 lines

Patch #1542948: fix urllib2 header casing issue. With new test.
........
r51428 | jeremy.hylton | 2006-08-21 18:19:37 +0200 (Mon, 21 Aug 2006) | 3 lines

Move peephole optimizer to separate file.
........
r51429 | jeremy.hylton | 2006-08-21 18:20:29 +0200 (Mon, 21 Aug 2006) | 2 lines

Move peephole optimizer to separate file. (Forgot .h in previous
checkin.)
........
r51432 | neal.norwitz | 2006-08-21 19:59:46 +0200 (Mon, 21 Aug 2006) | 5 lines

Fix bug #1543303, tarfile adds padding that breaks gunzip.
Patch # 1543897.

Will backport to 2.5
........
r51433 | neal.norwitz | 2006-08-21 20:01:30 +0200 (Mon, 21 Aug 2006) | 2 lines

Add assert to make Klocwork happy (#276)
........
1223 lines
35 KiB
C
/* ------------------------------------------------------------------------

   unicodedata -- Provides access to the Unicode 4.1 data base.

   Data was extracted from the Unicode 4.1 UnicodeData.txt file.

   Written by Marc-Andre Lemburg (mal@lemburg.com).
   Modified for Python 2.0 by Fredrik Lundh (fredrik@pythonware.com)
   Modified by Martin v. Löwis (martin@v.loewis.de)

   Copyright (c) Corporation for National Research Initiatives.

   ------------------------------------------------------------------------ */
#include "Python.h"
#include "ucnhash.h"
#include "structmember.h"

/* character properties */

typedef struct {
    const unsigned char category;         /* index into
                                             _PyUnicode_CategoryNames */
    const unsigned char combining;        /* combining class value 0 - 255 */
    const unsigned char bidirectional;    /* index into
                                             _PyUnicode_BidirectionalNames */
    const unsigned char mirrored;         /* true if mirrored in bidir mode */
    const unsigned char east_asian_width; /* index into
                                             _PyUnicode_EastAsianWidth */
} _PyUnicode_DatabaseRecord;

typedef struct change_record {
    /* sequence of fields should be the same as in merge_old_version */
    const unsigned char bidir_changed;
    const unsigned char category_changed;
    const unsigned char decimal_changed;
    const int numeric_changed;
} change_record;

/* data file generated by Tools/unicode/makeunicodedata.py */
#include "unicodedata_db.h"
static const _PyUnicode_DatabaseRecord*
_getrecord_ex(Py_UCS4 code)
{
    int index;
    if (code >= 0x110000)
        index = 0;
    else {
        index = index1[(code>>SHIFT)];
        index = index2[(index<<SHIFT)+(code&((1<<SHIFT)-1))];
    }

    return &_PyUnicode_Database_Records[index];
}

static const _PyUnicode_DatabaseRecord*
_getrecord(PyUnicodeObject* v)
{
    return _getrecord_ex(*PyUnicode_AS_UNICODE(v));
}
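`_getrecord_ex` above resolves a code point through the two-level compressed table emitted by Tools/unicode/makeunicodedata.py: `index1` maps the high bits of the code point to a block, and `index2` maps (block, low bits) to a record slot. A toy Python model of the same arithmetic (the tables and SHIFT value here are invented for illustration, not the real generated data):

```python
SHIFT = 2                           # toy block size: 1 << SHIFT = 4 code points
index1 = [0, 1]                     # block number -> second-level block index
index2 = [5, 5, 5, 5, 9, 9, 9, 9]   # (block << SHIFT) + offset -> record index

def getrecord_index(code):
    """Mirror of _getrecord_ex's index arithmetic."""
    if code >= 0x110000:            # outside Unicode: record 0 by convention
        return 0
    block = index1[code >> SHIFT]
    return index2[(block << SHIFT) + (code & ((1 << SHIFT) - 1))]

assert getrecord_index(1) == 5        # code 1 -> block 0, offset 1
assert getrecord_index(6) == 9        # code 6 -> block 1, offset 2
assert getrecord_index(0x110000) == 0
```

Sharing identical blocks between many code points is what keeps the generated tables small relative to the 0x110000-entry code space.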
/* ------------- Previous-version API ------------------------------------- */

typedef struct previous_version {
    PyObject_HEAD
    const char *name;
    const change_record* (*getrecord)(Py_UCS4);
    Py_UCS4 (*normalization)(Py_UCS4);
} PreviousDBVersion;

#define get_old_record(self, v) ((((PreviousDBVersion*)self)->getrecord)(v))

static PyMemberDef DB_members[] = {
    {"unidata_version", T_STRING, offsetof(PreviousDBVersion, name), READONLY},
    {NULL}
};

/* forward declaration */
static PyTypeObject UCD_Type;

static PyObject*
new_previous_version(const char *name,
                     const change_record* (*getrecord)(Py_UCS4),
                     Py_UCS4 (*normalization)(Py_UCS4))
{
    PreviousDBVersion *self;
    self = PyObject_New(PreviousDBVersion, &UCD_Type);
    if (self == NULL)
        return NULL;
    self->name = name;
    self->getrecord = getrecord;
    self->normalization = normalization;
    return (PyObject*)self;
}

/* --- Module API --------------------------------------------------------- */
PyDoc_STRVAR(unicodedata_decimal__doc__,
"decimal(unichr[, default])\n\
\n\
Returns the decimal value assigned to the Unicode character unichr\n\
as integer. If no such value is defined, default is returned, or, if\n\
not given, ValueError is raised.");

static PyObject *
unicodedata_decimal(PyObject *self, PyObject *args)
{
    PyUnicodeObject *v;
    PyObject *defobj = NULL;
    int have_old = 0;
    long rc;

    if (!PyArg_ParseTuple(args, "O!|O:decimal", &PyUnicode_Type, &v, &defobj))
        return NULL;
    if (PyUnicode_GET_SIZE(v) != 1) {
        PyErr_SetString(PyExc_TypeError,
                        "need a single Unicode character as parameter");
        return NULL;
    }

    if (self) {
        const change_record *old = get_old_record(self, *PyUnicode_AS_UNICODE(v));
        if (old->category_changed == 0) {
            /* unassigned */
            have_old = 1;
            rc = -1;
        }
        else if (old->decimal_changed != 0xFF) {
            have_old = 1;
            rc = old->decimal_changed;
        }
    }

    if (!have_old)
        rc = Py_UNICODE_TODECIMAL(*PyUnicode_AS_UNICODE(v));
    if (rc < 0) {
        if (defobj == NULL) {
            PyErr_SetString(PyExc_ValueError,
                            "not a decimal");
            return NULL;
        }
        else {
            Py_INCREF(defobj);
            return defobj;
        }
    }
    return PyInt_FromLong(rc);
}
PyDoc_STRVAR(unicodedata_digit__doc__,
"digit(unichr[, default])\n\
\n\
Returns the digit value assigned to the Unicode character unichr as\n\
integer. If no such value is defined, default is returned, or, if\n\
not given, ValueError is raised.");

static PyObject *
unicodedata_digit(PyObject *self, PyObject *args)
{
    PyUnicodeObject *v;
    PyObject *defobj = NULL;
    long rc;

    if (!PyArg_ParseTuple(args, "O!|O:digit", &PyUnicode_Type, &v, &defobj))
        return NULL;
    if (PyUnicode_GET_SIZE(v) != 1) {
        PyErr_SetString(PyExc_TypeError,
                        "need a single Unicode character as parameter");
        return NULL;
    }
    rc = Py_UNICODE_TODIGIT(*PyUnicode_AS_UNICODE(v));
    if (rc < 0) {
        if (defobj == NULL) {
            PyErr_SetString(PyExc_ValueError, "not a digit");
            return NULL;
        }
        else {
            Py_INCREF(defobj);
            return defobj;
        }
    }
    return PyInt_FromLong(rc);
}

PyDoc_STRVAR(unicodedata_numeric__doc__,
"numeric(unichr[, default])\n\
\n\
Returns the numeric value assigned to the Unicode character unichr\n\
as float. If no such value is defined, default is returned, or, if\n\
not given, ValueError is raised.");

static PyObject *
unicodedata_numeric(PyObject *self, PyObject *args)
{
    PyUnicodeObject *v;
    PyObject *defobj = NULL;
    int have_old = 0;
    double rc;

    if (!PyArg_ParseTuple(args, "O!|O:numeric", &PyUnicode_Type, &v, &defobj))
        return NULL;
    if (PyUnicode_GET_SIZE(v) != 1) {
        PyErr_SetString(PyExc_TypeError,
                        "need a single Unicode character as parameter");
        return NULL;
    }

    if (self) {
        const change_record *old = get_old_record(self, *PyUnicode_AS_UNICODE(v));
        if (old->category_changed == 0) {
            /* unassigned */
            have_old = 1;
            rc = -1.0;
        }
        else if (old->decimal_changed != 0xFF) {
            have_old = 1;
            rc = old->decimal_changed;
        }
    }

    if (!have_old)
        rc = Py_UNICODE_TONUMERIC(*PyUnicode_AS_UNICODE(v));
    if (rc == -1.0) {
        if (defobj == NULL) {
            PyErr_SetString(PyExc_ValueError, "not a numeric character");
            return NULL;
        }
        else {
            Py_INCREF(defobj);
            return defobj;
        }
    }
    return PyFloat_FromDouble(rc);
}

PyDoc_STRVAR(unicodedata_category__doc__,
"category(unichr)\n\
\n\
Returns the general category assigned to the Unicode character\n\
unichr as string.");

static PyObject *
unicodedata_category(PyObject *self, PyObject *args)
{
    PyUnicodeObject *v;
    int index;

    if (!PyArg_ParseTuple(args, "O!:category",
                          &PyUnicode_Type, &v))
        return NULL;
    if (PyUnicode_GET_SIZE(v) != 1) {
        PyErr_SetString(PyExc_TypeError,
                        "need a single Unicode character as parameter");
        return NULL;
    }
    index = (int) _getrecord(v)->category;
    if (self) {
        const change_record *old = get_old_record(self, *PyUnicode_AS_UNICODE(v));
        if (old->category_changed != 0xFF)
            index = old->category_changed;
    }
    return PyString_FromString(_PyUnicode_CategoryNames[index]);
}

PyDoc_STRVAR(unicodedata_bidirectional__doc__,
"bidirectional(unichr)\n\
\n\
Returns the bidirectional category assigned to the Unicode character\n\
unichr as string. If no such value is defined, an empty string is\n\
returned.");

static PyObject *
unicodedata_bidirectional(PyObject *self, PyObject *args)
{
    PyUnicodeObject *v;
    int index;

    if (!PyArg_ParseTuple(args, "O!:bidirectional",
                          &PyUnicode_Type, &v))
        return NULL;
    if (PyUnicode_GET_SIZE(v) != 1) {
        PyErr_SetString(PyExc_TypeError,
                        "need a single Unicode character as parameter");
        return NULL;
    }
    index = (int) _getrecord(v)->bidirectional;
    if (self) {
        const change_record *old = get_old_record(self, *PyUnicode_AS_UNICODE(v));
        if (old->category_changed == 0)
            index = 0; /* unassigned */
        else if (old->bidir_changed != 0xFF)
            index = old->bidir_changed;
    }
    return PyString_FromString(_PyUnicode_BidirectionalNames[index]);
}

PyDoc_STRVAR(unicodedata_combining__doc__,
"combining(unichr)\n\
\n\
Returns the canonical combining class assigned to the Unicode\n\
character unichr as integer. Returns 0 if no combining class is\n\
defined.");

static PyObject *
unicodedata_combining(PyObject *self, PyObject *args)
{
    PyUnicodeObject *v;
    int index;

    if (!PyArg_ParseTuple(args, "O!:combining",
                          &PyUnicode_Type, &v))
        return NULL;
    if (PyUnicode_GET_SIZE(v) != 1) {
        PyErr_SetString(PyExc_TypeError,
                        "need a single Unicode character as parameter");
        return NULL;
    }
    index = (int) _getrecord(v)->combining;
    if (self) {
        const change_record *old = get_old_record(self, *PyUnicode_AS_UNICODE(v));
        if (old->category_changed == 0)
            index = 0; /* unassigned */
    }
    return PyInt_FromLong(index);
}

PyDoc_STRVAR(unicodedata_mirrored__doc__,
"mirrored(unichr)\n\
\n\
Returns the mirrored property assigned to the Unicode character\n\
unichr as integer. Returns 1 if the character has been identified as\n\
a \"mirrored\" character in bidirectional text, 0 otherwise.");

static PyObject *
unicodedata_mirrored(PyObject *self, PyObject *args)
{
    PyUnicodeObject *v;
    int index;

    if (!PyArg_ParseTuple(args, "O!:mirrored",
                          &PyUnicode_Type, &v))
        return NULL;
    if (PyUnicode_GET_SIZE(v) != 1) {
        PyErr_SetString(PyExc_TypeError,
                        "need a single Unicode character as parameter");
        return NULL;
    }
    index = (int) _getrecord(v)->mirrored;
    if (self) {
        const change_record *old = get_old_record(self, *PyUnicode_AS_UNICODE(v));
        if (old->category_changed == 0)
            index = 0; /* unassigned */
    }
    return PyInt_FromLong(index);
}

PyDoc_STRVAR(unicodedata_east_asian_width__doc__,
"east_asian_width(unichr)\n\
\n\
Returns the east asian width assigned to the Unicode character\n\
unichr as string.");

static PyObject *
unicodedata_east_asian_width(PyObject *self, PyObject *args)
{
    PyUnicodeObject *v;
    int index;

    if (!PyArg_ParseTuple(args, "O!:east_asian_width",
                          &PyUnicode_Type, &v))
        return NULL;
    if (PyUnicode_GET_SIZE(v) != 1) {
        PyErr_SetString(PyExc_TypeError,
                        "need a single Unicode character as parameter");
        return NULL;
    }
    index = (int) _getrecord(v)->east_asian_width;
    if (self) {
        const change_record *old = get_old_record(self, *PyUnicode_AS_UNICODE(v));
        if (old->category_changed == 0)
            index = 0; /* unassigned */
    }
    return PyString_FromString(_PyUnicode_EastAsianWidthNames[index]);
}

PyDoc_STRVAR(unicodedata_decomposition__doc__,
"decomposition(unichr)\n\
\n\
Returns the character decomposition mapping assigned to the Unicode\n\
character unichr as string. An empty string is returned in case no\n\
such mapping is defined.");

static PyObject *
unicodedata_decomposition(PyObject *self, PyObject *args)
{
    PyUnicodeObject *v;
    char decomp[256];
    int code, index, count, i;
    unsigned int prefix_index;

    if (!PyArg_ParseTuple(args, "O!:decomposition",
                          &PyUnicode_Type, &v))
        return NULL;
    if (PyUnicode_GET_SIZE(v) != 1) {
        PyErr_SetString(PyExc_TypeError,
                        "need a single Unicode character as parameter");
        return NULL;
    }

    code = (int) *PyUnicode_AS_UNICODE(v);

    if (self) {
        const change_record *old = get_old_record(self, *PyUnicode_AS_UNICODE(v));
        if (old->category_changed == 0)
            return PyString_FromString(""); /* unassigned */
    }

    if (code < 0 || code >= 0x110000)
        index = 0;
    else {
        index = decomp_index1[(code>>DECOMP_SHIFT)];
        index = decomp_index2[(index<<DECOMP_SHIFT)+
                              (code&((1<<DECOMP_SHIFT)-1))];
    }

    /* high byte is number of hex bytes (usually one or two), low byte
       is the prefix code (an index into decomp_prefix) */
    count = decomp_data[index] >> 8;

    /* XXX: could allocate the PyString up front instead
       (strlen(prefix) + 5 * count + 1 bytes) */

    /* Based on how index is calculated above and decomp_data is generated
       from Tools/unicode/makeunicodedata.py, it should not be possible
       to overflow decomp_prefix. */
    prefix_index = decomp_data[index] & 255;
    assert(prefix_index < (sizeof(decomp_prefix)/sizeof(*decomp_prefix)));

    /* copy prefix */
    i = strlen(decomp_prefix[prefix_index]);
    memcpy(decomp, decomp_prefix[prefix_index], i);

    while (count-- > 0) {
        if (i)
            decomp[i++] = ' ';
        assert((size_t)i < sizeof(decomp));
        PyOS_snprintf(decomp + i, sizeof(decomp) - i, "%04X",
                      decomp_data[++index]);
        i += strlen(decomp + i);
    }

    decomp[i] = '\0';

    return PyString_FromString(decomp);
}

static void
get_decomp_record(PyObject *self, Py_UCS4 code, int *index, int *prefix, int *count)
{
    if (code >= 0x110000) {
        *index = 0;
    } else if (self && get_old_record(self, code)->category_changed==0) {
        /* unassigned in old version */
        *index = 0;
    }
    else {
        *index = decomp_index1[(code>>DECOMP_SHIFT)];
        *index = decomp_index2[(*index<<DECOMP_SHIFT)+
                               (code&((1<<DECOMP_SHIFT)-1))];
    }

    /* high byte is number of hex bytes (usually one or two), low byte
       is the prefix code (an index into decomp_prefix) */
    *count = decomp_data[*index] >> 8;
    *prefix = decomp_data[*index] & 255;

    (*index)++;
}

#define SBase   0xAC00
#define LBase   0x1100
#define VBase   0x1161
#define TBase   0x11A7
#define LCount  19
#define VCount  21
#define TCount  28
#define NCount  (VCount*TCount)
#define SCount  (LCount*NCount)

static PyObject*
nfd_nfkd(PyObject *self, PyObject *input, int k)
{
    PyObject *result;
    Py_UNICODE *i, *end, *o;
    /* Longest decomposition in Unicode 3.2: U+FDFA */
    Py_UNICODE stack[20];
    Py_ssize_t space, isize;
    int index, prefix, count, stackptr;
    unsigned char prev, cur;

    stackptr = 0;
    isize = PyUnicode_GET_SIZE(input);
    /* Overallocate at most 10 characters. */
    space = (isize > 10 ? 10 : isize) + isize;
    result = PyUnicode_FromUnicode(NULL, space);
    if (!result)
        return NULL;
    i = PyUnicode_AS_UNICODE(input);
    end = i + isize;
    o = PyUnicode_AS_UNICODE(result);

    while (i < end) {
        stack[stackptr++] = *i++;
        while (stackptr) {
            Py_UNICODE code = stack[--stackptr];
            /* Hangul Decomposition adds three characters in
               a single step, so we need at least that much room. */
            if (space < 3) {
                Py_ssize_t newsize = PyUnicode_GET_SIZE(result) + 10;
                space += 10;
                if (PyUnicode_Resize(&result, newsize) == -1)
                    return NULL;
                o = PyUnicode_AS_UNICODE(result) + newsize - space;
            }
            /* Hangul Decomposition. */
            if (SBase <= code && code < (SBase+SCount)) {
                int SIndex = code - SBase;
                int L = LBase + SIndex / NCount;
                int V = VBase + (SIndex % NCount) / TCount;
                int T = TBase + SIndex % TCount;
                *o++ = L;
                *o++ = V;
                space -= 2;
                if (T != TBase) {
                    *o++ = T;
                    space--;
                }
                continue;
            }
            /* normalization changes */
            if (self) {
                Py_UCS4 value = ((PreviousDBVersion*)self)->normalization(code);
                if (value != 0) {
                    stack[stackptr++] = value;
                    continue;
                }
            }

            /* Other decompositions. */
            get_decomp_record(self, code, &index, &prefix, &count);

            /* Copy character if it is not decomposable, or has a
               compatibility decomposition, but we do NFD. */
            if (!count || (prefix && !k)) {
                *o++ = code;
                space--;
                continue;
            }
            /* Copy decomposition onto the stack, in reverse
               order. */
            while (count) {
                code = decomp_data[index + (--count)];
                stack[stackptr++] = code;
            }
        }
    }

    /* Drop overallocation. Cannot fail. */
    PyUnicode_Resize(&result, PyUnicode_GET_SIZE(result) - space);

    /* Sort canonically. */
    i = PyUnicode_AS_UNICODE(result);
    prev = _getrecord_ex(*i)->combining;
    end = i + PyUnicode_GET_SIZE(result);
    for (i++; i < end; i++) {
        cur = _getrecord_ex(*i)->combining;
        if (prev == 0 || cur == 0 || prev <= cur) {
            prev = cur;
            continue;
        }
        /* Non-canonical order. Need to switch *i with previous. */
        o = i - 1;
        while (1) {
            Py_UNICODE tmp = o[1];
            o[1] = o[0];
            o[0] = tmp;
            o--;
            if (o < PyUnicode_AS_UNICODE(result))
                break;
            prev = _getrecord_ex(*o)->combining;
            if (prev == 0 || prev <= cur)
                break;
        }
        prev = _getrecord_ex(*i)->combining;
    }
    return result;
}

static int
find_nfc_index(PyObject *self, struct reindex* nfc, Py_UNICODE code)
{
    int index;
    for (index = 0; nfc[index].start; index++) {
        int start = nfc[index].start;
        if (code < start)
            return -1;
        if (code <= start + nfc[index].count) {
            int delta = code - start;
            return nfc[index].index + delta;
        }
    }
    return -1;
}

static PyObject*
nfc_nfkc(PyObject *self, PyObject *input, int k)
{
    PyObject *result;
    Py_UNICODE *i, *i1, *o, *end;
    int f, l, index, index1, comb;
    Py_UNICODE code;
    Py_UNICODE *skipped[20];
    int cskipped = 0;

    result = nfd_nfkd(self, input, k);
    if (!result)
        return NULL;

    /* We are going to modify result in-place.
       If nfd_nfkd is changed to sometimes return the input,
       this code needs to be reviewed. */
    assert(result != input);

    i = PyUnicode_AS_UNICODE(result);
    end = i + PyUnicode_GET_SIZE(result);
    o = PyUnicode_AS_UNICODE(result);

  again:
    while (i < end) {
        for (index = 0; index < cskipped; index++) {
            if (skipped[index] == i) {
                /* *i character is skipped.
                   Remove from list. */
                skipped[index] = skipped[cskipped-1];
                cskipped--;
                i++;
                goto again; /* continue while */
            }
        }
        /* Hangul Composition. We don't need to check for <LV,T>
           pairs, since we always have decomposed data. */
        if (LBase <= *i && *i < (LBase+LCount) &&
            i + 1 < end &&
            VBase <= i[1] && i[1] < (VBase+VCount)) {
            int LIndex, VIndex;
            LIndex = i[0] - LBase;
            VIndex = i[1] - VBase;
            code = SBase + (LIndex*VCount+VIndex)*TCount;
            i += 2;
            if (i < end &&
                TBase <= *i && *i < (TBase+TCount)) {
                code += *i-TBase;
                i++;
            }
            *o++ = code;
            continue;
        }

        f = find_nfc_index(self, nfc_first, *i);
        if (f == -1) {
            *o++ = *i++;
            continue;
        }
        /* Find next unblocked character. */
        i1 = i+1;
        comb = 0;
        while (i1 < end) {
            int comb1 = _getrecord_ex(*i1)->combining;
            if (comb1 && comb == comb1) {
                /* Character is blocked. */
                i1++;
                continue;
            }
            l = find_nfc_index(self, nfc_last, *i1);
            /* *i1 cannot be combined with *i. If *i1
               is a starter, we don't need to look further.
               Otherwise, record the combining class. */
            if (l == -1) {
              not_combinable:
                if (comb1 == 0)
                    break;
                comb = comb1;
                i1++;
                continue;
            }
            index = f*TOTAL_LAST + l;
            index1 = comp_index[index >> COMP_SHIFT];
            code = comp_data[(index1<<COMP_SHIFT)+
                             (index&((1<<COMP_SHIFT)-1))];
            if (code == 0)
                goto not_combinable;

            /* Replace the original character. */
            *i = code;
            /* Mark the second character unused. */
            skipped[cskipped++] = i1;
            i1++;
            f = find_nfc_index(self, nfc_first, *i);
            if (f == -1)
                break;
        }
        *o++ = *i++;
    }
    if (o != end)
        PyUnicode_Resize(&result, o - PyUnicode_AS_UNICODE(result));
    return result;
}

PyDoc_STRVAR(unicodedata_normalize__doc__,
"normalize(form, unistr)\n\
\n\
Return the normal form 'form' for the Unicode string unistr. Valid\n\
values for form are 'NFC', 'NFKC', 'NFD', and 'NFKD'.");

static PyObject*
unicodedata_normalize(PyObject *self, PyObject *args)
{
    char *form;
    PyObject *input;

    if (!PyArg_ParseTuple(args, "sO!:normalize",
                          &form, &PyUnicode_Type, &input))
        return NULL;

    if (PyUnicode_GetSize(input) == 0) {
        /* Special case empty input strings, since resizing
           them later would cause internal errors. */
        Py_INCREF(input);
        return input;
    }

    if (strcmp(form, "NFC") == 0)
        return nfc_nfkc(self, input, 0);
    if (strcmp(form, "NFKC") == 0)
        return nfc_nfkc(self, input, 1);
    if (strcmp(form, "NFD") == 0)
        return nfd_nfkd(self, input, 0);
    if (strcmp(form, "NFKD") == 0)
        return nfd_nfkd(self, input, 1);
    PyErr_SetString(PyExc_ValueError, "invalid normalization form");
    return NULL;
}

/* -------------------------------------------------------------------- */
/* unicode character name tables */

/* data file generated by Tools/unicode/makeunicodedata.py */
#include "unicodename_db.h"

/* -------------------------------------------------------------------- */
/* database code (cut and pasted from the unidb package) */

static unsigned long
_gethash(const char *s, int len, int scale)
{
    int i;
    unsigned long h = 0;
    unsigned long ix;
    for (i = 0; i < len; i++) {
        h = (h * scale) + (unsigned char) toupper(Py_CHARMASK(s[i]));
        ix = h & 0xff000000;
        if (ix)
            h = (h ^ ((ix>>24) & 0xff)) & 0x00ffffff;
    }
    return h;
}

static char *hangul_syllables[][3] = {
    { "G",  "A",   ""   },
    { "GG", "AE",  "G"  },
    { "N",  "YA",  "GG" },
    { "D",  "YAE", "GS" },
    { "DD", "EO",  "N"  },
    { "R",  "E",   "NJ" },
    { "M",  "YEO", "NH" },
    { "B",  "YE",  "D"  },
    { "BB", "O",   "L"  },
    { "S",  "WA",  "LG" },
    { "SS", "WAE", "LM" },
    { "",   "OE",  "LB" },
    { "J",  "YO",  "LS" },
    { "JJ", "U",   "LT" },
    { "C",  "WEO", "LP" },
    { "K",  "WE",  "LH" },
    { "T",  "WI",  "M"  },
    { "P",  "YU",  "B"  },
    { "H",  "EU",  "BS" },
    { 0,    "YI",  "S"  },
    { 0,    "I",   "SS" },
    { 0,    0,     "NG" },
    { 0,    0,     "J"  },
    { 0,    0,     "C"  },
    { 0,    0,     "K"  },
    { 0,    0,     "T"  },
    { 0,    0,     "P"  },
    { 0,    0,     "H"  }
};

static int
is_unified_ideograph(Py_UCS4 code)
{
    return (
        (0x3400 <= code && code <= 0x4DB5) ||  /* CJK Ideograph Extension A */
        (0x4E00 <= code && code <= 0x9FBB) ||  /* CJK Ideograph */
        (0x20000 <= code && code <= 0x2A6D6)); /* CJK Ideograph Extension B */
}

static int
_getucname(PyObject *self, Py_UCS4 code, char* buffer, int buflen)
{
    int offset;
    int i;
    int word;
    unsigned char* w;

    if (code >= 0x110000)
        return 0;

    if (self) {
        const change_record *old = get_old_record(self, code);
        if (old->category_changed == 0) {
            /* unassigned */
            return 0;
        }
    }

    if (SBase <= code && code < SBase+SCount) {
        /* Hangul syllable. */
        int SIndex = code - SBase;
        int L = SIndex / NCount;
        int V = (SIndex % NCount) / TCount;
        int T = SIndex % TCount;

        if (buflen < 27)
            /* Worst case: HANGUL SYLLABLE <10chars>. */
            return 0;
        strcpy(buffer, "HANGUL SYLLABLE ");
        buffer += 16;
        strcpy(buffer, hangul_syllables[L][0]);
        buffer += strlen(hangul_syllables[L][0]);
        strcpy(buffer, hangul_syllables[V][1]);
        buffer += strlen(hangul_syllables[V][1]);
        strcpy(buffer, hangul_syllables[T][2]);
        buffer += strlen(hangul_syllables[T][2]);
        *buffer = '\0';
        return 1;
    }

    if (is_unified_ideograph(code)) {
        if (buflen < 28)
            /* Worst case: CJK UNIFIED IDEOGRAPH-20000 */
            return 0;
        sprintf(buffer, "CJK UNIFIED IDEOGRAPH-%X", code);
        return 1;
    }

    /* get offset into phrasebook */
    offset = phrasebook_offset1[(code>>phrasebook_shift)];
    offset = phrasebook_offset2[(offset<<phrasebook_shift) +
                                (code&((1<<phrasebook_shift)-1))];
    if (!offset)
        return 0;

    i = 0;

    for (;;) {
        /* get word index */
        word = phrasebook[offset] - phrasebook_short;
        if (word >= 0) {
            word = (word << 8) + phrasebook[offset+1];
            offset += 2;
        } else
            word = phrasebook[offset++];
        if (i) {
            if (i > buflen)
                return 0; /* buffer overflow */
            buffer[i++] = ' ';
        }
        /* copy word string from lexicon. the last character in the
           word has bit 7 set. the last word in a string ends with
           0x80 */
        w = lexicon + lexicon_offset[word];
        while (*w < 128) {
            if (i >= buflen)
                return 0; /* buffer overflow */
            buffer[i++] = *w++;
        }
        if (i >= buflen)
            return 0; /* buffer overflow */
        buffer[i++] = *w & 127;
        if (*w == 128)
            break; /* end of word */
    }

    return 1;
}

static int
_cmpname(PyObject *self, int code, const char* name, int namelen)
{
    /* check if code corresponds to the given name */
    int i;
    char buffer[NAME_MAXLEN];
    if (!_getucname(self, code, buffer, sizeof(buffer)))
        return 0;
    for (i = 0; i < namelen; i++) {
        if (toupper(Py_CHARMASK(name[i])) != buffer[i])
            return 0;
    }
    return buffer[namelen] == '\0';
}

static void
find_syllable(const char *str, int *len, int *pos, int count, int column)
{
    int i, len1;
    *len = -1;
    for (i = 0; i < count; i++) {
        char *s = hangul_syllables[i][column];
        len1 = strlen(s);
        if (len1 <= *len)
            continue;
        if (strncmp(str, s, len1) == 0) {
            *len = len1;
            *pos = i;
        }
    }
    if (*len == -1) {
        *len = 0;
    }
}

static int
_getcode(PyObject* self, const char* name, int namelen, Py_UCS4* code)
{
    unsigned int h, v;
    unsigned int mask = code_size-1;
    unsigned int i, incr;

    /* Check for hangul syllables. */
    if (strncmp(name, "HANGUL SYLLABLE ", 16) == 0) {
        int len, L = -1, V = -1, T = -1;
        const char *pos = name + 16;
        find_syllable(pos, &len, &L, LCount, 0);
        pos += len;
        find_syllable(pos, &len, &V, VCount, 1);
        pos += len;
        find_syllable(pos, &len, &T, TCount, 2);
        pos += len;
        if (L != -1 && V != -1 && T != -1 && pos-name == namelen) {
            *code = SBase + (L*VCount+V)*TCount + T;
            return 1;
        }
        /* Otherwise, it's an illegal syllable name. */
        return 0;
    }

    /* Check for unified ideographs. */
    if (strncmp(name, "CJK UNIFIED IDEOGRAPH-", 22) == 0) {
        /* Four or five hexdigits must follow. */
        v = 0;
        name += 22;
        namelen -= 22;
        if (namelen != 4 && namelen != 5)
            return 0;
        while (namelen--) {
            v *= 16;
            if (*name >= '0' && *name <= '9')
                v += *name - '0';
            else if (*name >= 'A' && *name <= 'F')
                v += *name - 'A' + 10;
            else
                return 0;
            name++;
        }
        if (!is_unified_ideograph(v))
            return 0;
        *code = v;
        return 1;
    }

    /* the following is the same as python's dictionary lookup, with
       only minor changes.  see the makeunicodedata script for more
       details */

    h = (unsigned int) _gethash(name, namelen, code_magic);
    i = (~h) & mask;
    v = code_hash[i];
    if (!v)
        return 0;
    if (_cmpname(self, v, name, namelen)) {
        *code = v;
        return 1;
    }
    incr = (h ^ (h >> 3)) & mask;
    if (!incr)
        incr = mask;
    for (;;) {
        i = (i + incr) & mask;
        v = code_hash[i];
        if (!v)
            return 0;
        if (_cmpname(self, v, name, namelen)) {
            *code = v;
            return 1;
        }
        incr = incr << 1;
        if (incr > mask)
            incr = incr ^ code_poly;
    }
}

static const _PyUnicode_Name_CAPI hashAPI =
{
    sizeof(_PyUnicode_Name_CAPI),
    _getucname,
    _getcode
};

/* -------------------------------------------------------------------- */
/* Python bindings */

PyDoc_STRVAR(unicodedata_name__doc__,
"name(unichr[, default])\n\
\n\
Returns the name assigned to the Unicode character unichr as a\n\
string. If no name is defined, default is returned, or, if not\n\
given, ValueError is raised.");

static PyObject *
unicodedata_name(PyObject* self, PyObject* args)
{
    char name[NAME_MAXLEN];

    PyUnicodeObject* v;
    PyObject* defobj = NULL;
    if (!PyArg_ParseTuple(args, "O!|O:name", &PyUnicode_Type, &v, &defobj))
        return NULL;

    if (PyUnicode_GET_SIZE(v) != 1) {
        PyErr_SetString(PyExc_TypeError,
                        "need a single Unicode character as parameter");
        return NULL;
    }

    if (!_getucname(self, (Py_UCS4) *PyUnicode_AS_UNICODE(v),
                    name, sizeof(name))) {
        if (defobj == NULL) {
            PyErr_SetString(PyExc_ValueError, "no such name");
            return NULL;
        }
        else {
            Py_INCREF(defobj);
            return defobj;
        }
    }

    return Py_BuildValue("s", name);
}

PyDoc_STRVAR(unicodedata_lookup__doc__,
"lookup(name)\n\
\n\
Look up character by name. If a character with the\n\
given name is found, return the corresponding Unicode\n\
character. If not found, KeyError is raised.");

static PyObject *
unicodedata_lookup(PyObject* self, PyObject* args)
{
    Py_UCS4 code;
    Py_UNICODE str[1];
    char errbuf[256];

    char* name;
    int namelen;
    if (!PyArg_ParseTuple(args, "s#:lookup", &name, &namelen))
        return NULL;

    if (!_getcode(self, name, namelen, &code)) {
        /* XXX(nnorwitz): why are we allocating for the error msg?
           Why not always use snprintf? */
        char fmt[] = "undefined character name '%s'";
        char *buf = PyMem_MALLOC(sizeof(fmt) + namelen);
        if (buf)
            sprintf(buf, fmt, name);
        else {
            buf = errbuf;
            PyOS_snprintf(buf, sizeof(errbuf), fmt, name);
        }
        PyErr_SetString(PyExc_KeyError, buf);
        if (buf != errbuf)
            PyMem_FREE(buf);
        return NULL;
    }

    str[0] = (Py_UNICODE) code;
    return PyUnicode_FromUnicode(str, 1);
}

/* XXX Add doc strings. */

static PyMethodDef unicodedata_functions[] = {
    {"decimal", unicodedata_decimal, METH_VARARGS, unicodedata_decimal__doc__},
    {"digit", unicodedata_digit, METH_VARARGS, unicodedata_digit__doc__},
    {"numeric", unicodedata_numeric, METH_VARARGS, unicodedata_numeric__doc__},
    {"category", unicodedata_category, METH_VARARGS,
     unicodedata_category__doc__},
    {"bidirectional", unicodedata_bidirectional, METH_VARARGS,
     unicodedata_bidirectional__doc__},
    {"combining", unicodedata_combining, METH_VARARGS,
     unicodedata_combining__doc__},
    {"mirrored", unicodedata_mirrored, METH_VARARGS,
     unicodedata_mirrored__doc__},
    {"east_asian_width", unicodedata_east_asian_width, METH_VARARGS,
     unicodedata_east_asian_width__doc__},
    {"decomposition", unicodedata_decomposition, METH_VARARGS,
     unicodedata_decomposition__doc__},
    {"name", unicodedata_name, METH_VARARGS, unicodedata_name__doc__},
    {"lookup", unicodedata_lookup, METH_VARARGS, unicodedata_lookup__doc__},
    {"normalize", unicodedata_normalize, METH_VARARGS,
     unicodedata_normalize__doc__},
    {NULL, NULL} /* sentinel */
};

static PyTypeObject UCD_Type = {
    /* The ob_type field must be initialized in the module init function
     * to be portable to Windows without using C++. */
    PyObject_HEAD_INIT(NULL)
    0,                      /*ob_size*/
    "unicodedata.UCD",      /*tp_name*/
    sizeof(PreviousDBVersion), /*tp_basicsize*/
    0,                      /*tp_itemsize*/
    /* methods */
    (destructor)PyObject_Del, /*tp_dealloc*/
    0,                      /*tp_print*/
    0,                      /*tp_getattr*/
    0,                      /*tp_setattr*/
    0,                      /*tp_compare*/
    0,                      /*tp_repr*/
    0,                      /*tp_as_number*/
    0,                      /*tp_as_sequence*/
    0,                      /*tp_as_mapping*/
    0,                      /*tp_hash*/
    0,                      /*tp_call*/
    0,                      /*tp_str*/
    PyObject_GenericGetAttr,/*tp_getattro*/
    0,                      /*tp_setattro*/
    0,                      /*tp_as_buffer*/
    Py_TPFLAGS_DEFAULT,     /*tp_flags*/
    0,                      /*tp_doc*/
    0,                      /*tp_traverse*/
    0,                      /*tp_clear*/
    0,                      /*tp_richcompare*/
    0,                      /*tp_weaklistoffset*/
    0,                      /*tp_iter*/
    0,                      /*tp_iternext*/
    unicodedata_functions,  /*tp_methods*/
    DB_members,             /*tp_members*/
    0,                      /*tp_getset*/
    0,                      /*tp_base*/
    0,                      /*tp_dict*/
    0,                      /*tp_descr_get*/
    0,                      /*tp_descr_set*/
    0,                      /*tp_dictoffset*/
    0,                      /*tp_init*/
    0,                      /*tp_alloc*/
    0,                      /*tp_new*/
    0,                      /*tp_free*/
    0,                      /*tp_is_gc*/
};

PyDoc_STRVAR(unicodedata_docstring,
"This module provides access to the Unicode Character Database which\n\
defines character properties for all Unicode characters. The data in\n\
this database is based on the UnicodeData.txt file version\n\
4.1.0 which is publicly available from ftp://ftp.unicode.org/.\n\
\n\
The module uses the same names and symbols as defined by the\n\
UnicodeData File Format 4.1.0 (see\n\
http://www.unicode.org/Public/4.1.0/ucd/UCD.html).");

PyMODINIT_FUNC
initunicodedata(void)
{
    PyObject *m, *v;

    UCD_Type.ob_type = &PyType_Type;

    m = Py_InitModule3(
        "unicodedata", unicodedata_functions, unicodedata_docstring);
    if (!m)
        return;

    PyModule_AddStringConstant(m, "unidata_version", UNIDATA_VERSION);
    Py_INCREF(&UCD_Type);
    PyModule_AddObject(m, "UCD", (PyObject*)&UCD_Type);

    /* Previous versions */
    v = new_previous_version("3.2.0", get_change_3_2_0, normalization_3_2_0);
    if (v != NULL)
        PyModule_AddObject(m, "ucd_3_2_0", v);

    /* Export C API */
    v = PyCObject_FromVoidPtr((void *) &hashAPI, NULL);
    if (v != NULL)
        PyModule_AddObject(m, "ucnhash_CAPI", v);
}

/*
Local variables:
c-basic-offset: 4
indent-tabs-mode: nil
End:
*/