cpython/Modules/bz2module.c
Thomas Wouters 00ee7baf49 Merge current trunk into p3yk. This includes the PyNumber_Index API change,
which unfortunately means the errors from the bytes type change somewhat:

bytes([300]) still raises a ValueError, but bytes([10**100]) now raises a
TypeError (the alternative would be for bytes(1.0) to also raise a
ValueError, since PyNumber_AsSsize_t() can only raise one type of exception).
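
A minimal illustrative sketch (not code from this merge; getbytevalue() here
is a hypothetical helper) of how PyNumber_AsSsize_t()'s single exception
argument forces that choice -- overflow raises whatever type the caller passes
in, and the 0..255 range check has to supply the other error itself:

    /* Hypothetical helper, for illustration only. */
    static int
    getbytevalue(PyObject *arg, int *value)
    {
        /* Any integer too large for a Py_ssize_t raises the one exception
           type supplied here (TypeError in this sketch). */
        Py_ssize_t face_value = PyNumber_AsSsize_t(arg, PyExc_TypeError);
        if (face_value == -1 && PyErr_Occurred())
            return 0;
        if (face_value < 0 || face_value >= 256) {
            /* The range check raises its own, different exception. */
            PyErr_SetString(PyExc_ValueError, "byte must be in range(0, 256)");
            return 0;
        }
        *value = (int)face_value;
        return 1;
    }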

Merged revisions 51188-51433 via svnmerge from
svn+ssh://pythondev@svn.python.org/python/trunk

........
  r51189 | kurt.kaiser | 2006-08-10 19:11:09 +0200 (Thu, 10 Aug 2006) | 4 lines

  Retrieval of previous shell command was not always preserving indentation
  (since 1.2a1).  Patch 1528468 by Tal Einat.
........
  r51190 | guido.van.rossum | 2006-08-10 19:41:07 +0200 (Thu, 10 Aug 2006) | 3 lines

  Chris McDonough's patch to defend against certain DoS attacks on FieldStorage.
  SF bug #1112549.
........
  r51191 | guido.van.rossum | 2006-08-10 19:42:50 +0200 (Thu, 10 Aug 2006) | 2 lines

  News item for SF bug 1112549.
........
  r51192 | guido.van.rossum | 2006-08-10 20:09:25 +0200 (Thu, 10 Aug 2006) | 2 lines

  Fix title -- it's rc1, not beta3.
........
  r51194 | martin.v.loewis | 2006-08-10 21:04:00 +0200 (Thu, 10 Aug 2006) | 3 lines

  Update dangling references to the 3.2 database to
  mention that this is UCD 4.1 now.
........
  r51195 | tim.peters | 2006-08-11 00:45:34 +0200 (Fri, 11 Aug 2006) | 6 lines

  Followup to bug #1069160.

  PyThreadState_SetAsyncExc():  internal correctness changes wrt
  refcount safety and deadlock avoidance.  Also added a basic test
  case (relying on ctypes) and repaired the docs.
........
  r51196 | tim.peters | 2006-08-11 00:48:45 +0200 (Fri, 11 Aug 2006) | 2 lines

  Whitespace normalization.
........
  r51197 | tim.peters | 2006-08-11 01:22:13 +0200 (Fri, 11 Aug 2006) | 5 lines

  Whitespace normalization broke test_cgi, because a line
  of quoted test data relied on preserving a single trailing
  blank.  Changed the string from raw to regular, and forced
  in the trailing blank via an explicit \x20 escape.
........
  r51198 | tim.peters | 2006-08-11 02:49:01 +0200 (Fri, 11 Aug 2006) | 10 lines

  test_PyThreadState_SetAsyncExc():  This is failing on some
  64-bit boxes.  I have no idea what the ctypes docs mean
  by "integers"; I'm blind-guessing here that they mean
  the signed C "int" type, in which case perhaps I can
  repair this by feeding the thread id argument to type
  ctypes.c_long().

  Also made the worker thread daemonic, so it doesn't hang
  Python shutdown if the test continues to fail.
........
  r51199 | tim.peters | 2006-08-11 05:49:10 +0200 (Fri, 11 Aug 2006) | 6 lines

  force_test_exit():  This has been completely ineffective
  at stopping test_signal from hanging forever on the Tru64
  buildbot.  That could be because there's no such thing as
  signal.SIGALARM.  Changed to the idiotic (but standard)
  signal.SIGALRM instead, and added some more debug output.
........
  r51202 | neal.norwitz | 2006-08-11 08:09:41 +0200 (Fri, 11 Aug 2006) | 6 lines

  Fix the failures on cygwin (2006-08-10 fixed the actual locking issue).

  The first hunk changes the colon to an ! like other Windows variants.
  We need to always wait on the child so the lock gets released and
  no other tests fail.  This is the try/finally in the second hunk.
........
  r51205 | georg.brandl | 2006-08-11 09:15:38 +0200 (Fri, 11 Aug 2006) | 3 lines

  Add Chris McDonough (latest cgi.py patch)
........
  r51206 | georg.brandl | 2006-08-11 09:26:10 +0200 (Fri, 11 Aug 2006) | 3 lines

  logging's atexit hook now runs even if the rest of the module has
  already been cleaned up.
........
  r51212 | thomas.wouters | 2006-08-11 17:02:39 +0200 (Fri, 11 Aug 2006) | 4 lines


  Add ignore of *.pyc and *.pyo to Lib/xml/etree/.
........
  r51215 | thomas.heller | 2006-08-11 21:55:35 +0200 (Fri, 11 Aug 2006) | 7 lines

  When a ctypes C callback function is called, zero out the result
  storage before converting the result to C data.  See the comment in
  the code for details.

  Provide better context for errors when a callback function's
  result cannot be converted.
........
  r51218 | neal.norwitz | 2006-08-12 03:43:40 +0200 (Sat, 12 Aug 2006) | 6 lines

  Klocwork made another run and found a bunch more problems.
  This is the first batch of fixes that should be easy to verify based on context.

  This fixes problem numbers: 220 (ast), 323-324 (symtable),
  321-322 (structseq), 215 (array), 210 (hotshot), 182 (codecs), 209 (etree).
........
  r51219 | neal.norwitz | 2006-08-12 03:45:47 +0200 (Sat, 12 Aug 2006) | 9 lines

  Even though _Py_Mangle() isn't truly public, anyone can call it, and
  there was no verification that privateobj was a PyString.  If it wasn't
  a string, this could have allowed a NULL pointer to creep in below and crash.

  I wonder if this should be PyString_CheckExact?  Must identifiers be strings
  or can they be subclasses?

  Klocwork #275
........
  r51220 | neal.norwitz | 2006-08-12 03:46:42 +0200 (Sat, 12 Aug 2006) | 5 lines

  It's highly unlikely, though possible, for PyEval_Get*() to return NULLs.
  So be safe and do an XINCREF.

  Klocwork # 221-222.
........
  r51221 | neal.norwitz | 2006-08-12 03:47:59 +0200 (Sat, 12 Aug 2006) | 7 lines

  This code is actually not used unless WITHOUT_COMPLEX is defined.
  However, there was no error checking that PyFloat_FromDouble returned
  a valid pointer.  I believe this change is correct as it seemed
  to follow other code in the area.

  Klocwork # 292.
........
  r51222 | neal.norwitz | 2006-08-12 03:49:12 +0200 (Sat, 12 Aug 2006) | 5 lines

  Handle NULL nodes while parsing.  I'm not entirely sure this is correct.
  There might be something else that needs to be done to setup the error.

  Klocwork #295.
........
  r51223 | neal.norwitz | 2006-08-12 03:50:38 +0200 (Sat, 12 Aug 2006) | 6 lines

  If _stat_float_times is false, we will try to INCREF ival which could be NULL.
  Return early in that case.  The caller checks for PyErr_Occurred so this
  should be ok.

  Klocwork #297
........
  r51224 | neal.norwitz | 2006-08-12 03:51:12 +0200 (Sat, 12 Aug 2006) | 3 lines

  Move the assert which checks for a NULL pointer first.
  Klocwork #274.
........
  r51225 | neal.norwitz | 2006-08-12 03:53:28 +0200 (Sat, 12 Aug 2006) | 5 lines

  Try to handle a malloc failure.  I'm not entirely sure this is correct.
  There might be something else we need to do to handle the exception.

  Klocwork # 212-213
........
  r51226 | neal.norwitz | 2006-08-12 03:57:47 +0200 (Sat, 12 Aug 2006) | 6 lines

  I'm not sure why this code allocates this string for the error message.
  I think it would be better to always use snprintf and have the format
  limit the size of the name appropriately (like %.200s).

  Klocwork #340
........
  r51227 | neal.norwitz | 2006-08-12 04:06:34 +0200 (Sat, 12 Aug 2006) | 3 lines

  Check returned pointer is valid.
  Klocwork #233
........
  r51228 | neal.norwitz | 2006-08-12 04:12:30 +0200 (Sat, 12 Aug 2006) | 1 line

  Whoops, how did that get in there? :-)  Revert all the parts of r51227 that were not supposed to go in.  Only Modules/_ctypes/cfields.c was supposed to be changed.
........
  r51229 | neal.norwitz | 2006-08-12 04:33:36 +0200 (Sat, 12 Aug 2006) | 4 lines

  Don't deref v if it's NULL.

  Klocwork #214
........
  r51230 | neal.norwitz | 2006-08-12 05:16:54 +0200 (Sat, 12 Aug 2006) | 5 lines

  Check return of PyMem_MALLOC (garbage) is non-NULL.
  Check seq in both portions of if/else.

  Klocwork #289-290.
........
  r51231 | neal.norwitz | 2006-08-12 05:17:41 +0200 (Sat, 12 Aug 2006) | 4 lines

  PyModule_GetDict() can fail; produce fatal errors if this happens on startup.

  Klocwork #298-299.
........
  r51232 | neal.norwitz | 2006-08-12 05:18:50 +0200 (Sat, 12 Aug 2006) | 5 lines

  Verify that verdat, which is returned from malloc, is not NULL.
  Ensure we don't pass NULL to free.

  Klocwork #306 (at least the first part, checking malloc)
........
  r51233 | tim.peters | 2006-08-12 06:42:47 +0200 (Sat, 12 Aug 2006) | 35 lines

  test_signal:  Signal handling on the Tru64 buildbot
  appears to be utterly insane.  Plug some theoretical
  insecurities in the test script:

  - Verify that the SIGALRM handler was actually installed.

  - Don't call alarm() before the handler is installed.

  - Move everything that can fail inside the try/finally,
    so the test cleans up after itself more often.

  - Try sending all the expected signals in
    force_test_exit(), not just SIGALRM.  Since that was
    fixed to actually send SIGALRM (instead of invisibly
    dying with an AttributeError), we've seen that sending
    SIGALRM alone does not stop this from hanging.

  - Move the "kill the child" business into the finally
    clause, so the child doesn't survive test failure
    to send SIGALRM to other tests later (there are also
    baffling SIGALRM-related failures in test_socket).

  - Cancel the alarm in the finally clause -- if the
    test dies early, we again don't want SIGALRM showing
    up to confuse a later test.

  Alas, this still relies on timing luck wrt the spawned
  script that sends the test signals, but it's hard to see
  how waiting for seconds can so often be so unlucky.

  test_threadedsignals:  curiously, this test never fails
  on Tru64, but doesn't normally signal SIGALRM.  Anyway,
  fixed an obvious (but probably inconsequential) logic
  error.
........
  r51234 | tim.peters | 2006-08-12 07:17:41 +0200 (Sat, 12 Aug 2006) | 8 lines

  Ah, fudge.  One of the prints here actually "shouldn't be"
  protected by "if verbose:", which caused the test to fail on
  all non-Windows boxes.

  Note that I deliberately didn't convert this to unittest yet,
  because I expect it would be even harder to debug this on Tru64
  after conversion.
........
  r51235 | georg.brandl | 2006-08-12 10:32:02 +0200 (Sat, 12 Aug 2006) | 3 lines

  Repair logging test spew caused by rev. 51206.
........
  r51236 | neal.norwitz | 2006-08-12 19:03:09 +0200 (Sat, 12 Aug 2006) | 8 lines

  Patch #1538606: fix __index__() clipping.

  I modified this patch some by fixing style, some error checking, and adding
  XXX comments.  This patch requires review and some changes are to be expected.
  I'm checking in now to get the greatest possible review and establish a
  baseline for moving forward.  I don't want this to hold up release if possible.
........
  r51238 | neal.norwitz | 2006-08-12 20:44:06 +0200 (Sat, 12 Aug 2006) | 10 lines

  Fix a couple of bugs exposed by the new __index__ code.  The 64-bit buildbots
  were failing due to inappropriate clipping of numbers larger than 2**31
  with new-style classes. (typeobject.c)  In reviewing the code for classic
  classes, there were 2 problems.  Any negative value could be returned.
  Always return -1 if there was an error.  Also make the checks similar
  with the new-style classes.  I believe this is correct for 32 and 64 bit
  boxes, including Windows64.

  Add a test of classic classes too.
........
  r51240 | neal.norwitz | 2006-08-13 02:20:49 +0200 (Sun, 13 Aug 2006) | 1 line

  SF bug #1539336, distutils example code missing
........
  r51245 | neal.norwitz | 2006-08-13 20:10:10 +0200 (Sun, 13 Aug 2006) | 6 lines

  Move/copy assert for tstate != NULL before first use.
  Verify that PyEval_Get{Globals,Locals} returned valid pointers.

  Klocwork 231-232
........
  r51246 | neal.norwitz | 2006-08-13 20:10:28 +0200 (Sun, 13 Aug 2006) | 5 lines

  Handle a whole lot of failures from PyString_FromInternedString().

  Should fix most of Klocwork 234-272.
........
  r51247 | neal.norwitz | 2006-08-13 20:10:47 +0200 (Sun, 13 Aug 2006) | 8 lines

  cpathname could be NULL if it was longer than MAXPATHLEN.  Don't try
  to write the .pyc to NULL.

  Check results of PyList_GetItem() and PyModule_GetDict() are not NULL.

  Klocwork 282, 283, 285
........
  r51248 | neal.norwitz | 2006-08-13 20:11:08 +0200 (Sun, 13 Aug 2006) | 6 lines

  Fix segfault when doing string formatting on subclasses of long if
  __oct__, __hex__ don't return a string.

  Klocwork 308
........
  r51250 | neal.norwitz | 2006-08-13 20:11:27 +0200 (Sun, 13 Aug 2006) | 5 lines

  Check return result of PyModule_GetDict().
  Fix a bunch of refleaks in the init of the module.  This would only be found
  when running python -v.
........
  r51251 | neal.norwitz | 2006-08-13 20:11:43 +0200 (Sun, 13 Aug 2006) | 5 lines

  Handle malloc and fopen failures more gracefully.

  Klocwork 180-181
........
  r51252 | neal.norwitz | 2006-08-13 20:12:03 +0200 (Sun, 13 Aug 2006) | 7 lines

  It's very unlikely, though possible that source is not a string.  Verify
  that PyString_AsString() returns a valid pointer.  (The problem can
  arise when zlib.decompress doesn't return a string.)

  Klocwork 346
........
  r51253 | neal.norwitz | 2006-08-13 20:12:26 +0200 (Sun, 13 Aug 2006) | 5 lines

  Handle failures from lookup.

  Klocwork 341-342
........
  r51254 | neal.norwitz | 2006-08-13 20:12:45 +0200 (Sun, 13 Aug 2006) | 6 lines

  Handle failure from PyModule_GetDict() (Klocwork 208).

  Fix a bunch of refleaks in the init of the module.  This would only be found
  when running python -v.
........
  r51255 | neal.norwitz | 2006-08-13 20:13:02 +0200 (Sun, 13 Aug 2006) | 4 lines

  Really address the issue of where to place the assert for leftblock.
  (Followup of Klocwork 274)
........
  r51256 | neal.norwitz | 2006-08-13 20:13:36 +0200 (Sun, 13 Aug 2006) | 4 lines

  Handle malloc failure.

  Klocwork 281
........
  r51258 | neal.norwitz | 2006-08-13 20:40:39 +0200 (Sun, 13 Aug 2006) | 4 lines

  Handle alloca failures.

  Klocwork 225-228
........
  r51259 | neal.norwitz | 2006-08-13 20:41:15 +0200 (Sun, 13 Aug 2006) | 1 line

  Get rid of compiler warning
........
  r51261 | neal.norwitz | 2006-08-14 02:51:15 +0200 (Mon, 14 Aug 2006) | 1 line

  Ignore pgen.exe and kill_python.exe for cygwin
........
  r51262 | neal.norwitz | 2006-08-14 02:59:03 +0200 (Mon, 14 Aug 2006) | 4 lines

  Can't return NULL from a void function.  If there is a memory error,
  about the best we can do is call PyErr_WriteUnraisable and go on.
  We won't be able to do the call below either, so verify delstr is valid.
........
  r51263 | neal.norwitz | 2006-08-14 03:49:54 +0200 (Mon, 14 Aug 2006) | 1 line

  Update purify doc some.
........
  r51264 | thomas.heller | 2006-08-14 09:13:05 +0200 (Mon, 14 Aug 2006) | 2 lines

  Remove unused, buggy test function.
  Fixes Klocwork issue #207.
........
  r51265 | thomas.heller | 2006-08-14 09:14:09 +0200 (Mon, 14 Aug 2006) | 2 lines

  Check for NULL return value from new_CArgObject().
  Fixes Klocwork issues #183, #184, #185.
........
  r51266 | thomas.heller | 2006-08-14 09:50:14 +0200 (Mon, 14 Aug 2006) | 2 lines

  Check for NULL return value of GenericCData_new().
  Fixes Klocwork issues #188, #189.
........
  r51274 | thomas.heller | 2006-08-14 12:02:24 +0200 (Mon, 14 Aug 2006) | 2 lines

  Revert the change that tries to zero out a closure's result storage
  area because the size is unknown in source/callproc.c.
........
  r51276 | marc-andre.lemburg | 2006-08-14 12:55:19 +0200 (Mon, 14 Aug 2006) | 11 lines

  Slightly revised version of patch #1538956:

  Replace UnicodeDecodeErrors raised during == and !=
  compares of Unicode and other objects with a new
  UnicodeWarning.

  All other comparisons continue to raise exceptions.
  Exceptions other than UnicodeDecodeErrors are also left
  untouched.
........
  r51277 | thomas.heller | 2006-08-14 13:17:48 +0200 (Mon, 14 Aug 2006) | 13 lines

  Apply the patch #1532975 plus ideas from the patch #1533481.

  ctypes instances no longer have the internal and undocumented
  '_as_parameter_' attribute which was used to adapt them to foreign
  function calls; this mechanism is replaced by a function pointer in
  the type's stgdict.

  In the 'from_param' class methods, try the _as_parameter_ attribute if
  other conversions are not possible.

  This makes the documented _as_parameter_ mechanism work as intended.

  Change the ctypes version number to 1.0.1.
........
  r51278 | marc-andre.lemburg | 2006-08-14 13:44:34 +0200 (Mon, 14 Aug 2006) | 3 lines

  Readd NEWS items that were accidentally removed by r51276.
........
  r51279 | georg.brandl | 2006-08-14 14:36:06 +0200 (Mon, 14 Aug 2006) | 3 lines

  Improve markup in PyUnicode_RichCompare.
........
  r51280 | marc-andre.lemburg | 2006-08-14 14:57:27 +0200 (Mon, 14 Aug 2006) | 3 lines

  Correct an accidentally removed previous patch.
........
  r51281 | thomas.heller | 2006-08-14 18:17:41 +0200 (Mon, 14 Aug 2006) | 3 lines

  Patch #1536908: Add support for AMD64 / OpenBSD.
  Remove the -no-stack-protector compiler flag for OpenBSD
  as it has been reported to be unneeded.
........
  r51282 | thomas.heller | 2006-08-14 18:20:04 +0200 (Mon, 14 Aug 2006) | 1 line

  News item for rev 51281.
........
  r51283 | georg.brandl | 2006-08-14 22:25:39 +0200 (Mon, 14 Aug 2006) | 3 lines

  Fix refleak introduced in rev. 51248.
........
  r51284 | georg.brandl | 2006-08-14 23:34:08 +0200 (Mon, 14 Aug 2006) | 5 lines

  Make tabnanny recognize IndentationErrors raised by tokenize.
  Add a test to test_inspect to make sure indented source
  is recognized correctly. (fixes #1224621)
........
  r51285 | georg.brandl | 2006-08-14 23:42:55 +0200 (Mon, 14 Aug 2006) | 3 lines

  Patch #1535500: fix segfault in BZ2File.writelines and make sure it
  raises the correct exceptions.
........
  r51287 | georg.brandl | 2006-08-14 23:45:32 +0200 (Mon, 14 Aug 2006) | 3 lines

  Add an additional test: BZ2File write methods should raise IOError
  when file is read-only.
........
  r51289 | georg.brandl | 2006-08-14 23:55:28 +0200 (Mon, 14 Aug 2006) | 3 lines

  Patch #1536071: trace.py should now find the full module name of a
  file correctly even on Windows.
........
  r51290 | georg.brandl | 2006-08-15 00:01:24 +0200 (Tue, 15 Aug 2006) | 3 lines

  Cookie.py shouldn't "bogusly" use string._idmap.
........
  r51291 | georg.brandl | 2006-08-15 00:10:24 +0200 (Tue, 15 Aug 2006) | 3 lines

  Patch #1511317: don't crash on invalid hostname info
........
  r51292 | tim.peters | 2006-08-15 02:25:04 +0200 (Tue, 15 Aug 2006) | 2 lines

  Whitespace normalization.
........
  r51293 | neal.norwitz | 2006-08-15 06:14:57 +0200 (Tue, 15 Aug 2006) | 3 lines

  Georg fixed one of my bugs, so I'll repay him with 2 NEWS entries.
  Now we're even. :-)
........
  r51295 | neal.norwitz | 2006-08-15 06:58:28 +0200 (Tue, 15 Aug 2006) | 8 lines

  Fix the test for SocketServer so it should pass on cygwin and not fail
  sporadically on other platforms.  This is really a band-aid that doesn't
  fix the underlying issue in SocketServer.  It's not clear if it's worth
  it to fix SocketServer, however, I opened a bug to track it:

  	http://python.org/sf/1540386
........
  r51296 | neal.norwitz | 2006-08-15 06:59:30 +0200 (Tue, 15 Aug 2006) | 3 lines

  Update the docstring to use a version a little newer than 1999.  This was
  taken from a Debian patch.  Should we update the version for each release?
........
  r51298 | neal.norwitz | 2006-08-15 08:29:03 +0200 (Tue, 15 Aug 2006) | 2 lines

  Subclasses of int/long are allowed to define an __index__.
........
  r51300 | thomas.heller | 2006-08-15 15:07:21 +0200 (Tue, 15 Aug 2006) | 1 line

  Check for NULL return value from new_CArgObject calls.
........
  r51303 | kurt.kaiser | 2006-08-16 05:15:26 +0200 (Wed, 16 Aug 2006) | 2 lines

  The 'with' statement is now a Code Context block opener
........
  r51304 | anthony.baxter | 2006-08-16 05:42:26 +0200 (Wed, 16 Aug 2006) | 1 line

  preparing for 2.5c1
........
  r51305 | anthony.baxter | 2006-08-16 05:58:37 +0200 (Wed, 16 Aug 2006) | 1 line

  preparing for 2.5c1 - no, really this time
........
  r51306 | kurt.kaiser | 2006-08-16 07:01:42 +0200 (Wed, 16 Aug 2006) | 9 lines

  Patch #1540892: site.py Quitter() class attempts to close sys.stdin
  before raising SystemExit, allowing IDLE to honor quit() and exit().

  M    Lib/site.py
  M    Lib/idlelib/PyShell.py
  M    Lib/idlelib/CREDITS.txt
  M    Lib/idlelib/NEWS.txt
  M    Misc/NEWS
........
  r51307 | ka-ping.yee | 2006-08-16 09:02:50 +0200 (Wed, 16 Aug 2006) | 6 lines

  Update code and tests to support the 'bytes_le' attribute (for
  little-endian byte order on Windows), and to work around clocks
  with low resolution yielding duplicate UUIDs.

  Anthony Baxter has approved this change.
........
  r51308 | kurt.kaiser | 2006-08-16 09:04:17 +0200 (Wed, 16 Aug 2006) | 2 lines

  Get quit() and exit() to work cleanly when not using subprocess.
........
  r51309 | marc-andre.lemburg | 2006-08-16 10:13:26 +0200 (Wed, 16 Aug 2006) | 2 lines

  Revert to having static version numbers again.
........
  r51310 | martin.v.loewis | 2006-08-16 14:55:10 +0200 (Wed, 16 Aug 2006) | 2 lines

  Build _hashlib on Windows. Build OpenSSL with masm assembler code.
  Fixes #1535502.
........
  r51311 | thomas.heller | 2006-08-16 15:03:11 +0200 (Wed, 16 Aug 2006) | 6 lines

  Add commented assert statements to check that the results of
  PyObject_stgdict() and PyType_stgdict() calls are non-NULL before
  dereferencing them.  Hopefully this fixes what Klocwork is
  complaining about.

  Fix a few other nits as well.
........
  r51312 | anthony.baxter | 2006-08-16 15:08:25 +0200 (Wed, 16 Aug 2006) | 1 line

  news entry for 51307
........
  r51313 | andrew.kuchling | 2006-08-16 15:22:20 +0200 (Wed, 16 Aug 2006) | 1 line

  Add UnicodeWarning
........
  r51314 | andrew.kuchling | 2006-08-16 15:41:52 +0200 (Wed, 16 Aug 2006) | 1 line

  Bump document version to 1.0; remove pystone paragraph
........
  r51315 | andrew.kuchling | 2006-08-16 15:51:32 +0200 (Wed, 16 Aug 2006) | 1 line

  Link to docs; remove an XXX comment
........
  r51316 | martin.v.loewis | 2006-08-16 15:58:51 +0200 (Wed, 16 Aug 2006) | 1 line

  Make cl build step compile-only (/c). Remove libs from source list.
........
  r51317 | thomas.heller | 2006-08-16 16:07:44 +0200 (Wed, 16 Aug 2006) | 5 lines

  The __repr__ method of a NULL py_object no longer raises an
  exception.  Remove a stray '?' character from the exception text
  raised when the value of such an object is retrieved.

  Includes tests.
........
  r51318 | andrew.kuchling | 2006-08-16 16:18:23 +0200 (Wed, 16 Aug 2006) | 1 line

  Update bug/patch counts
........
  r51319 | andrew.kuchling | 2006-08-16 16:21:14 +0200 (Wed, 16 Aug 2006) | 1 line

  Wording/typo fixes
........
  r51320 | thomas.heller | 2006-08-16 17:10:12 +0200 (Wed, 16 Aug 2006) | 9 lines

  Remove the special casing of Py_None when converting the return value
  of the Python part of a callback function to C.  If it cannot be
  converted, call PyErr_WriteUnraisable with the exception we got.
  Before, arbitrary data was passed to the calling C code in this
  case.

  (I'm not really sure the NEWS entry is understandable, but I cannot
  find better words)
........
  r51321 | marc-andre.lemburg | 2006-08-16 18:11:01 +0200 (Wed, 16 Aug 2006) | 2 lines

  Add NEWS item mentioning the reverted distutils version number patch.
........
  r51322 | fredrik.lundh | 2006-08-16 18:47:07 +0200 (Wed, 16 Aug 2006) | 5 lines

  SF#1534630

  ignore data that arrives before the opening start tag
........
  r51324 | andrew.kuchling | 2006-08-16 19:11:18 +0200 (Wed, 16 Aug 2006) | 1 line

  Grammar fix
........
  r51328 | thomas.heller | 2006-08-16 20:02:11 +0200 (Wed, 16 Aug 2006) | 12 lines

  Tutorial:

      Clarify somewhat how parameters are passed to functions
      (especially explain what integer means).

      Correct the table - Python integers and longs can both be used.
      Further clarification to the table comparing ctypes types, Python
      types, and C types.

  Reference:

      Replace integer by C ``int`` where it makes sense.
........
  r51329 | kurt.kaiser | 2006-08-16 23:45:59 +0200 (Wed, 16 Aug 2006) | 8 lines

  File menu hotkeys: there were three 'p' assignments.  Reassign the
  'Save Copy As' and 'Print' hotkeys to 'y' and 't'.  Change the
  Shell menu hotkey from 's' to 'l'.

  M    Bindings.py
  M    PyShell.py
  M    NEWS.txt
........
  r51330 | neil.schemenauer | 2006-08-17 01:38:05 +0200 (Thu, 17 Aug 2006) | 3 lines

  Fix a bug in the ``compiler`` package that caused invalid code to be
  generated for generator expressions.
........
  r51342 | martin.v.loewis | 2006-08-17 21:19:32 +0200 (Thu, 17 Aug 2006) | 3 lines

  Merge 51340 and 51341 from 2.5 branch:
  Leave tk build directory to restore original path.
  Invoke debug mk1mf.pl after running Configure.
........
  r51354 | martin.v.loewis | 2006-08-18 05:47:18 +0200 (Fri, 18 Aug 2006) | 3 lines

  Bug #1541863: uuid.uuid1 failed to generate unique identifiers
  on systems with low clock resolution.
........
  r51355 | neal.norwitz | 2006-08-18 05:57:54 +0200 (Fri, 18 Aug 2006) | 1 line

  Add template for 2.6 on HEAD
........
  r51356 | neal.norwitz | 2006-08-18 06:01:38 +0200 (Fri, 18 Aug 2006) | 1 line

  More post-release wibble
........
  r51357 | neal.norwitz | 2006-08-18 06:58:33 +0200 (Fri, 18 Aug 2006) | 1 line

  Try to get Windows bots working again
........
  r51358 | neal.norwitz | 2006-08-18 07:10:00 +0200 (Fri, 18 Aug 2006) | 1 line

  Try to get Windows bots working again. Take 2
........
  r51359 | neal.norwitz | 2006-08-18 07:39:20 +0200 (Fri, 18 Aug 2006) | 1 line

  Try to get Unix bots install working again.
........
  r51360 | neal.norwitz | 2006-08-18 07:41:46 +0200 (Fri, 18 Aug 2006) | 1 line

  Set version to 2.6a0, seems more consistent.
........
  r51362 | neal.norwitz | 2006-08-18 08:14:52 +0200 (Fri, 18 Aug 2006) | 1 line

  More version wibble
........
  r51364 | georg.brandl | 2006-08-18 09:27:59 +0200 (Fri, 18 Aug 2006) | 4 lines

  Bug #1541682: Fix example in the "Refcount details" API docs.
  Additionally, remove a faulty example showing PySequence_SetItem applied
  to a newly created list object and add notes that this isn't a good idea.
........
  r51366 | anthony.baxter | 2006-08-18 09:29:02 +0200 (Fri, 18 Aug 2006) | 3 lines

  Updating IDLE's version number to match Python's (as per python-dev
  discussion).
........
  r51367 | anthony.baxter | 2006-08-18 09:30:07 +0200 (Fri, 18 Aug 2006) | 1 line

  RPM specfile updates
........
  r51368 | georg.brandl | 2006-08-18 09:35:47 +0200 (Fri, 18 Aug 2006) | 2 lines

  Typo in tp_clear docs.
........
  r51378 | andrew.kuchling | 2006-08-18 15:57:13 +0200 (Fri, 18 Aug 2006) | 1 line

  Minor edits
........
  r51379 | thomas.heller | 2006-08-18 16:38:46 +0200 (Fri, 18 Aug 2006) | 6 lines

  Add asserts to check for 'impossible' NULL values, with comments.
  In one place where I'm not 1000% sure about the non-NULL, raise
  a RuntimeError for safety.

  This should fix the klocwork issues that Neal sent me.  If so,
  it should be applied to the release25-maint branch also.
........
  r51400 | neal.norwitz | 2006-08-19 06:22:33 +0200 (Sat, 19 Aug 2006) | 5 lines

  Move initialization of interned strings to before allocating the
  object so we don't leak op.  (Fixes an earlier patch to this code)

  Klocwork #350
........
  r51401 | neal.norwitz | 2006-08-19 06:23:04 +0200 (Sat, 19 Aug 2006) | 4 lines

  Move assert to after NULL check, otherwise we deref NULL in the assert.

  Klocwork #307
........
  r51402 | neal.norwitz | 2006-08-19 06:25:29 +0200 (Sat, 19 Aug 2006) | 2 lines

  SF #1542693: Remove semi-colon at end of PyImport_ImportModuleEx macro
........
  r51403 | neal.norwitz | 2006-08-19 06:28:55 +0200 (Sat, 19 Aug 2006) | 6 lines

  Move initialization to after the asserts for non-NULL values.

  Klocwork 286-287.

  (I'm not backporting this, but if someone wants to, feel free.)
........
  r51404 | neal.norwitz | 2006-08-19 06:52:03 +0200 (Sat, 19 Aug 2006) | 6 lines

  Handle PyString_FromInternedString() failing (unlikely, but possible).

  Klocwork #325

  (I'm not backporting this, but if someone wants to, feel free.)
........
  r51416 | georg.brandl | 2006-08-20 15:15:39 +0200 (Sun, 20 Aug 2006) | 2 lines

  Patch #1542948: fix urllib2 header casing issue. With new test.
........
  r51428 | jeremy.hylton | 2006-08-21 18:19:37 +0200 (Mon, 21 Aug 2006) | 3 lines

  Move peephole optimizer to separate file.
........
  r51429 | jeremy.hylton | 2006-08-21 18:20:29 +0200 (Mon, 21 Aug 2006) | 2 lines

  Move peephole optimizer to separate file.  (Forgot .h in previous checkin.)
........
  r51432 | neal.norwitz | 2006-08-21 19:59:46 +0200 (Mon, 21 Aug 2006) | 5 lines

  Fix bug #1543303, tarfile adds padding that breaks gunzip.
  Patch # 1543897.

  Will backport to 2.5
........
  r51433 | neal.norwitz | 2006-08-21 20:01:30 +0200 (Mon, 21 Aug 2006) | 2 lines

  Add assert to make Klocwork happy (#276)
........

/*
python-bz2 - python bz2 library interface
Copyright (c) 2002 Gustavo Niemeyer <niemeyer@conectiva.com>
Copyright (c) 2002 Python Software Foundation; All Rights Reserved
*/
#include "Python.h"
#include <stdio.h>
#include <bzlib.h>
#include "structmember.h"
#ifdef WITH_THREAD
#include "pythread.h"
#endif
static char __author__[] =
"The bz2 python module was written by:\n\
\n\
Gustavo Niemeyer <niemeyer@conectiva.com>\n\
";
/* Our very own off_t-like type, 64-bit if possible */
/* copied from Objects/fileobject.c */
#if !defined(HAVE_LARGEFILE_SUPPORT)
typedef off_t Py_off_t;
#elif SIZEOF_OFF_T >= 8
typedef off_t Py_off_t;
#elif SIZEOF_FPOS_T >= 8
typedef fpos_t Py_off_t;
#else
#error "Large file support, but neither off_t nor fpos_t is large enough."
#endif
#define BUF(v) PyString_AS_STRING((PyStringObject *)v)
#define MODE_CLOSED 0
#define MODE_READ 1
#define MODE_READ_EOF 2
#define MODE_WRITE 3
#define BZ2FileObject_Check(v) ((v)->ob_type == &BZ2File_Type)
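/* Modern bzlib (which defines BZ_CONFIG_ERROR) splits the 64-bit output
 * counter across total_out_hi32/total_out_lo32, so BZS_TOTAL_OUT()
 * reassembles it into the widest integer type available; pre-1.0 bzlib
 * used unprefixed function names and a single total_out field, hence the
 * compatibility defines in the #else branch below. */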
#ifdef BZ_CONFIG_ERROR
#if SIZEOF_LONG >= 8
#define BZS_TOTAL_OUT(bzs) \
(((long)bzs->total_out_hi32 << 32) + bzs->total_out_lo32)
#elif SIZEOF_LONG_LONG >= 8
#define BZS_TOTAL_OUT(bzs) \
(((PY_LONG_LONG)bzs->total_out_hi32 << 32) + bzs->total_out_lo32)
#else
#define BZS_TOTAL_OUT(bzs) \
bzs->total_out_lo32
#endif
#else /* ! BZ_CONFIG_ERROR */
#define BZ2_bzRead bzRead
#define BZ2_bzReadOpen bzReadOpen
#define BZ2_bzReadClose bzReadClose
#define BZ2_bzWrite bzWrite
#define BZ2_bzWriteOpen bzWriteOpen
#define BZ2_bzWriteClose bzWriteClose
#define BZ2_bzCompress bzCompress
#define BZ2_bzCompressInit bzCompressInit
#define BZ2_bzCompressEnd bzCompressEnd
#define BZ2_bzDecompress bzDecompress
#define BZ2_bzDecompressInit bzDecompressInit
#define BZ2_bzDecompressEnd bzDecompressEnd
#define BZS_TOTAL_OUT(bzs) bzs->total_out
#endif /* ! BZ_CONFIG_ERROR */
#ifdef WITH_THREAD
#define ACQUIRE_LOCK(obj) PyThread_acquire_lock(obj->lock, 1)
#define RELEASE_LOCK(obj) PyThread_release_lock(obj->lock)
#else
#define ACQUIRE_LOCK(obj)
#define RELEASE_LOCK(obj)
#endif
/* Bits in f_newlinetypes */
#define NEWLINE_UNKNOWN 0 /* No newline seen, yet */
#define NEWLINE_CR 1 /* \r newline seen */
#define NEWLINE_LF 2 /* \n newline seen */
#define NEWLINE_CRLF 4 /* \r\n newline seen */
/* ===================================================================== */
/* Structure definitions. */
typedef struct {
PyObject_HEAD
PyObject *file;
char* f_buf; /* Allocated readahead buffer */
char* f_bufend; /* Points after last occupied position */
char* f_bufptr; /* Current buffer position */
int f_softspace; /* Flag used by 'print' command */
int f_univ_newline; /* Handle any newline convention */
int f_newlinetypes; /* Types of newlines seen */
int f_skipnextlf; /* Skip next \n */
BZFILE *fp;
int mode;
Py_off_t pos;
Py_off_t size;
#ifdef WITH_THREAD
PyThread_type_lock lock;
#endif
} BZ2FileObject;
typedef struct {
PyObject_HEAD
bz_stream bzs;
int running;
#ifdef WITH_THREAD
PyThread_type_lock lock;
#endif
} BZ2CompObject;
typedef struct {
PyObject_HEAD
bz_stream bzs;
int running;
PyObject *unused_data;
#ifdef WITH_THREAD
PyThread_type_lock lock;
#endif
} BZ2DecompObject;
/* ===================================================================== */
/* Utility functions. */
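/* Translate a bzlib error code into a Python exception.  Returns 1 if an
 * exception was set (the caller should bail out), and 0 for BZ_OK and
 * BZ_STREAM_END, which are not errors. */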
static int
Util_CatchBZ2Error(int bzerror)
{
int ret = 0;
switch(bzerror) {
case BZ_OK:
case BZ_STREAM_END:
break;
#ifdef BZ_CONFIG_ERROR
case BZ_CONFIG_ERROR:
PyErr_SetString(PyExc_SystemError,
"the bz2 library was not compiled "
"correctly");
ret = 1;
break;
#endif
case BZ_PARAM_ERROR:
PyErr_SetString(PyExc_ValueError,
"the bz2 library has received wrong "
"parameters");
ret = 1;
break;
case BZ_MEM_ERROR:
PyErr_NoMemory();
ret = 1;
break;
case BZ_DATA_ERROR:
case BZ_DATA_ERROR_MAGIC:
PyErr_SetString(PyExc_IOError, "invalid data stream");
ret = 1;
break;
case BZ_IO_ERROR:
PyErr_SetString(PyExc_IOError, "unknown IO error");
ret = 1;
break;
case BZ_UNEXPECTED_EOF:
PyErr_SetString(PyExc_EOFError,
"compressed file ended before the "
"logical end-of-stream was detected");
ret = 1;
break;
case BZ_SEQUENCE_ERROR:
PyErr_SetString(PyExc_RuntimeError,
"wrong sequence of bz2 library "
"commands used");
ret = 1;
break;
}
return ret;
}
#if BUFSIZ < 8192
#define SMALLCHUNK 8192
#else
#define SMALLCHUNK BUFSIZ
#endif
#if SIZEOF_INT < 4
#define BIGCHUNK (512 * 32)
#else
#define BIGCHUNK (512 * 1024)
#endif
/* This is a hacked version of Python's fileobject.c:new_buffersize(). */
static size_t
Util_NewBufferSize(size_t currentsize)
{
if (currentsize > SMALLCHUNK) {
/* Keep doubling until we reach BIGCHUNK;
then keep adding BIGCHUNK. */
if (currentsize <= BIGCHUNK)
return currentsize + currentsize;
else
return currentsize + BIGCHUNK;
}
return currentsize + SMALLCHUNK;
}
/* This is a hacked version of Python's fileobject.c:get_line(). */
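/* Read one line from f.  A positive n limits the line to at most n bytes;
 * n <= 0 means read until '\n' or end of stream.  In universal-newline
 * mode, '\r' and "\r\n" are translated to '\n' and the newline kinds seen
 * are accumulated in f->f_newlinetypes. */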
static PyObject *
Util_GetLine(BZ2FileObject *f, int n)
{
char c;
char *buf, *end;
size_t total_v_size; /* total # of slots in buffer */
size_t used_v_size; /* # used slots in buffer */
size_t increment; /* amount to increment the buffer */
PyObject *v;
int bzerror;
int newlinetypes = f->f_newlinetypes;
int skipnextlf = f->f_skipnextlf;
int univ_newline = f->f_univ_newline;
total_v_size = n > 0 ? n : 100;
v = PyString_FromStringAndSize((char *)NULL, total_v_size);
if (v == NULL)
return NULL;
buf = BUF(v);
end = buf + total_v_size;
for (;;) {
Py_BEGIN_ALLOW_THREADS
if (univ_newline) {
while (1) {
BZ2_bzRead(&bzerror, f->fp, &c, 1);
f->pos++;
if (bzerror != BZ_OK || buf == end)
break;
if (skipnextlf) {
skipnextlf = 0;
if (c == '\n') {
/* Seeing a \n here with
* skipnextlf true means we
* saw a \r before.
*/
newlinetypes |= NEWLINE_CRLF;
BZ2_bzRead(&bzerror, f->fp,
&c, 1);
if (bzerror != BZ_OK)
break;
} else {
newlinetypes |= NEWLINE_CR;
}
}
if (c == '\r') {
skipnextlf = 1;
c = '\n';
} else if ( c == '\n')
newlinetypes |= NEWLINE_LF;
*buf++ = c;
if (c == '\n') break;
}
if (bzerror == BZ_STREAM_END && skipnextlf)
newlinetypes |= NEWLINE_CR;
} else /* If not universal newlines use the normal loop */
do {
BZ2_bzRead(&bzerror, f->fp, &c, 1);
f->pos++;
*buf++ = c;
} while (bzerror == BZ_OK && c != '\n' && buf != end);
Py_END_ALLOW_THREADS
f->f_newlinetypes = newlinetypes;
f->f_skipnextlf = skipnextlf;
if (bzerror == BZ_STREAM_END) {
f->size = f->pos;
f->mode = MODE_READ_EOF;
break;
} else if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
Py_DECREF(v);
return NULL;
}
if (c == '\n')
break;
/* Must be because buf == end */
if (n > 0)
break;
used_v_size = total_v_size;
increment = total_v_size >> 2; /* mild exponential growth */
total_v_size += increment;
if (total_v_size > INT_MAX) {
PyErr_SetString(PyExc_OverflowError,
"line is longer than a Python string can hold");
Py_DECREF(v);
return NULL;
}
if (_PyString_Resize(&v, total_v_size) < 0)
return NULL;
buf = BUF(v) + used_v_size;
end = BUF(v) + total_v_size;
}
used_v_size = buf - BUF(v);
if (used_v_size != total_v_size)
_PyString_Resize(&v, used_v_size);
return v;
}
/* This is a hacked version of Python's
* fileobject.c:Py_UniversalNewlineFread(). */
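/* Read up to n bytes into buf, applying universal-newline translation when
 * it is enabled for f ("\r\n" and bare '\r' become '\n').  Returns the number
 * of bytes stored in buf, which may be smaller than the number of bytes read
 * from the stream because "\r\n" collapses to a single byte. */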
size_t
Util_UnivNewlineRead(int *bzerror, BZFILE *stream,
char* buf, size_t n, BZ2FileObject *f)
{
char *dst = buf;
int newlinetypes, skipnextlf;
assert(buf != NULL);
assert(stream != NULL);
if (!f->f_univ_newline)
return BZ2_bzRead(bzerror, stream, buf, n);
newlinetypes = f->f_newlinetypes;
skipnextlf = f->f_skipnextlf;
/* Invariant: n is the number of bytes remaining to be filled
* in the buffer.
*/
while (n) {
size_t nread;
int shortread;
char *src = dst;
nread = BZ2_bzRead(bzerror, stream, dst, n);
assert(nread <= n);
n -= nread; /* assuming 1 byte out for each in; will adjust */
shortread = n != 0; /* true iff EOF or error */
while (nread--) {
char c = *src++;
if (c == '\r') {
/* Save as LF and set flag to skip next LF. */
*dst++ = '\n';
skipnextlf = 1;
}
else if (skipnextlf && c == '\n') {
/* Skip LF, and remember we saw CR LF. */
skipnextlf = 0;
newlinetypes |= NEWLINE_CRLF;
++n;
}
else {
/* Normal char to be stored in buffer. Also
* update the newlinetypes flag if either this
* is an LF or the previous char was a CR.
*/
if (c == '\n')
newlinetypes |= NEWLINE_LF;
else if (skipnextlf)
newlinetypes |= NEWLINE_CR;
*dst++ = c;
skipnextlf = 0;
}
}
if (shortread) {
/* If this is EOF, update type flags. */
if (skipnextlf && *bzerror == BZ_STREAM_END)
newlinetypes |= NEWLINE_CR;
break;
}
}
f->f_newlinetypes = newlinetypes;
f->f_skipnextlf = skipnextlf;
return dst - buf;
}
/* This is a hacked version of Python's fileobject.c:drop_readahead(). */
static void
Util_DropReadAhead(BZ2FileObject *f)
{
if (f->f_buf != NULL) {
PyMem_Free(f->f_buf);
f->f_buf = NULL;
}
}
/* This is a hacked version of Python's fileobject.c:readahead(). */
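/* Make sure the readahead buffer holds data, reading up to bufsize bytes if
 * it is empty.  Returns 0 on success (an empty buffer at end of stream still
 * counts as success) and -1 with a Python exception set on error. */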
static int
Util_ReadAhead(BZ2FileObject *f, int bufsize)
{
int chunksize;
int bzerror;
if (f->f_buf != NULL) {
if((f->f_bufend - f->f_bufptr) >= 1)
return 0;
else
Util_DropReadAhead(f);
}
if (f->mode == MODE_READ_EOF) {
f->f_bufptr = f->f_buf;
f->f_bufend = f->f_buf;
return 0;
}
if ((f->f_buf = PyMem_Malloc(bufsize)) == NULL) {
PyErr_NoMemory();
return -1;
}
Py_BEGIN_ALLOW_THREADS
chunksize = Util_UnivNewlineRead(&bzerror, f->fp, f->f_buf,
bufsize, f);
Py_END_ALLOW_THREADS
f->pos += chunksize;
if (bzerror == BZ_STREAM_END) {
f->size = f->pos;
f->mode = MODE_READ_EOF;
} else if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
Util_DropReadAhead(f);
return -1;
}
f->f_bufptr = f->f_buf;
f->f_bufend = f->f_buf + chunksize;
return 0;
}
/* This is a hacked version of Python's
* fileobject.c:readahead_get_line_skip(). */
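/* Return the next line as a new string object whose first 'skip' bytes are
 * left for the caller to fill in.  When a line spans several readahead
 * buffers the function recurses with a larger 'skip' and 'bufsize', and each
 * level copies its chunk into the reserved space on the way back.  Returns
 * NULL on error. */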
static PyStringObject *
Util_ReadAheadGetLineSkip(BZ2FileObject *f, int skip, int bufsize)
{
PyStringObject* s;
char *bufptr;
char *buf;
int len;
if (f->f_buf == NULL)
if (Util_ReadAhead(f, bufsize) < 0)
return NULL;
len = f->f_bufend - f->f_bufptr;
if (len == 0)
return (PyStringObject *)
PyString_FromStringAndSize(NULL, skip);
bufptr = memchr(f->f_bufptr, '\n', len);
if (bufptr != NULL) {
bufptr++; /* Count the '\n' */
len = bufptr - f->f_bufptr;
s = (PyStringObject *)
PyString_FromStringAndSize(NULL, skip+len);
if (s == NULL)
return NULL;
memcpy(PyString_AS_STRING(s)+skip, f->f_bufptr, len);
f->f_bufptr = bufptr;
if (bufptr == f->f_bufend)
Util_DropReadAhead(f);
} else {
bufptr = f->f_bufptr;
buf = f->f_buf;
f->f_buf = NULL; /* Force new readahead buffer */
s = Util_ReadAheadGetLineSkip(f, skip+len,
bufsize + (bufsize>>2));
if (s == NULL) {
PyMem_Free(buf);
return NULL;
}
memcpy(PyString_AS_STRING(s)+skip, bufptr, len);
PyMem_Free(buf);
}
return s;
}
/* ===================================================================== */
/* Methods of BZ2File. */
PyDoc_STRVAR(BZ2File_read__doc__,
"read([size]) -> string\n\
\n\
Read at most size uncompressed bytes, returned as a string. If the size\n\
argument is negative or omitted, read until EOF is reached.\n\
");
/* This is a hacked version of Python's fileobject.c:file_read(). */
static PyObject *
BZ2File_read(BZ2FileObject *self, PyObject *args)
{
long bytesrequested = -1;
size_t bytesread, buffersize, chunksize;
int bzerror;
PyObject *ret = NULL;
if (!PyArg_ParseTuple(args, "|l:read", &bytesrequested))
return NULL;
ACQUIRE_LOCK(self);
switch (self->mode) {
case MODE_READ:
break;
case MODE_READ_EOF:
ret = PyString_FromString("");
goto cleanup;
case MODE_CLOSED:
PyErr_SetString(PyExc_ValueError,
"I/O operation on closed file");
goto cleanup;
default:
PyErr_SetString(PyExc_IOError,
"file is not ready for reading");
goto cleanup;
}
if (bytesrequested < 0)
buffersize = Util_NewBufferSize((size_t)0);
else
buffersize = bytesrequested;
if (buffersize > INT_MAX) {
PyErr_SetString(PyExc_OverflowError,
"requested number of bytes is "
"more than a Python string can hold");
goto cleanup;
}
ret = PyString_FromStringAndSize((char *)NULL, buffersize);
if (ret == NULL)
goto cleanup;
bytesread = 0;
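/* With no size request (bytesrequested < 0), keep reading and growing the
 * string via Util_NewBufferSize() until end of stream; otherwise a single
 * pass reads at most buffersize bytes. */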
for (;;) {
Py_BEGIN_ALLOW_THREADS
chunksize = Util_UnivNewlineRead(&bzerror, self->fp,
BUF(ret)+bytesread,
buffersize-bytesread,
self);
self->pos += chunksize;
Py_END_ALLOW_THREADS
bytesread += chunksize;
if (bzerror == BZ_STREAM_END) {
self->size = self->pos;
self->mode = MODE_READ_EOF;
break;
} else if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
Py_DECREF(ret);
ret = NULL;
goto cleanup;
}
if (bytesrequested < 0) {
buffersize = Util_NewBufferSize(buffersize);
if (_PyString_Resize(&ret, buffersize) < 0)
goto cleanup;
} else {
break;
}
}
if (bytesread != buffersize)
_PyString_Resize(&ret, bytesread);
cleanup:
RELEASE_LOCK(self);
return ret;
}
PyDoc_STRVAR(BZ2File_readline__doc__,
"readline([size]) -> string\n\
\n\
Return the next line from the file, as a string, retaining newline.\n\
A non-negative size argument will limit the maximum number of bytes to\n\
return (an incomplete line may be returned then). Return an empty\n\
string at EOF.\n\
");
static PyObject *
BZ2File_readline(BZ2FileObject *self, PyObject *args)
{
PyObject *ret = NULL;
int sizehint = -1;
if (!PyArg_ParseTuple(args, "|i:readline", &sizehint))
return NULL;
ACQUIRE_LOCK(self);
switch (self->mode) {
case MODE_READ:
break;
case MODE_READ_EOF:
ret = PyString_FromString("");
goto cleanup;
case MODE_CLOSED:
PyErr_SetString(PyExc_ValueError,
"I/O operation on closed file");
goto cleanup;
default:
PyErr_SetString(PyExc_IOError,
"file is not ready for reading");
goto cleanup;
}
if (sizehint == 0)
ret = PyString_FromString("");
else
ret = Util_GetLine(self, (sizehint < 0) ? 0 : sizehint);
cleanup:
RELEASE_LOCK(self);
return ret;
}
PyDoc_STRVAR(BZ2File_readlines__doc__,
"readlines([size]) -> list\n\
\n\
Call readline() repeatedly and return a list of lines read.\n\
The optional size argument, if given, is an approximate bound on the\n\
total number of bytes in the lines returned.\n\
");
/* This is a hacked version of Python's fileobject.c:file_readlines(). */
static PyObject *
BZ2File_readlines(BZ2FileObject *self, PyObject *args)
{
long sizehint = 0;
PyObject *list = NULL;
PyObject *line;
char small_buffer[SMALLCHUNK];
char *buffer = small_buffer;
size_t buffersize = SMALLCHUNK;
PyObject *big_buffer = NULL;
size_t nfilled = 0;
size_t nread;
size_t totalread = 0;
char *p, *q, *end;
int err;
int shortread = 0;
int bzerror;
if (!PyArg_ParseTuple(args, "|l:readlines", &sizehint))
return NULL;
ACQUIRE_LOCK(self);
switch (self->mode) {
case MODE_READ:
break;
case MODE_READ_EOF:
list = PyList_New(0);
goto cleanup;
case MODE_CLOSED:
PyErr_SetString(PyExc_ValueError,
"I/O operation on closed file");
goto cleanup;
default:
PyErr_SetString(PyExc_IOError,
"file is not ready for reading");
goto cleanup;
}
if ((list = PyList_New(0)) == NULL)
goto cleanup;
for (;;) {
Py_BEGIN_ALLOW_THREADS
nread = Util_UnivNewlineRead(&bzerror, self->fp,
buffer+nfilled,
buffersize-nfilled, self);
self->pos += nread;
Py_END_ALLOW_THREADS
if (bzerror == BZ_STREAM_END) {
self->size = self->pos;
self->mode = MODE_READ_EOF;
if (nread == 0) {
sizehint = 0;
break;
}
shortread = 1;
} else if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
error:
Py_DECREF(list);
list = NULL;
goto cleanup;
}
totalread += nread;
p = memchr(buffer+nfilled, '\n', nread);
if (!shortread && p == NULL) {
/* Need a larger buffer to fit this line */
nfilled += nread;
buffersize *= 2;
if (buffersize > INT_MAX) {
PyErr_SetString(PyExc_OverflowError,
"line is longer than a Python string can hold");
goto error;
}
if (big_buffer == NULL) {
/* Create the big buffer */
big_buffer = PyString_FromStringAndSize(
NULL, buffersize);
if (big_buffer == NULL)
goto error;
buffer = PyString_AS_STRING(big_buffer);
memcpy(buffer, small_buffer, nfilled);
}
else {
/* Grow the big buffer */
_PyString_Resize(&big_buffer, buffersize);
buffer = PyString_AS_STRING(big_buffer);
}
continue;
}
end = buffer+nfilled+nread;
q = buffer;
while (p != NULL) {
/* Process complete lines */
p++;
line = PyString_FromStringAndSize(q, p-q);
if (line == NULL)
goto error;
err = PyList_Append(list, line);
Py_DECREF(line);
if (err != 0)
goto error;
q = p;
p = memchr(q, '\n', end-q);
}
/* Move the remaining incomplete line to the start */
nfilled = end-q;
memmove(buffer, q, nfilled);
if (sizehint > 0)
if (totalread >= (size_t)sizehint)
break;
if (shortread) {
sizehint = 0;
break;
}
}
if (nfilled != 0) {
/* Partial last line */
line = PyString_FromStringAndSize(buffer, nfilled);
if (line == NULL)
goto error;
if (sizehint > 0) {
/* Need to complete the last line */
PyObject *rest = Util_GetLine(self, 0);
if (rest == NULL) {
Py_DECREF(line);
goto error;
}
PyString_Concat(&line, rest);
Py_DECREF(rest);
if (line == NULL)
goto error;
}
err = PyList_Append(list, line);
Py_DECREF(line);
if (err != 0)
goto error;
}
cleanup:
RELEASE_LOCK(self);
if (big_buffer) {
Py_DECREF(big_buffer);
}
return list;
}
PyDoc_STRVAR(BZ2File_write__doc__,
"write(data) -> None\n\
\n\
Write the 'data' string to file. Note that due to buffering, close() may\n\
be needed before the file on disk reflects the data written.\n\
");
/* This is a hacked version of Python's fileobject.c:file_write(). */
static PyObject *
BZ2File_write(BZ2FileObject *self, PyObject *args)
{
PyObject *ret = NULL;
char *buf;
int len;
int bzerror;
if (!PyArg_ParseTuple(args, "s#:write", &buf, &len))
return NULL;
ACQUIRE_LOCK(self);
switch (self->mode) {
case MODE_WRITE:
break;
case MODE_CLOSED:
PyErr_SetString(PyExc_ValueError,
"I/O operation on closed file");
goto cleanup;
default:
PyErr_SetString(PyExc_IOError,
"file is not ready for writing");
goto cleanup;
}
self->f_softspace = 0;
Py_BEGIN_ALLOW_THREADS
BZ2_bzWrite (&bzerror, self->fp, buf, len);
self->pos += len;
Py_END_ALLOW_THREADS
if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
goto cleanup;
}
Py_INCREF(Py_None);
ret = Py_None;
cleanup:
RELEASE_LOCK(self);
return ret;
}
PyDoc_STRVAR(BZ2File_writelines__doc__,
"writelines(sequence_of_strings) -> None\n\
\n\
Write the sequence of strings to the file. Note that newlines are not\n\
added. The sequence can be any iterable object producing strings. This is\n\
equivalent to calling write() for each string.\n\
");
/* This is a hacked version of Python's fileobject.c:file_writelines(). */
static PyObject *
BZ2File_writelines(BZ2FileObject *self, PyObject *seq)
{
#define CHUNKSIZE 1000
PyObject *list = NULL;
PyObject *iter = NULL;
PyObject *ret = NULL;
PyObject *line;
int i, j, index, len, islist;
int bzerror;
ACQUIRE_LOCK(self);
switch (self->mode) {
case MODE_WRITE:
break;
case MODE_CLOSED:
PyErr_SetString(PyExc_ValueError,
"I/O operation on closed file");
goto error;
default:
PyErr_SetString(PyExc_IOError,
"file is not ready for writing");
goto error;
}
islist = PyList_Check(seq);
if (!islist) {
iter = PyObject_GetIter(seq);
if (iter == NULL) {
PyErr_SetString(PyExc_TypeError,
"writelines() requires an iterable argument");
goto error;
}
list = PyList_New(CHUNKSIZE);
if (list == NULL)
goto error;
}
/* Strategy: slurp CHUNKSIZE lines into a private list,
checking that they are all strings, then write that list
without holding the interpreter lock, then come back for more. */
for (index = 0; ; index += CHUNKSIZE) {
if (islist) {
Py_XDECREF(list);
list = PyList_GetSlice(seq, index, index+CHUNKSIZE);
if (list == NULL)
goto error;
j = PyList_GET_SIZE(list);
}
else {
for (j = 0; j < CHUNKSIZE; j++) {
line = PyIter_Next(iter);
if (line == NULL) {
if (PyErr_Occurred())
goto error;
break;
}
PyList_SetItem(list, j, line);
}
}
if (j == 0)
break;
/* Check that all entries are indeed strings. If not,
apply the same rules as for file.write() and
convert the rets to strings. This is slow, but
seems to be the only way since all conversion APIs
could potentially execute Python code. */
for (i = 0; i < j; i++) {
PyObject *v = PyList_GET_ITEM(list, i);
if (!PyString_Check(v)) {
const char *buffer;
Py_ssize_t len;
if (PyObject_AsCharBuffer(v, &buffer, &len)) {
PyErr_SetString(PyExc_TypeError,
"writelines() "
"argument must be "
"a sequence of "
"strings");
goto error;
}
line = PyString_FromStringAndSize(buffer,
len);
if (line == NULL)
goto error;
Py_DECREF(v);
PyList_SET_ITEM(list, i, line);
}
}
self->f_softspace = 0;
/* Since we are releasing the global lock, the
following code may *not* execute Python code. */
Py_BEGIN_ALLOW_THREADS
for (i = 0; i < j; i++) {
line = PyList_GET_ITEM(list, i);
len = PyString_GET_SIZE(line);
BZ2_bzWrite (&bzerror, self->fp,
PyString_AS_STRING(line), len);
if (bzerror != BZ_OK) {
Py_BLOCK_THREADS
Util_CatchBZ2Error(bzerror);
goto error;
}
}
Py_END_ALLOW_THREADS
if (j < CHUNKSIZE)
break;
}
Py_INCREF(Py_None);
ret = Py_None;
error:
RELEASE_LOCK(self);
Py_XDECREF(list);
Py_XDECREF(iter);
return ret;
#undef CHUNKSIZE
}
PyDoc_STRVAR(BZ2File_seek__doc__,
"seek(offset [, whence]) -> None\n\
\n\
Move to new file position. Argument offset is a byte count. Optional\n\
argument whence defaults to 0 (offset from start of file, offset\n\
should be >= 0); other values are 1 (move relative to current position,\n\
positive or negative), and 2 (move relative to end of file, usually\n\
negative, although many platforms allow seeking beyond the end of a file).\n\
\n\
Note that seeking of bz2 files is emulated, and depending on the parameters\n\
the operation may be extremely slow.\n\
");
static PyObject *
BZ2File_seek(BZ2FileObject *self, PyObject *args)
{
int where = 0;
PyObject *offobj;
Py_off_t offset;
char small_buffer[SMALLCHUNK];
char *buffer = small_buffer;
size_t buffersize = SMALLCHUNK;
int bytesread = 0;
size_t readsize;
int chunksize;
int bzerror;
PyObject *ret = NULL;
if (!PyArg_ParseTuple(args, "O|i:seek", &offobj, &where))
return NULL;
#if !defined(HAVE_LARGEFILE_SUPPORT)
offset = PyInt_AsLong(offobj);
#else
offset = PyLong_Check(offobj) ?
PyLong_AsLongLong(offobj) : PyInt_AsLong(offobj);
#endif
if (PyErr_Occurred())
return NULL;
ACQUIRE_LOCK(self);
Util_DropReadAhead(self);
switch (self->mode) {
case MODE_READ:
case MODE_READ_EOF:
break;
case MODE_CLOSED:
PyErr_SetString(PyExc_ValueError,
"I/O operation on closed file");
goto cleanup;
default:
PyErr_SetString(PyExc_IOError,
"seek works only while reading");
goto cleanup;
}
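/* whence == 2 (seek from end): the uncompressed size is unknown until the
 * whole stream has been read, so decompress to EOF first to learn it. */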
if (where == 2) {
if (self->size == -1) {
assert(self->mode != MODE_READ_EOF);
for (;;) {
Py_BEGIN_ALLOW_THREADS
chunksize = Util_UnivNewlineRead(
&bzerror, self->fp,
buffer, buffersize,
self);
self->pos += chunksize;
Py_END_ALLOW_THREADS
bytesread += chunksize;
if (bzerror == BZ_STREAM_END) {
break;
} else if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
goto cleanup;
}
}
self->mode = MODE_READ_EOF;
self->size = self->pos;
bytesread = 0;
}
offset = self->size + offset;
} else if (where == 1) {
offset = self->pos + offset;
}
/* Before getting here, offset must be the absolute position the file
* pointer should be set to. */
if (offset >= self->pos) {
/* we can move forward */
offset -= self->pos;
} else {
/* we cannot move back, so rewind the stream */
BZ2_bzReadClose(&bzerror, self->fp);
if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
goto cleanup;
}
ret = PyObject_CallMethod(self->file, "seek", "(i)", 0);
if (!ret)
goto cleanup;
Py_DECREF(ret);
ret = NULL;
self->pos = 0;
self->fp = BZ2_bzReadOpen(&bzerror, PyFile_AsFile(self->file),
0, 0, NULL, 0);
if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
goto cleanup;
}
self->mode = MODE_READ;
}
if (offset <= 0 || self->mode == MODE_READ_EOF)
goto exit;
/* Before getting here, offset must be set to the number of bytes
* to walk forward. */
for (;;) {
if (offset-bytesread > buffersize)
readsize = buffersize;
else
/* offset might be wider that readsize, but the result
* of the subtraction is bound by buffersize (see the
* condition above). buffersize is 8192. */
readsize = (size_t)(offset-bytesread);
Py_BEGIN_ALLOW_THREADS
chunksize = Util_UnivNewlineRead(&bzerror, self->fp,
buffer, readsize, self);
self->pos += chunksize;
Py_END_ALLOW_THREADS
bytesread += chunksize;
if (bzerror == BZ_STREAM_END) {
self->size = self->pos;
self->mode = MODE_READ_EOF;
break;
} else if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
goto cleanup;
}
if (bytesread == offset)
break;
}
exit:
Py_INCREF(Py_None);
ret = Py_None;
cleanup:
RELEASE_LOCK(self);
return ret;
}
PyDoc_STRVAR(BZ2File_tell__doc__,
"tell() -> int\n\
\n\
Return the current file position, an integer (may be a long integer).\n\
");
static PyObject *
BZ2File_tell(BZ2FileObject *self, PyObject *args)
{
PyObject *ret = NULL;
if (self->mode == MODE_CLOSED) {
PyErr_SetString(PyExc_ValueError,
"I/O operation on closed file");
goto cleanup;
}
#if !defined(HAVE_LARGEFILE_SUPPORT)
ret = PyInt_FromLong(self->pos);
#else
ret = PyLong_FromLongLong(self->pos);
#endif
cleanup:
return ret;
}
PyDoc_STRVAR(BZ2File_close__doc__,
"close() -> None or (perhaps) an integer\n\
\n\
Close the file. Sets data attribute .closed to true. A closed file\n\
cannot be used for further I/O operations. close() may be called more\n\
than once without error.\n\
");
static PyObject *
BZ2File_close(BZ2FileObject *self)
{
PyObject *ret = NULL;
int bzerror = BZ_OK;
ACQUIRE_LOCK(self);
switch (self->mode) {
case MODE_READ:
case MODE_READ_EOF:
BZ2_bzReadClose(&bzerror, self->fp);
break;
case MODE_WRITE:
BZ2_bzWriteClose(&bzerror, self->fp,
0, NULL, NULL);
break;
}
self->mode = MODE_CLOSED;
ret = PyObject_CallMethod(self->file, "close", NULL);
if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
Py_XDECREF(ret);
ret = NULL;
}
RELEASE_LOCK(self);
return ret;
}
static PyObject *BZ2File_getiter(BZ2FileObject *self);
static PyMethodDef BZ2File_methods[] = {
{"read", (PyCFunction)BZ2File_read, METH_VARARGS, BZ2File_read__doc__},
{"readline", (PyCFunction)BZ2File_readline, METH_VARARGS, BZ2File_readline__doc__},
{"readlines", (PyCFunction)BZ2File_readlines, METH_VARARGS, BZ2File_readlines__doc__},
{"write", (PyCFunction)BZ2File_write, METH_VARARGS, BZ2File_write__doc__},
{"writelines", (PyCFunction)BZ2File_writelines, METH_O, BZ2File_writelines__doc__},
{"seek", (PyCFunction)BZ2File_seek, METH_VARARGS, BZ2File_seek__doc__},
{"tell", (PyCFunction)BZ2File_tell, METH_NOARGS, BZ2File_tell__doc__},
{"close", (PyCFunction)BZ2File_close, METH_NOARGS, BZ2File_close__doc__},
{NULL, NULL} /* sentinel */
};
/* ===================================================================== */
/* Getters and setters of BZ2File. */
/* This is a hacked version of Python's fileobject.c:get_newlines(). */
static PyObject *
BZ2File_get_newlines(BZ2FileObject *self, void *closure)
{
switch (self->f_newlinetypes) {
case NEWLINE_UNKNOWN:
Py_INCREF(Py_None);
return Py_None;
case NEWLINE_CR:
return PyString_FromString("\r");
case NEWLINE_LF:
return PyString_FromString("\n");
case NEWLINE_CR|NEWLINE_LF:
return Py_BuildValue("(ss)", "\r", "\n");
case NEWLINE_CRLF:
return PyString_FromString("\r\n");
case NEWLINE_CR|NEWLINE_CRLF:
return Py_BuildValue("(ss)", "\r", "\r\n");
case NEWLINE_LF|NEWLINE_CRLF:
return Py_BuildValue("(ss)", "\n", "\r\n");
case NEWLINE_CR|NEWLINE_LF|NEWLINE_CRLF:
return Py_BuildValue("(sss)", "\r", "\n", "\r\n");
default:
PyErr_Format(PyExc_SystemError,
"Unknown newlines value 0x%x\n",
self->f_newlinetypes);
return NULL;
}
}
static PyObject *
BZ2File_get_closed(BZ2FileObject *self, void *closure)
{
return PyInt_FromLong(self->mode == MODE_CLOSED);
}
static PyObject *
BZ2File_get_mode(BZ2FileObject *self, void *closure)
{
return PyObject_GetAttrString(self->file, "mode");
}
static PyObject *
BZ2File_get_name(BZ2FileObject *self, void *closure)
{
return PyObject_GetAttrString(self->file, "name");
}
static PyGetSetDef BZ2File_getset[] = {
{"closed", (getter)BZ2File_get_closed, NULL,
"True if the file is closed"},
{"newlines", (getter)BZ2File_get_newlines, NULL,
"end-of-line convention used in this file"},
{"mode", (getter)BZ2File_get_mode, NULL,
"file mode ('r', 'w', or 'U')"},
{"name", (getter)BZ2File_get_name, NULL,
"file name"},
{NULL} /* Sentinel */
};
/* ===================================================================== */
/* Members of BZ2File_Type. */
#undef OFF
#define OFF(x) offsetof(BZ2FileObject, x)
static PyMemberDef BZ2File_members[] = {
{"softspace", T_INT, OFF(f_softspace), 0,
"flag indicating that a space needs to be printed; used by print"},
{NULL} /* Sentinel */
};
/* ===================================================================== */
/* Slot definitions for BZ2File_Type. */
static int
BZ2File_init(BZ2FileObject *self, PyObject *args, PyObject *kwargs)
{
static char *kwlist[] = {"filename", "mode", "buffering",
"compresslevel", 0};
PyObject *name;
char *mode = "r";
int buffering = -1;
int compresslevel = 9;
int bzerror;
int mode_char = 0;
self->size = -1;
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|sii:BZ2File",
kwlist, &name, &mode, &buffering,
&compresslevel))
return -1;
if (compresslevel < 1 || compresslevel > 9) {
PyErr_SetString(PyExc_ValueError,
"compresslevel must be between 1 and 9");
return -1;
}
for (;;) {
int error = 0;
switch (*mode) {
case 'r':
case 'w':
if (mode_char)
error = 1;
mode_char = *mode;
break;
case 'b':
break;
case 'U':
#ifdef __VMS
self->f_univ_newline = 0;
#else
self->f_univ_newline = 1;
#endif
break;
default:
error = 1;
break;
}
if (error) {
PyErr_Format(PyExc_ValueError,
"invalid mode char %c", *mode);
return -1;
}
mode++;
if (*mode == '\0')
break;
}
if (mode_char == 0) {
mode_char = 'r';
}
mode = (mode_char == 'r') ? "rb" : "wb";
self->file = PyObject_CallFunction((PyObject*)&PyFile_Type, "(Osi)",
name, mode, buffering);
if (self->file == NULL)
return -1;
/* From now on, we have stuff to dealloc, so jump to error label
* instead of returning */
#ifdef WITH_THREAD
self->lock = PyThread_allocate_lock();
if (!self->lock) {
PyErr_SetString(PyExc_MemoryError, "unable to allocate lock");
goto error;
}
#endif
if (mode_char == 'r')
self->fp = BZ2_bzReadOpen(&bzerror,
PyFile_AsFile(self->file),
0, 0, NULL, 0);
else
self->fp = BZ2_bzWriteOpen(&bzerror,
PyFile_AsFile(self->file),
compresslevel, 0, 0);
if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
goto error;
}
self->mode = (mode_char == 'r') ? MODE_READ : MODE_WRITE;
return 0;
error:
Py_CLEAR(self->file);
#ifdef WITH_THREAD
if (self->lock) {
PyThread_free_lock(self->lock);
self->lock = NULL;
}
#endif
return -1;
}
static void
BZ2File_dealloc(BZ2FileObject *self)
{
int bzerror;
#ifdef WITH_THREAD
if (self->lock)
PyThread_free_lock(self->lock);
#endif
switch (self->mode) {
case MODE_READ:
case MODE_READ_EOF:
BZ2_bzReadClose(&bzerror, self->fp);
break;
case MODE_WRITE:
BZ2_bzWriteClose(&bzerror, self->fp,
0, NULL, NULL);
break;
}
Util_DropReadAhead(self);
Py_XDECREF(self->file);
self->ob_type->tp_free((PyObject *)self);
}
/* This is a hacked version of Python's fileobject.c:file_getiter(). */
static PyObject *
BZ2File_getiter(BZ2FileObject *self)
{
if (self->mode == MODE_CLOSED) {
PyErr_SetString(PyExc_ValueError,
"I/O operation on closed file");
return NULL;
}
Py_INCREF((PyObject*)self);
return (PyObject *)self;
}
/* This is a hacked version of Python's fileobject.c:file_iternext(). */
#define READAHEAD_BUFSIZE 8192
static PyObject *
BZ2File_iternext(BZ2FileObject *self)
{
PyStringObject* ret;
ACQUIRE_LOCK(self);
	if (self->mode == MODE_CLOSED) {
		/* Release the lock before bailing out; returning while
		   still holding it would deadlock any later operation on
		   this object. */
		RELEASE_LOCK(self);
		PyErr_SetString(PyExc_ValueError,
				"I/O operation on closed file");
		return NULL;
	}
ret = Util_ReadAheadGetLineSkip(self, 0, READAHEAD_BUFSIZE);
RELEASE_LOCK(self);
if (ret == NULL || PyString_GET_SIZE(ret) == 0) {
Py_XDECREF(ret);
return NULL;
}
return (PyObject *)ret;
}
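/* The two functions above implement the iterator protocol, so a BZ2File
 * can be looped over line by line from Python (the file name and the
 * process() call are illustrative stand-ins):
 *
 *     for line in bz2.BZ2File("example.bz2"):
 *         process(line)
 *
 * Iteration stops once Util_ReadAheadGetLineSkip() returns an empty
 * string, i.e. at the end of the compressed stream.
 */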
/* ===================================================================== */
/* BZ2File_Type definition. */
PyDoc_VAR(BZ2File__doc__) =
PyDoc_STR(
"BZ2File(name [, mode='r', buffering=0, compresslevel=9]) -> file object\n\
\n\
Open a bz2 file. The mode can be 'r' or 'w', for reading (default) or\n\
writing. When opened for writing, the file will be created if it doesn't\n\
exist, and truncated otherwise. If the buffering argument is given, 0 means\n\
unbuffered, and larger numbers specify the buffer size. If compresslevel\n\
is given, it must be a number between 1 and 9.\n\
")
PyDoc_STR(
"\n\
Add a 'U' to mode to open the file for input with universal newline\n\
support. Any line ending in the input file will be seen as a '\\n' in\n\
Python. Also, a file so opened gains the attribute 'newlines'; the value\n\
for this attribute is one of None (no newline read yet), '\\r', '\\n',\n\
'\\r\\n' or a tuple containing all the newline types seen. Universal\n\
newlines are available only when reading.\n\
")
;
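/* A minimal round trip through BZ2File, included only as a usage sketch
 * (the file name is a placeholder):
 *
 *     f = bz2.BZ2File("example.bz2", "w", compresslevel=9)
 *     f.write("hello\n")
 *     f.close()
 *     bz2.BZ2File("example.bz2").read()    # -> 'hello\n'
 */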
static PyTypeObject BZ2File_Type = {
PyObject_HEAD_INIT(NULL)
0, /*ob_size*/
"bz2.BZ2File", /*tp_name*/
sizeof(BZ2FileObject), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)BZ2File_dealloc, /*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash*/
0, /*tp_call*/
0, /*tp_str*/
PyObject_GenericGetAttr,/*tp_getattro*/
PyObject_GenericSetAttr,/*tp_setattro*/
0, /*tp_as_buffer*/
Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, /*tp_flags*/
BZ2File__doc__, /*tp_doc*/
0, /*tp_traverse*/
0, /*tp_clear*/
0, /*tp_richcompare*/
0, /*tp_weaklistoffset*/
(getiterfunc)BZ2File_getiter, /*tp_iter*/
(iternextfunc)BZ2File_iternext, /*tp_iternext*/
BZ2File_methods, /*tp_methods*/
BZ2File_members, /*tp_members*/
BZ2File_getset, /*tp_getset*/
0, /*tp_base*/
0, /*tp_dict*/
0, /*tp_descr_get*/
0, /*tp_descr_set*/
0, /*tp_dictoffset*/
(initproc)BZ2File_init, /*tp_init*/
PyType_GenericAlloc, /*tp_alloc*/
PyType_GenericNew, /*tp_new*/
_PyObject_Del, /*tp_free*/
0, /*tp_is_gc*/
};
/* ===================================================================== */
/* Methods of BZ2Comp. */
PyDoc_STRVAR(BZ2Comp_compress__doc__,
"compress(data) -> string\n\
\n\
Provide more data to the compressor object. It will return chunks of\n\
compressed data whenever possible. When you've finished providing data\n\
to compress, call the flush() method, which finishes the compression\n\
process and returns what is left in the internal buffers.\n\
");
static PyObject *
BZ2Comp_compress(BZ2CompObject *self, PyObject *args)
{
char *data;
int datasize;
int bufsize = SMALLCHUNK;
PY_LONG_LONG totalout;
PyObject *ret = NULL;
bz_stream *bzs = &self->bzs;
int bzerror;
if (!PyArg_ParseTuple(args, "s#:compress", &data, &datasize))
return NULL;
if (datasize == 0)
return PyString_FromString("");
ACQUIRE_LOCK(self);
if (!self->running) {
PyErr_SetString(PyExc_ValueError,
"this object was already flushed");
goto error;
}
ret = PyString_FromStringAndSize(NULL, bufsize);
if (!ret)
goto error;
bzs->next_in = data;
bzs->avail_in = datasize;
bzs->next_out = BUF(ret);
bzs->avail_out = bufsize;
totalout = BZS_TOTAL_OUT(bzs);
for (;;) {
Py_BEGIN_ALLOW_THREADS
bzerror = BZ2_bzCompress(bzs, BZ_RUN);
Py_END_ALLOW_THREADS
if (bzerror != BZ_RUN_OK) {
Util_CatchBZ2Error(bzerror);
goto error;
}
if (bzs->avail_out == 0) {
bufsize = Util_NewBufferSize(bufsize);
if (_PyString_Resize(&ret, bufsize) < 0) {
BZ2_bzCompressEnd(bzs);
goto error;
}
bzs->next_out = BUF(ret) + (BZS_TOTAL_OUT(bzs)
- totalout);
bzs->avail_out = bufsize - (bzs->next_out - BUF(ret));
} else if (bzs->avail_in == 0) {
break;
}
}
_PyString_Resize(&ret, (Py_ssize_t)(BZS_TOTAL_OUT(bzs) - totalout));
RELEASE_LOCK(self);
return ret;
error:
RELEASE_LOCK(self);
Py_XDECREF(ret);
return NULL;
}
PyDoc_STRVAR(BZ2Comp_flush__doc__,
"flush() -> string\n\
\n\
Finish the compression process and return what is left in internal buffers.\n\
You must not use the compressor object after calling this method.\n\
");
static PyObject *
BZ2Comp_flush(BZ2CompObject *self)
{
int bufsize = SMALLCHUNK;
PyObject *ret = NULL;
bz_stream *bzs = &self->bzs;
PY_LONG_LONG totalout;
int bzerror;
ACQUIRE_LOCK(self);
if (!self->running) {
PyErr_SetString(PyExc_ValueError, "object was already "
"flushed");
goto error;
}
self->running = 0;
ret = PyString_FromStringAndSize(NULL, bufsize);
if (!ret)
goto error;
bzs->next_out = BUF(ret);
bzs->avail_out = bufsize;
totalout = BZS_TOTAL_OUT(bzs);
for (;;) {
Py_BEGIN_ALLOW_THREADS
bzerror = BZ2_bzCompress(bzs, BZ_FINISH);
Py_END_ALLOW_THREADS
if (bzerror == BZ_STREAM_END) {
break;
} else if (bzerror != BZ_FINISH_OK) {
Util_CatchBZ2Error(bzerror);
goto error;
}
if (bzs->avail_out == 0) {
bufsize = Util_NewBufferSize(bufsize);
if (_PyString_Resize(&ret, bufsize) < 0)
goto error;
			bzs->next_out = BUF(ret) + (BZS_TOTAL_OUT(bzs)
						    - totalout);
bzs->avail_out = bufsize - (bzs->next_out - BUF(ret));
}
}
if (bzs->avail_out != 0)
_PyString_Resize(&ret, (Py_ssize_t)(BZS_TOTAL_OUT(bzs) - totalout));
RELEASE_LOCK(self);
return ret;
error:
RELEASE_LOCK(self);
Py_XDECREF(ret);
return NULL;
}
static PyMethodDef BZ2Comp_methods[] = {
{"compress", (PyCFunction)BZ2Comp_compress, METH_VARARGS,
BZ2Comp_compress__doc__},
{"flush", (PyCFunction)BZ2Comp_flush, METH_NOARGS,
BZ2Comp_flush__doc__},
{NULL, NULL} /* sentinel */
};
/* ===================================================================== */
/* Slot definitions for BZ2Comp_Type. */
static int
BZ2Comp_init(BZ2CompObject *self, PyObject *args, PyObject *kwargs)
{
int compresslevel = 9;
int bzerror;
static char *kwlist[] = {"compresslevel", 0};
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:BZ2Compressor",
kwlist, &compresslevel))
return -1;
if (compresslevel < 1 || compresslevel > 9) {
PyErr_SetString(PyExc_ValueError,
"compresslevel must be between 1 and 9");
goto error;
}
#ifdef WITH_THREAD
self->lock = PyThread_allocate_lock();
if (!self->lock) {
PyErr_SetString(PyExc_MemoryError, "unable to allocate lock");
goto error;
}
#endif
memset(&self->bzs, 0, sizeof(bz_stream));
bzerror = BZ2_bzCompressInit(&self->bzs, compresslevel, 0, 0);
if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
goto error;
}
self->running = 1;
return 0;
error:
#ifdef WITH_THREAD
if (self->lock) {
PyThread_free_lock(self->lock);
self->lock = NULL;
}
#endif
return -1;
}
static void
BZ2Comp_dealloc(BZ2CompObject *self)
{
#ifdef WITH_THREAD
if (self->lock)
PyThread_free_lock(self->lock);
#endif
BZ2_bzCompressEnd(&self->bzs);
self->ob_type->tp_free((PyObject *)self);
}
/* ===================================================================== */
/* BZ2Comp_Type definition. */
PyDoc_STRVAR(BZ2Comp__doc__,
"BZ2Compressor([compresslevel=9]) -> compressor object\n\
\n\
Create a new compressor object. This object may be used to compress\n\
data sequentially. If you want to compress data in one shot, use the\n\
compress() function instead. The compresslevel parameter, if given,\n\
must be a number between 1 and 9.\n\
");
static PyTypeObject BZ2Comp_Type = {
PyObject_HEAD_INIT(NULL)
0, /*ob_size*/
"bz2.BZ2Compressor", /*tp_name*/
sizeof(BZ2CompObject), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)BZ2Comp_dealloc, /*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash*/
0, /*tp_call*/
0, /*tp_str*/
PyObject_GenericGetAttr,/*tp_getattro*/
PyObject_GenericSetAttr,/*tp_setattro*/
0, /*tp_as_buffer*/
Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, /*tp_flags*/
BZ2Comp__doc__, /*tp_doc*/
0, /*tp_traverse*/
0, /*tp_clear*/
0, /*tp_richcompare*/
0, /*tp_weaklistoffset*/
0, /*tp_iter*/
0, /*tp_iternext*/
BZ2Comp_methods, /*tp_methods*/
0, /*tp_members*/
0, /*tp_getset*/
0, /*tp_base*/
0, /*tp_dict*/
0, /*tp_descr_get*/
0, /*tp_descr_set*/
0, /*tp_dictoffset*/
(initproc)BZ2Comp_init, /*tp_init*/
PyType_GenericAlloc, /*tp_alloc*/
PyType_GenericNew, /*tp_new*/
_PyObject_Del, /*tp_free*/
0, /*tp_is_gc*/
};
/* ===================================================================== */
/* Members of BZ2Decomp. */
#undef OFF
#define OFF(x) offsetof(BZ2DecompObject, x)
static PyMemberDef BZ2Decomp_members[] = {
{"unused_data", T_OBJECT, OFF(unused_data), RO},
{NULL} /* Sentinel */
};
/* ===================================================================== */
/* Methods of BZ2Decomp. */
PyDoc_STRVAR(BZ2Decomp_decompress__doc__,
"decompress(data) -> string\n\
\n\
Provide more data to the decompressor object. It will return chunks\n\
of decompressed data whenever possible. If you try to decompress data\n\
after the end of stream is found, EOFError will be raised. If any data\n\
was found after the end of stream, it'll be ignored and saved in the\n\
unused_data attribute.\n\
");
static PyObject *
BZ2Decomp_decompress(BZ2DecompObject *self, PyObject *args)
{
char *data;
int datasize;
int bufsize = SMALLCHUNK;
PY_LONG_LONG totalout;
PyObject *ret = NULL;
bz_stream *bzs = &self->bzs;
int bzerror;
if (!PyArg_ParseTuple(args, "s#:decompress", &data, &datasize))
return NULL;
ACQUIRE_LOCK(self);
if (!self->running) {
PyErr_SetString(PyExc_EOFError, "end of stream was "
"already found");
goto error;
}
ret = PyString_FromStringAndSize(NULL, bufsize);
if (!ret)
goto error;
bzs->next_in = data;
bzs->avail_in = datasize;
bzs->next_out = BUF(ret);
bzs->avail_out = bufsize;
totalout = BZS_TOTAL_OUT(bzs);
for (;;) {
Py_BEGIN_ALLOW_THREADS
bzerror = BZ2_bzDecompress(bzs);
Py_END_ALLOW_THREADS
if (bzerror == BZ_STREAM_END) {
if (bzs->avail_in != 0) {
Py_DECREF(self->unused_data);
self->unused_data =
PyString_FromStringAndSize(bzs->next_in,
bzs->avail_in);
}
self->running = 0;
break;
}
if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
goto error;
}
if (bzs->avail_out == 0) {
bufsize = Util_NewBufferSize(bufsize);
if (_PyString_Resize(&ret, bufsize) < 0) {
BZ2_bzDecompressEnd(bzs);
goto error;
}
			bzs->next_out = BUF(ret) + (BZS_TOTAL_OUT(bzs)
						    - totalout);
bzs->avail_out = bufsize - (bzs->next_out - BUF(ret));
} else if (bzs->avail_in == 0) {
break;
}
}
if (bzs->avail_out != 0)
_PyString_Resize(&ret, (Py_ssize_t)(BZS_TOTAL_OUT(bzs) - totalout));
RELEASE_LOCK(self);
return ret;
error:
RELEASE_LOCK(self);
Py_XDECREF(ret);
return NULL;
}
static PyMethodDef BZ2Decomp_methods[] = {
{"decompress", (PyCFunction)BZ2Decomp_decompress, METH_VARARGS, BZ2Decomp_decompress__doc__},
{NULL, NULL} /* sentinel */
};
/* ===================================================================== */
/* Slot definitions for BZ2Decomp_Type. */
static int
BZ2Decomp_init(BZ2DecompObject *self, PyObject *args, PyObject *kwargs)
{
int bzerror;
if (!PyArg_ParseTuple(args, ":BZ2Decompressor"))
return -1;
#ifdef WITH_THREAD
self->lock = PyThread_allocate_lock();
if (!self->lock) {
PyErr_SetString(PyExc_MemoryError, "unable to allocate lock");
goto error;
}
#endif
self->unused_data = PyString_FromString("");
if (!self->unused_data)
goto error;
memset(&self->bzs, 0, sizeof(bz_stream));
bzerror = BZ2_bzDecompressInit(&self->bzs, 0, 0);
if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
goto error;
}
self->running = 1;
return 0;
error:
#ifdef WITH_THREAD
if (self->lock) {
PyThread_free_lock(self->lock);
self->lock = NULL;
}
#endif
Py_CLEAR(self->unused_data);
return -1;
}
static void
BZ2Decomp_dealloc(BZ2DecompObject *self)
{
#ifdef WITH_THREAD
if (self->lock)
PyThread_free_lock(self->lock);
#endif
Py_XDECREF(self->unused_data);
BZ2_bzDecompressEnd(&self->bzs);
self->ob_type->tp_free((PyObject *)self);
}
/* ===================================================================== */
/* BZ2Decomp_Type definition. */
PyDoc_STRVAR(BZ2Decomp__doc__,
"BZ2Decompressor() -> decompressor object\n\
\n\
Create a new decompressor object. This object may be used to decompress\n\
data sequentially. If you want to decompress data in one shot, use the\n\
decompress() function instead.\n\
");
static PyTypeObject BZ2Decomp_Type = {
PyObject_HEAD_INIT(NULL)
0, /*ob_size*/
"bz2.BZ2Decompressor", /*tp_name*/
sizeof(BZ2DecompObject), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)BZ2Decomp_dealloc, /*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash*/
0, /*tp_call*/
0, /*tp_str*/
PyObject_GenericGetAttr,/*tp_getattro*/
PyObject_GenericSetAttr,/*tp_setattro*/
0, /*tp_as_buffer*/
Py_TPFLAGS_DEFAULT|Py_TPFLAGS_BASETYPE, /*tp_flags*/
BZ2Decomp__doc__, /*tp_doc*/
0, /*tp_traverse*/
0, /*tp_clear*/
0, /*tp_richcompare*/
0, /*tp_weaklistoffset*/
0, /*tp_iter*/
0, /*tp_iternext*/
BZ2Decomp_methods, /*tp_methods*/
BZ2Decomp_members, /*tp_members*/
0, /*tp_getset*/
0, /*tp_base*/
0, /*tp_dict*/
0, /*tp_descr_get*/
0, /*tp_descr_set*/
0, /*tp_dictoffset*/
(initproc)BZ2Decomp_init, /*tp_init*/
PyType_GenericAlloc, /*tp_alloc*/
PyType_GenericNew, /*tp_new*/
_PyObject_Del, /*tp_free*/
0, /*tp_is_gc*/
};
/* ===================================================================== */
/* Module functions. */
PyDoc_STRVAR(bz2_compress__doc__,
"compress(data [, compresslevel=9]) -> string\n\
\n\
Compress data in one shot. If you want to compress data sequentially,\n\
use an instance of BZ2Compressor instead. The compresslevel parameter, if\n\
given, must be a number between 1 and 9.\n\
");
static PyObject *
bz2_compress(PyObject *self, PyObject *args, PyObject *kwargs)
{
int compresslevel=9;
char *data;
int datasize;
int bufsize;
PyObject *ret = NULL;
bz_stream _bzs;
bz_stream *bzs = &_bzs;
int bzerror;
static char *kwlist[] = {"data", "compresslevel", 0};
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s#|i",
kwlist, &data, &datasize,
&compresslevel))
return NULL;
if (compresslevel < 1 || compresslevel > 9) {
PyErr_SetString(PyExc_ValueError,
"compresslevel must be between 1 and 9");
return NULL;
}
	/* According to the bzip2 manual, the compressed output is never
	 * larger than the input plus one percent plus 600 bytes, so this
	 * buffer should hold the result in one shot.  The loop below grows
	 * it anyway if that ever proves not to be enough. */
bufsize = datasize + (datasize/100+1) + 600;
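	/* Worked example of the bound above, purely for illustration: for a
	 * 1 MiB input, datasize is 1048576, so bufsize becomes
	 * 1048576 + (1048576/100 + 1) + 600 = 1048576 + 10486 + 600 = 1059662. */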
ret = PyString_FromStringAndSize(NULL, bufsize);
if (!ret)
return NULL;
memset(bzs, 0, sizeof(bz_stream));
bzs->next_in = data;
bzs->avail_in = datasize;
bzs->next_out = BUF(ret);
bzs->avail_out = bufsize;
bzerror = BZ2_bzCompressInit(bzs, compresslevel, 0, 0);
if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
Py_DECREF(ret);
return NULL;
}
for (;;) {
Py_BEGIN_ALLOW_THREADS
bzerror = BZ2_bzCompress(bzs, BZ_FINISH);
Py_END_ALLOW_THREADS
if (bzerror == BZ_STREAM_END) {
break;
} else if (bzerror != BZ_FINISH_OK) {
BZ2_bzCompressEnd(bzs);
Util_CatchBZ2Error(bzerror);
Py_DECREF(ret);
return NULL;
}
if (bzs->avail_out == 0) {
bufsize = Util_NewBufferSize(bufsize);
if (_PyString_Resize(&ret, bufsize) < 0) {
BZ2_bzCompressEnd(bzs);
				/* ret was set to NULL by the failed
				   _PyString_Resize() call, so Py_XDECREF
				   is the safe form here. */
				Py_XDECREF(ret);
return NULL;
}
bzs->next_out = BUF(ret) + BZS_TOTAL_OUT(bzs);
bzs->avail_out = bufsize - (bzs->next_out - BUF(ret));
}
}
if (bzs->avail_out != 0)
_PyString_Resize(&ret, (Py_ssize_t)BZS_TOTAL_OUT(bzs));
BZ2_bzCompressEnd(bzs);
return ret;
}
PyDoc_STRVAR(bz2_decompress__doc__,
"decompress(data) -> decompressed data\n\
\n\
Decompress data in one shot. If you want to decompress data sequentially,\n\
use an instance of BZ2Decompressor instead.\n\
");
static PyObject *
bz2_decompress(PyObject *self, PyObject *args)
{
char *data;
int datasize;
int bufsize = SMALLCHUNK;
PyObject *ret;
bz_stream _bzs;
bz_stream *bzs = &_bzs;
int bzerror;
if (!PyArg_ParseTuple(args, "s#:decompress", &data, &datasize))
return NULL;
if (datasize == 0)
return PyString_FromString("");
ret = PyString_FromStringAndSize(NULL, bufsize);
if (!ret)
return NULL;
memset(bzs, 0, sizeof(bz_stream));
bzs->next_in = data;
bzs->avail_in = datasize;
bzs->next_out = BUF(ret);
bzs->avail_out = bufsize;
bzerror = BZ2_bzDecompressInit(bzs, 0, 0);
if (bzerror != BZ_OK) {
Util_CatchBZ2Error(bzerror);
Py_DECREF(ret);
return NULL;
}
for (;;) {
Py_BEGIN_ALLOW_THREADS
bzerror = BZ2_bzDecompress(bzs);
Py_END_ALLOW_THREADS
if (bzerror == BZ_STREAM_END) {
break;
} else if (bzerror != BZ_OK) {
BZ2_bzDecompressEnd(bzs);
Util_CatchBZ2Error(bzerror);
Py_DECREF(ret);
return NULL;
}
if (bzs->avail_out == 0) {
bufsize = Util_NewBufferSize(bufsize);
if (_PyString_Resize(&ret, bufsize) < 0) {
BZ2_bzDecompressEnd(bzs);
				/* ret was set to NULL by the failed
				   _PyString_Resize() call, so Py_XDECREF
				   is the safe form here. */
				Py_XDECREF(ret);
return NULL;
}
bzs->next_out = BUF(ret) + BZS_TOTAL_OUT(bzs);
bzs->avail_out = bufsize - (bzs->next_out - BUF(ret));
} else if (bzs->avail_in == 0) {
BZ2_bzDecompressEnd(bzs);
PyErr_SetString(PyExc_ValueError,
"couldn't find end of stream");
Py_DECREF(ret);
return NULL;
}
}
if (bzs->avail_out != 0)
_PyString_Resize(&ret, (Py_ssize_t)BZS_TOTAL_OUT(bzs));
BZ2_bzDecompressEnd(bzs);
return ret;
}
static PyMethodDef bz2_methods[] = {
{"compress", (PyCFunction) bz2_compress, METH_VARARGS|METH_KEYWORDS,
bz2_compress__doc__},
{"decompress", (PyCFunction) bz2_decompress, METH_VARARGS,
bz2_decompress__doc__},
{NULL, NULL} /* sentinel */
};
/* ===================================================================== */
/* Initialization function. */
PyDoc_STRVAR(bz2__doc__,
"The python bz2 module provides a comprehensive interface for\n\
the bz2 compression library. It implements a complete file\n\
interface, one shot (de)compression functions, and types for\n\
sequential (de)compression.\n\
");
PyMODINIT_FUNC
initbz2(void)
{
PyObject *m;
BZ2File_Type.ob_type = &PyType_Type;
BZ2Comp_Type.ob_type = &PyType_Type;
BZ2Decomp_Type.ob_type = &PyType_Type;
m = Py_InitModule3("bz2", bz2_methods, bz2__doc__);
if (m == NULL)
return;
PyModule_AddObject(m, "__author__", PyString_FromString(__author__));
Py_INCREF(&BZ2File_Type);
PyModule_AddObject(m, "BZ2File", (PyObject *)&BZ2File_Type);
Py_INCREF(&BZ2Comp_Type);
PyModule_AddObject(m, "BZ2Compressor", (PyObject *)&BZ2Comp_Type);
Py_INCREF(&BZ2Decomp_Type);
PyModule_AddObject(m, "BZ2Decompressor", (PyObject *)&BZ2Decomp_Type);
}