:mod:`itertools` --- Functions creating iterators for efficient looping
========================================================================

.. module:: itertools
   :synopsis: Functions creating iterators for efficient looping.
.. moduleauthor:: Raymond Hettinger <python@rcn.com>
.. sectionauthor:: Raymond Hettinger <python@rcn.com>

.. testsetup::

   from itertools import *

This module implements a number of :term:`iterator` building blocks inspired by
constructs from the Haskell and SML programming languages.  Each has been recast
in a form suitable for Python.

The module standardizes a core set of fast, memory efficient tools that are
useful by themselves or in combination.  Standardization helps avoid the
readability and reliability problems which arise when many different individuals
create their own slightly varying implementations, each with their own quirks
and naming conventions.

The tools are designed to combine readily with one another.  This makes it easy
to construct more specialized tools succinctly and efficiently in pure Python.
For instance, SML provides a tabulation tool: ``tabulate(f)`` which produces a
sequence ``f(0), f(1), ...``.  The same effect can be achieved in Python
by combining :func:`map` and :func:`count` to form ``map(f, count())``.

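For example, such a tabulation of squares can be truncated with :func:`islice`::

   >>> list(islice(map(lambda x: x*x, count()), 5))
   [0, 1, 4, 9, 16]
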
Likewise, the functional tools are designed to work well with the high-speed
functions provided by the :mod:`operator` module.

The module author welcomes suggestions for other basic building blocks to be
added to future versions of the module.

Whether cast in pure Python form or compiled code, tools that use iterators are
more memory efficient (and faster) than their list-based counterparts.  Adopting
the principles of just-in-time manufacturing, they create data when and where
needed instead of consuming memory with the computer equivalent of "inventory".
The performance advantage of iterators becomes more acute as the number of
elements increases -- at some point, lists grow large enough to severely impact
memory cache performance and start running slowly.

.. seealso::

   The Standard ML Basis Library, `The Standard ML Basis Library
   <http://www.standardml.org/Basis/>`_.

   Haskell, A Purely Functional Language, `Definition of Haskell and the
   Standard Libraries <http://www.haskell.org/definition/>`_.

.. _itertools-functions:

Itertool functions
------------------

The following module functions all construct and return iterators.  Some
provide streams of infinite length, so they should only be accessed by
functions or loops that truncate the stream.

.. function:: chain(*iterables)

   Make an iterator that returns elements from the first iterable until it is
   exhausted, then proceeds to the next iterable, until all of the iterables
   are exhausted.  Used for treating consecutive sequences as a single
   sequence.  Equivalent to::

      def chain(*iterables):
          # chain('ABC', 'DEF') --> A B C D E F
          for it in iterables:
              for element in it:
                  yield element

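   For example, chaining two strings produces their characters in sequence::

      >>> list(chain('ABC', 'DEF'))
      ['A', 'B', 'C', 'D', 'E', 'F']
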
.. function:: itertools.chain.from_iterable(iterable)

   Alternate constructor for :func:`chain`.  Gets chained inputs from a
   single iterable argument that is evaluated lazily.  Equivalent to::

      @classmethod
      def from_iterable(iterables):
          # chain.from_iterable(['ABC', 'DEF']) --> A B C D E F
          for it in iterables:
              for element in it:
                  yield element

   .. versionadded:: 2.6

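   For example::

      >>> list(chain.from_iterable(['ABC', 'DEF']))
      ['A', 'B', 'C', 'D', 'E', 'F']
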
.. function:: combinations(iterable, r)

   Return successive *r* length combinations of elements in the *iterable*.

   Combinations are emitted in lexicographic sort order.  So, if the
   input *iterable* is sorted, the combination tuples will be produced
   in sorted order.

   Elements are treated as unique based on their position, not on their
   value.  So if the input elements are unique, there will be no repeat
   values in each combination.

   Each result tuple is ordered to match the input order.  So, every
   combination is a subsequence of the input *iterable*.

   Equivalent to::

      def combinations(iterable, r):
          # combinations('ABCD', 2) --> AB AC AD BC BD CD
          # combinations(range(4), 3) --> 012 013 023 123
          pool = tuple(iterable)
          n = len(pool)
          indices = list(range(r))      # list() so indices can be updated in place
          yield tuple(pool[i] for i in indices)
          while True:
              for i in reversed(range(r)):
                  if indices[i] != i + n - r:
                      break
              else:
                  return
              indices[i] += 1
              for j in range(i+1, r):
                  indices[j] = indices[j-1] + 1
              yield tuple(pool[i] for i in indices)

   The code for :func:`combinations` can be also expressed as a subsequence
   of :func:`permutations` after filtering entries where the elements are not
   in sorted order (according to their position in the input pool)::

      def combinations(iterable, r):
          pool = tuple(iterable)
          n = len(pool)
          for indices in permutations(range(n), r):
              if sorted(indices) == list(indices):
                  yield tuple(pool[i] for i in indices)

   .. versionadded:: 2.6

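   For example, the pairs drawn from ``'ABCD'`` appear in lexicographic
   order::

      >>> list(combinations('ABCD', 2))
      [('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D')]
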
.. function:: count([n])

   Make an iterator that returns consecutive integers starting with *n*.  If
   not specified, *n* defaults to zero.  Often used as an argument to
   :func:`map` to generate consecutive data points.  Also used with
   :func:`zip` to add sequence numbers.  Equivalent to::

      def count(n=0):
          # count(10) --> 10 11 12 13 14 ...
          while True:
              yield n
              n += 1

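   For example, :func:`count` can be combined with :func:`zip` to add
   sequence numbers::

      >>> list(zip(count(1), ['a', 'b', 'c']))
      [(1, 'a'), (2, 'b'), (3, 'c')]
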
.. function:: cycle(iterable)

   Make an iterator returning elements from the iterable and saving a copy of
   each.  When the iterable is exhausted, return elements from the saved
   copy.  Repeats indefinitely.  Equivalent to::

      def cycle(iterable):
          # cycle('ABCD') --> A B C D A B C D A B C D ...
          saved = []
          for element in iterable:
              yield element
              saved.append(element)
          while saved:
              for element in saved:
                  yield element

   Note, this member of the toolkit may require significant auxiliary storage
   (depending on the length of the iterable).

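   For example, the infinite cycle can be truncated with :func:`islice`::

      >>> list(islice(cycle('ABCD'), 6))
      ['A', 'B', 'C', 'D', 'A', 'B']
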
.. function:: dropwhile(predicate, iterable)

   Make an iterator that drops elements from the iterable as long as the
   predicate is true; afterwards, returns every element.  Note, the iterator
   does not produce *any* output until the predicate first becomes false, so
   it may have a lengthy start-up time.  Equivalent to::

      def dropwhile(predicate, iterable):
          # dropwhile(lambda x: x<5, [1,4,6,4,1]) --> 6 4 1
          iterable = iter(iterable)
          for x in iterable:
              if not predicate(x):
                  yield x
                  break
          for x in iterable:
              yield x

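   For example, once ``6`` fails the predicate, the remaining elements are
   passed through unchanged::

      >>> list(dropwhile(lambda x: x < 5, [1, 4, 6, 4, 1]))
      [6, 4, 1]
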
.. function:: groupby(iterable[, key])

   Make an iterator that returns consecutive keys and groups from the
   *iterable*.  The *key* is a function computing a key value for each
   element.  If not specified or ``None``, *key* defaults to an identity
   function and returns the element unchanged.  Generally, the iterable needs
   to already be sorted on the same key function.

   The operation of :func:`groupby` is similar to the ``uniq`` filter in Unix.
   It generates a break or new group every time the value of the key function
   changes (which is why it is usually necessary to have sorted the data using
   the same key function).  That behavior differs from SQL's GROUP BY which
   aggregates common elements regardless of their input order.

   The returned group is itself an iterator that shares the underlying
   iterable with :func:`groupby`.  Because the source is shared, when the
   :func:`groupby` object is advanced, the previous group is no longer
   visible.  So, if that data is needed later, it should be stored as a
   list::

      groups = []
      uniquekeys = []
      data = sorted(data, key=keyfunc)
      for k, g in groupby(data, keyfunc):
          groups.append(list(g))      # Store group iterator as a list
          uniquekeys.append(k)

   :func:`groupby` is equivalent to::

      class groupby(object):
          # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B
          # [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D
          def __init__(self, iterable, key=None):
              if key is None:
                  key = lambda x: x
              self.keyfunc = key
              self.it = iter(iterable)
              self.tgtkey = self.currkey = self.currvalue = object()
          def __iter__(self):
              return self
          def __next__(self):
              while self.currkey == self.tgtkey:
                  self.currvalue = next(self.it)    # Exit on StopIteration
                  self.currkey = self.keyfunc(self.currvalue)
              self.tgtkey = self.currkey
              return (self.currkey, self._grouper(self.tgtkey))
          def _grouper(self, tgtkey):
              while self.currkey == tgtkey:
                  yield self.currvalue
                  self.currvalue = next(self.it)    # Exit on StopIteration
                  self.currkey = self.keyfunc(self.currvalue)

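   For example, grouping consecutive identical letters::

      >>> [k for k, g in groupby('AAAABBBCCDAABBB')]
      ['A', 'B', 'C', 'D', 'A', 'B']
      >>> [list(g) for k, g in groupby('AAAABBBCCD')]
      [['A', 'A', 'A', 'A'], ['B', 'B', 'B'], ['C', 'C'], ['D']]
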
.. function:: filterfalse(predicate, iterable)

   Make an iterator that filters elements from *iterable*, returning only
   those for which the predicate is false.  If *predicate* is ``None``,
   return the items that are false.  Equivalent to::

      def filterfalse(predicate, iterable):
          # filterfalse(lambda x: x%2, range(10)) --> 0 2 4 6 8
          if predicate is None:
              predicate = bool
          for x in iterable:
              if not predicate(x):
                  yield x

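   For example, keeping only the numbers for which ``x % 2`` is false (the
   even numbers)::

      >>> list(filterfalse(lambda x: x % 2, range(10)))
      [0, 2, 4, 6, 8]
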
.. function:: islice(iterable, [start,] stop [, step])

   Make an iterator that returns selected elements from the iterable.  If
   *start* is non-zero, then elements from the iterable are skipped until
   *start* is reached.  Afterward, elements are returned consecutively unless
   *step* is set higher than one, which results in items being skipped.  If
   *stop* is ``None``, then iteration continues until the iterator is
   exhausted, if at all; otherwise, it stops at the specified position.
   Unlike regular slicing, :func:`islice` does not support negative values
   for *start*, *stop*, or *step*.  Can be used to extract related fields
   from data where the internal structure has been flattened (for example, a
   multi-line report may list a name field on every third line).  Equivalent
   to::

      def islice(iterable, *args):
          # islice('ABCDEFG', 2) --> A B
          # islice('ABCDEFG', 2, 4) --> C D
          # islice('ABCDEFG', 2, None) --> C D E F G
          # islice('ABCDEFG', 0, None, 2) --> A C E G
          s = slice(*args)
          it = iter(range(s.start or 0, s.stop or sys.maxsize, s.step or 1))
          nexti = next(it)
          for i, element in enumerate(iterable):
              if i == nexti:
                  yield element
                  nexti = next(it)

   If *start* is ``None``, then iteration starts at zero.  If *step* is
   ``None``, then the step defaults to one.

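   For example, slicing from the third element onward, or taking every other
   element::

      >>> list(islice('ABCDEFG', 2, None))
      ['C', 'D', 'E', 'F', 'G']
      >>> list(islice('ABCDEFG', 0, None, 2))
      ['A', 'C', 'E', 'G']
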
.. function:: zip_longest(*iterables[, fillvalue])

   Make an iterator that aggregates elements from each of the iterables.  If
   the iterables are of uneven length, missing values are filled-in with
   *fillvalue*.  Iteration continues until the longest iterable is exhausted.
   Equivalent to::

      def zip_longest(*args, fillvalue=None):
          # zip_longest('ABCD', 'xy', fillvalue='-') --> Ax By C- D-
          def sentinel(counter = ([fillvalue]*(len(args)-1)).pop):
              yield counter()     # yields the fillvalue, or raises IndexError
          fillers = repeat(fillvalue)
          iters = [chain(it, sentinel(), fillers) for it in args]
          try:
              for tup in zip(*iters):
                  yield tup
          except IndexError:
              pass

   If one of the iterables is potentially infinite, then the
   :func:`zip_longest` function should be wrapped with something that limits
   the number of calls (for example :func:`islice` or :func:`takewhile`).

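   For example, the shorter iterable is padded with the fill value::

      >>> list(zip_longest('ABCD', 'xy', fillvalue='-'))
      [('A', 'x'), ('B', 'y'), ('C', '-'), ('D', '-')]
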
.. function:: permutations(iterable[, r])

   Return successive *r* length permutations of elements in the *iterable*.

   If *r* is not specified or is ``None``, then *r* defaults to the length
   of the *iterable* and all possible full-length permutations
   are generated.

   Permutations are emitted in lexicographic sort order.  So, if the
   input *iterable* is sorted, the permutation tuples will be produced
   in sorted order.

   Elements are treated as unique based on their position, not on their
   value.  So if the input elements are unique, there will be no repeat
   values in each permutation.

   Equivalent to::

      def permutations(iterable, r=None):
          # permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC
          # permutations(range(3)) --> 012 021 102 120 201 210
          pool = tuple(iterable)
          n = len(pool)
          r = n if r is None else r
          indices = list(range(n))            # list() so elements can be swapped in place
          cycles = list(range(n, n-r, -1))
          yield tuple(pool[i] for i in indices[:r])
          while n:
              for i in reversed(range(r)):
                  cycles[i] -= 1
                  if cycles[i] == 0:
                      indices[i:] = indices[i+1:] + indices[i:i+1]
                      cycles[i] = n - i
                  else:
                      j = cycles[i]
                      indices[i], indices[-j] = indices[-j], indices[i]
                      yield tuple(pool[i] for i in indices[:r])
                      break
              else:
                  return

   The code for :func:`permutations` can be also expressed as a subsequence
   of :func:`product`, filtered to exclude entries with repeated elements
   (those from the same position in the input pool)::

      def permutations(iterable, r=None):
          pool = tuple(iterable)
          n = len(pool)
          r = n if r is None else r
          for indices in product(range(n), repeat=r):
              if len(set(indices)) == r:
                  yield tuple(pool[i] for i in indices)

   .. versionadded:: 2.6

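   For example, the 2-element permutations of ``'ABC'``::

      >>> list(permutations('ABC', 2))
      [('A', 'B'), ('A', 'C'), ('B', 'A'), ('B', 'C'), ('C', 'A'), ('C', 'B')]
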
.. function:: product(*iterables[, repeat])

   Cartesian product of input iterables.

   Equivalent to nested for-loops in a generator expression.  For example,
   ``product(A, B)`` returns the same as ``((x,y) for x in A for y in B)``.

   The leftmost iterators correspond to the outermost for-loop, so the output
   tuples cycle like an odometer (with the rightmost element changing on every
   iteration).  This results in a lexicographic ordering so that if the
   input iterables are sorted, the product tuples are emitted in sorted order.

   To compute the product of an iterable with itself, specify the number of
   repetitions with the optional *repeat* keyword argument.  For example,
   ``product(A, repeat=4)`` means the same as ``product(A, A, A, A)``.

   This function is equivalent to the following code, except that the
   actual implementation does not build up intermediate results in memory::

      def product(*args, repeat=1):
          # product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy
          # product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111
          pools = [tuple(arg) for arg in args] * repeat
          result = [[]]
          for pool in pools:
              result = [x+[y] for x in result for y in pool]
          for prod in result:
              yield tuple(prod)

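   For example, the output tuples cycle like an odometer, with the rightmost
   element advancing on every iteration::

      >>> list(product('AB', 'xy'))
      [('A', 'x'), ('A', 'y'), ('B', 'x'), ('B', 'y')]
      >>> list(product(range(2), repeat=2))
      [(0, 0), (0, 1), (1, 0), (1, 1)]
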
.. function:: repeat(object[, times])

   Make an iterator that returns *object* over and over again.  Runs
   indefinitely unless the *times* argument is specified.  Used as argument
   to :func:`map` for invariant parameters to the called function.  Also
   used with :func:`zip` to create an invariant part of a tuple record.
   Equivalent to::

      def repeat(object, times=None):
          # repeat(10, 3) --> 10 10 10
          if times is None:
              while True:
                  yield object
          else:
              for i in range(times):
                  yield object

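   For example, supplying an invariant second argument to :func:`map`::

      >>> list(map(pow, range(5), repeat(2)))
      [0, 1, 4, 9, 16]
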
.. function:: starmap(function, iterable)

   Make an iterator that computes the function using arguments obtained from
   the iterable.  Used instead of :func:`map` when argument parameters are
   already grouped in tuples from a single iterable (the data has been
   "pre-zipped").  The difference between :func:`map` and :func:`starmap`
   parallels the distinction between ``function(a,b)`` and ``function(*c)``.
   Equivalent to::

      def starmap(function, iterable):
          # starmap(pow, [(2,5), (3,2), (10,3)]) --> 32 9 1000
          for args in iterable:
              yield function(*args)

   .. versionchanged:: 2.6
      Previously, :func:`starmap` required the function arguments to be
      tuples.  Now, any iterable is allowed.

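   For example, applying :func:`pow` to pre-zipped argument tuples::

      >>> list(starmap(pow, [(2, 5), (3, 2), (10, 3)]))
      [32, 9, 1000]
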
.. function:: takewhile(predicate, iterable)

   Make an iterator that returns elements from the iterable as long as the
   predicate is true.  Equivalent to::

      def takewhile(predicate, iterable):
          # takewhile(lambda x: x<5, [1,4,6,4,1]) --> 1 4
          for x in iterable:
              if predicate(x):
                  yield x
              else:
                  break

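   For example, elements are produced only until the predicate first fails::

      >>> list(takewhile(lambda x: x < 5, [1, 4, 6, 4, 1]))
      [1, 4]
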
.. function:: tee(iterable[, n=2])

   Return *n* independent iterators from a single iterable.  The case where
   ``n==2`` is equivalent to::

      def tee(iterable):
          def gen(next, data={}):
              for i in count():
                  if i in data:
                      yield data.pop(i)
                  else:
                      data[i] = next()
                      yield data[i]
          it = iter(iterable)
          return (gen(it.__next__), gen(it.__next__))

   Note, once :func:`tee` has made a split, the original *iterable* should
   not be used anywhere else; otherwise, the *iterable* could get advanced
   without the tee objects being informed.

   Note, this member of the toolkit may require significant auxiliary storage
   (depending on how much temporary data needs to be stored).  In general, if
   one iterator is going to use most or all of the data before the other
   iterator, it is faster to use :func:`list` instead of :func:`tee`.

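   For example, each of the returned iterators yields the full input
   independently::

      >>> a, b = tee('AB')
      >>> list(a), list(b)
      (['A', 'B'], ['A', 'B'])
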
.. _itertools-example:

Examples
--------

The following examples show common uses for each tool and demonstrate ways
they can be combined.

.. doctest::

   # Show a dictionary sorted and grouped by value
   >>> from operator import itemgetter
   >>> d = dict(a=1, b=2, c=1, d=2, e=1, f=2, g=3)
   >>> di = sorted(d.items(), key=itemgetter(1))
   >>> for k, g in groupby(di, key=itemgetter(1)):
   ...     print(k, list(map(itemgetter(0), g)))
   ...
   1 ['a', 'c', 'e']
   2 ['b', 'd', 'f']
   3 ['g']

   # Find runs of consecutive numbers using groupby.  The key to the solution
   # is differencing with a range so that consecutive numbers all appear in
   # the same group.
   >>> data = [ 1,  4,5,6, 10, 15,16,17,18, 22, 25,26,27,28]
   >>> for k, g in groupby(enumerate(data), lambda t: t[0]-t[1]):
   ...     print(list(map(itemgetter(1), g)))
   ...
   [1]
   [4, 5, 6]
   [10]
   [15, 16, 17, 18]
   [22]
   [25, 26, 27, 28]

.. _itertools-recipes:

Recipes
-------

This section shows recipes for creating an extended toolset using the
existing itertools as building blocks.

The extended tools offer the same high performance as the underlying toolset.
The superior memory performance is kept by processing elements one at a time
rather than bringing the whole iterable into memory all at once.  Code volume
is kept small by linking the tools together in a functional style which helps
eliminate temporary variables.  High speed is retained by preferring
"vectorized" building blocks over the use of for-loops and :term:`generator`\s
which incur interpreter overhead.

.. testcode::

   import operator      # used by the dotproduct() recipe

   def take(n, seq):
       return list(islice(seq, n))

   def enumerate(iterable):
       return zip(count(), iterable)

   def tabulate(function):
       "Return function(0), function(1), ..."
       return map(function, count())

   def items(mapping):
       return zip(mapping.keys(), mapping.values())

   def nth(iterable, n):
       "Returns the nth item or raise StopIteration"
       return next(islice(iterable, n, None))

   def all(seq, pred=None):
       "Returns True if pred(x) is true for every element in the iterable"
       for elem in filterfalse(pred, seq):
           return False
       return True

   def any(seq, pred=None):
       "Returns True if pred(x) is true for at least one element in the iterable"
       for elem in filter(pred, seq):
           return True
       return False

   def no(seq, pred=None):
       "Returns True if pred(x) is false for every element in the iterable"
       for elem in filter(pred, seq):
           return False
       return True

   def quantify(seq, pred=bool):
       "Count how many times the predicate is true in the sequence"
       return sum(map(pred, seq))

   def padnone(seq):
       """Returns the sequence elements and then returns None indefinitely.

       Useful for emulating the behavior of the built-in map() function.
       """
       return chain(seq, repeat(None))

   def ncycles(seq, n):
       "Returns the sequence elements n times"
       return chain.from_iterable(repeat(seq, n))

   def dotproduct(vec1, vec2):
       return sum(map(operator.mul, vec1, vec2))

   def flatten(listOfLists):
       return list(chain.from_iterable(listOfLists))

   def repeatfunc(func, times=None, *args):
       """Repeat calls to func with specified arguments.

       Example:  repeatfunc(random.random)
       """
       if times is None:
           return starmap(func, repeat(args))
       return starmap(func, repeat(args, times))

   def pairwise(iterable):
       "s -> (s0,s1), (s1,s2), (s2,s3), ..."
       a, b = tee(iterable)
       for elem in b:
           break
       return zip(a, b)

   def grouper(n, iterable, fillvalue=None):
       "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')"
       args = [iter(iterable)] * n
       return zip_longest(*args, fillvalue=fillvalue)

   def roundrobin(*iterables):
       "roundrobin('abc', 'd', 'ef') --> 'a', 'd', 'e', 'b', 'f', 'c'"
       # Recipe credited to George Sakkis
       pending = len(iterables)
       nexts = cycle(iter(it).__next__ for it in iterables)
       while pending:
           try:
               for next in nexts:
                   yield next()
           except StopIteration:
               pending -= 1
               nexts = cycle(islice(nexts, pending))

   def powerset(iterable):
       "powerset('ab') --> set([]), set(['a']), set(['b']), set(['a', 'b'])"
       # Recipe credited to Eric Raymond
       pairs = [(2**i, x) for i, x in enumerate(iterable)]
       for n in range(2**len(pairs)):
           yield set(x for m, x in pairs if m&n)

   def compress(data, selectors):
       "compress('abcdef', [1,0,1,0,1,1]) --> a c e f"
       for d, s in zip(data, selectors):
           if s:
               yield d

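For example, assuming the recipe definitions above have been executed, the
``grouper`` and ``pairwise`` recipes behave as follows::

   >>> list(grouper(3, 'abcdefg', 'x'))
   [('a', 'b', 'c'), ('d', 'e', 'f'), ('g', 'x', 'x')]
   >>> list(pairwise('ABCD'))
   [('A', 'B'), ('B', 'C'), ('C', 'D')]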