"""
==============
Array Creation
==============
Introduction
============
There are 5 general mechanisms for creating arrays:
1) Conversion from other Python structures (e.g., lists, tuples)
2) Intrinsic numpy array creation objects (e.g., arange, ones, zeros,
etc.)
3) Reading arrays from disk, either from standard or custom formats
4) Creating arrays from raw bytes through the use of strings or buffers
5) Use of special library functions (e.g., random)
This section will not cover means of replicating, joining, or otherwise
expanding or mutating existing arrays. Nor will it cover creating object
arrays or structured arrays. Both of those are covered in their own sections.
Converting Python array_like Objects to NumPy Arrays
====================================================
In general, numerical data arranged in an array-like structure in Python can
be converted to arrays through the use of the array() function. The most
obvious examples are lists and tuples. See the documentation for array() for
details for its use. Some objects may support the array-protocol and allow
conversion to arrays this way. A simple way to find out if the object can be
converted to a numpy array using array() is simply to try it interactively and
see if it works! (The Python Way).
Examples: ::
>>> x = np.array([2,3,1,0])
>>> x = np.array([2, 3, 1, 0])
>>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types
>>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]])
Intrinsic NumPy Array Creation
==============================
NumPy has built-in functions for creating arrays from scratch:
zeros(shape) will create an array filled with 0 values with the specified
shape. The default dtype is float64.
For example: ::
>>> np.zeros((2, 3))
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])
ones(shape) will create an array filled with 1 values. It is identical to
zeros in all other respects.
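For example: ::
>>> np.ones((2, 3))
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.]])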
arange() will create arrays with regularly incrementing values. Check the
docstring for complete information on the various ways it can be used. A few
examples will be given here: ::
>>> np.arange(10)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> np.arange(2, 10, dtype=np.float)
array([ 2., 3., 4., 5., 6., 7., 8., 9.])
>>> np.arange(2, 3, 0.1)
array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9])
Note that there are some subtleties regarding the last usage that the user
should be aware of that are described in the arange docstring.
linspace() will create arrays with a specified number of elements, and
spaced equally between the specified beginning and end values. For
example: ::
>>> np.linspace(1., 4., 6)
array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ])
The advantage of this creation function is that one can guarantee the
number of elements and the starting and end point, which arange()
generally will not do for arbitrary start, stop, and step values.
indices() will create a set of arrays (stacked as a one-higher dimensioned
array), one per dimension with each representing variation in that dimension.
An example illustrates much better than a verbal description: ::
>>> np.indices((3,3))
array([[[0, 0, 0],
        [1, 1, 1],
        [2, 2, 2]],
       [[0, 1, 2],
        [0, 1, 2],
        [0, 1, 2]]])
This is particularly useful for evaluating functions of multiple dimensions on
a regular grid.
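For example, the two index arrays can evaluate a function of the grid
coordinates in a single expression: ::
>>> row, col = np.indices((3, 3))
>>> row**2 + col**2 # squared distance from the (0, 0) corner
array([[0, 1, 4],
       [1, 2, 5],
       [4, 5, 8]])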
Reading Arrays From Disk
========================
This is presumably the most common case of large array creation. The details,
of course, depend greatly on the format of data on disk and so this section
can only give general pointers on how to handle various formats.
Standard Binary Formats
-----------------------
Various fields have standard formats for array data. The following lists the
ones with known python libraries to read them and return numpy arrays (there
may be others for which it is possible to read and convert to numpy arrays so
check the last section as well).
::
HDF5: PyTables
FITS: PyFITS
Examples of formats that cannot be read directly but for which it is not hard to
convert are those formats supported by libraries like PIL (able to read and
write many image formats such as jpg, png, etc).
Common ASCII Formats
--------------------
Comma Separated Value files (CSV) are widely used (and an export and import
option for programs like Excel). There are a number of ways of reading these
files in Python. There are CSV functions in Python and functions in pylab
(part of matplotlib).
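For simple numeric CSV data, numpy itself can also read the file
directly; a minimal sketch (the filename is hypothetical): ::
>>> data = np.genfromtxt('data.csv', delimiter=',') # doctest: +SKIP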
More generic ASCII files can be read using the io package in scipy.
Custom Binary Formats
---------------------
There are a variety of approaches one can use. If the file has a relatively
simple format then one can write a simple I/O library and use the numpy
fromfile() function and .tofile() method to read and write numpy arrays
directly (mind your byteorder though!). If a good C or C++ library exists that
reads the data, one can wrap that library with a variety of techniques, though
that certainly is much more work and requires significantly more advanced
knowledge to interface with C or C++.
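A minimal sketch of that round trip (the filename is hypothetical; the
dtype must be supplied again when reading back): ::
>>> a = np.arange(4, dtype=np.float64)
>>> a.tofile('data.bin') # doctest: +SKIP
>>> np.fromfile('data.bin', dtype=np.float64) # doctest: +SKIP
array([ 0.,  1.,  2.,  3.])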
Use of Special Libraries
------------------------
There are libraries that can be used to generate arrays for special purposes
and it isn't possible to enumerate all of them. The most common uses are the
many array generation functions in random that can generate arrays of
random values, and some utility functions to generate special matrices (e.g.
diagonal).
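For example (random values differ from run to run, so that call is
skipped): ::
>>> np.diag([1, 2, 3]) # utility function for a special (diagonal) matrix
array([[1, 0, 0],
       [0, 2, 0],
       [0, 0, 3]])
>>> np.random.random((2, 2)) # doctest: +SKIP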
""" |
"""Test module for the noddy examples
Noddy 1:
>>> import noddy
>>> n1 = noddy.Noddy()
>>> n2 = noddy.Noddy()
>>> del n1
>>> del n2
Noddy 2
>>> import noddy2
>>> n1 = noddy2.Noddy('jim', 'fulton', 42)
>>> n1.first
'jim'
>>> n1.last
'fulton'
>>> n1.number
42
>>> n1.name()
'jim fulton'
>>> n1.first = 'will'
>>> n1.name()
'will fulton'
>>> n1.last = 'tim'
>>> n1.name()
'will tim'
>>> del n1.first
>>> n1.name()
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first = 'drew'
>>> n1.first
'drew'
>>> del n1.number
Traceback (most recent call last):
...
TypeError: can't delete numeric/char attribute
>>> n1.number=2
>>> n1.number
2
>>> n1.first = 42
>>> n1.name()
'42 tim'
>>> n2 = noddy2.Noddy()
>>> n2.name()
' '
>>> n2.first
''
>>> n2.last
''
>>> del n2.first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.name()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: first
>>> n2.number
0
>>> n3 = noddy2.Noddy('jim', 'fulton', 'waaa')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: an integer is required
>>> del n1
>>> del n2
Noddy 3
>>> import noddy3
>>> n1 = noddy3.Noddy('jim', 'fulton', 42)
>>> n1 = noddy3.Noddy('jim', 'fulton', 42)
>>> n1.name()
'jim fulton'
>>> del n1.first
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: Cannot delete the first attribute
>>> n1.first = 42
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: The first attribute value must be a string
>>> n1.first = 'will'
>>> n1.name()
'will fulton'
>>> n2 = noddy3.Noddy()
>>> n2 = noddy3.Noddy()
>>> n2 = noddy3.Noddy()
>>> n3 = noddy3.Noddy('jim', 'fulton', 'waaa')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: an integer is required
>>> del n1
>>> del n2
Noddy 4
>>> import noddy4
>>> n1 = noddy4.Noddy('jim', 'fulton', 42)
>>> n1.first
'jim'
>>> n1.last
'fulton'
>>> n1.number
42
>>> n1.name()
'jim fulton'
>>> n1.first = 'will'
>>> n1.name()
'will fulton'
>>> n1.last = 'tim'
>>> n1.name()
'will tim'
>>> del n1.first
>>> n1.name()
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first = 'drew'
>>> n1.first
'drew'
>>> del n1.number
Traceback (most recent call last):
...
TypeError: can't delete numeric/char attribute
>>> n1.number=2
>>> n1.number
2
>>> n1.first = 42
>>> n1.name()
'42 tim'
>>> n2 = noddy4.Noddy()
>>> n2 = noddy4.Noddy()
>>> n2 = noddy4.Noddy()
>>> n2 = noddy4.Noddy()
>>> n2.name()
' '
>>> n2.first
''
>>> n2.last
''
>>> del n2.first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.name()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: first
>>> n2.number
0
>>> n3 = noddy4.Noddy('jim', 'fulton', 'waaa')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: an integer is required
Test cyclic gc(?)
>>> import gc
>>> gc.disable()
>>> x = []
>>> l = [x]
>>> n2.first = l
>>> n2.first
[[]]
>>> l.append(n2)
>>> del l
>>> del n1
>>> del n2
>>> sys.getrefcount(x)
3
>>> ignore = gc.collect()
>>> sys.getrefcount(x)
2
>>> gc.enable()
""" |
"""Generic socket server classes.
This module tries to capture the various aspects of defining a server:
For socket-based servers:
- address family:
- AF_INET{,6}: IP (Internet Protocol) sockets (default)
- AF_UNIX: Unix domain sockets
- others, e.g. AF_DECNET are conceivable (see <socket.h>)
- socket type:
- SOCK_STREAM (reliable stream, e.g. TCP)
- SOCK_DGRAM (datagrams, e.g. UDP)
For request-based servers (including socket-based):
- client address verification before further looking at the request
(This is actually a hook for any processing that needs to look
at the request before anything else, e.g. logging)
- how to handle multiple requests:
- synchronous (one request is handled at a time)
- forking (each request is handled by a new process)
- threading (each request is handled by a new thread)
The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server. This is bad class design, but
saves some typing. (There's also the issue that a deep class hierarchy
slows down method lookups.)
There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:
+------------+
| BaseServer |
+------------+
|
v
+-----------+ +------------------+
| TCPServer |------->| UnixStreamServer |
+-----------+ +------------------+
|
v
+-----------+ +--------------------+
| UDPServer |------->| UnixDatagramServer |
+-----------+ +--------------------+
Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.
Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes. For
instance, a threading UDP server class is created as follows:
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.
To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method. You can then run
various versions of the service by combining one of the server classes
with your request handler class.
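For instance, a minimal echo service could look like this (a sketch
assuming the Python 3 module name socketserver; the address and port
are arbitrary):
import socketserver
class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # rfile/wfile are file-like objects wrapping the connection
        for line in self.rfile:
            self.wfile.write(line)
server = socketserver.TCPServer(('localhost', 9999), EchoHandler)
server.serve_forever()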
The request handler class must be different for datagram or stream
services. This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
Of course, you still have to use your head!
For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child). In this case,
you can use a threading server, but you will probably have to use
locks to keep two requests that come in nearly simultaneously from applying
conflicting changes to the server state.
On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested. Here a threading or forking
server is appropriate.
In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data. This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.
Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request). This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
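A bare-bones sketch of that select() pattern (illustrative only; a real
server would also buffer partial requests per connection):
import select, socket
listener = socket.socket()
listener.bind(('localhost', 9999))
listener.listen(5)
sockets = [listener]
while True:
    ready, _, _ = select.select(sockets, [], [])
    for s in ready:
        if s is listener:
            conn, _ = s.accept()
            sockets.append(conn)  # start tracking the new client
        else:
            data = s.recv(1024)
            if data:
                s.sendall(data)  # echo the chunk back
            else:
                sockets.remove(s)  # client closed the connection
                s.close()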
Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
and encryption schemes
- Standard framework for select-based multiplexing
XXX Open problems:
- What to do with out-of-band data?
BaseServer:
- split generic "request" functionality out into BaseServer class.
Copyright (C) 2000 NAME <EMAIL>
example: read entries from a SQL database (requires overriding
get_request() to return a table entry from the database).
entry is processed by a RequestHandlerClass.
""" |
"""This module tests SyntaxErrors.
Here's an example of the sort of thing that is tested.
>>> def f(x):
... global x
Traceback (most recent call last):
SyntaxError: name 'x' is parameter and global
The tests all raise SyntaxErrors. They were created by checking
each C call that raises SyntaxError. There are several modules that
raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and
symtable.c.
The parser itself outlaws a lot of invalid syntax. None of these
errors are tested here at the moment. We should add some tests; since
there are infinitely many programs with invalid syntax, we would need
to be judicious in selecting some.
The compiler generates a synthetic module name for code executed by
doctest. Since all the code comes from the same module, a suffix like
[1] is appended to the module name. As a consequence, changing the
order of tests in this module means renumbering all the errors after
it. (Maybe we should enable the ellipsis option for these tests.)
In ast.c, syntax errors are raised by calling ast_error().
Errors from set_context():
>>> obj.None = 1
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> None = 1
Traceback (most recent call last):
SyntaxError: can't assign to keyword
It's a syntax error to assign to the empty tuple. Why isn't it an
error to assign to the empty list? It will always raise some error at
runtime.
>>> () = 1
Traceback (most recent call last):
SyntaxError: can't assign to ()
>>> f() = 1
Traceback (most recent call last):
SyntaxError: can't assign to function call
>>> del f()
Traceback (most recent call last):
SyntaxError: can't delete function call
>>> a + 1 = 2
Traceback (most recent call last):
SyntaxError: can't assign to operator
>>> (x for x in x) = 1
Traceback (most recent call last):
SyntaxError: can't assign to generator expression
>>> 1 = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> "abc" = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> b"" = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> `1` = 1
Traceback (most recent call last):
SyntaxError: invalid syntax
If the left-hand side of an assignment is a list or tuple, an illegal
expression inside that container should still cause a syntax error.
This test just checks a couple of cases rather than enumerating all of
them.
>>> (a, "b", c) = (1, 2, 3)
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> [a, b, c + 1] = [1, 2, 3]
Traceback (most recent call last):
SyntaxError: can't assign to operator
>>> a if 1 else b = 1
Traceback (most recent call last):
SyntaxError: can't assign to conditional expression
From compiler_complex_args():
>>> def f(None=1):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_arguments():
>>> def f(x, y=1, z):
... pass
Traceback (most recent call last):
SyntaxError: non-default argument follows default argument
>>> def f(x, None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> def f(*None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> def f(**None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_funcdef():
>>> def None(x):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_call():
>>> def f(it, *varargs):
... return list(it)
>>> L = range(10)
>>> f(x for x in L)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(x for x in L, 1)
Traceback (most recent call last):
SyntaxError: Generator expression must be parenthesized if not sole argument
>>> f((x for x in L), 1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... i244, i245, i246, i247, i248, i249, i250, i251, i252,
... i253, i254, i255)
Traceback (most recent call last):
SyntaxError: more than 255 arguments
The actual error case counts positional arguments, keyword arguments,
and generator expression arguments separately. This test combines the
three.
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... (x for x in i244), i245, i246, i247, i248, i249, i250, i251,
... i252=1, i253=1, i254=1, i255=1)
Traceback (most recent call last):
SyntaxError: more than 255 arguments
>>> f(lambda x: x[0] = 3)
Traceback (most recent call last):
SyntaxError: lambda cannot contain assignment
The grammar accepts any test (basically, any expression) in the
keyword slot of a call site. Test a few different options.
>>> f(x()=2)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
>>> f(a or b=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
>>> f(x.y=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
More set_context():
>>> (x for x in x) += 1
Traceback (most recent call last):
SyntaxError: can't assign to generator expression
>>> None += 1
Traceback (most recent call last):
SyntaxError: can't assign to keyword
>>> f() += 1
Traceback (most recent call last):
SyntaxError: can't assign to function call
Test continue in finally in weird combinations.
continue in for loop under finally should be ok.
>>> def test():
... try:
... pass
... finally:
... for abc in range(10):
... continue
... print(abc)
>>> test()
9
Start simple, a continue in a finally should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
This is essentially a continue in a finally which should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... try:
... continue
... except:
... pass
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... try:
... continue
... finally:
... pass
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try: pass
... finally:
... try:
... pass
... except:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
There is one test for a break that is not in a loop. The compiler
uses a single data structure to keep track of try-finally and loops,
so we need to be sure that a break is actually inside a loop. If it
isn't, there should be a syntax error.
>>> try:
... print(1)
... break
... print(2)
... finally:
... print(3)
Traceback (most recent call last):
...
SyntaxError: 'break' outside loop
This should probably raise a better error than a SystemError (or none at all).
In 2.5 there was a missing exception and an assert was triggered in a debug
build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514
>>> while 1:
... while 2:
... while 3:
... while 4:
... while 5:
... while 6:
... while 8:
... while 9:
... while 10:
... while 11:
... while 12:
... while 13:
... while 14:
... while 15:
... while 16:
... while 17:
... while 18:
... while 19:
... while 20:
... while 21:
... while 22:
... break
Traceback (most recent call last):
...
SystemError: too many statically nested blocks
Misuse of the nonlocal statement can lead to a few unique syntax errors.
>>> def f(x):
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: name 'x' is parameter and nonlocal
>>> def f():
... global x
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: name 'x' is nonlocal and global
>>> def f():
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: no binding for nonlocal 'x' found
From SF bug #1705365
>>> nonlocal x
Traceback (most recent call last):
...
SyntaxError: nonlocal declaration not allowed at module level
TODO(jhylton): Figure out how to test SyntaxWarning with doctest.
## >>> def f(x):
## ... def f():
## ... print(x)
## ... nonlocal x
## Traceback (most recent call last):
## ...
## SyntaxWarning: name 'x' is assigned to before nonlocal declaration
## >>> def f():
## ... x = 1
## ... nonlocal x
## Traceback (most recent call last):
## ...
## SyntaxWarning: name 'x' is assigned to before nonlocal declaration
This tests assignment-context; there was a bug in Python 2.5 where compiling
a complex 'if' (one with 'elif') would fail to notice an invalid suite,
leading to spurious errors.
>>> if 1:
... x() = 1
... elif 1:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... x() = 1
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... x() = 1
... elif 1:
... pass
... else:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... x() = 1
... else:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... pass
... else:
... x() = 1
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
Make sure that the old "raise X, Y[, Z]" form is gone:
>>> raise X, Y
Traceback (most recent call last):
...
SyntaxError: invalid syntax
>>> raise X, Y, Z
Traceback (most recent call last):
...
SyntaxError: invalid syntax
>>> f(a=23, a=234)
Traceback (most recent call last):
...
SyntaxError: keyword argument repeated
>>> del ()
Traceback (most recent call last):
SyntaxError: can't delete ()
>>> {1, 2, 3} = 42
Traceback (most recent call last):
SyntaxError: can't assign to literal
Corner-cases that used to fail to raise the correct error:
>>> def f(*, x=lambda __debug__:0): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(*args:(lambda __debug__:0)): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(**kwargs:(lambda __debug__:0)): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> with (lambda *:0): pass
Traceback (most recent call last):
SyntaxError: named arguments must follow bare *
Corner-cases that used to crash:
>>> def f(**__debug__): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(*xx, __debug__): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
""" |
"""
====================================
Linear algebra (:mod:`scipy.linalg`)
====================================
.. currentmodule:: scipy.linalg
Linear algebra functions.
.. seealso::
`numpy.linalg` for more linear algebra functions. Note that
although `scipy.linalg` imports most of them, identically named
functions from `scipy.linalg` may offer more or slightly differing
functionality.
Basics
======
.. autosummary::
:toctree: generated/
inv - Find the inverse of a square matrix
solve - Solve a linear system of equations
solve_banded - Solve a banded linear system
solveh_banded - Solve a Hermitian or symmetric banded system
solve_circulant - Solve a circulant system
solve_triangular - Solve a triangular linear system
solve_toeplitz - Solve a Toeplitz linear system
det - Find the determinant of a square matrix
norm - Matrix and vector norm
lstsq - Solve a linear least-squares problem
pinv - Pseudo-inverse (Moore-Penrose) using lstsq
pinv2 - Pseudo-inverse using svd
pinvh - Pseudo-inverse of a Hermitian matrix
kron - Kronecker product of two arrays
tril - Construct a lower-triangular matrix from a given matrix
triu - Construct an upper-triangular matrix from a given matrix
orthogonal_procrustes - Solve an orthogonal Procrustes problem
LinAlgError
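A quick sketch of a few of the basics (the outputs are exact for this
small system)::
>>> import numpy as np
>>> from scipy import linalg
>>> a = np.array([[1., 2.], [3., 4.]])
>>> linalg.det(a)
-2.0
>>> linalg.solve(a, np.array([5., 6.]))
array([-4. ,  4.5])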
Eigenvalue Problems
===================
.. autosummary::
:toctree: generated/
eig - Find the eigenvalues and eigenvectors of a square matrix
eigvals - Find just the eigenvalues of a square matrix
eigh - Find the eigenvalues and eigenvectors of a Hermitian or symmetric matrix
eigvalsh - Find just the eigenvalues of a Hermitian or symmetric matrix
eig_banded - Find the eigenvalues and eigenvectors of a banded matrix
eigvals_banded - Find just the eigenvalues of a banded matrix
Decompositions
==============
.. autosummary::
:toctree: generated/
lu - LU decomposition of a matrix
lu_factor - LU decomposition returning unordered matrix and pivots
lu_solve - Solve Ax=b using back substitution with output of lu_factor
svd - Singular value decomposition of a matrix
svdvals - Singular values of a matrix
diagsvd - Construct matrix of singular values from output of svd
orth - Construct orthonormal basis for the range of A using svd
cholesky - Cholesky decomposition of a matrix
cholesky_banded - Cholesky decomp. of a sym. or Hermitian banded matrix
cho_factor - Cholesky decomposition for use in solving a linear system
cho_solve - Solve previously factored linear system
cho_solve_banded - Solve previously factored banded linear system
polar - Compute the polar decomposition.
qr - QR decomposition of a matrix
qr_multiply - QR decomposition and multiplication by Q
qr_update - Rank k QR update
qr_delete - QR downdate on row or column deletion
qr_insert - QR update on row or column insertion
rq - RQ decomposition of a matrix
qz - QZ decomposition of a pair of matrices
ordqz - QZ decomposition of a pair of matrices with reordering
schur - Schur decomposition of a matrix
rsf2csf - Real to complex Schur form
hessenberg - Hessenberg form of a matrix
.. seealso::
`scipy.linalg.interpolative` -- Interpolative matrix decompositions
Matrix Functions
================
.. autosummary::
:toctree: generated/
expm - Matrix exponential
logm - Matrix logarithm
cosm - Matrix cosine
sinm - Matrix sine
tanm - Matrix tangent
coshm - Matrix hyperbolic cosine
sinhm - Matrix hyperbolic sine
tanhm - Matrix hyperbolic tangent
signm - Matrix sign
sqrtm - Matrix square root
funm - Evaluating an arbitrary matrix function
expm_frechet - Frechet derivative of the matrix exponential
expm_cond - Relative condition number of expm in the Frobenius norm
fractional_matrix_power - Fractional matrix power
Matrix Equation Solvers
=======================
.. autosummary::
:toctree: generated/
solve_sylvester - Solve the Sylvester matrix equation
solve_continuous_are - Solve the continuous-time algebraic Riccati equation
solve_discrete_are - Solve the discrete-time algebraic Riccati equation
solve_discrete_lyapunov - Solve the discrete-time Lyapunov equation
solve_lyapunov - Solve the (continuous-time) Lyapunov equation
Special Matrices
================
.. autosummary::
:toctree: generated/
block_diag - Construct a block diagonal matrix from submatrices
circulant - Circulant matrix
companion - Companion matrix
dft - Discrete Fourier transform matrix
hadamard - Hadamard matrix of order 2**n
hankel - Hankel matrix
helmert - Helmert matrix
hilbert - Hilbert matrix
invhilbert - Inverse Hilbert matrix
leslie - Leslie matrix
pascal - Pascal matrix
invpascal - Inverse Pascal matrix
toeplitz - Toeplitz matrix
tri - Construct a matrix filled with ones at and below a given diagonal
Low-level routines
==================
.. autosummary::
:toctree: generated/
get_blas_funcs
get_lapack_funcs
find_best_blas_type
.. seealso::
`scipy.linalg.blas` -- Low-level BLAS functions
`scipy.linalg.lapack` -- Low-level LAPACK functions
`scipy.linalg.cython_blas` -- Low-level BLAS functions for Cython
`scipy.linalg.cython_lapack` -- Low-level LAPACK functions for Cython
""" |
#Python 3+ script for converting the ban table format as of 2018-10-28, made by USERNAME. Before starting, ensure you have installed the mysqlclient package https://github.com/PyMySQL/mysqlclient-python
#It can be downloaded from command line with pip:
#pip install mysqlclient
#
#You will also have to create a new ban table for inserting converted data to per the schema:
#CREATE TABLE `ban` (
# `id` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT,
# `bantime` DATETIME NOT NULL,
# `server_ip` INT(10) UNSIGNED NOT NULL,
# `server_port` SMALLINT(5) UNSIGNED NOT NULL,
# `round_id` INT(11) UNSIGNED NOT NULL,
# `role` VARCHAR(32) NULL DEFAULT NULL,
# `expiration_time` DATETIME NULL DEFAULT NULL,
# `applies_to_admins` TINYINT(1) UNSIGNED NOT NULL DEFAULT '0',
# `reason` VARCHAR(2048) NOT NULL,
# `ckey` VARCHAR(32) NULL DEFAULT NULL,
# `ip` INT(10) UNSIGNED NULL DEFAULT NULL,
# `computerid` VARCHAR(32) NULL DEFAULT NULL,
# `a_ckey` VARCHAR(32) NOT NULL,
# `a_ip` INT(10) UNSIGNED NOT NULL,
# `a_computerid` VARCHAR(32) NOT NULL,
# `who` VARCHAR(2048) NOT NULL,
# `adminwho` VARCHAR(2048) NOT NULL,
# `edits` TEXT NULL DEFAULT NULL,
# `unbanned_datetime` DATETIME NULL DEFAULT NULL,
# `unbanned_ckey` VARCHAR(32) NULL DEFAULT NULL,
# `unbanned_ip` INT(10) UNSIGNED NULL DEFAULT NULL,
# `unbanned_computerid` VARCHAR(32) NULL DEFAULT NULL,
# `unbanned_round_id` INT(11) UNSIGNED NULL DEFAULT NULL,
# PRIMARY KEY (`id`),
# KEY `idx_ban_isbanned` (`ckey`,`role`,`unbanned_datetime`,`expiration_time`),
# KEY `idx_ban_isbanned_details` (`ckey`,`ip`,`computerid`,`role`,`unbanned_datetime`,`expiration_time`),
# KEY `idx_ban_count` (`bantime`,`a_ckey`,`applies_to_admins`,`unbanned_datetime`,`expiration_time`)
#) ENGINE=InnoDB DEFAULT CHARSET=latin1;
#This is to prevent the destruction of existing data and allow rollbacks to be performed in the event of an error during conversion
#Once conversion is complete remember to rename the old and new ban tables; it's up to you if you want to keep the old table
#
#To view the parameters for this script, execute it with the argument --help
#All the positional arguments are required, remember to include prefixes in your table names if you use them
#An example of the command used to execute this script from powershell:
#python ban_conversion_2018-10-28.py "localhost" "root" "password" "feedback" "SS13_ban" "SS13_ban_new"
#I found that this script would complete conversion of 35000 rows in approximately 20 seconds; results will depend on the size of your ban table and computer used
#
#The script has been tested to complete with tgstation's ban table as of 2018-09-02 02:19:56
#In the event of an error the new ban table is automatically truncated
#The source table is never modified so you don't have to worry about losing any data due to errors
#Some additional error correction is performed to fix problems specific to legacy and invalid data in tgstation's ban table; these operations are tagged with a 'TG:' comment
#Even if you don't have any of these specific problems in your ban table the operations won't matter, as they have an insignificant effect on runtime
#
#While this script is safe to run with your game server(s) active, any bans created after the script has started won't be converted
#You will also have to ensure that the code and table names are updated between rounds as neither will be compatible
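#A rough sketch of the error-handling pattern described above (illustrative only: the real script takes its parameters from the command line and performs the full column-by-column conversion between the SELECT and the INSERTs):
import MySQLdb

def convert(host, user, password, database, old_table, new_table):
    db = MySQLdb.connect(host=host, user=user, passwd=password, db=database)
    cursor = db.cursor()
    try:
        cursor.execute("SELECT * FROM `{}`".format(old_table))
        for row in cursor.fetchall():
            pass  # convert each row and INSERT it into new_table here
        db.commit()
    except Exception:
        cursor.execute("TRUNCATE TABLE `{}`".format(new_table))  # discard the partial conversion
        raise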
#
# ElementTree
# $Id: ElementTree.py 2326 2005-03-17 07:45:21Z USERNAME $
#
# light-weight XML support for Python 1.5.2 and later.
#
# history:
# 2001-10-20 fl created (from various sources)
# 2001-11-01 fl return root from parse method
# 2002-02-16 fl sort attributes in lexical order
# 2002-04-06 fl TreeBuilder refactoring, added PythonDoc markup
# 2002-05-01 fl finished TreeBuilder refactoring
# 2002-07-14 fl added basic namespace support to ElementTree.write
# 2002-07-25 fl added QName attribute support
# 2002-10-20 fl fixed encoding in write
# 2002-11-24 fl changed default encoding to ascii; fixed attribute encoding
# 2002-11-27 fl accept file objects or file names for parse/write
# 2002-12-04 fl moved XMLTreeBuilder back to this module
# 2003-01-11 fl fixed entity encoding glitch for us-ascii
# 2003-02-13 fl added XML literal factory
# 2003-02-21 fl added ProcessingInstruction/PI factory
# 2003-05-11 fl added tostring/fromstring helpers
# 2003-05-26 fl added ElementPath support
# 2003-07-05 fl added makeelement factory method
# 2003-07-28 fl added more well-known namespace prefixes
# 2003-08-15 fl fixed typo in ElementTree.findtext (Thomas NAME)
# 2003-09-04 fl fall back on emulator if ElementPath is not installed
# 2003-10-31 fl markup updates
# 2003-11-15 fl fixed nested namespace bug
# 2004-03-28 fl added XMLID helper
# 2004-06-02 fl added default support to findtext
# 2004-06-08 fl fixed encoding of non-ascii element/attribute names
# 2004-08-23 fl take advantage of post-2.1 expat features
# 2005-02-01 fl added iterparse implementation
# 2005-03-02 fl fixed iterparse support for pre-2.2 versions
# 2012-06-29 EMAIL Made all classes new-style
# 2012-07-02 EMAIL Include dist. ElementPath
# 2013-02-27 EMAIL renamed module files, kept namespace.
#
# Copyright (c) 1999-2005 by NAME. All rights reserved.
#
# EMAIL
# http://www.pythonware.com
#
# --------------------------------------------------------------------
# The ElementTree toolkit is
#
# Copyright (c) 1999-2005 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
# Licensed to PSF under a Contributor Agreement.
# See http://www.python.org/2.4/license for licensing details.
# def view_revision(self, request, **kwargs):
# # render a page for a popup in an old revision
# revision_pk = kwargs.pop('revision_pk')
# page_revision = PageRevision.objects.get(id=revision_pk)
# # get the rendered_placeholders which are persisted as html strings
# prc = PageRevisionComparator(page_revision, request=request)
# rendered_placeholders = prc.rendered_placeholders
#
# context = SekizaiContext({
# 'current_template': page_revision.page.get_template(),
# 'rendered_placeholders': rendered_placeholders,
# 'page_revision': page_revision,
# 'is_popup': True,
# 'request': request,
# })
# return render(request, self.view_revision_template, context=context)
#
# def download_audit_trail(self, request):
# # show view where the user can select the desired download params
# # self.set_page_lang(request)
# return TemplateResponse(request, 'admin/download_audit_trail.html',
# {'p_id': request.GET.get('page_id'), 'lang': request.GET.get('language')}
# )
# def download_audit_trail_xlsx(self, request, **kwargs):
# # download the audit trail as xlsx
# # self.set_page_lang(request) , language=get_language_from_request(request)
# page_id = request.GET.get('page_id')
# language = request.GET.get('language')
# request.current_page = Page.objects.get(id=page_id)
# report = exporter.ReportXLSXFormatter()
# return report.get_download_response(request, self.get_queryset(request), language=language)
# def revert(self, request, **kwargs):
# # revert page to revision
# pk = kwargs.pop('pk')
# language = request.GET.get('language')
# page_revision = PageRevision.objects.get(id=pk)
#
# if not revert_page(page_revision, request):
# messages.info(request, u'Page is already in this revision!')
# prev = request.META.get('HTTP_REFERER')
# if prev:
# return redirect(prev)
# return self.changelist_view(request, **kwargs)
#
# # create a new revision if reverted to keep history correct
# # therefore mark a placeholder as dirty
# # TODO: in case of no placeholder?
# # page_revision.page.placeholders.first().mark_as_dirty(language)
# # creator = PageRevisionCreator(page_revision.page.pk, language, request, request.user,
# # ugettext(u'Restored') + ' ' + '#' + str(page_revision.pk))
# # creator.create_page_revision()
#
#
# messages.info(request, _(u'You have successfully reverted to {rev}').format(rev=page_revision))
# return self.render_close_frame()
# def diff(self, request, **kwargs):
# # deprecated diff view (used in the PageRevisionForm)
# # deprecated because it was only able to show an html code difference
# # which compares a revision to a page
# # -> REPLACED BY diff-view
# pk = kwargs.pop('pk')
# page_revision = PageRevision.objects.get(id=pk)
#
# prc = PageRevisionComparator(page_revision, request=request)
# slot_html = {slot: revert_escape(html) for slot, html in prc.slot_html.items() if slot in prc.changed_slots}
#
# if not slot_html:
# messages.info(request, _(u'No diff between revision and current page detected'))
# return self.changelist_view(request, **kwargs)
#
# context = SekizaiContext({
# 'title': _(u'Diff current page and page revision #{pk}').format(pk=pk),
# 'slot_html': slot_html,
# 'is_popup': True,
# 'page_revision_id': page_revision.pk,
# 'request': request
# })
"""Configuration file parser.
A configuration file consists of sections, lead by a "[section]" header,
and followed by "name: value" entries, with continuations and such in
the style of RFC 822.
Intrinsic defaults can be specified by passing them into the
ConfigParser constructor as a dictionary.
class:
ConfigParser -- responsible for parsing a list of
configuration files, and managing the parsed database.
methods:
__init__(defaults=None, dict_type=_default_dict, allow_no_value=False,
delimiters=('=', ':'), comment_prefixes=('#', ';'),
inline_comment_prefixes=None, strict=True,
empty_lines_in_values=True):
Create the parser. When `defaults' is given, it is initialized into the
dictionary of intrinsic defaults. The keys must be strings, the values
must be appropriate for %()s string interpolation.
When `dict_type' is given, it will be used to create the dictionary
objects for the list of sections, for the options within a section, and
for the default values.
When `delimiters' is given, it will be used as the set of substrings
that divide keys from values.
When `comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in empty lines. Comments can be
indented.
When `inline_comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in non-empty lines.
When `strict` is True, the parser won't allow for any section or option
duplicates while reading from a single source (file, string or
dictionary). Default is True.
When `empty_lines_in_values' is False (default: True), each empty line
marks the end of an option. Otherwise, internal empty lines of
a multiline option are kept as part of the value.
When `allow_no_value' is True (default: False), options without
values are accepted; the value presented for these is None.
sections()
Return all the configuration section names, sans DEFAULT.
has_section(section)
Return whether the given section exists.
has_option(section, option)
Return whether the given option exists in the given section.
options(section)
Return list of configuration options for the named section.
read(filenames, encoding=None)
Read and parse the list of named configuration files, given by
name. A single filename is also allowed. Non-existing files
are ignored. Return list of successfully read files.
read_file(f, filename=None)
Read and parse one configuration file, given as a file object.
The filename defaults to f.name; it is only used in error
messages (if f has no `name' attribute, the string `<???>' is used).
read_string(string)
Read configuration from a given string.
read_dict(dictionary)
Read configuration from a dictionary. Keys are section names,
values are dictionaries with keys and values that should be present
in the section. If the used dictionary type preserves order, sections
and their keys will be added in order. Values are automatically
converted to strings.
get(section, option, raw=False, vars=None, fallback=_UNSET)
Return a string value for the named option. All % interpolations are
expanded in the return values, based on the defaults passed into the
constructor and the DEFAULT section. Additional substitutions may be
provided using the `vars' argument, which must be a dictionary whose
contents override any pre-existing defaults. If `option' is a key in
`vars', the value from `vars' is used.
getint(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to an integer.
getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a float.
getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a boolean (currently case
insensitively defined as 0, false, no, off for False, and 1, true,
yes, on for True). Returns False or True.
items(section=_UNSET, raw=False, vars=None)
If section is given, return a list of tuples with (name, value) for
each option in the section. Otherwise, return a list of tuples with
(section_name, section_proxy) for each section, including DEFAULTSECT.
remove_section(section)
Remove the given file section and all its options.
remove_option(section, option)
Remove the given option from the given section.
set(section, option, value)
Set the given option.
write(fp, space_around_delimiters=True)
Write the configuration state in .ini format. If
`space_around_delimiters' is True (the default), delimiters
between keys and values are surrounded by spaces.
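A short usage sketch (the section and option names are hypothetical):
import configparser
parser = configparser.ConfigParser()
parser.read_string("[server]\nhost = localhost\nport = 8080\n")
parser.sections() # -> ['server']
parser.get('server', 'host') # -> 'localhost'
parser.getint('server', 'port') # -> 8080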
""" |
"""
******************************
Self-organizing maps (``som``)
******************************
.. index:: self-organizing map (SOM)
.. index::
single: projection; self-organizing map (SOM)
`Self-organizing map <http://en.wikipedia.org/wiki/Self-organizing_map>`_
(SOM) is an unsupervised learning algorithm that infers a low,
typically two-dimensional, discretized representation of the input
space, called a map. The map preserves topological properties of the
input space, such that the cells that are close in the map include data
instances that are similar to each other.
=================================
Inference of Self-Organizing Maps
=================================
The main class for inference of self-organizing maps is :obj:`SOMLearner`.
The class initializes the topology of the map and returns an inference
object which, given the data, performs the optimization of the map::
import Orange
som = Orange.projection.som.SOMLearner(map_shape=(8, 8),
initialize=Orange.projection.som.InitializeRandom)
data = Orange.data.Table("iris.tab")
map = som(data)
.. autoclass:: SOMLearner
:members:
.. autoclass:: SOMMap
:members:
Topology
--------
.. autodata:: HexagonalTopology
.. autodata:: RectangularTopology
Map initialization
------------------
.. autodata:: InitializeLinear
.. autodata:: InitializeRandom
Node neighbourhood
------------------
.. autodata:: NeighbourhoodGaussian
.. autodata:: NeighbourhoodBubble
.. autodata:: NeighbourhoodEpanechicov
=============================================
Supervised Learning with Self-Organizing Maps
=============================================
Supervised learning requires class-labeled data. For training,
class information is first added to data instances as a regular
feature by extending the feature vectors accordingly. Next, the
map is trained, and the training data projected to nodes. Each
node is then labelled with the majority class of its instances. The dimensions
corresponding to the class features are then removed from the
prototype vector of each node in the map. For classification,
the data instance is projected to the best matching cell, returning
the associated class.
An example of the code that trains and then classifies on the same
data set is::
import Orange
import random
learner = Orange.projection.som.SOMSupervisedLearner(map_shape=(4, 4))
data = Orange.data.Table("iris.tab")
classifier = learner(data)
random.seed(50)
for d in random.sample(data, 5):
print "%-15s originally %-15s" % (classifier(d), d.getclass())
.. autoclass:: SOMSupervisedLearner
:members:
==================
Supporting Classes
==================
The actual map optimization algorithm is implemented by :class:`Solver`
class which is used by both the :class:`SOMLearner` and the
:class:`SOMSupervisedLearner`.
.. autoclass:: Solver
:members:
Class :obj:`Map` stores the self-organizing map composed of :obj:`Node`
objects. The code below (:download:`som-node.py <code/som-node.py>`)
shows an example how to access the information stored in the node of the
map:
.. literalinclude:: code/som-node.py
:lines: 7-
.. autoclass:: Map
:members:
.. autoclass:: Node
:members:
========
Examples
========
The following code (:download:`som-mapping.py <code/som-mapping.py>`)
infers self-organizing map from Iris data set. The map is rather small,
and consists of only 9 cells. We optimize the network, and then report
how many data instances were mapped into each cell. The second part
of the code reports on data instances from one of the corner cells:
.. literalinclude:: code/som-mapping.py
:lines: 7-
The output of this code is::
Node Instances
(0, 0) 31
(0, 1) 7
(0, 2) 0
(1, 0) 24
(1, 1) 7
(1, 2) 50
(2, 0) 10
(2, 1) 21
(2, 2) 0
Data instances in cell (0, 1):
[6.9, 3.1, 4.9, 1.5, 'Iris-versicolor']
[6.7, 3.0, 5.0, 1.7, 'Iris-versicolor']
[6.3, 2.9, 5.6, 1.8, 'Iris-virginica']
[6.5, 3.2, 5.1, 2.0, 'Iris-virginica']
[6.4, 2.7, 5.3, 1.9, 'Iris-virginica']
[6.1, 2.6, 5.6, 1.4, 'Iris-virginica']
[6.5, 3.0, 5.2, 2.0, 'Iris-virginica']
""" |
"""
Discrete Fourier Transform (:mod:`numpy.fft`)
=============================================
.. currentmodule:: numpy.fft
Standard FFTs
-------------
.. autosummary::
:toctree: generated/
fft Discrete Fourier transform.
ifft Inverse discrete Fourier transform.
fft2 Discrete Fourier transform in two dimensions.
ifft2 Inverse discrete Fourier transform in two dimensions.
fftn Discrete Fourier transform in N-dimensions.
ifftn Inverse discrete Fourier transform in N dimensions.
Real FFTs
---------
.. autosummary::
:toctree: generated/
rfft Real discrete Fourier transform.
irfft Inverse real discrete Fourier transform.
rfft2 Real discrete Fourier transform in two dimensions.
irfft2 Inverse real discrete Fourier transform in two dimensions.
rfftn Real discrete Fourier transform in N dimensions.
irfftn Inverse real discrete Fourier transform in N dimensions.
Hermitian FFTs
--------------
.. autosummary::
:toctree: generated/
hfft Hermitian discrete Fourier transform.
ihfft Inverse Hermitian discrete Fourier transform.
Helper routines
---------------
.. autosummary::
:toctree: generated/
fftfreq Discrete Fourier Transform sample frequencies.
rfftfreq DFT sample frequencies (for usage with rfft, irfft).
fftshift Shift zero-frequency component to center of spectrum.
ifftshift Inverse of fftshift.
Background information
----------------------
Fourier analysis is fundamentally a method for expressing a function as a
sum of periodic components, and for recovering the function from those
components. When both the function and its Fourier transform are
replaced with discretized counterparts, it is called the discrete Fourier
transform (DFT). The DFT has become a mainstay of numerical computing in
part because of a very fast algorithm for computing it, called the Fast
Fourier Transform (FFT), which was known to Gauss (1805) and was brought
to light in its current form by NAME and NAME [CT]_. Press et al. [NR]_
provide an accessible introduction to Fourier analysis and its
applications.
Because the discrete Fourier transform separates its input into
components that contribute at discrete frequencies, it has a great number
of applications in digital signal processing, e.g., for filtering, and in
this context the discretized input to the transform is customarily
referred to as a *signal*, which exists in the *time domain*. The output
is called a *spectrum* or *transform* and exists in the *frequency
domain*.
Implementation details
----------------------
There are many ways to define the DFT, varying in the sign of the
exponent, normalization, etc. In this implementation, the DFT is defined
as
.. math::
A_k = \\sum_{m=0}^{n-1} a_m \\exp\\left\\{-2\\pi i{mk \\over n}\\right\\}
\\qquad k = 0,\\ldots,n-1.
The DFT is in general defined for complex inputs and outputs, and a
single-frequency component at linear frequency :math:`f` is
represented by a complex exponential
:math:`a_m = \\exp\\{2\\pi i\\,f m\\Delta t\\}`, where :math:`\\Delta t`
is the sampling interval.
The values in the result follow so-called "standard" order: If ``A =
fft(a, n)``, then ``A[0]`` contains the zero-frequency term (the sum of
the signal), which is always purely real for real inputs. Then ``A[1:n/2]``
contains the positive-frequency terms, and ``A[n/2+1:]`` contains the
negative-frequency terms, in order of decreasingly negative frequency.
For an even number of input points, ``A[n/2]`` represents both positive and
negative Nyquist frequency, and is also purely real for real input. For
an odd number of input points, ``A[(n-1)/2]`` contains the largest positive
frequency, while ``A[(n+1)/2]`` contains the largest negative frequency.
The routine ``np.fft.fftfreq(n)`` returns an array giving the frequencies
of corresponding elements in the output. The routine
``np.fft.fftshift(A)`` shifts transforms and their frequencies to put the
zero-frequency components in the middle, and ``np.fft.ifftshift(A)`` undoes
that shift.
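For example: ::
>>> freqs = np.fft.fftfreq(8)
>>> freqs
array([ 0.   ,  0.125,  0.25 ,  0.375, -0.5  , -0.375, -0.25 , -0.125])
>>> np.fft.fftshift(freqs)
array([-0.5  , -0.375, -0.25 , -0.125,  0.   ,  0.125,  0.25 ,  0.375])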
When the input `a` is a time-domain signal and ``A = fft(a)``, ``np.abs(A)``
is its amplitude spectrum and ``np.abs(A)**2`` is its power spectrum.
The phase spectrum is obtained by ``np.angle(A)``.
The inverse DFT is defined as
.. math::
a_m = \\frac{1}{n}\\sum_{k=0}^{n-1}A_k\\exp\\left\\{2\\pi i{mk\\over n}\\right\\}
\\qquad m = 0,\\ldots,n-1.
It differs from the forward transform by the sign of the exponential
argument and the default normalization by :math:`1/n`.
Normalization
-------------
The default normalization has the direct transforms unscaled and the inverse
transforms are scaled by :math:`1/n`. It is possible to obtain unitary
transforms by setting the keyword argument ``norm`` to ``"ortho"`` (default is
`None`) so that both direct and inverse transforms will be scaled by
:math:`1/\\sqrt{n}`.
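For example, for a constant input the unitary scaling is split evenly
between the two directions: ::
>>> np.fft.fft(np.ones(4), norm="ortho")
array([ 2.+0.j,  0.+0.j,  0.+0.j,  0.+0.j])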
Real and Hermitian transforms
-----------------------------
When the input is purely real, its transform is Hermitian, i.e., the
component at frequency :math:`f_k` is the complex conjugate of the
component at frequency :math:`-f_k`, which means that for real
inputs there is no information in the negative frequency components that
is not already available from the positive frequency components.
The family of `rfft` functions is
designed to operate on real inputs, and exploits this symmetry by
computing only the positive frequency components, up to and including the
Nyquist frequency. Thus, ``n`` input points produce ``n/2+1`` complex
output points. The inverses of this family assume the same symmetry of
its input, and for an output of ``n`` points uses ``n/2+1`` input points.
Correspondingly, when the spectrum is purely real, the signal is
Hermitian. The `hfft` family of functions exploits this symmetry by
using ``n/2+1`` complex points in the input (time) domain for ``n`` real
points in the frequency domain.
In higher dimensions, FFTs are used, e.g., for image analysis and
filtering. The computational efficiency of the FFT means that it can
also be a faster way to compute large convolutions, using the property
that a convolution in the time domain is equivalent to a point-by-point
multiplication in the frequency domain.
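For example, a circular convolution of two short sequences can be
computed through the transforms (the result is real up to rounding,
hence the final np.real): ::
>>> a = np.array([1., 2., 3., 4.])
>>> b = np.array([1., 0., 0., 1.])
>>> np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
array([ 3.,  5.,  7.,  5.])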
Higher dimensions
-----------------
In two dimensions, the DFT is defined as
.. math::
A_{kl} = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1}
a_{mn}\\exp\\left\\{-2\\pi i \\left({mk\\over M}+{nl\\over N}\\right)\\right\\}
\\qquad k = 0, \\ldots, M-1;\\quad l = 0, \\ldots, N-1,
which extends in the obvious way to higher dimensions, and the inverses
in higher dimensions also extend in the same way.
References
----------
.. [CT] NAME, NAME and John W. NAME, 1965, "An algorithm for the
machine calculation of complex Fourier series," *Math. Comput.*
19: 297-301.
.. [NR] NAME NAME NAME and NAME
2007, *Numerical Recipes: The Art of Scientific Computing*, ch.
12-13. Cambridge Univ. Press, Cambridge, UK.
Examples
--------
For examples, see the various functions.
""" |
"""
=============
Miscellaneous
=============
IEEE 754 Floating Point Special Values:
-----------------------------------------------
Special values defined in numpy: nan, inf.
NaNs can be used as a poor-man's mask (if you don't care what the
original value was)
Note: cannot use equality to test NaNs. E.g.: ::
>>> myarr = np.array([1., 0., np.nan, 3.])
>>> np.nan == np.nan  # is always False! Use special numpy functions instead.
False
>>> np.where(myarr == np.nan)  # so this finds nothing
(array([], dtype=int64),)
>>> myarr[myarr == np.nan] = 0.  # doesn't work
>>> myarr
array([  1.,   0.,  NaN,   3.])
>>> myarr[np.isnan(myarr)] = 0.  # use this instead to find and replace NaNs
>>> myarr
array([ 1.,  0.,  0.,  3.])
Other related special value functions: ::
isinf(): True if value is inf
isfinite(): True if not nan or inf
nan_to_num(): Map nan to 0, inf to max float, -inf to min float
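For example (array formatting may vary between numpy versions): ::
>>> a = np.array([1., np.inf, -np.inf, np.nan])
>>> np.isinf(a)
array([False,  True,  True, False])
>>> np.isfinite(a)
array([ True, False, False, False])
>>> np.nan_to_num(a)  # maps nan -> 0.0, +/-inf -> largest/smallest float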
The following correspond to the usual functions except that nans are excluded
from the results: ::
nansum()
nanmax()
nanmin()
nanargmax()
nanargmin()
>>> x = np.arange(10.)
>>> x[3] = np.nan
>>> x.sum()
nan
>>> np.nansum(x)
42.0
How numpy handles numerical exceptions:
------------------------------------------
The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow``
and ``'ignore'`` for ``underflow``. But this can be changed, and it can be
set individually for different kinds of exceptions. The different behaviors
are:
- 'ignore' : Take no action when the exception occurs.
- 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module).
- 'raise' : Raise a `FloatingPointError`.
- 'call' : Call a function specified using the `seterrcall` function.
- 'print' : Print a warning directly to ``stdout``.
- 'log' : Record error in a Log object specified by `seterrcall`.
These behaviors can be set for all kinds of errors or specific ones:
- all : apply to all numeric exceptions
- invalid : when NaNs are generated
- divide : divide by zero (for integers as well!)
- overflow : floating point overflows
- underflow : floating point underflows
Note that integer divide-by-zero is handled by the same machinery.
These behaviors are set on a per-thread basis.
Examples:
------------
::
>>> oldsettings = np.seterr(all='warn')
>>> np.zeros(5,dtype=np.float32)/0.
invalid value encountered in divide
>>> j = np.seterr(under='ignore')
>>> np.array([1.e-100])**10
>>> j = np.seterr(invalid='raise')
>>> np.sqrt(np.array([-1.]))
FloatingPointError: invalid value encountered in sqrt
>>> def errorhandler(errstr, errflag):
...     print("saw stupid error!")
>>> j = np.seterrcall(errorhandler)
>>> j = np.seterr(all='call')
>>> np.zeros(5, dtype=np.int32)/0
saw stupid error!
>>> j = np.seterr(**oldsettings) # restore previous
... # error-handling settings
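The same settings can also be applied temporarily with the ``errstate``
context manager, which restores the previous behavior on exit: ::
>>> with np.errstate(divide='ignore'):
...     res = np.float64(1.) / 0.  # no warning inside the block
>>> res
inf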
Interfacing to C:
-----------------
Only a survey of the choices. Little detail on how each works.
1) Bare metal, wrap your own C-code manually.
- Plusses:
- Efficient
- No dependencies on other tools
- Minuses:
- Lots of learning overhead:
- need to learn basics of Python C API
- need to learn basics of numpy C API
- need to learn how to handle reference counting and love it.
- Reference counting often difficult to get right.
- getting it wrong leads to memory leaks, and worse, segfaults
- API will change for Python 3.0!
2) pyrex
- Plusses:
- avoid learning C APIs
- no dealing with reference counting
- can code in pseudo-Python and generate C code
- can also interface to existing C code
- should shield you from changes to the Python C API
- has become pretty popular within the Python community
- Minuses:
- Can write code in non-standard form which may become obsolete
- Not as flexible as manual wrapping
- Maintainers not easily adaptable to new features
Thus:
3) cython - fork of pyrex to allow needed features for SAGE
- being considered as the standard scipy/numpy wrapping tool
- fast indexing support for arrays
4) ctypes
- Plusses:
- part of Python standard library
- good for interfacing to existing sharable libraries, particularly
Windows DLLs
- avoids API/reference counting issues
- good numpy support: arrays have all these in their ctypes
attribute (a short usage sketch follows this survey): ::
a.ctypes.data a.ctypes.get_strides
a.ctypes.data_as a.ctypes.shape
a.ctypes.get_as_parameter a.ctypes.shape_as
a.ctypes.get_data a.ctypes.strides
a.ctypes.get_shape a.ctypes.strides_as
- Minuses:
- can't use for writing code to be turned into C extensions, only a wrapper
tool.
5) SWIG (automatic wrapper generator)
- Plusses:
- around a long time
- multiple scripting language support
- C++ support
- Good for wrapping large (many functions) existing C libraries
- Minuses:
- generates lots of code between Python and the C code
- can cause performance problems that are nearly impossible to optimize
out
- interface files can be hard to write
- doesn't necessarily avoid reference counting issues or needing to know
APIs
6) Weave
- Plusses:
- Phenomenal tool
- can turn many numpy expressions into C code
- dynamic compiling and loading of generated C code
- can embed pure C code in Python module and have weave extract, generate
interfaces and compile, etc.
- Minuses:
- Future uncertain--lacks a champion
7) Psyco
- Plusses:
- Turns pure python into efficient machine code through jit-like
optimizations
- very fast when it optimizes well
- Minuses:
- Only on Intel (Windows?)
- Doesn't do much for numpy?
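As a small illustration of option 4) above, a minimal ctypes sketch (the
pointer type must match the array dtype; no external C library is needed
just to view the buffer): ::
import ctypes
import numpy as np

a = np.arange(5, dtype=np.float64)
# View the array's data buffer as a C double pointer (no copy is made).
ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
print(ptr[0], ptr[4])  # -> 0.0 4.0
# ptr can now be passed to a C function loaded via ctypes.CDLL.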
Interfacing to Fortran:
-----------------------
Fortran: the clear choice is f2py. (Pyfort is an older alternative, but is
no longer supported.)
Interfacing to C++:
-------------------
1) CXX
2) Boost.python
3) SWIG
4) Sage has used cython to wrap C++ (not pretty, but it can be done)
5) SIP (used mainly in PyQT)
""" |
"""
TestCmd.py: a testing framework for commands and scripts.
The TestCmd module provides a framework for portable automated testing
of executable commands and scripts (in any language, not just Python),
especially commands and scripts that require file system interaction.
In addition to running tests and evaluating conditions, the TestCmd
module manages and cleans up one or more temporary workspace
directories, and provides methods for creating files and directories in
those workspace directories from in-line data (here-documents), allowing
tests to be completely self-contained.
A TestCmd environment object is created via the usual invocation:
import TestCmd
test = TestCmd.TestCmd()
There are a bunch of keyword arguments available at instantiation:
test = TestCmd.TestCmd(description = 'string',
program = 'program_or_script_to_test',
interpreter = 'script_interpreter',
workdir = 'prefix',
subdir = 'subdir',
verbose = Boolean,
match = default_match_function,
diff = default_diff_function,
combine = Boolean)
There are a bunch of methods that let you do different things:
test.verbose_set(1)
test.description_set('string')
test.program_set('program_or_script_to_test')
test.interpreter_set('script_interpreter')
test.interpreter_set(['script_interpreter', 'arg'])
test.workdir_set('prefix')
test.workdir_set('')
test.workpath('file')
test.workpath('subdir', 'file')
test.subdir('subdir', ...)
test.rmdir('subdir', ...)
test.write('file', "contents\n")
test.write(['subdir', 'file'], "contents\n")
test.read('file')
test.read(['subdir', 'file'])
test.read('file', mode)
test.read(['subdir', 'file'], mode)
test.writable('dir', 1)
test.writable('dir', None)
test.preserve(condition, ...)
test.cleanup(condition)
test.command_args(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program')
test.run(program = 'program_or_script_to_run',
         interpreter = 'script_interpreter',
         arguments = 'arguments to pass to program',
         chdir = 'directory_to_chdir_to',
         stdin = 'input to feed to the program\n',
         universal_newlines = True)
p = test.start(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program',
universal_newlines = None)
test.finish(self, p)
test.pass_test()
test.pass_test(condition)
test.pass_test(condition, function)
test.fail_test()
test.fail_test(condition)
test.fail_test(condition, function)
test.fail_test(condition, function, skip)
test.no_result()
test.no_result(condition)
test.no_result(condition, function)
test.no_result(condition, function, skip)
test.stdout()
test.stdout(run)
test.stderr()
test.stderr(run)
test.symlink(target, link)
test.banner(string)
test.banner(string, width)
test.diff(actual, expected)
test.match(actual, expected)
test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n")
test.match_exact(["actual 1\n", "actual 2\n"],
["expected 1\n", "expected 2\n"])
test.match_re("actual 1\nactual 2\n", regex_string)
test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes)
test.match_re_dotall("actual 1\nactual 2\n", regex_string)
test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes)
test.tempdir()
test.tempdir('temporary-directory')
test.sleep()
test.sleep(seconds)
test.where_is('foo')
test.where_is('foo', 'PATH1:PATH2')
test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
test.unlink('file')
test.unlink('subdir', 'file')
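Taken together, a minimal self-contained test might look like this (a
sketch; the script under test and the expected output are illustrative):
import TestCmd
test = TestCmd.TestCmd(workdir = '')            # temporary workspace
test.write('hello.py', "print('hello')\n")      # script under test
test.run(interpreter = 'python',
         program = test.workpath('hello.py'))
test.fail_test(test.stdout() != 'hello\n')      # fail on wrong output
test.pass_test()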
The TestCmd module provides pass_test(), fail_test(), and no_result()
unbound functions that report test results for use with the Aegis change
management system. These methods terminate the test immediately,
reporting PASSED, FAILED, or NO RESULT respectively, and exiting with
status 0 (success), 1 or 2 respectively. This allows for a distinction
between an actual failed test and a test that could not be properly
evaluated because of an external condition (such as a full file system
or incorrect permissions).
import TestCmd
TestCmd.pass_test()
TestCmd.pass_test(condition)
TestCmd.pass_test(condition, function)
TestCmd.fail_test()
TestCmd.fail_test(condition)
TestCmd.fail_test(condition, function)
TestCmd.fail_test(condition, function, skip)
TestCmd.no_result()
TestCmd.no_result(condition)
TestCmd.no_result(condition, function)
TestCmd.no_result(condition, function, skip)
The TestCmd module also provides unbound functions that handle matching
in the same way as the match_*() methods described above.
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_exact)
test = TestCmd.TestCmd(match = TestCmd.match_re)
test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)
The TestCmd module provides unbound functions that can be used for the
"diff" argument to TestCmd.TestCmd instantiation:
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_re,
diff = TestCmd.diff_re)
test = TestCmd.TestCmd(diff = TestCmd.simple_diff)
The "diff" argument can also be used with standard difflib functions:
import difflib
test = TestCmd.TestCmd(diff = difflib.context_diff)
test = TestCmd.TestCmd(diff = difflib.unified_diff)
Lastly, the where_is() method also exists in an unbound function
version.
import TestCmd
TestCmd.where_is('foo')
TestCmd.where_is('foo', 'PATH1:PATH2')
TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
""" |
#!/usr/bin/env python
# (c) 2013, NAME <EMAIL>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
#
# Author: NAME <EMAIL>
#
# Description:
# This module queries local or remote Docker daemons and generates
# inventory information.
#
# This plugin does not support targeting of specific hosts using the --host
# flag. Instead, it queries the Docker API for each container, running
# or not, and returns this data all at once.
#
# The plugin returns the following custom attributes on Docker containers:
# docker_args
# docker_config
# docker_created
# docker_driver
# docker_exec_driver
# docker_host_config
# docker_hostname_path
# docker_hosts_path
# docker_id
# docker_image
# docker_name
# docker_network_settings
# docker_path
# docker_resolv_conf_path
# docker_state
# docker_volumes
# docker_volumes_rw
#
# Requirements:
# The docker-py module: https://github.com/dotcloud/docker-py
#
# Notes:
# A config file can be used to configure this inventory module, and there
# are several environment variables that can be set to modify the behavior
# of the plugin at runtime:
# DOCKER_CONFIG_FILE
# DOCKER_HOST
# DOCKER_VERSION
# DOCKER_TIMEOUT
# DOCKER_PRIVATE_SSH_PORT
# DOCKER_DEFAULT_IP
#
# Environment Variables:
# environment variable: DOCKER_CONFIG_FILE
# description:
# - A path to a Docker inventory hosts/defaults file in YAML format
# - A sample file has been provided, colocated with the inventory
# file called 'docker.yml'
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_HOST
# description:
# - The socket on which to connect to a Docker daemon API
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_VERSION
# description:
# - Version of the Docker API to use
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_TIMEOUT
# description:
# - Timeout in seconds for connections to Docker daemon API
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_PRIVATE_SSH_PORT
# description:
# - The private port (container port) on which SSH is listening
# for connections
# default: 22
# required: false
# environment variable: DOCKER_DEFAULT_IP
# description:
# - This environment variable overrides the container SSH connection
# IP address (aka, 'ansible_ssh_host')
#
# This option allows one to override the ansible_ssh_host whenever
# Docker has exercised its default behavior of binding private ports
# to all interfaces of the Docker host. This behavior, when dealing
# with remote Docker hosts, does not allow Ansible to determine
# a proper host IP address on which to connect via SSH to containers.
# By default, this inventory module assumes all IP_ADDRESS-exposed
# ports to be bound to localhost:<port>. To override this
# behavior, for example, to bind a container's SSH port to the public
# interface of its host, one must manually set this IP.
#
# It is preferable to launch Docker containers with
# ports exposed on publicly accessible IP addresses, particularly
# if the containers are to be targeted by Ansible for remote
# configuration, since localhost SSH connections will not reach them.
#
# Docker containers can be explicitly exposed on IP addresses by
# a) starting the daemon with the --ip argument
# b) running containers with the -p/--publish ip::containerPort
# argument
# default: IP_ADDRESS if port exposed on IP_ADDRESS by Docker
# required: false
#
# Examples:
# Use the config file:
# DOCKER_CONFIG_FILE=./docker.yml docker.py --list
#
# Connect to docker instance on localhost port 4243
# DOCKER_HOST=tcp://localhost:4243 docker.py --list
#
# Any container's ssh port exposed on IP_ADDRESS will be mapped to
# another IP address (where Ansible will attempt to connect via SSH)
# DOCKER_DEFAULT_IP=IP_ADDRESS docker.py --list
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
# SKR03
# =====
# This module provides a German chart of accounts based on the SKR03.
# Under the current settings, the company is not subject to VAT
# (Umsatzsteuer). This default is very easy to change and, as a rule,
# requires an initial assignment of tax accounts to products and/or
# ledger accounts or to partners.
# The output taxes (full rate, reduced rate and tax-exempt) should be
# stored with the product master data (depending on the applicable tax
# rules). The assignment is made on the accounting tab
# (category: Umsatzsteuer).
# The input taxes (full rate, reduced rate and tax-exempt) should likewise
# be stored with the product master data (depending on the applicable
# tax rules). The assignment is made on the accounting tab
# (category: Vorsteuer).
# The taxes for imports from and exports to EU countries, as well as for
# purchases from and sales to third countries, should be stored with the
# partner (supplier/customer), depending on the supplier's/customer's
# country of origin. The assignment on the customer takes precedence
# over the assignment on products and overrides it in individual cases.
#
# To simplify tax reporting and posting for foreign transactions,
# OpenERP allows a general mapping of tax codes and tax accounts
# (e.g. mapping 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU')
# so that this mapping can be assigned to the foreign partner
# (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective
# categories for the input tax base amount (e.g. input tax base amount,
# full rate 19%).
# The tax amount appears under the category 'Vorsteuern' (e.g. Vorsteuer
# 19%). Multidimensional hierarchies allow different positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective
# categories for the output tax base amount (e.g. output tax base
# amount, full rate 19%).
# The tax amount appears under the category 'Umsatzsteuer'
# (e.g. Umsatzsteuer 19%). Multidimensional hierarchies allow different
# positions to be aggregated.
# The assigned tax codes can be reviewed on each individual invoice
# (incoming and outgoing) and adjusted there if necessary.
# Credit notes lead to a correction (offsetting entry) of the tax
# posting, in the form of a mirror-image posting.
# SKR04
# =====
# This module provides a German chart of accounts based on the SKR04.
# Under the current settings, the company is not subject to VAT, i.e. by
# default there is no assignment of products and ledger accounts to tax
# codes.
# This default is very easy to change and, as a rule, requires an
# initial assignment of tax codes to products and/or ledger accounts or
# to partners.
# The output taxes (full rate, reduced rate and tax-exempt) should be
# stored with the product master data (depending on the applicable tax
# rules). The assignment is made on the accounting tab
# (category: Umsatzsteuer).
# The input taxes (full rate, reduced rate and tax-exempt) should likewise
# be stored with the product master data (depending on the applicable
# tax rules). The assignment is made on the accounting tab
# (category: Vorsteuer).
# The taxes for imports from and exports to EU countries, as well as for
# purchases from and sales to third countries, should be stored with the
# partner (supplier/customer), depending on the supplier's/customer's
# country of origin. The assignment on the customer takes precedence
# over the assignment on products and overrides it in individual cases.
#
# To simplify tax reporting and posting for foreign transactions,
# OpenERP allows a general mapping of tax codes and tax accounts
# (e.g. mapping 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU')
# so that this mapping can be assigned to the foreign partner
# (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective
# categories for the input tax base amount (e.g. input tax base amount,
# full rate 19%).
# The tax amount appears under the category 'Vorsteuern' (e.g. Vorsteuer
# 19%). Multidimensional hierarchies allow different positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective
# categories for the output tax base amount (e.g. output tax base
# amount, full rate 19%).
# The tax amount appears under the category 'Umsatzsteuer'
# (e.g. Umsatzsteuer 19%). Multidimensional hierarchies allow different
# positions to be aggregated.
# The assigned tax codes can be reviewed on each individual invoice
# (incoming and outgoing) and adjusted there if necessary.
# Credit notes lead to a correction (offsetting entry) of the tax
# posting, in the form of a mirror-image posting.
|
#
# ElementTree
# $Id: ElementTree.py 3276 2007-09-12 06:52:30Z USERNAME $
#
# light-weight XML support for Python 2.2 and later.
#
# history:
# 2001-10-20 fl created (from various sources)
# 2001-11-01 fl return root from parse method
# 2002-02-16 fl sort attributes in lexical order
# 2002-04-06 fl TreeBuilder refactoring, added PythonDoc markup
# 2002-05-01 fl finished TreeBuilder refactoring
# 2002-07-14 fl added basic namespace support to ElementTree.write
# 2002-07-25 fl added QName attribute support
# 2002-10-20 fl fixed encoding in write
# 2002-11-24 fl changed default encoding to ascii; fixed attribute encoding
# 2002-11-27 fl accept file objects or file names for parse/write
# 2002-12-04 fl moved XMLTreeBuilder back to this module
# 2003-01-11 fl fixed entity encoding glitch for us-ascii
# 2003-02-13 fl added XML literal factory
# 2003-02-21 fl added ProcessingInstruction/PI factory
# 2003-05-11 fl added tostring/fromstring helpers
# 2003-05-26 fl added ElementPath support
# 2003-07-05 fl added makeelement factory method
# 2003-07-28 fl added more well-known namespace prefixes
# 2003-08-15 fl fixed typo in ElementTree.findtext (Thomas NAME)
# 2003-09-04 fl fall back on emulator if ElementPath is not installed
# 2003-10-31 fl markup updates
# 2003-11-15 fl fixed nested namespace bug
# 2004-03-28 fl added XMLID helper
# 2004-06-02 fl added default support to findtext
# 2004-06-08 fl fixed encoding of non-ascii element/attribute names
# 2004-08-23 fl take advantage of post-2.1 expat features
# 2004-09-03 fl made Element class visible; removed factory
# 2005-02-01 fl added iterparse implementation
# 2005-03-02 fl fixed iterparse support for pre-2.2 versions
# 2005-11-12 fl added tostringlist/fromstringlist helpers
# 2006-07-05 fl merged in selected changes from the 1.3 sandbox
# 2006-07-05 fl removed support for 2.1 and earlier
# 2007-06-21 fl added deprecation/future warnings
# 2007-08-25 fl added doctype hook, added parser version attribute etc
# 2007-08-26 fl added new serializer code (better namespace handling, etc)
# 2007-08-27 fl warn for broken /tag searches on tree level
# 2007-09-02 fl added html/text methods to serializer (experimental)
# 2007-09-05 fl added method argument to tostring/tostringlist
# 2007-09-06 fl improved error handling
#
# Copyright (c) 1999-2007 by NAME All rights reserved.
#
# EMAIL
# http://www.pythonware.com
#
# --------------------------------------------------------------------
# The ElementTree toolkit is
#
# Copyright (c) 1999-2007 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
|
"""Session implementation for CherryPy.
You need to edit your config file to use sessions. Here's an example::
[/]
tools.sessions.on = True
tools.sessions.storage_class = cherrypy.lib.sessions.FileSession
tools.sessions.storage_path = "/home/site/sessions"
tools.sessions.timeout = 60
This sets the session to be stored in files in the directory
/home/site/sessions, and the session timeout to 60 minutes. If you omit
``storage_class``, the sessions will be saved in RAM.
``tools.sessions.on`` is the only required line for working sessions,
the rest are optional.
By default, the session ID is passed in a cookie, so the client's browser must
have cookies enabled for your site.
To set data for the current session, use
``cherrypy.session['fieldname'] = 'fieldvalue'``;
to get data use ``cherrypy.session.get('fieldname')``.
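For example, in a page handler (a minimal sketch; the class and method
names are illustrative)::
    import cherrypy

    class Root(object):
        @cherrypy.expose
        def index(self):
            # count the visits recorded in this client's session
            count = cherrypy.session.get('count', 0) + 1
            cherrypy.session['count'] = count
            return 'You have viewed this page %d times' % count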
================
Locking sessions
================
By default, the ``'locking'`` mode of sessions is ``'implicit'``, which means
the session is locked early and unlocked late. Be mindful of this default mode
for any requests that take a long time to process (streaming responses,
expensive calculations, database lookups, API calls, etc), as other concurrent
requests that also utilize sessions will hang until the session is unlocked.
If you want to control when the session data is locked and unlocked,
set ``tools.sessions.locking = 'explicit'``. Then call
``cherrypy.session.acquire_lock()`` and ``cherrypy.session.release_lock()``.
Regardless of which mode you use, the session is guaranteed to be unlocked when
the request is complete.
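For example (a sketch, assuming ``tools.sessions.locking = 'explicit'`` is
configured and ``do_expensive_work()`` is a hypothetical long-running
call)::
    @cherrypy.expose
    def report(self):
        result = do_expensive_work()       # session not locked here
        cherrypy.session.acquire_lock()
        try:
            cherrypy.session['result'] = result
        finally:
            cherrypy.session.release_lock()
        return 'ok'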
=================
Expiring Sessions
=================
You can force a session to expire with :func:`cherrypy.lib.sessions.expire`.
Simply call that function at the point you want the session to expire, and it
will cause the session cookie to expire client-side.
===========================
Session Fixation Protection
===========================
If CherryPy receives, via a request cookie, a session id that it does not
recognize, it will reject that id and create a new one to return in the
response cookie. This `helps prevent session fixation attacks
<http://en.wikipedia.org/wiki/Session_fixation#Regenerate_SID_on_each_request>`_.
However, CherryPy "recognizes" a session id by looking up the saved session
data for that id. Therefore, if you never save any session data,
**you will get a new session id for every request**.
A side effect of CherryPy overwriting unrecognised session ids is that if you
have multiple, separate CherryPy applications running on a single domain (e.g.
on different ports), each app will overwrite the other's session id because by
default they use the same cookie name (``"session_id"``) but do not recognise
each other's sessions. It is therefore a good idea to use a different name for
each, for example::
[/]
...
tools.sessions.name = "my_app_session_id"
================
Sharing Sessions
================
If you run multiple instances of CherryPy (for example via mod_python behind
Apache prefork), you most likely cannot use the RAM session backend, since each
instance of CherryPy will have its own memory space. Use a different backend
instead, and verify that all instances are pointing at the same file or db
location. Alternately, you might try a load balancer which makes sessions
"sticky". Google is your friend, there.
================
Expiration Dates
================
The response cookie will possess an expiration date to inform the client at
which point to stop sending the cookie back in requests. If the server time
and client time differ, expect sessions to be unreliable. **Make sure the
system time of your server is accurate**.
CherryPy defaults to a 60-minute session timeout, which also applies to the
cookie which is sent to the client. Unfortunately, some versions of Safari
("4 public beta" on Windows XP at least) appear to have a bug in their parsing
of the GMT expiration date--they appear to interpret the date as one hour in
the past. Sixty minutes minus one hour is pretty close to zero, so you may
experience this bug as a new session id for every request, unless the requests
are less than one second apart. To fix, try increasing the session.timeout.
On the other extreme, some users report Firefox sending cookies after their
expiration date, although this was on a system with an inaccurate system time.
Maybe FF doesn't trust system time.
""" |
# -*- coding: utf-8 -*-
# -- Dual Licence ----------------------------------------------------------
############################################################################
# GPL License #
# #
# This file is a SCons (http://www.scons.org/) builder #
# Copyright (c) 2012-14, NAME <EMAIL> #
# This program is free software: you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as #
# published by the Free Software Foundation, either version 3 of the #
# License, or (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program. If not, see <http://www.gnu.org/licenses/>. #
############################################################################
# --------------------------------------------------------------------------
############################################################################
# BSD 3-Clause License #
# #
# This file is a SCons (http://www.scons.org/) builder #
# Copyright (c) 2012-14, NAME <EMAIL> #
# All rights reserved. #
# #
# Redistribution and use in source and binary forms, with or without #
# modification, are permitted provided that the following conditions are #
# met: #
# #
# 1. Redistributions of source code must retain the above copyright #
# notice, this list of conditions and the following disclaimer. #
# #
# 2. Redistributions in binary form must reproduce the above copyright #
# notice, this list of conditions and the following disclaimer in the #
# documentation and/or other materials provided with the distribution. #
# #
# 3. Neither the name of the copyright holder nor the names of its #
# contributors may be used to endorse or promote products derived from #
# this software without specific prior written permission. #
# #
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS #
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT #
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A #
# PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT #
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, #
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED #
# TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR #
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF #
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING #
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS #
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #
############################################################################
# The Unpack Builder can be used for unpacking archives (e.g. Zip, TGZ, BZ, ...).
# The emitter of the Builder reads the archive data and creates the returned
# file list; the builder extracts the archive. The environment stores a
# dictionary "UNPACK" for configuring different extractors (subdict "EXTRACTOR"):
# {
# PRIORITY => a value for setting the extractor order (lower numbers = extractor is used earlier)
# SUFFIX => defines a list of file suffixes which should be handled by this extractor
# EXTRACTSUFFIX => suffix of the extract command
# EXTRACTFLAGS => a string parameter for the RUN command for extracting the data
# EXTRACTCMD => full extract command of the builder
# RUN => the main program which will be started (if the parameter is empty, the extractor will be ignored)
# LISTCMD => the listing command for the emitter
# LISTFLAGS => the string options for the RUN command for showing a list of files
# LISTSUFFIX => suffix of the list command
# LISTEXTRACTOR => an optional Python function that is called on each output line of the
# LISTCMD to extract file & dir names; the function takes two parameters (first the line
# number, second the line content) and must return a string with the file / dir path (other
# value types will be ignored)
# }
# Other options in the UNPACK dictionary are:
# STOPONEMPTYFILE => bool variable for stopping if the file is empty (default True)
# VIWEXTRACTOUTPUT => shows the output messages of the extraction command (default False)
# EXTRACTDIR => path into which the data will be extracted (default #)
#
# A file is handled by the first extractor whose suffix matches; the extractor
# list can be extended for other file types. The order of the extractor dictionary
# determines the listing & extract commands, e.g. the file extension .tar.gz should
# come before .gz, because a .tar.gz archive is extracted in one shot.
#
# Under *nix systems these tools are supported: tar, bzip2, gzip, unzip.
# Under Windows only 7-Zip (http://www.7-zip.org/) is supported.
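#
# A hypothetical usage sketch (the tool registration name 'unpack', the
# builder name env.Unpack and the archive name are assumptions made for
# illustration):
#
# env = Environment(tools = ['default', 'unpack'])
# files = env.Unpack('archive.tar.gz')         # emitter lists archive contents
# env['UNPACK']['STOPONEMPTYFILE'] = False     # tweak an UNPACK option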
|
"""
Airy Functions
--------------
* airy -- Airy functions and their derivatives.
* airye -- Exponentially scaled Airy functions
* ai_zeros -- [+]Zeros of Airy functions Ai(x) and Ai'(x)
* bi_zeros -- [+]Zeros of Airy functions Bi(x) and Bi'(x)
Elliptic Functions and Integrals
--------------------------------
* ellipj -- Jacobian elliptic functions
* ellipk -- Complete elliptic integral of the first kind.
* ellipkinc -- Incomplete elliptic integral of the first kind.
* ellipe -- Complete elliptic integral of the second kind.
* ellipeinc -- Incomplete elliptic integral of the second kind.
Bessel Functions
----------------
* jn -- Bessel function of integer order and real argument.
* jv -- Bessel function of real-valued order and complex argument.
* jve -- Exponentially scaled Bessel function.
* yn -- Bessel function of second kind (integer order).
* yv -- Bessel function of the second kind (real-valued order).
* yve -- Exponentially scaled Bessel function of the second kind.
* kn -- Modified Bessel function of the second kind (integer order).
* kv -- Modified Bessel function of the second kind (real order).
* kve -- Exponentially scaled modified Bessel function of the second kind.
* iv -- Modified Bessel function.
* ive -- Exponentially scaled modified Bessel function.
* hankel1 -- Hankel function of the first kind.
* hankel1e -- Exponentially scaled Hankel function of the first kind.
* hankel2 -- Hankel function of the second kind.
* hankel2e -- Exponentially scaled Hankel function of the second kind.
* lmbda -- [+]Sequence of lambda functions with arbitrary order v.
Zeros of Bessel Functions
.........................
* jnjnp_zeros -- [+]Zeros of integer-order Bessel functions and derivatives sorted in order.
* jnyn_zeros -- [+]Zeros of integer-order Bessel functions and derivatives as separate arrays.
* jn_zeros -- [+]Zeros of Jn(x)
* jnp_zeros -- [+]Zeros of Jn'(x)
* yn_zeros -- [+]Zeros of Yn(x)
* ynp_zeros -- [+]Zeros of Yn'(x)
* y0_zeros -- [+]Complex zeros: Y0(z0)=0 and values of Y0'(z0)
* y1_zeros -- [+]Complex zeros: Y1(z1)=0 and values of Y1'(z1)
* y1p_zeros -- [+]Complex zeros of Y1'(z1')=0 and values of Y1(z1')
Faster versions of common Bessel Functions
..........................................
* j0 -- Bessel function of order 0.
* j1 -- Bessel function of order 1.
* y0 -- Bessel function of second kind of order 0.
* y1 -- Bessel function of second kind of order 1.
* i0 -- Modified Bessel function of order 0.
* i0e -- Exponentially scaled modified Bessel function of order 0.
* i1 -- Modified Bessel function of order 1.
* i1e -- Exponentially scaled modified Bessel function of order 1.
* k0 -- Modified Bessel function of the second kind of order 0.
* k0e -- Exponentially scaled modified Bessel function of the second kind of order 0.
* k1 -- Modified Bessel function of the second kind of order 1.
* k1e -- Exponentially scaled modified Bessel function of the second kind of order 1.
Integrals of Bessel Functions
.............................
* itj0y0 -- Basic integrals of j0 and y0 from 0 to x.
* it2j0y0 -- Integrals of (1-j0(t))/t from 0 to x and y0(t)/t from x to inf.
* iti0k0 -- Basic integrals of i0 and k0 from 0 to x.
* it2i0k0 -- Integrals of (i0(t)-1)/t from 0 to x and k0(t)/t from x to inf.
* besselpoly -- Integral of a Bessel function: Jv(2*a*x) * x**lambda from x=0 to 1.
Derivatives of Bessel Functions
...............................
* jvp -- Nth derivative of Jv(v,z)
* yvp -- Nth derivative of Yv(v,z)
* kvp -- Nth derivative of Kv(v,z)
* ivp -- Nth derivative of Iv(v,z)
* h1vp -- Nth derivative of H1v(v,z)
* h2vp -- Nth derivative of H2v(v,z)
Spherical Bessel Functions
..........................
* sph_jn -- [+]Sequence of spherical Bessel functions, jn(z)
* sph_yn -- [+]Sequence of spherical Bessel functions, yn(z)
* sph_jnyn -- [+]Sequence of spherical Bessel functions, jn(z) and yn(z)
* sph_in -- [+]Sequence of spherical Bessel functions, in(z)
* sph_kn -- [+]Sequence of spherical Bessel functions, kn(z)
* sph_inkn -- [+]Sequence of spherical Bessel functions, in(z) and kn(z)
Ricatti-Bessel Functions
........................
* riccati_jn -- [+]Sequence of Ricatti-Bessel functions of first kind.
* riccati_yn -- [+]Sequence of Ricatti-Bessel functions of second kind.
Struve Functions
----------------
* struve -- Struve function --- Hv(x)
* modstruve -- Modified struve function --- Lv(x)
* itstruve0 -- Integral of H0(t) from 0 to x
* it2struve0 -- Integral of H0(t)/t from x to Inf.
* itmodstruve0 -- Integral of L0(t) from 0 to x.
Raw Statistical Functions (Friendly versions in scipy.stats)
------------------------------------------------------------
* bdtr -- Sum of terms 0 through k of the binomial pdf.
* bdtrc -- Sum of terms k+1 through n of the binomial pdf.
* bdtri -- Inverse of bdtr
* btdtr -- Integral from 0 to x of beta pdf.
* btdtri -- Quantiles of beta distribution
* fdtr -- Integral from 0 to x of F pdf.
* fdtrc -- Integral from x to infinity under F pdf.
* fdtri -- Inverse of fdtrc
* gdtr -- Integral from 0 to x of gamma pdf.
* gdtrc -- Integral from x to infinity under gamma pdf.
* gdtria -- Inverse of gdtr with respect to a.
* gdtrib -- Inverse of gdtr with respect to b.
* gdtrix -- Inverse of gdtr with respect to x.
* nbdtr -- Sum of terms 0 through k of the negative binomial pdf.
* nbdtrc -- Sum of terms k+1 to infinity under negative binomial pdf.
* nbdtri -- Inverse of nbdtr
* pdtr -- Sum of terms 0 through k of the Poisson pdf.
* pdtrc -- Sum of terms k+1 to infinity of the Poisson pdf.
* pdtri -- Inverse of pdtr
* stdtr -- Integral from -infinity to t of the Student-t pdf.
* stdtridf -- Inverse of stdtr with respect to df.
* stdtrit -- Inverse of stdtr with respect to t.
* chdtr -- Integral from 0 to x of the Chi-square pdf.
* chdtrc -- Integral from x to infinity of Chi-square pdf.
* chdtri -- Inverse of chdtrc.
* ndtr -- Integral from -infinity to x of standard normal pdf
* ndtri -- Inverse of ndtr (quantiles)
* smirnov -- Kolmogorov-Smirnov complementary CDF for one-sided test statistic (Dn+ or Dn-)
* smirnovi -- Inverse of smirnov.
* kolmogorov -- The complementary CDF of the (scaled) two-sided test statistic (Kn*) valid for large n.
* kolmogi -- Inverse of kolmogorov
* tklmbda -- Tukey-Lambda CDF
Gamma and Related Functions
---------------------------
* gamma -- Gamma function.
* gammaln -- Log of the absolute value of the gamma function.
* gammainc -- Incomplete gamma integral.
* gammaincinv -- Inverse of gammainc.
* gammaincc -- Complemented incomplete gamma integral.
* gammainccinv -- Inverse of gammaincc.
* beta -- Beta function.
* betaln -- Log of the absolute value of the beta function.
* betainc -- Incomplete beta integral.
* betaincinv -- Inverse of betainc.
* psi (digamma) -- Logarithmic derivative of the gamma function.
* rgamma -- One divided by the gamma function.
* polygamma -- Nth derivative of psi function.
Error Function and Fresnel Integrals
------------------------------------
* erf -- Error function.
* erfc -- Complemented error function (1- erf(x))
* erfinv -- Inverse of error function
* erfcinv -- Inverse of erfc
* erf_zeros -- [+]Complex zeros of erf(z)
* fresnel -- Fresnel sine and cosine integrals.
* fresnel_zeros -- Complex zeros of both Fresnel integrals
* fresnelc_zeros -- [+]Complex zeros of fresnel cosine integrals
* fresnels_zeros -- [+]Complex zeros of fresnel sine integrals
* modfresnelp -- Modified Fresnel integrals F_+(x) and K_+(x)
* modfresnelm -- Modified Fresnel integrals F_-(x) and K_-(x)
Legendre Functions
------------------
* lpn -- [+]Legendre Functions (polynomials) of the first kind
* lqn -- [+]Legendre Functions of the second kind.
* lpmn -- [+]Associated Legendre Function of the first kind.
* lqmn -- [+]Associated Legendre Function of the second kind.
* lpmv -- Associated Legendre Function of arbitrary non-negative degree v.
* sph_harm -- Spherical Harmonics (complex-valued) Y^m_n(theta,phi)
Orthogonal polynomials --- 15 types
-----------------------------------
These functions all return a polynomial class which can then be
evaluated: vals = chebyt(n)(x)
This class also has an attribute 'weights', which returns
the roots, weights, and total weights for the appropriate
form of Gaussian quadrature. These are returned in an n x 3 array, with roots
in the first column, weights in the second column, and total weights in the
final column.
* legendre -- [+]Legendre polynomial P_n(x) (lpn -- for function).
* chebyt -- [+]Chebyshev polynomial T_n(x)
* chebyu -- [+]Chebyshev polynomial U_n(x)
* chebyc -- [+]Chebyshev polynomial C_n(x)
* chebys -- [+]Chebyshev polynomial S_n(x)
* jacobi -- [+]Jacobi polynomial P^(alpha,beta)_n(x)
* laguerre -- [+]Laguerre polynomial, L_n(x)
* genlaguerre -- [+]Generalized (Associated) Laguerre polynomial, L^alpha_n(x)
* hermite -- [+]Hermite polynomial H_n(x)
* hermitenorm -- [+]Normalized Hermite polynomial, He_n(x)
* gegenbauer -- [+]Gegenbauer (Ultraspherical) polynomials, C^(alpha)_n(x)
* sh_legendre -- [+]shifted Legendre polynomial, P*_n(x)
* sh_chebyt -- [+]shifted Chebyshev polynomial, T*_n(x)
* sh_chebyu -- [+]shifted Chebyshev polynomial, U*_n(x)
* sh_jacobi -- [+]shifted Jacobi polynomial, J*_n(x) = G^(p,q)_n(x)
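For example, using chebyt from the list above (a sketch; the 'weights'
layout is as described):
>>> from scipy.special import chebyt
>>> T3 = chebyt(3)                  # Chebyshev polynomial T_3(x)
>>> abs(T3(0.5) - (-1.0)) < 1e-12   # T_3(0.5) = 4*0.5**3 - 3*0.5 = -1
True
>>> T3.weights.shape                # n x 3: roots, weights, total weights
(3, 3)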
HyperGeometric Functions
------------------------
* hyp2f1 -- Gauss hypergeometric function (2F1)
* hyp1f1 -- Confluent hypergeometric function (1F1)
* hyperu -- Confluent hypergeometric function (U)
* hyp0f1 -- Confluent hypergeometric limit function (0F1)
* hyp2f0 -- Hypergeometric function (2F0)
* hyp1f2 -- Hypergeometric function (1F2)
* hyp3f0 -- Hypergeometric function (3F0)
Parabolic Cylinder Functions
----------------------------
* pbdv -- Parabolic cylinder function Dv(x) and derivative.
* pbvv -- Parabolic cylinder function Vv(x) and derivative.
* pbwa -- Parabolic cylinder function W(a,x) and derivative.
* pbdv_seq -- [+]Sequence of parabolic cylinder functions Dv(x)
* pbvv_seq -- [+]Sequence of parabolic cylinder functions Vv(x)
* pbdn_seq -- [+]Sequence of parabolic cylinder functions Dn(z), complex z
mathieu and Related Functions (and derivatives)
-----------------------------------------------
* mathieu_a -- Characteristic values for even solution (ce_m)
* mathieu_b -- Characteristic values for odd solution (se_m)
* mathieu_even_coef -- [+]sequence of expansion coefficients for even solution
* mathieu_odd_coef -- [+]sequence of expansion coefficients for odd solution
**All the following return both function and first derivative**
* mathieu_cem -- Even mathieu function
* mathieu_sem -- Odd mathieu function
* mathieu_modcem1 -- Even modified mathieu function of the first kind
* mathieu_modcem2 -- Even modified mathieu function of the second kind
* mathieu_modsem1 -- Odd modified mathieu function of the first kind
* mathieu_modsem2 -- Odd modified mathieu function of the second kind
Spheroidal Wave Functions
-------------------------
* pro_ang1 -- Prolate spheroidal angular function of the first kind
* pro_rad1 -- Prolate spheroidal radial function of the first kind
* pro_rad2 -- Prolate spheroidal radial function of the second kind
* obl_ang1 -- Oblate spheroidal angular function of the first kind
* obl_rad1 -- Oblate spheroidal radial function of the first kind
* obl_rad2 -- Oblate spheroidal radial function of the second kind
* pro_cv -- Compute characteristic value for prolate functions
* obl_cv -- Compute characteristic value for oblate functions
* pro_cv_seq -- Compute sequence of prolate characteristic values
* obl_cv_seq -- Compute sequence of oblate characteristic values
**The following functions require pre-computed characteristic values**
* pro_ang1_cv -- Prolate spheroidal angular function of the first kind
* pro_rad1_cv -- Prolate spheroidal radial function of the first kind
* pro_rad2_cv -- Prolate spheroidal radial function of the second kind
* obl_ang1_cv -- Oblate spheroidal angular function of the first kind
* obl_rad1_cv -- Oblate spheroidal radial function of the first kind
* obl_rad2_cv -- Oblate spheroidal radial function of the second kind
Kelvin Functions
----------------
* kelvin -- All Kelvin functions (order 0) and derivatives.
* kelvin_zeros -- [+]Zeros of All Kelvin functions (order 0) and derivatives
* ber -- Kelvin function ber x
* bei -- Kelvin function bei x
* berp -- Derivative of Kelvin function ber x
* beip -- Derivative of Kelvin function bei x
* ker -- Kelvin function ker x
* kei -- Kelvin function kei x
* kerp -- Derivative of Kelvin function ker x
* keip -- Derivative of Kelvin function kei x
* ber_zeros -- [+]Zeros of Kelvin function ber x
* bei_zeros -- [+]Zeros of Kelvin function bei x
* berp_zeros -- [+]Zeros of derivative of Kelvin function ber x
* beip_zeros -- [+]Zeros of derivative of Kelvin function bei x
* ker_zeros -- [+]Zeros of Kelvin function ker x
* kei_zeros -- [+]Zeros of Kelvin function kei x
* kerp_zeros -- [+]Zeros of derivative of Kelvin function ker x
* keip_zeros -- [+]Zeros of derivative of Kelvin function kei x
Other Special Functions
-----------------------
* expn -- Exponential integral.
* exp1 -- Exponential integral of order 1 (for complex argument)
* expi -- Another exponential integral -- Ei(x)
* wofz -- Faddeeva function.
* dawsn -- Dawson's integral.
* shichi -- Hyperbolic sine and cosine integrals.
* sici -- Integral of the sinc and "cosinc" functions.
* spence -- Dilogarithm integral.
* zeta -- Riemann zeta function of two arguments.
* zetac -- Standard Riemann zeta function minus 1.0.
Convenience Functions
---------------------
* cbrt -- Cube root.
* exp10 -- 10 raised to the x power.
* exp2 -- 2 raised to the x power.
* radian -- radian angle given degrees, minutes, and seconds.
* cosdg -- cosine of the angle given in degrees.
* sindg -- sine of the angle given in degrees.
* tandg -- tangent of the angle given in degrees.
* cotdg -- cotangent of the angle given in degrees.
* log1p -- log(1+x)
* expm1 -- exp(x)-1
* cosm1 -- cos(x)-1
* round -- round the argument to the nearest integer. If argument ends in 0.5 exactly, pick the nearest even integer.
-------
[+] in the description indicates a function which is not a universal
function and does not follow broadcasting and automatic
array-looping rules.
Error handling
--------------
Errors are handled by returning nans, or other appropriate values.
Some of the special function routines will print an error message
when an error occurs. By default this printing
is disabled. To enable such messages use errprint(1);
to disable them, use errprint(0).
Example:
>>> print(scipy.special.bdtr(-1, 10, 0.3))
>>> scipy.special.errprint(1)
>>> print(scipy.special.bdtr(-1, 10, 0.3))
""" |
# -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
# SKR04
# =====
# This module provides a German chart of accounts based on the SKR04.
# Under the current settings, the company is not subject to VAT, i.e. by
# default there is no assignment of products and ledger accounts to tax
# codes.
# This default is very easy to change and, as a rule, requires an
# initial assignment of tax codes to products and/or ledger accounts or
# to partners.
# The output taxes (full rate, reduced rate and tax-exempt) should be
# stored with the product master data (depending on the applicable tax
# rules). The assignment is made on the accounting tab
# (category: Umsatzsteuer).
# The input taxes (full rate, reduced rate and tax-exempt) should likewise
# be stored with the product master data (depending on the applicable
# tax rules). The assignment is made on the accounting tab
# (category: Vorsteuer).
# The taxes for imports from and exports to EU countries, as well as for
# purchases from and sales to third countries, should be stored with the
# partner (supplier/customer), depending on the supplier's/customer's
# country of origin. The assignment on the customer takes precedence
# over the assignment on products and overrides it in individual cases.
#
# To simplify tax reporting and posting for foreign transactions,
# OpenERP allows a general mapping of tax codes and tax accounts
# (e.g. mapping 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU')
# so that this mapping can be assigned to the foreign partner
# (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective
# categories for the input tax base amount (e.g. input tax base amount,
# full rate 19%).
# The tax amount appears under the category 'Vorsteuern' (e.g. Vorsteuer
# 19%). Multidimensional hierarchies allow different positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective
# categories for the output tax base amount (e.g. output tax base
# amount, full rate 19%).
# The tax amount appears under the category 'Umsatzsteuer'
# (e.g. Umsatzsteuer 19%). Multidimensional hierarchies allow different
# positions to be aggregated.
# The assigned tax codes can be reviewed on each individual invoice
# (incoming and outgoing) and adjusted there if necessary.
# Credit notes lead to a correction (offsetting entry) of the tax
# posting, in the form of a mirror-image posting.
|
"""Configuration file parser.
A configuration file consists of sections, lead by a "[section]" header,
and followed by "name: value" entries, with continuations and such in
the style of RFC 822.
Intrinsic defaults can be specified by passing them into the
ConfigParser constructor as a dictionary.
class:
ConfigParser -- responsible for parsing a list of
configuration files, and managing the parsed database.
methods:
__init__(defaults=None, dict_type=_default_dict, allow_no_value=False,
delimiters=('=', ':'), comment_prefixes=('#', ';'),
inline_comment_prefixes=None, strict=True,
empty_lines_in_values=True):
Create the parser. When `defaults' is given, it is initialized into the
dictionary of intrinsic defaults. The keys must be strings, the values
must be appropriate for %()s string interpolation.
When `dict_type' is given, it will be used to create the dictionary
objects for the list of sections, for the options within a section, and
for the default values.
When `delimiters' is given, it will be used as the set of substrings
that divide keys from values.
When `comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in empty lines. Comments can be
indented.
When `inline_comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in non-empty lines.
When `strict` is True, the parser won't allow for any section or option
duplicates while reading from a single source (file, string or
dictionary). Default is True.
When `empty_lines_in_values' is False (default: True), each empty line
marks the end of an option. Otherwise, internal empty lines of
a multiline option are kept as part of the value.
When `allow_no_value' is True (default: False), options without
values are accepted; the value presented for these is None.
sections()
Return all the configuration section names, sans DEFAULT.
has_section(section)
Return whether the given section exists.
has_option(section, option)
Return whether the given option exists in the given section.
options(section)
Return list of configuration options for the named section.
read(filenames, encoding=None)
Read and parse the list of named configuration files, given by
name. A single filename is also allowed. Non-existing files
are ignored. Return list of successfully read files.
read_file(f, filename=None)
Read and parse one configuration file, given as a file object.
The filename defaults to f.name; it is only used in error
messages (if f has no `name' attribute, the string `<???>' is used).
read_string(string)
Read configuration from a given string.
read_dict(dictionary)
Read configuration from a dictionary. Keys are section names,
values are dictionaries with keys and values that should be present
in the section. If the used dictionary type preserves order, sections
and their keys will be added in order. Values are automatically
converted to strings.
get(section, option, raw=False, vars=None, fallback=_UNSET)
Return a string value for the named option. All % interpolations are
expanded in the return values, based on the defaults passed into the
constructor and the DEFAULT section. Additional substitutions may be
provided using the `vars' argument, which must be a dictionary whose
contents override any pre-existing defaults. If `option' is a key in
`vars', the value from `vars' is used.
getint(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to an integer.
getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a float.
getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a boolean (currently case
insensitively defined as 0, false, no, off for False, and 1, true,
yes, on for True). Returns False or True.
items(section=_UNSET, raw=False, vars=None)
If section is given, return a list of tuples with (name, value) for
each option in the section. Otherwise, return a list of tuples with
(section_name, section_proxy) for each section, including DEFAULTSECT.
remove_section(section)
Remove the given section and all its options.
remove_option(section, option)
Remove the given option from the given section.
set(section, option, value)
Set the given option.
write(fp, space_around_delimiters=True)
Write the configuration state in .ini format. If
`space_around_delimiters' is True (the default), delimiters
between keys and values are surrounded by spaces.
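A short usage sketch tying these methods together (the section and
option names are illustrative):

    import configparser

    parser = configparser.ConfigParser()
    parser.read_string('[server]\nhost = localhost\nport = 8080\n')
    parser.get('server', 'host')       # -> 'localhost'
    parser.getint('server', 'port')    # -> 8080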
""" |
"""Generic socket server classes.
This module tries to capture the various aspects of defining a server:
For socket-based servers:
- address family:
- AF_INET{,6}: IP (Internet Protocol) sockets (default)
- AF_UNIX: Unix domain sockets
- others, e.g. AF_DECNET are conceivable (see <socket.h>)
- socket type:
- SOCK_STREAM (reliable stream, e.g. TCP)
- SOCK_DGRAM (datagrams, e.g. UDP)
For request-based servers (including socket-based):
- client address verification before further looking at the request
(This is actually a hook for any processing that needs to look
at the request before anything else, e.g. logging)
- how to handle multiple requests:
- synchronous (one request is handled at a time)
- forking (each request is handled by a new process)
- threading (each request is handled by a new thread)
The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server. This is bad class design, but it
saves some typing. (There's also the issue that a deep class hierarchy
slows down method lookups.)
There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:
+------------+
| BaseServer |
+------------+
|
v
+-----------+ +------------------+
| TCPServer |------->| UnixStreamServer |
+-----------+ +------------------+
|
v
+-----------+ +--------------------+
| UDPServer |------->| UnixDatagramServer |
+-----------+ +--------------------+
Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.
Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes. For
instance, a threading UDP server class is created as follows:
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.
To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method. You can then run
various versions of the service by combining one of the server classes
with your request handler class.
The request handler class must be different for datagram or stream
services. This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
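For example, a minimal TCP echo service might look like this (a sketch;
the host and port are illustrative):

    import socketserver

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # echo every line the client sends back to it
            for line in self.rfile:
                self.wfile.write(line)

    with socketserver.TCPServer(('localhost', 9999), EchoHandler) as server:
        server.serve_forever()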
Of course, you still have to use your head!
For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child). In this case,
you can use a threading server, but you will probably have to use
locks to avoid two nearly simultaneous requests from applying
conflicting changes to the server state.
On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested. Here a threading or forking
server is appropriate.
In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data. This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.
Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request). This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
and encryption schemes
- Standard framework for select-based multiplexing
XXX Open problems:
- What to do with out-of-band data?
BaseServer:
- split generic "request" functionality out into BaseServer class.
Copyright (C) 2000 NAME <EMAIL>
An example: read entries from a SQL database (requires overriding
get_request() to return a table entry from the database).
Each entry is processed by a RequestHandlerClass.
""" |
"""Stuff to parse AIFF-C and AIFF files.
Unless explicitly stated otherwise, the description below is true
both for AIFF-C files and AIFF files.
An AIFF-C file has the following structure.
+-----------------+
| FORM |
+-----------------+
| <size> |
+----+------------+
| | AIFC |
| +------------+
| | <chunks> |
| | . |
| | . |
| | . |
+----+------------+
An AIFF file has the string "AIFF" instead of "AIFC".
A chunk consists of an identifier (4 bytes) followed by a size (4 bytes,
big endian order), followed by the data. The size field does not include
the size of the 8 byte header.
The following chunk types are recognized.
FVER
<version number of AIFF-C defining document> (AIFF-C only).
MARK
<# of markers> (2 bytes)
list of markers:
<marker ID> (2 bytes, must be > 0)
<position> (4 bytes)
<marker name> ("pstring")
COMM
<# of channels> (2 bytes)
<# of sound frames> (4 bytes)
<size of the samples> (2 bytes)
<sampling frequency> (10 bytes, IEEE 80-bit extended
floating point)
in AIFF-C files only:
<compression type> (4 bytes)
<human-readable version of compression type> ("pstring")
SSND
<offset> (4 bytes, not used by this program)
<blocksize> (4 bytes, not used by this program)
<sound data>
A pstring consists of 1 byte length, a string of characters, and 0 or 1
byte pad to make the total length even.
Usage.
Reading AIFF files:
f = aifc.open(file, 'r')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods read(), seek(), and close().
In some types of audio files, if the setpos() method is not used,
the seek() method is not necessary.
This returns an instance of a class with the following public methods:
getnchannels() -- returns number of audio channels (1 for
mono, 2 for stereo)
getsampwidth() -- returns sample width in bytes
getframerate() -- returns sampling frequency
getnframes() -- returns number of audio frames
getcomptype() -- returns compression type ('NONE' for AIFF files)
getcompname() -- returns human-readable version of
compression type ('not compressed' for AIFF files)
getparams() -- returns a namedtuple consisting of all of the
above in the above order
getmarkers() -- get the list of marks in the audio file or None
if there are no marks
getmark(id) -- get mark with the specified id (raises an error
if the mark does not exist)
readframes(n) -- returns at most n frames of audio
rewind() -- rewind to the beginning of the audio stream
setpos(pos) -- seek to the specified position
tell() -- return the current position
close() -- close the instance (make it unusable)
The position returned by tell(), the position given to setpos() and
the position of marks are all compatible and have nothing to do with
the actual position in the file.
The close() method is called automatically when the class instance
is destroyed.
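A short reading sketch (the file name is illustrative):

    import aifc
    f = aifc.open('sound.aiff', 'r')
    nframes = f.getnframes()
    data = f.readframes(nframes)      # raw sample bytes
    f.close()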
Writing AIFF files:
f = aifc.open(file, 'w')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods write(), tell(), seek(), and
close().
This returns an instance of a class with the following public methods:
aiff() -- create an AIFF file (AIFF-C default)
aifc() -- create an AIFF-C file
setnchannels(n) -- set the number of channels
setsampwidth(n) -- set the sample width
setframerate(n) -- set the frame rate
setnframes(n) -- set the number of frames
setcomptype(type, name)
-- set the compression type and the
human-readable compression type
setparams(tuple)
-- set all parameters at once
setmark(id, pos, name)
-- add specified mark to the list of marks
tell() -- return current position in output file (useful
in combination with setmark())
writeframesraw(data)
-- write audio frames without patching up the
file header
writeframes(data)
-- write audio frames and patch up the file header
close() -- patch up the file header and close the
output file
You should set the parameters before the first writeframesraw or
writeframes. The total number of frames does not need to be set,
but when it is set to the correct value, the header does not have to
be patched up.
It is best to first set all parameters, in particular the compression
type, and then write audio frames using writeframesraw.
When all frames have been written, either call writeframes('') or
close() to patch up the sizes in the header.
Marks can be added anytime. If there are any marks, you must call
close() after all frames have been written.
The close() method is called automatically when the class instance
is destroyed.
When a file is opened with the extension '.aiff', an AIFF file is
written, otherwise an AIFF-C file is written. This default can be
changed by calling aiff() or aifc() before the first writeframes or
writeframesraw.
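A short writing sketch (all parameter values are illustrative):

    import aifc
    f = aifc.open('out.aiff', 'w')    # '.aiff' extension -> AIFF file
    f.setnchannels(1)
    f.setsampwidth(2)                 # 16-bit samples
    f.setframerate(44100)
    f.writeframes(b'\x00\x00' * 44100)   # one second of silence
    f.close()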
""" |
"""
TestCmd.py: a testing framework for commands and scripts.
The TestCmd module provides a framework for portable automated testing
of executable commands and scripts (in any language, not just Python),
especially commands and scripts that require file system interaction.
In addition to running tests and evaluating conditions, the TestCmd
module manages and cleans up one or more temporary workspace
directories, and provides methods for creating files and directories in
those workspace directories from in-line data (here-documents), allowing
tests to be completely self-contained.
A TestCmd environment object is created via the usual invocation:
import TestCmd
test = TestCmd.TestCmd()
There are a bunch of keyword arguments available at instantiation:
test = TestCmd.TestCmd(description = 'string',
program = 'program_or_script_to_test',
interpreter = 'script_interpreter',
workdir = 'prefix',
subdir = 'subdir',
verbose = Boolean,
match = default_match_function,
diff = default_diff_function,
combine = Boolean)
There are a bunch of methods that let you do different things:
test.verbose_set(1)
test.description_set('string')
test.program_set('program_or_script_to_test')
test.interpreter_set('script_interpreter')
test.interpreter_set(['script_interpreter', 'arg'])
test.workdir_set('prefix')
test.workdir_set('')
test.workpath('file')
test.workpath('subdir', 'file')
test.subdir('subdir', ...)
test.rmdir('subdir', ...)
test.write('file', "contents\n")
test.write(['subdir', 'file'], "contents\n")
test.read('file')
test.read(['subdir', 'file'])
test.read('file', mode)
test.read(['subdir', 'file'], mode)
test.writable('dir', 1)
test.writable('dir', None)
test.preserve(condition, ...)
test.cleanup(condition)
test.command_args(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program')
test.run(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program',
chdir = 'directory_to_chdir_to',
stdin = 'input to feed to the program\n',
universal_newlines = True)
p = test.start(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program',
universal_newlines = None)
test.finish(self, p)
test.pass_test()
test.pass_test(condition)
test.pass_test(condition, function)
test.fail_test()
test.fail_test(condition)
test.fail_test(condition, function)
test.fail_test(condition, function, skip)
test.no_result()
test.no_result(condition)
test.no_result(condition, function)
test.no_result(condition, function, skip)
test.stdout()
test.stdout(run)
test.stderr()
test.stderr(run)
test.symlink(target, link)
test.banner(string)
test.banner(string, width)
test.diff(actual, expected)
test.match(actual, expected)
test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n")
test.match_exact(["actual 1\n", "actual 2\n"],
["expected 1\n", "expected 2\n"])
test.match_re("actual 1\nactual 2\n", regex_string)
test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes)
test.match_re_dotall("actual 1\nactual 2\n", regex_string)
test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes)
test.tempdir()
test.tempdir('temporary-directory')
test.sleep()
test.sleep(seconds)
test.where_is('foo')
test.where_is('foo', 'PATH1:PATH2')
test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
test.unlink('file')
test.unlink('subdir', 'file')
The TestCmd module provides pass_test(), fail_test(), and no_result()
unbound functions that report test results for use with the Aegis change
management system. These methods terminate the test immediately,
reporting PASSED, FAILED, or NO RESULT respectively, and exiting with
status 0 (success), 1 or 2 respectively. This allows for a distinction
between an actual failed test and a test that could not be properly
evaluated because of an external condition (such as a full file system
or incorrect permissions).
import TestCmd
TestCmd.pass_test()
TestCmd.pass_test(condition)
TestCmd.pass_test(condition, function)
TestCmd.fail_test()
TestCmd.fail_test(condition)
TestCmd.fail_test(condition, function)
TestCmd.fail_test(condition, function, skip)
TestCmd.no_result()
TestCmd.no_result(condition)
TestCmd.no_result(condition, function)
TestCmd.no_result(condition, function, skip)
The TestCmd module also provides unbound functions that handle matching
in the same way as the match_*() methods described above.
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_exact)
test = TestCmd.TestCmd(match = TestCmd.match_re)
test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)
The TestCmd module provides unbound functions that can be used for the
"diff" argument to TestCmd.TestCmd instantiation:
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_re,
diff = TestCmd.diff_re)
test = TestCmd.TestCmd(diff = TestCmd.simple_diff)
The "diff" argument can also be used with standard difflib functions:
import difflib
test = TestCmd.TestCmd(diff = difflib.context_diff)
test = TestCmd.TestCmd(diff = difflib.unified_diff)
Lastly, the where_is() method also exists in an unbound function
version.
import TestCmd
TestCmd.where_is('foo')
TestCmd.where_is('foo', 'PATH1:PATH2')
TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
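Putting a few of these pieces together, a minimal self-contained test
might look like this (the program under test is illustrative):

    import TestCmd
    test = TestCmd.TestCmd(program = 'cat', workdir = '')
    test.write('input.txt', "hello\n")
    test.run(arguments = 'input.txt')
    test.fail_test(test.stdout() != "hello\n")
    test.pass_test()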
""" |
"""
Transformation of YAML files to another format version.
Wildcards:
In the actions defined below (unless stated otherwise) it is possible to
use simple wildcards in keys named path and source_path:
- * matches one level of the path
- ** matches one or more levels of the path
In keys named destination_path, addition_path and set_type_path the
replacements $1, $2, ... may be used. These refer to the parts of the
path matched by the wildcards at positions 1, 2, ....
Wildcards have three restrictions:
- ** can't be the last part of a path (too many resulting cases)
- the sequence **/* is forbidden (too many resulting cases)
- the sequence **/** is forbidden (the replacements can't be determined)
Filters:
- type-filter is an optional parameter for a key in position path or
source_path. If the parameter is set, an operation is processed only for
keys of the set type.
- parent-type-filter is an optional filter parameter similar to
type-filter, but applied to the parent of the set key.
- path-type-filter and path-type-filter-path are optional filter
parameters similar to type-filter, but applied to the data at the set
path (path-type-filter-path) and the set type.
- key-exists-filter is an optional parameter for a key in position path
or source_path. If the parameter is set, an operation is processed only
for keys that have the set child.
- key-not-exists-filter is an optional parameter for a key in position
path or source_path. If the parameter is set, an operation is processed
only for keys without the set child.
Actions:
- move-key - Move the value of the set key from source_path to
destination_path. If the create_path parameter is not set to True, a
non-existing directory in destination_path causes an exception. If
create_path is set to True, the absent path is created. A path component
that is a number is created as an array with index one more than the
last existing index; other components are created as records. If both
paths are the same and only the key differs, the key is renamed. A
destination_path placed inside source_path causes an exception. If
source_path does not exist, the action is skipped. References and
anchors are moved; if their relative position changes, the result must
be fixed by the user. This action also accepts the parameters
set_type_path and new_type for replacing the tag with new_type at the
path set_type_path. If the optional parameter keep-source is set to
true, the old key is kept.
- delete-key - Delete the key at the set path. If the key contains an
anchor or reference, the transformer tries to resolve the reference. If
the path does not exist, or the key contains children, the action is
skipped. If the parameter deep is set to true, the key is deleted
together with its children.
- rename-type - Change the tag at the set path from old_name to new_name.
- change-value - Change the value at the set path from old_value to
new_value. If the node at the specified path is not a scalar node, an
exception is raised.
- scale-value - Multiply the value at the set path by scale. If the node
at the specified path is not a scalar node with a numeric value, an
exception is raised.
- merge-arrays - Append the array at addition_path to the array at the
source path. If destination_path is defined, the node is moved from the
source to this path.
- move-key-forward - Move the node at the specified path before all its
sibling nodes. (This action is used by the import script to move a node
with an anchor before its reference.)
- add-key - Add the set key to the path. The key's parent should be an
existing abstract record. If the optional parameters value or type are
set, the corresponding key parameter is added.
- replace-value - Replace the value at the set path. The parameters
pattern and replacement have the same meaning as in the Python regular
expression function re.sub, to which they are passed on. The JSON style
of the transformation script requires duplicating all backslashes in the
pattern and replacement parameters.
Description:
The transformer first checks the JSON transformation file. If this file
is in a bad format, an exception is raised. After checking, the actions
are processed in the set order. Processing the same transformation
repeatedly can raise an exception or corrupt the data.
Example::
{
"name": "Example file",
"description": "Template for creating transformation file",
"old_format": "",
"new_format": "",
"actions": [
{
"action": "move-key",
"parameters":{
"source_path":"/problem/primary_equation/input_fields",
"destination_path":"/problem/mesh/balance"
}
},
{
"action": "delete-key",
"parameters": {
"path": "/problem/mesh/mesh_file"
}
},
{
"action": "rename-type",
"parameters": {
"path": "/problem/primary_equation",
"old_name": "Steady_MH",
"new_name": "SteadyMH"
}
}
]
}
Wildchars Example::
{
"action": "move-key",
"parameters": {
"source_path":"/**/input_fields/*/r_set",
"destination_path":"/$1/input_fields/$2/region"
}
}
""" |
"""
=============
Miscellaneous
=============
IEEE 754 Floating Point Special Values
--------------------------------------
Special values defined in numpy: nan, inf.
NaNs can be used as a poor man's mask (if you don't care what the
original value was).
Note: cannot use equality to test NaNs. E.g.: ::
>>> myarr = np.array([1., 0., np.nan, 3.])
>>> np.where(myarr == np.nan)
>>> np.nan == np.nan # is always False! Use special numpy functions instead.
False
>>> myarr[myarr == np.nan] = 0. # doesn't work
>>> myarr
array([ 1., 0., NaN, 3.])
>>> myarr[np.isnan(myarr)] = 0. # use this instead
>>> myarr
array([ 1., 0., 0., 3.])
Other related special value functions: ::
isinf(): True if value is inf
isfinite(): True if not nan or inf
nan_to_num(): Map nan to 0, inf to max float, -inf to min float
The following corresponds to the usual functions except that nans are excluded
from the results: ::
nansum()
nanmax()
nanmin()
nanargmax()
nanargmin()
>>> x = np.arange(10.)
>>> x[3] = np.nan
>>> x.sum()
nan
>>> np.nansum(x)
42.0
How numpy handles numerical exceptions
--------------------------------------
The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow``
and ``'ignore'`` for ``underflow``. But this can be changed, and it can be
set individually for different kinds of exceptions. The different behaviors
are:
- 'ignore' : Take no action when the exception occurs.
- 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module).
- 'raise' : Raise a `FloatingPointError`.
- 'call' : Call a function specified using the `seterrcall` function.
- 'print' : Print a warning directly to ``stdout``.
- 'log' : Record error in a Log object specified by `seterrcall`.
These behaviors can be set for all kinds of errors or specific ones:
- all : apply to all numeric exceptions
- invalid : when NaNs are generated
- divide : divide by zero (for integers as well!)
- overflow : floating point overflows
- underflow : floating point underflows
Note that integer divide-by-zero is handled by the same machinery.
These behaviors are set on a per-thread basis.
Examples
--------
::
>>> oldsettings = np.seterr(all='warn')
>>> np.zeros(5,dtype=np.float32)/0.
invalid value encountered in divide
>>> j = np.seterr(under='ignore')
>>> np.array([1.e-100])**10
>>> j = np.seterr(invalid='raise')
>>> np.sqrt(np.array([-1.]))
FloatingPointError: invalid value encountered in sqrt
>>> def errorhandler(errstr, errflag):
... print("saw stupid error!")
>>> np.seterrcall(errorhandler)
<function err_handler at 0x...>
>>> j = np.seterr(all='call')
>>> np.zeros(5, dtype=np.int32)/0
FloatingPointError: invalid value encountered in divide
saw stupid error!
>>> j = np.seterr(**oldsettings) # restore previous
... # error-handling settings
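As a side note, the ``np.errstate`` context manager can scope such
settings to a block of code instead of changing them globally (a small
sketch): ::

    >>> with np.errstate(divide='ignore'):
    ...     np.array([1.]) / 0.
    array([inf])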
Interfacing to C
----------------
Only a survey of the choices. Little detail on how each works.
1) Bare metal, wrap your own C-code manually.
- Plusses:
- Efficient
- No dependencies on other tools
- Minuses:
- Lots of learning overhead:
- need to learn basics of Python C API
- need to learn basics of numpy C API
- need to learn how to handle reference counting and love it.
- Reference counting often difficult to get right.
- getting it wrong leads to memory leaks, and worse, segfaults
- API will change for Python 3.0!
2) Cython
- Plusses:
- avoid learning C API's
- no dealing with reference counting
- can code in pseudo python and generate C code
- can also interface to existing C code
- should shield you from changes to Python C api
- has become the de-facto standard within the scientific Python community
- fast indexing support for arrays
- Minuses:
- Can write code in non-standard form which may become obsolete
- Not as flexible as manual wrapping
3) ctypes
- Plusses:
- part of Python standard library
- good for interfacing to existing sharable libraries, particularly
Windows DLLs
- avoids API/reference counting issues
- good numpy support: arrays have all these in their ctypes
attribute: ::
a.ctypes.data a.ctypes.get_strides
a.ctypes.data_as a.ctypes.shape
a.ctypes.get_as_parameter a.ctypes.shape_as
a.ctypes.get_data a.ctypes.strides
a.ctypes.get_shape a.ctypes.strides_as
- Minuses:
- can't use for writing code to be turned into C extensions, only a wrapper
tool.
4) SWIG (automatic wrapper generator)
- Plusses:
- around a long time
- multiple scripting language support
- C++ support
- Good for wrapping large (many functions) existing C libraries
- Minuses:
- generates lots of code between Python and the C code
- can cause performance problems that are nearly impossible to optimize
out
- interface files can be hard to write
- doesn't necessarily avoid reference counting issues or needing to know
API's
5) scipy.weave
- Plusses:
- can turn many numpy expressions into C code
- dynamic compiling and loading of generated C code
- can embed pure C code in Python module and have weave extract, generate
interfaces and compile, etc.
- Minuses:
- Future very uncertain: it's the only part of Scipy not ported to Python 3
and is effectively deprecated in favor of Cython.
6) Psyco
- Plusses:
- Turns pure python into efficient machine code through jit-like
optimizations
- very fast when it optimizes well
- Minuses:
- Only on intel (windows?)
- Doesn't do much for numpy?
Interfacing to Fortran:
-----------------------
The clear choice to wrap Fortran code is
`f2py <http://docs.scipy.org/doc/numpy-dev/f2py/>`_.
Pyfort is an older alternative, but not supported any longer.
Fwrap is a newer project that looked promising but isn't being developed any
longer.
Interfacing to C++:
-------------------
1) Cython
2) CXX
3) Boost.python
4) SWIG
5) SIP (used mainly in PyQT)
""" |
# Check the basic discovery process, including a sub-suite.
#
# RUN: %{lit} %{inputs}/discovery \
# RUN: -j 1 --debug --show-tests --show-suites \
# RUN: -v > %t.out 2> %t.err
# RUN: FileCheck --check-prefix=CHECK-BASIC-OUT < %t.out %s
# RUN: FileCheck --check-prefix=CHECK-BASIC-ERR < %t.err %s
#
# CHECK-BASIC-ERR: loading suite config '{{.*}}/discovery/lit.cfg'
# CHECK-BASIC-ERR: loading local config '{{.*}}/discovery/subdir/lit.local.cfg'
# CHECK-BASIC-ERR: loading suite config '{{.*}}/discovery/subsuite/lit.cfg'
#
# CHECK-BASIC-OUT: -- Test Suites --
# CHECK-BASIC-OUT: sub-suite - 2 tests
# CHECK-BASIC-OUT: Source Root: {{.*/discovery/subsuite$}}
# CHECK-BASIC-OUT: Exec Root : {{.*/discovery/subsuite$}}
# CHECK-BASIC-OUT: top-level-suite - 3 tests
# CHECK-BASIC-OUT: Source Root: {{.*/discovery$}}
# CHECK-BASIC-OUT: Exec Root : {{.*/discovery$}}
#
# CHECK-BASIC-OUT: -- Available Tests --
# CHECK-BASIC-OUT: sub-suite :: test-one
# CHECK-BASIC-OUT: sub-suite :: test-two
# CHECK-BASIC-OUT: top-level-suite :: subdir/test-three
# CHECK-BASIC-OUT: top-level-suite :: test-one
# CHECK-BASIC-OUT: top-level-suite :: test-two
# Check discovery when exact test names are given.
#
# RUN: %{lit} \
# RUN: %{inputs}/discovery/subdir/test-three.py \
# RUN: %{inputs}/discovery/subsuite/test-one.txt \
# RUN: -j 1 --show-tests --show-suites -v > %t.out
# RUN: FileCheck --check-prefix=CHECK-EXACT-TEST < %t.out %s
#
# CHECK-EXACT-TEST: -- Available Tests --
# CHECK-EXACT-TEST: sub-suite :: test-one
# CHECK-EXACT-TEST: top-level-suite :: subdir/test-three
# Check discovery when using an exec path.
#
# RUN: %{lit} %{inputs}/exec-discovery \
# RUN: -j 1 --debug --show-tests --show-suites \
# RUN: -v > %t.out 2> %t.err
# RUN: FileCheck --check-prefix=CHECK-ASEXEC-OUT < %t.out %s
# RUN: FileCheck --check-prefix=CHECK-ASEXEC-ERR < %t.err %s
#
# CHECK-ASEXEC-ERR: loading suite config '{{.*}}/exec-discovery/lit.site.cfg'
# CHECK-ASEXEC-ERR: load_config from '{{.*}}/discovery/lit.cfg'
# CHECK-ASEXEC-ERR: loaded config '{{.*}}/discovery/lit.cfg'
# CHECK-ASEXEC-ERR: loaded config '{{.*}}/exec-discovery/lit.site.cfg'
# CHECK-ASEXEC-ERR: loading local config '{{.*}}/discovery/subdir/lit.local.cfg'
# CHECK-ASEXEC-ERR: loading suite config '{{.*}}/discovery/subsuite/lit.cfg'
#
# CHECK-ASEXEC-OUT: -- Test Suites --
# CHECK-ASEXEC-OUT: sub-suite - 2 tests
# CHECK-ASEXEC-OUT: Source Root: {{.*/discovery/subsuite$}}
# CHECK-ASEXEC-OUT: Exec Root : {{.*/discovery/subsuite$}}
# CHECK-ASEXEC-OUT: top-level-suite - 3 tests
# CHECK-ASEXEC-OUT: Source Root: {{.*/discovery$}}
# CHECK-ASEXEC-OUT: Exec Root : {{.*/exec-discovery$}}
#
# CHECK-ASEXEC-OUT: -- Available Tests --
# CHECK-ASEXEC-OUT: sub-suite :: test-one
# CHECK-ASEXEC-OUT: sub-suite :: test-two
# CHECK-ASEXEC-OUT: top-level-suite :: subdir/test-three
# CHECK-ASEXEC-OUT: top-level-suite :: test-one
# CHECK-ASEXEC-OUT: top-level-suite :: test-two
# Check discovery when exact test names are given.
#
# FIXME: Note that using a path into a subsuite doesn't work correctly here.
#
# RUN: %{lit} \
# RUN: %{inputs}/exec-discovery/subdir/test-three.py \
# RUN: -j 1 --show-tests --show-suites -v > %t.out
# RUN: FileCheck --check-prefix=CHECK-ASEXEC-EXACT-TEST < %t.out %s
#
# CHECK-ASEXEC-EXACT-TEST: -- Available Tests --
# CHECK-ASEXEC-EXACT-TEST: top-level-suite :: subdir/test-three
# Check that we don't recurse infinitely when loading a site-specific test
# suite located inside the test source root.
#
# RUN: %{lit} \
# RUN: %{inputs}/exec-discovery-in-tree/obj/ \
# RUN: -j 1 --show-tests --show-suites -v > %t.out
# RUN: FileCheck --check-prefix=CHECK-ASEXEC-INTREE < %t.out %s
#
# CHECK-ASEXEC-INTREE: exec-discovery-in-tree-suite - 1 tests
# CHECK-ASEXEC-INTREE-NEXT: Source Root: {{.*/exec-discovery-in-tree$}}
# CHECK-ASEXEC-INTREE-NEXT: Exec Root : {{.*/exec-discovery-in-tree/obj$}}
# CHECK-ASEXEC-INTREE-NEXT: -- Available Tests --
# CHECK-ASEXEC-INTREE-NEXT: exec-discovery-in-tree-suite :: test-one
|
"""
==============
Array indexing
==============
Array indexing refers to any use of the square brackets ([]) to index
array values. There are many options to indexing, which give numpy
indexing great power, but with power comes some complexity and the
potential for confusion. This section is just an overview of the
various options and issues related to indexing. Aside from single
element indexing, the details on most of these options are to be
found in related sections.
Assignment vs referencing
=========================
Most of the following examples show the use of indexing when
referencing data in an array. The examples work just as well
when assigning to an array. See the section at the end for
specific examples and explanations on how assignments work.
Single element indexing
=======================
Single element indexing for a 1-D array is what one expects. It works
exactly like that for other standard Python sequences. It is 0-based,
and accepts negative indices for indexing from the end of the array. ::
>>> x = np.arange(10)
>>> x[2]
2
>>> x[-2]
8
Unlike lists and tuples, numpy arrays support multidimensional indexing
for multidimensional arrays. That means that it is not necessary to
separate each dimension's index into its own set of square brackets. ::
>>> x.shape = (2,5) # now x is 2-dimensional
>>> x[1,3]
8
>>> x[1,-1]
9
Note that if one indexes a multidimensional array with fewer indices
than dimensions, one gets a subdimensional array. For example: ::
>>> x[0]
array([0, 1, 2, 3, 4])
That is, each index specified selects the array corresponding to the
rest of the dimensions selected. In the above example, choosing 0
means that the remaining dimension of length 5 is being left unspecified,
and that what is returned is an array of that dimensionality and size.
It must be noted that the returned array is not a copy of the original,
but points to the same values in memory as does the original array.
In this case, the 1-D array at the first position (0) is returned.
So using a single index on the returned array results in a single
element being returned. That is: ::
>>> x[0][2]
2
Note that ``x[0,2] == x[0][2]``, though the second case is less
efficient: a new temporary array is created after the first index
that is subsequently indexed by 2.
Note to those used to IDL or Fortran memory order as it relates to
indexing. Numpy uses C-order indexing. That means that the last
index usually represents the most rapidly changing memory location,
unlike Fortran or IDL, where the first index represents the most
rapidly changing location in memory. This difference represents a
great potential for confusion.
Other indexing options
======================
It is possible to slice and stride arrays to extract arrays of the
same number of dimensions, but of different sizes than the original.
The slicing and striding works exactly the same way it does for lists
and tuples except that they can be applied to multiple dimensions as
well. A few examples illustrate this best: ::
>>> x = np.arange(10)
>>> x[2:5]
array([2, 3, 4])
>>> x[:-7]
array([0, 1, 2])
>>> x[1:7:2]
array([1, 3, 5])
>>> y = np.arange(35).reshape(5,7)
>>> y[1:5:2,::3]
array([[ 7, 10, 13],
[21, 24, 27]])
Note that slices of arrays do not copy the internal array data but
only produce new views of the original data.
It is possible to index arrays with other arrays for the purposes of
selecting lists of values out of arrays into new arrays. There are
two different ways of accomplishing this. One uses one or more arrays
of index values. The other involves giving a boolean array of the proper
shape to indicate the values to be selected. Index arrays are a very
powerful tool that allow one to avoid looping over individual elements in
arrays and thus greatly improve performance.
It is possible to use special features to effectively increase the
number of dimensions in an array through indexing so the resulting
array acquires the shape needed for use in an expression or with a
specific function.
Index arrays
============
Numpy arrays may be indexed with other arrays (or any other sequence-
like object that can be converted to an array, such as lists, with the
exception of tuples; see the end of this document for why this is). The
use of index arrays ranges from simple, straightforward cases to
complex, hard-to-understand cases. For all cases of index arrays, what
is returned is a copy of the original data, not a view as one gets for
slices.
Index arrays must be of integer type. Each value in the array indicates
which value in the array to use in place of the index. To illustrate: ::
>>> x = np.arange(10,1,-1)
>>> x
array([10, 9, 8, 7, 6, 5, 4, 3, 2])
>>> x[np.array([3, 3, 1, 8])]
array([7, 7, 9, 2])
The index array consisting of the values 3, 3, 1 and 8 correspondingly
creates an array of length 4 (the same as the index array) where each
index is replaced by the corresponding value in the array being indexed.
Negative values are permitted and work as they do with single indices
or slices: ::
>>> x[np.array([3,3,-3,8])]
array([7, 7, 4, 2])
It is an error to have index values out of bounds: ::
>>> x[np.array([3, 3, 20, 8])]
<type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9
Generally speaking, what is returned when index arrays are used is
an array with the same shape as the index array, but with the type
and values of the array being indexed. As an example, we can use a
multidimensional index array instead: ::
>>> x[np.array([[1,1],[2,3]])]
array([[9, 9],
[8, 7]])
Indexing Multi-dimensional arrays
=================================
Things become more complex when multidimensional arrays are indexed,
particularly with multidimensional index arrays. These tend to be
more unusual uses, but they are permitted, and they are useful for some
problems. We'll start with the simplest multidimensional case (using
the array y from the previous examples): ::
>>> y[np.array([0,2,4]), np.array([0,1,2])]
array([ 0, 15, 30])
In this case, if the index arrays have a matching shape, and there is
an index array for each dimension of the array being indexed, the
resultant array has the same shape as the index arrays, and the values
correspond to the index set for each position in the index arrays. In
this example, the first index value is 0 for both index arrays, and
thus the first value of the resultant array is y[0,0]. The next value
is y[2,1], and the last is y[4,2].
If the index arrays do not have the same shape, there is an attempt to
broadcast them to the same shape. If they cannot be broadcast to the
same shape, an exception is raised: ::
>>> y[np.array([0,2,4]), np.array([0,1])]
<type 'exceptions.ValueError'>: shape mismatch: objects cannot be
broadcast to a single shape
The broadcasting mechanism permits index arrays to be combined with
scalars for other indices. The effect is that the scalar value is used
for all the corresponding values of the index arrays: ::
>>> y[np.array([0,2,4]), 1]
array([ 1, 15, 29])
Jumping to the next level of complexity, it is possible to only
partially index an array with index arrays. It takes a bit of thought
to understand what happens in such cases. For example if we just use
one index array with y: ::
>>> y[np.array([0,2,4])]
array([[ 0, 1, 2, 3, 4, 5, 6],
[14, 15, 16, 17, 18, 19, 20],
[28, 29, 30, 31, 32, 33, 34]])
What results is the construction of a new array where each value of
the index array selects one row from the array being indexed and the
resultant array has the resulting shape (number of index elements,
size of row).
An example of where this may be useful is for a color lookup table
where we want to map the values of an image into RGB triples for
display. The lookup table could have a shape (nlookup, 3). Indexing
such an array with an image with shape (ny, nx) with dtype=np.uint8
(or any integer type so long as values are within the bounds of the
lookup table) will result in an array of shape (ny, nx, 3) where a
triple of RGB values is associated with each pixel location.
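A minimal sketch of that pattern (the table contents here are
arbitrary): ::

    >>> lut = np.arange(256 * 3).reshape(256, 3)  # hypothetical lookup table
    >>> img = np.array([[0, 1], [2, 3]], dtype=np.uint8)
    >>> lut[img].shape
    (2, 2, 3)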
In general, the shape of the resultant array will be the concatenation
of the shape of the index array (or the shape that all the index arrays
were broadcast to) with the shape of any unused dimensions (those not
indexed) in the array being indexed.
Boolean or "mask" index arrays
==============================
Boolean arrays used as indices are treated in a different manner
entirely than index arrays. Boolean arrays must be of the same shape
as the initial dimensions of the array being indexed. In the
most straightforward case, the boolean array has the same shape: ::
>>> b = y>20
>>> y[b]
array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])
The result is a 1-D array containing all the elements in the indexed
array corresponding to all the true elements in the boolean array. As
with index arrays, what is returned is a copy of the data, not a view
as one gets with slices.
The result will be multidimensional if y has more dimensions than b.
For example: ::
>>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
array([False, False, False, True, True], dtype=bool)
>>> y[b[:,5]]
array([[21, 22, 23, 24, 25, 26, 27],
[28, 29, 30, 31, 32, 33, 34]])
Here the 4th and 5th rows are selected from the indexed array and
combined to make a 2-D array.
In general, when the boolean array has fewer dimensions than the array
being indexed, this is equivalent to y[b, ...], which means
y is indexed by b followed by as many : as are needed to fill
out the rank of y.
Thus the shape of the result is one dimension containing the number
of True elements of the boolean array, followed by the remaining
dimensions of the array being indexed.
For example, using a 2-D boolean array of shape (2,3)
with four True elements to select rows from a 3-D array of shape
(2,3,5) results in a 2-D result of shape (4,5): ::
>>> x = np.arange(30).reshape(2,3,5)
>>> x
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]],
[[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]]])
>>> b = np.array([[True, True, False], [False, True, True]])
>>> x[b]
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]])
For further details, consult the numpy reference documentation on array indexing.
Combining index arrays with slices
==================================
Index arrays may be combined with slices. For example: ::
>>> y[np.array([0,2,4]),1:3]
array([[ 1, 2],
[15, 16],
[29, 30]])
In effect, the slice is converted to an index array
np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array
to produce a resultant array of shape (3,2).
Likewise, slicing can be combined with broadcasted boolean indices: ::
>>> y[b[:,5],1:3]
array([[22, 23],
[29, 30]])
Structural indexing tools
=========================
To facilitate easy matching of array shapes with expressions and in
assignments, the np.newaxis object can be used within array indices
to add new dimensions with a size of 1. For example: ::
>>> y.shape
(5, 7)
>>> y[:,np.newaxis,:].shape
(5, 1, 7)
Note that there are no new elements in the array, just that the
dimensionality is increased. This can be handy to combine two
arrays in a way that otherwise would require explicitly reshaping
operations. For example: ::
>>> x = np.arange(5)
>>> x[:,np.newaxis] + x[np.newaxis,:]
array([[0, 1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7],
[4, 5, 6, 7, 8]])
The ellipsis syntax may be used to indicate selecting in full any
remaining unspecified dimensions. For example: ::
>>> z = np.arange(81).reshape(3,3,3,3)
>>> z[1,...,2]
array([[29, 32, 35],
[38, 41, 44],
[47, 50, 53]])
This is equivalent to: ::
>>> z[1,:,:,2]
array([[29, 32, 35],
[38, 41, 44],
[47, 50, 53]])
Assigning values to indexed arrays
==================================
As mentioned, one can select a subset of an array to assign to using
a single index, slices, and index and mask arrays. The value being
assigned to the indexed array must be shape consistent (the same shape
or broadcastable to the shape the index produces). For example, it is
permitted to assign a constant to a slice: ::
>>> x = np.arange(10)
>>> x[2:7] = 1
or an array of the right size: ::
>>> x[2:7] = np.arange(5)
Note that assignments may result in changes if assigning
higher types to lower types (like floats to ints) or even
exceptions (assigning complex to floats or ints): ::
>>> x[1] = 1.2
>>> x[1]
1
>>> x[1] = 1.2j
<type 'exceptions.TypeError'>: can't convert complex to long; use
long(abs(z))
Unlike some of the references (such as array and mask indices)
assignments are always made to the original data in the array
(indeed, nothing else would make sense!). Note though, that some
actions may not work as one may naively expect. This particular
example is often surprising to people: ::
>>> x = np.arange(0, 50, 10)
>>> x
array([ 0, 10, 20, 30, 40])
>>> x[np.array([1, 1, 3, 1])] += 1
>>> x
array([ 0, 11, 20, 31, 40])
People often expect that the 1st location will be incremented by 3.
In fact, it will only be incremented by 1. The reason is that
a new array is extracted from the original (as a temporary) containing
the values at 1, 1, 3, 1, then the value 1 is added to the temporary,
and then the temporary is assigned back to the original array. Thus
the value of the array at x[1]+1 is assigned to x[1] three times,
rather than being incremented 3 times.
Dealing with variable numbers of indices within programs
========================================================
The index syntax is very powerful but limiting when dealing with
a variable number of indices. For example, if you want to write
a function that can handle arguments with various numbers of
dimensions without having to write special case code for each
number of possible dimensions, how can that be done? If one
supplies to the index a tuple, the tuple will be interpreted
as a list of indices. For example (using the previous definition
for the array z): ::
>>> indices = (1,1,1,1)
>>> z[indices]
40
So one can use code to construct tuples of any number of indices
and then use these within an index.
Slices can be specified within programs by using the slice() function
in Python. For example: ::
>>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
>>> z[indices]
array([39, 40])
Likewise, ellipsis can be specified by code by using the Ellipsis
object: ::
>>> indices = (1, Ellipsis, 1) # same as [1,...,1]
>>> z[indices]
array([[28, 31, 34],
[37, 40, 43],
[46, 49, 52]])
For this reason it is possible to use the output from the np.where()
function directly as an index since it always returns a tuple of index
arrays.
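For example, continuing with the array y from the earlier examples: ::

    >>> y[np.where(y > 28)]
    array([29, 30, 31, 32, 33, 34])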
Because of the special treatment of tuples, they are not automatically
converted to an array as a list would be. As an example: ::
>>> z[[1,1,1,1]] # produces a large array
array([[[[27, 28, 29],
[30, 31, 32], ...
>>> z[(1,1,1,1)] # returns a single value
40
""" |
"""Stuff to parse AIFF-C and AIFF files.
Unless explicitly stated otherwise, the description below is true
both for AIFF-C files and AIFF files.
An AIFF-C file has the following structure.
+-----------------+
| FORM |
+-----------------+
| <size> |
+----+------------+
| | AIFC |
| +------------+
| | <chunks> |
| | . |
| | . |
| | . |
+----+------------+
An AIFF file has the string "AIFF" instead of "AIFC".
A chunk consists of an identifier (4 bytes) followed by a size (4 bytes,
big endian order), followed by the data. The size field does not include
the size of the 8 byte header.
The following chunk types are recognized.
FVER
<version number of AIFF-C defining document> (AIFF-C only).
MARK
<# of markers> (2 bytes)
list of markers:
<marker ID> (2 bytes, must be > 0)
<position> (4 bytes)
<marker name> ("pstring")
COMM
<# of channels> (2 bytes)
<# of sound frames> (4 bytes)
<size of the samples> (2 bytes)
<sampling frequency> (10 bytes, IEEE 80-bit extended
floating point)
in AIFF-C files only:
<compression type> (4 bytes)
<human-readable version of compression type> ("pstring")
SSND
<offset> (4 bytes, not used by this program)
<blocksize> (4 bytes, not used by this program)
<sound data>
A pstring consists of 1 byte length, a string of characters, and 0 or 1
byte pad to make the total length even.
Usage.
Reading AIFF files:
f = aifc.open(file, 'r')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods read(), seek(), and close().
In some types of audio files, if the setpos() method is not used,
the seek() method is not necessary.
This returns an instance of a class with the following public methods:
getnchannels() -- returns number of audio channels (1 for
mono, 2 for stereo)
getsampwidth() -- returns sample width in bytes
getframerate() -- returns sampling frequency
getnframes() -- returns number of audio frames
getcomptype() -- returns compression type ('NONE' for AIFF files)
getcompname() -- returns human-readable version of
compression type ('not compressed' for AIFF files)
getparams() -- returns a tuple consisting of all of the
above in the above order
getmarkers() -- get the list of marks in the audio file or None
if there are no marks
getmark(id) -- get mark with the specified id (raises an error
if the mark does not exist)
readframes(n) -- returns at most n frames of audio
rewind() -- rewind to the beginning of the audio stream
setpos(pos) -- seek to the specified position
tell() -- return the current position
close() -- close the instance (make it unusable)
The position returned by tell(), the position given to setpos() and
the position of marks are all compatible and have nothing to do with
the actual position in the file.
The close() method is called automatically when the class instance
is destroyed.
Writing AIFF files:
f = aifc.open(file, 'w')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods write(), tell(), seek(), and
close().
This returns an instance of a class with the following public methods:
aiff() -- create an AIFF file (AIFF-C default)
aifc() -- create an AIFF-C file
setnchannels(n) -- set the number of channels
setsampwidth(n) -- set the sample width
setframerate(n) -- set the frame rate
setnframes(n) -- set the number of frames
setcomptype(type, name)
-- set the compression type and the
human-readable compression type
setparams(tuple)
-- set all parameters at once
setmark(id, pos, name)
-- add specified mark to the list of marks
tell() -- return current position in output file (useful
in combination with setmark())
writeframesraw(data)
-- write audio frames without pathing up the
file header
writeframes(data)
-- write audio frames and patch up the file header
close() -- patch up the file header and close the
output file
You should set the parameters before the first writeframesraw or
writeframes. The total number of frames does not need to be set,
but when it is set to the correct value, the header does not have to
be patched up.
It is best to first set all parameters, perhaps possibly the
compression type, and then write audio frames using writeframesraw.
When all frames have been written, either call writeframes('') or
close() to patch up the sizes in the header.
Marks can be added anytime. If there are any marks, ypu must call
close() after all frames have been written.
The close() method is called automatically when the class instance
is destroyed.
When a file is opened with the extension '.aiff', an AIFF file is
written, otherwise an AIFF-C file is written. This default can be
changed by calling aiff() or aifc() before the first writeframes or
writeframesraw.
""" |
"""
Discrete Fourier Transform (:mod:`numpy.fft`)
=============================================
.. currentmodule:: numpy.fft
Standard FFTs
-------------
.. autosummary::
:toctree: generated/
fft Discrete Fourier transform.
ifft Inverse discrete Fourier transform.
fft2 Discrete Fourier transform in two dimensions.
ifft2 Inverse discrete Fourier transform in two dimensions.
fftn Discrete Fourier transform in N-dimensions.
ifftn Inverse discrete Fourier transform in N dimensions.
Real FFTs
---------
.. autosummary::
:toctree: generated/
rfft Real discrete Fourier transform.
irfft Inverse real discrete Fourier transform.
rfft2 Real discrete Fourier transform in two dimensions.
irfft2 Inverse real discrete Fourier transform in two dimensions.
rfftn Real discrete Fourier transform in N dimensions.
irfftn Inverse real discrete Fourier transform in N dimensions.
Hermitian FFTs
--------------
.. autosummary::
:toctree: generated/
hfft Hermitian discrete Fourier transform.
ihfft Inverse Hermitian discrete Fourier transform.
Helper routines
---------------
.. autosummary::
:toctree: generated/
fftfreq Discrete Fourier Transform sample frequencies.
rfftfreq DFT sample frequencies (for usage with rfft, irfft).
fftshift Shift zero-frequency component to center of spectrum.
ifftshift Inverse of fftshift.
Background information
----------------------
Fourier analysis is fundamentally a method for expressing a function as a
sum of periodic components, and for recovering the function from those
components. When both the function and its Fourier transform are
replaced with discretized counterparts, it is called the discrete Fourier
transform (DFT). The DFT has become a mainstay of numerical computing in
part because of a very fast algorithm for computing it, called the Fast
Fourier Transform (FFT), which was known to Gauss (1805) and was brought
to light in its current form by NAME and NAME [CT]_. Press et al. [NR]_
provide an accessible introduction to Fourier analysis and its
applications.
Because the discrete Fourier transform separates its input into
components that contribute at discrete frequencies, it has a great number
of applications in digital signal processing, e.g., for filtering, and in
this context the discretized input to the transform is customarily
referred to as a *signal*, which exists in the *time domain*. The output
is called a *spectrum* or *transform* and exists in the *frequency
domain*.
Implementation details
----------------------
There are many ways to define the DFT, varying in the sign of the
exponent, normalization, etc. In this implementation, the DFT is defined
as
.. math::
A_k = \\sum_{m=0}^{n-1} a_m \\exp\\left\\{-2\\pi i{mk \\over n}\\right\\}
\\qquad k = 0,\\ldots,n-1.
The DFT is in general defined for complex inputs and outputs, and a
single-frequency component at linear frequency :math:`f` is
represented by a complex exponential
:math:`a_m = \\exp\\{2\\pi i\\,f m\\Delta t\\}`, where :math:`\\Delta t`
is the sampling interval.
The values in the result follow so-called "standard" order: If ``A =
fft(a, n)``, then ``A[0]`` contains the zero-frequency term (the sum of
the signal), which is always purely real for real inputs. Then ``A[1:n/2]``
contains the positive-frequency terms, and ``A[n/2+1:]`` contains the
negative-frequency terms, in order of decreasingly negative frequency.
For an even number of input points, ``A[n/2]`` represents both positive and
negative Nyquist frequency, and is also purely real for real input. For
an odd number of input points, ``A[(n-1)/2]`` contains the largest positive
frequency, while ``A[(n+1)/2]`` contains the largest negative frequency.
The routine ``np.fft.fftfreq(n)`` returns an array giving the frequencies
of corresponding elements in the output. The routine
``np.fft.fftshift(A)`` shifts transforms and their frequencies to put the
zero-frequency components in the middle, and ``np.fft.ifftshift(A)`` undoes
that shift.
When the input `a` is a time-domain signal and ``A = fft(a)``, ``np.abs(A)``
is its amplitude spectrum and ``np.abs(A)**2`` is its power spectrum.
The phase spectrum is obtained by ``np.angle(A)``.
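As a small numerical check of these relationships (a sketch; the output
is rounded to suppress floating-point noise): ::

    >>> t = np.arange(8)
    >>> a = np.sin(2 * np.pi * t / 8)    # one cycle over 8 samples
    >>> np.abs(np.fft.fft(a)).round(2)   # amplitude spectrum
    array([0., 4., 0., 0., 0., 0., 0., 4.])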
The inverse DFT is defined as
.. math::
a_m = \\frac{1}{n}\\sum_{k=0}^{n-1}A_k\\exp\\left\\{2\\pi i{mk\\over n}\\right\\}
\\qquad m = 0,\\ldots,n-1.
It differs from the forward transform by the sign of the exponential
argument and the default normalization by :math:`1/n`.
Normalization
-------------
The default normalization has the direct transforms unscaled and the inverse
transforms are scaled by :math:`1/n`. It is possible to obtain unitary
transforms by setting the keyword argument ``norm`` to ``"ortho"`` (default is
`None`) so that both direct and inverse transforms will be scaled by
:math:`1/\\sqrt{n}`.
Real and Hermitian transforms
-----------------------------
When the input is purely real, its transform is Hermitian, i.e., the
component at frequency :math:`f_k` is the complex conjugate of the
component at frequency :math:`-f_k`, which means that for real
inputs there is no information in the negative frequency components that
is not already available from the positive frequency components.
The family of `rfft` functions is
designed to operate on real inputs, and exploits this symmetry by
computing only the positive frequency components, up to and including the
Nyquist frequency. Thus, ``n`` input points produce ``n/2+1`` complex
output points. The inverses of this family assume the same symmetry of
their input, and for an output of ``n`` points use ``n/2+1`` input points.
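For example, 8 real input points produce 8/2 + 1 = 5 complex output
points: ::

    >>> np.fft.rfft(np.ones(8)).shape
    (5,)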
Correspondingly, when the spectrum is purely real, the signal is
Hermitian. The `hfft` family of functions exploits this symmetry by
using ``n/2+1`` complex points in the input (time) domain for ``n`` real
points in the frequency domain.
In higher dimensions, FFTs are used, e.g., for image analysis and
filtering. The computational efficiency of the FFT means that it can
also be a faster way to compute large convolutions, using the property
that a convolution in the time domain is equivalent to a point-by-point
multiplication in the frequency domain.
Higher dimensions
-----------------
In two dimensions, the DFT is defined as
.. math::
A_{kl} = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1}
a_{mn}\\exp\\left\\{-2\\pi i \\left({mk\\over M}+{nl\\over N}\\right)\\right\\}
\\qquad k = 0, \\ldots, M-1;\\quad l = 0, \\ldots, N-1,
which extends in the obvious way to higher dimensions, and the inverses
in higher dimensions also extend in the same way.
References
----------
.. [CT] NAME, NAME and John W. NAME, 1965, "An algorithm for the
machine calculation of complex Fourier series," *Math. Comput.*
19: 297-301.
.. [NR] NAME NAME NAME and NAME
2007, *Numerical Recipes: The Art of Scientific Computing*, ch.
12-13. Cambridge Univ. Press, Cambridge, UK.
Examples
--------
For examples, see the various functions.
""" |
"""
=============================
Subclassing ndarray in python
=============================
Credits
-------
This page is based with thanks on the wiki page on subclassing by NAME - http://www.scipy.org/Subclasses.
Introduction
------------
Subclassing ndarray is relatively simple, but it has some complications
compared to other Python objects. On this page we explain the machinery
that allows you to subclass ndarray, and the implications for
implementing a subclass.
ndarrays and object creation
============================
Subclassing ndarray is complicated by the fact that new instances of
ndarray classes can come about in three different ways. These are:
#. Explicit constructor call - as in ``MySubClass(params)``. This is
the usual route to Python instance creation.
#. View casting - casting an existing ndarray as a given subclass
#. New from template - creating a new instance from a template
instance. Examples include returning slices from a subclassed array,
creating return types from ufuncs, and copying arrays. See
:ref:`new-from-template` for more details
The last two are characteristics of ndarrays - in order to support
things like array slicing. The complications of subclassing ndarray are
due to the mechanisms numpy has to support these latter two routes of
instance creation.
.. _view-casting:
View casting
------------
*View casting* is the standard ndarray mechanism by which you take an
ndarray of any subclass, and return a view of the array as another
(specified) subclass:
>>> import numpy as np
>>> # create a completely useless ndarray subclass
>>> class C(np.ndarray): pass
>>> # create a standard ndarray
>>> arr = np.zeros((3,))
>>> # take a view of it, as our useless subclass
>>> c_arr = arr.view(C)
>>> type(c_arr)
<class 'C'>
.. _new-from-template:
Creating new from template
--------------------------
New instances of an ndarray subclass can also come about by a very
similar mechanism to :ref:`view-casting`, when numpy finds it needs to
create a new instance from a template instance. The most obvious place
this has to happen is when you are taking slices of subclassed arrays.
For example:
>>> v = c_arr[1:]
>>> type(v) # the view is of type 'C'
<class 'C'>
>>> v is c_arr # but it's a new instance
False
The slice is a *view* onto the original ``c_arr`` data. So, when we
take a view from the ndarray, we return a new ndarray, of the same
class, that points to the data in the original.
There are other points in the use of ndarrays where we need such views,
such as copying arrays (``c_arr.copy()``), creating ufunc output arrays
(see also :ref:`array-wrap`), and reducing methods (like
``c_arr.mean()``).
Relationship of view casting and new-from-template
--------------------------------------------------
These paths both use the same machinery. We make the distinction here,
because they result in different input to your methods. Specifically,
:ref:`view-casting` means you have created a new instance of your array
type from any potential subclass of ndarray. :ref:`new-from-template`
means you have created a new instance of your class from a pre-existing
instance, allowing you - for example - to copy across attributes that
are particular to your subclass.
Implications for subclassing
----------------------------
If we subclass ndarray, we need to deal not only with explicit
construction of our array type, but also :ref:`view-casting` or
:ref:`new-from-template`. NumPy has the machinery to do this, and it is
this machinery that makes subclassing slightly non-standard.
There are two aspects to the machinery that ndarray uses to support
views and new-from-template in subclasses.
The first is the use of the ``ndarray.__new__`` method for the main work
of object initialization, rather than the more usual ``__init__``
method. The second is the use of the ``__array_finalize__`` method to
allow subclasses to clean up after the creation of views and new
instances from templates.
A brief Python primer on ``__new__`` and ``__init__``
=====================================================
``__new__`` is a standard Python method, and, if present, is called
before ``__init__`` when we create a class instance. See the `python
__new__ documentation
<http://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail.
For example, consider the following Python code:
.. testcode::
class C(object):

    def __new__(cls, *args):
        print('Cls in __new__:', cls)
        print('Args in __new__:', args)
        # object.__new__() itself takes no extra arguments
        return object.__new__(cls)

    def __init__(self, *args):
        print('type(self) in __init__:', type(self))
        print('Args in __init__:', args)
meaning that we get:
>>> c = C('hello')
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
type(self) in __init__: <class 'C'>
Args in __init__: ('hello',)
When we call ``C('hello')``, the ``__new__`` method gets its own class
as first argument, and the passed argument, which is the string
``'hello'``. After python calls ``__new__``, it usually (see below)
calls our ``__init__`` method, with the output of ``__new__`` as the
first argument (now a class instance), and the passed arguments
following.
As you can see, the object can be initialized in the ``__new__``
method or the ``__init__`` method, or both, and in fact ndarray does
not have an ``__init__`` method, because all the initialization is
done in the ``__new__`` method.
Why use ``__new__`` rather than just the usual ``__init__``? Because
in some cases, as for ndarray, we want to be able to return an object
of some other class. Consider the following:
.. testcode::
class D(C):

    def __new__(cls, *args):
        print('D cls is:', cls)
        print('D args in __new__:', args)
        return C.__new__(C, *args)

    def __init__(self, *args):
        # we never get here
        print('In D __init__')
meaning that:
>>> obj = D('hello')
D cls is: <class 'D'>
D args in __new__: ('hello',)
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
>>> type(obj)
<class 'C'>
The definition of ``C`` is the same as before, but for ``D``, the
``__new__`` method returns an instance of class ``C`` rather than
``D``. Note that the ``__init__`` method of ``D`` does not get
called. In general, when the ``__new__`` method returns an object of
class other than the class in which it is defined, the ``__init__``
method of that class is not called.
This is how subclasses of the ndarray class are able to return views
that preserve the class type. When taking a view, the standard
ndarray machinery creates the new ndarray object with something
like::
    obj = ndarray.__new__(subtype, shape, ...
where ``subtype`` is the subclass. Thus the returned view is of the
same class as the subclass, rather than being of class ``ndarray``.
That solves the problem of returning views of the same type, but now
we have a new problem. The machinery of ndarray can set the class
this way, in its standard methods for taking views, but the ndarray
``__new__`` method knows nothing of what we have done in our own
``__new__`` method in order to set attributes, and so on. (Aside -
why not call ``obj = subtype.__new__(...`` then? Because we may not
have a ``__new__`` method with the same call signature).
The role of ``__array_finalize__``
==================================
``__array_finalize__`` is the mechanism that numpy provides to allow
subclasses to handle the various ways that new instances get created.
Remember that subclass instances can come about in these three ways:
#. explicit constructor call (``obj = MySubClass(params)``). This will
call the usual sequence of ``MySubClass.__new__`` then (if it exists)
``MySubClass.__init__``.
#. :ref:`view-casting`
#. :ref:`new-from-template`
Our ``MySubClass.__new__`` method only gets called in the case of the
explicit constructor call, so we can't rely on ``MySubClass.__new__`` or
``MySubClass.__init__`` to deal with the view casting and
new-from-template. It turns out that ``MySubClass.__array_finalize__``
*does* get called for all three methods of object creation, so this is
where our object creation housekeeping usually goes.
* For the explicit constructor call, our subclass will need to create a
new ndarray instance of its own class. In practice this means that
we, the authors of the code, will need to make a call to
``ndarray.__new__(MySubClass,...)``, or do view casting of an existing
array (see below)
* For view casting and new-from-template, the equivalent of
``ndarray.__new__(MySubClass,...`` is called, at the C level.
The arguments that ``__array_finalize__`` receives differ for the three
methods of instance creation above.
The following code allows us to look at the call sequences and arguments:
.. testcode::
import numpy as np
class C(np.ndarray):

    def __new__(cls, *args, **kwargs):
        print('In __new__ with class %s' % cls)
        return np.ndarray.__new__(cls, *args, **kwargs)

    def __init__(self, *args, **kwargs):
        # in practice you probably will not need or want an __init__
        # method for your subclass
        print('In __init__ with class %s' % self.__class__)

    def __array_finalize__(self, obj):
        print('In array_finalize:')
        print('   self type is %s' % type(self))
        print('   obj type is %s' % type(obj))
Now:
>>> # Explicit constructor
>>> c = C((10,))
In __new__ with class <class 'C'>
In array_finalize:
self type is <class 'C'>
obj type is <class 'NoneType'>
In __init__ with class <class 'C'>
>>> # View casting
>>> a = np.arange(10)
>>> cast_a = a.view(C)
In array_finalize:
self type is <class 'C'>
obj type is <class 'numpy.ndarray'>
>>> # Slicing (example of new-from-template)
>>> cv = c[:1]
In array_finalize:
self type is <class 'C'>
obj type is <class 'C'>
The signature of ``__array_finalize__`` is::
    def __array_finalize__(self, obj):
``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our
own class (``self``), as well as the object from which the view has been
taken (``obj``). As you can see from the output above, the ``self`` is
always a newly created instance of our subclass, and the type of ``obj``
differs for the three instance creation methods:
* When called from the explicit constructor, ``obj`` is ``None``
* When called from view casting, ``obj`` can be an instance of any
subclass of ndarray, including our own.
* When called in new-from-template, ``obj`` is another instance of our
own subclass, that we might use to update the new ``self`` instance.
Because ``__array_finalize__`` is the only method that always sees new
instances being created, it is the sensible place to fill in instance
defaults for new object attributes, among other tasks.
This may be clearer with an example.
Simple example - adding an extra attribute to ndarray
-----------------------------------------------------
.. testcode::
import numpy as np
class InfoArray(np.ndarray):
    def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
                strides=None, order=None, info=None):
        # Create the ndarray instance of our type, given the usual
        # ndarray input arguments.  This will call the standard
        # ndarray constructor, but return an object of our type.
        # It also triggers a call to InfoArray.__array_finalize__
        obj = np.ndarray.__new__(subtype, shape, dtype, buffer, offset,
                                 strides, order)
        # set the new 'info' attribute to the value passed
        obj.info = info
        # Finally, we must return the newly created object:
        return obj

    def __array_finalize__(self, obj):
        # ``self`` is a new object resulting from
        # ndarray.__new__(InfoArray, ...), therefore it only has
        # attributes that the ndarray.__new__ constructor gave it -
        # i.e. those of a standard ndarray.
        #
        # We could have got to the ndarray.__new__ call in 3 ways:
        # From an explicit constructor - e.g. InfoArray():
        #    obj is None
        #    (we're in the middle of the InfoArray.__new__
        #    constructor, and self.info will be set when we return to
        #    InfoArray.__new__)
        if obj is None: return
        # From view casting - e.g. arr.view(InfoArray):
        #    obj is arr
        #    (type(obj) can be InfoArray)
        # From new-from-template - e.g. infoarr[:3]
        #    type(obj) is InfoArray
        #
        # Note that it is here, rather than in the __new__ method,
        # that we set the default value for 'info', because this
        # method sees all creation of default objects - with the
        # InfoArray.__new__ constructor, but also with
        # arr.view(InfoArray).
        self.info = getattr(obj, 'info', None)
        # We do not need to return anything
Using the object looks like this:
>>> obj = InfoArray(shape=(3,)) # explicit constructor
>>> type(obj)
<class 'InfoArray'>
>>> obj.info is None
True
>>> obj = InfoArray(shape=(3,), info='information')
>>> obj.info
'information'
>>> v = obj[1:] # new-from-template - here - slicing
>>> type(v)
<class 'InfoArray'>
>>> v.info
'information'
>>> arr = np.arange(10)
>>> cast_arr = arr.view(InfoArray) # view casting
>>> type(cast_arr)
<class 'InfoArray'>
>>> cast_arr.info is None
True
This class isn't very useful, because it has the same constructor as the
bare ndarray object, including passing in buffers and shapes and so on.
We would probably prefer the constructor to be able to take an already
formed ndarray from the usual numpy calls to ``np.array`` and return an
object.
Slightly more realistic example - attribute added to existing array
-------------------------------------------------------------------
Here is a class that takes a standard ndarray that already exists, casts
it as our type, and adds an extra attribute.
.. testcode::
import numpy as np
class RealisticInfoArray(np.ndarray):
    def __new__(cls, input_array, info=None):
        # Input array is an already formed ndarray instance
        # We first cast to be our class type
        obj = np.asarray(input_array).view(cls)
        # add the new attribute to the created instance
        obj.info = info
        # Finally, we must return the newly created object:
        return obj

    def __array_finalize__(self, obj):
        # see InfoArray.__array_finalize__ for comments
        if obj is None: return
        self.info = getattr(obj, 'info', None)
So:
>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'
.. _array-wrap:
``__array_wrap__`` for ufuncs
-------------------------------------------------------
``__array_wrap__`` gets called at the end of numpy ufuncs and other numpy
functions, to allow a subclass to set the type of the return value
and update attributes and metadata. Let's show how this works with an example.
First we make the same subclass as above, but with a different name and
some print statements:
.. testcode::
import numpy as np
class MySubClass(np.ndarray):
    def __new__(cls, input_array, info=None):
        obj = np.asarray(input_array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        print('In __array_finalize__:')
        print('   self is %s' % repr(self))
        print('   obj is %s' % repr(obj))
        if obj is None: return
        self.info = getattr(obj, 'info', None)

    def __array_wrap__(self, out_arr, context=None):
        print('In __array_wrap__:')
        print('   self is %s' % repr(self))
        print('   arr is %s' % repr(out_arr))
        # then just call the parent
        return np.ndarray.__array_wrap__(self, out_arr, context)
We run a ufunc on an instance of our new array:
>>> obj = MySubClass(np.arange(5), info='spam')
In __array_finalize__:
self is MySubClass([0, 1, 2, 3, 4])
obj is array([0, 1, 2, 3, 4])
>>> arr2 = np.arange(5)+1
>>> ret = np.add(arr2, obj)
In __array_wrap__:
self is MySubClass([0, 1, 2, 3, 4])
arr is array([1, 3, 5, 7, 9])
In __array_finalize__:
self is MySubClass([1, 3, 5, 7, 9])
obj is MySubClass([0, 1, 2, 3, 4])
>>> ret
MySubClass([1, 3, 5, 7, 9])
>>> ret.info
'spam'
Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method of the
input with the highest ``__array_priority__`` value, in this case
``MySubClass.__array_wrap__``, with arguments ``self`` as ``obj``, and
``out_arr`` as the (ndarray) result of the addition. In turn, the
default ``__array_wrap__`` (``ndarray.__array_wrap__``) has cast the
result to class ``MySubClass``, and called ``__array_finalize__`` -
hence the copying of the ``info`` attribute. This has all happened at the C level.
But, we could do anything we wanted:
.. testcode::
class SillySubClass(np.ndarray):

    def __array_wrap__(self, arr, context=None):
        return 'I lost your data'
>>> arr1 = np.arange(5)
>>> obj = arr1.view(SillySubClass)
>>> arr2 = np.arange(5)
>>> ret = np.multiply(obj, arr2)
>>> ret
'I lost your data'
So, by defining a specific ``__array_wrap__`` method for our subclass,
we can tweak the output from ufuncs. The ``__array_wrap__`` method
requires ``self``, then an argument - which is the result of the ufunc -
and an optional parameter *context*. This parameter is returned by some
ufuncs as a 3-element tuple: (name of the ufunc, argument of the ufunc,
domain of the ufunc). ``__array_wrap__`` should return an instance of
its containing class. See the masked array subclass for an
implementation.
In addition to ``__array_wrap__``, which is called on the way out of the
ufunc, there is also an ``__array_prepare__`` method which is called on
the way into the ufunc, after the output arrays are created but before any
computation has been performed. The default implementation does nothing
but pass through the array. ``__array_prepare__`` should not attempt to
access the array data or resize the array; it is intended for setting the
output array type, updating attributes and metadata, and performing any
checks based on the input that may be desired before computation begins.
Like ``__array_wrap__``, ``__array_prepare__`` must return an ndarray or
subclass thereof or raise an error.
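A minimal sketch of the pairing (the class name is illustrative, and this
assumes a NumPy version that still dispatches ``__array_prepare__``):

.. testcode::

    import numpy as np

    class LoggedArray(np.ndarray):

        def __array_prepare__(self, out_arr, context=None):
            # called on the way into the ufunc, before any computation
            print('In __array_prepare__')
            return np.ndarray.__array_prepare__(self, out_arr, context)

        def __array_wrap__(self, out_arr, context=None):
            # called on the way out, with the computed result
            print('In __array_wrap__')
            return np.ndarray.__array_wrap__(self, out_arr, context)

so that running a ufunc on an instance reports both hooks, in order:

>>> obj = np.arange(3).view(LoggedArray)
>>> ret = np.add(obj, 1)
In __array_prepare__
In __array_wrap__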
Extra gotchas - custom ``__del__`` methods and ndarray.base
-----------------------------------------------------------
One of the problems that ndarray solves is keeping track of memory
ownership of ndarrays and their views. Consider the case where we have
created an ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``.
The two objects are looking at the same memory. Numpy keeps track of
where the data came from for a particular array or view, with the
``base`` attribute:
>>> # A normal ndarray, that owns its own data
>>> arr = np.zeros((4,))
>>> # In this case, base is None
>>> arr.base is None
True
>>> # We take a view
>>> v1 = arr[1:]
>>> # base now points to the array that it derived from
>>> v1.base is arr
True
>>> # Take a view of a view
>>> v2 = v1[1:]
>>> # base points to the view it derived from
>>> v2.base is v1
True
In general, if the array owns its own memory, as for ``arr`` in this
case, then ``arr.base`` will be None - there are some exceptions to this
- see the numpy book for more details.
The ``base`` attribute is useful in being able to tell whether we have
a view or the original array. This in turn can be useful if we need
to know whether or not to do some specific cleanup when the subclassed
array is deleted. For example, we may only want to do the cleanup if
the original array is deleted, but not the views. For an example of
how this can work, have a look at the ``memmap`` class in
``numpy.core``.
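As a sketch of that pattern (``OwnedArray`` and its ``release`` method are
illustrative names, not numpy API):

.. testcode::

    import numpy as np

    class OwnedArray(np.ndarray):

        def release(self):
            # only the array that owns its memory does the real cleanup;
            # views (base is not None) skip it
            if self.base is None:
                print('owner: cleaning up')
            else:
                print('view: nothing to do')

so that:

>>> arr = OwnedArray((4,))   # allocates its own memory; base is None
>>> arr.release()
owner: cleaning up
>>> v = arr[1:]              # a view; base points back to arr
>>> v.release()
view: nothing to do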
""" |
# import json
# from nose import tools as nosetools
#
# import ckan.plugins.toolkit as toolkit
# import ckan.tests.factories as factories
# import ckan.tests.helpers as helpers
#
#
# class TestprojectAuthIndex(helpers.FunctionalTestBase):
#
# def test_auth_anon_user_can_view_project_index(self):
# '''An anon (not logged in) user can view the projects index.'''
# app = self._get_test_app()
#
# app.get("/project", status=200)
#
# def test_auth_logged_in_user_can_view_project_index(self):
# '''
# A logged in user can view the project index.
# '''
# app = self._get_test_app()
#
# user = factories.User()
#
# app.get("/project", status=200,
# extra_environ={'REMOTE_USER': str(user["name"])})
#
# def test_auth_anon_user_cant_see_add_project_button(self):
# '''
# An anon (not logged in) user can't see the Add project button on the
# project index page.
# '''
# app = self._get_test_app()
#
# response = app.get("/project", status=200)
#
# # test for new project link in response
# response.mustcontain(no="/project/new")
#
# def test_auth_logged_in_user_cant_see_add_project_button(self):
# '''
# A logged in user can't see the Add project button on the project
# index page.
# '''
# app = self._get_test_app()
# user = factories.User()
#
# response = app.get("/project", status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# # test for new project link in response
# response.mustcontain(no="/project/new")
#
# def test_auth_sysadmin_can_see_add_project_button(self):
# '''
# A sysadmin can see the Add project button on the project index
# page.
# '''
# app = self._get_test_app()
# user = factories.Sysadmin()
#
# response = app.get("/project", status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# # test for new project link in response
# response.mustcontain("/project/new")
#
#
# class TestprojectAuthDetails(helpers.FunctionalTestBase):
# def test_auth_anon_user_can_view_project_details(self):
# '''
# An anon (not logged in) user can view an individual project details page.
# '''
# app = self._get_test_app()
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/my-project', status=200)
#
# def test_auth_logged_in_user_can_view_project_details(self):
# '''
# A logged in user can view an individual project details page.
# '''
# app = self._get_test_app()
# user = factories.User()
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/my-project', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_anon_user_cant_see_manage_button(self):
# '''
# An anon (not logged in) user can't see the Manage button on an individual
# project details page.
# '''
# app = self._get_test_app()
#
# factories.Dataset(type='project', name='my-project')
#
# response = app.get('/project/my-project', status=200)
#
# # test for url to edit page
# response.mustcontain(no="/project/edit/my-project")
#
# def test_auth_logged_in_user_cant_see_manage_button(self):
# '''
# A logged in user can't see the Manage button on an individual project
# details page.
# '''
# app = self._get_test_app()
# user = factories.User()
#
# factories.Dataset(type='project', name='my-project')
#
# response = app.get('/project/my-project', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# # test for url to edit page
# response.mustcontain(no="/project/edit/my-project")
#
# def test_auth_sysadmin_can_see_manage_button(self):
# '''
# A sysadmin can see the Manage button on an individual project details
# page.
# '''
# app = self._get_test_app()
# user = factories.Sysadmin()
#
# factories.Dataset(type='project', name='my-project')
#
# response = app.get('/project/my-project', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# # test for url to edit page
# response.mustcontain("/project/edit/my-project")
#
# def test_auth_project_show_anon_can_access(self):
# '''
# Anon user can request project show.
# '''
# app = self._get_test_app()
#
# factories.Dataset(type='project', name='my-project')
#
# response = app.get('/api/3/action/ckanext_project_show?id=my-project',
# status=200)
#
# json_response = json.loads(response.body)
#
# nosetools.assert_true(json_response['success'])
#
# def test_auth_project_show_normal_user_can_access(self):
# '''
# Normal logged in user can request project show.
# '''
# user = factories.User()
# app = self._get_test_app()
#
# factories.Dataset(type='project', name='my-project')
#
# response = app.get('/api/3/action/ckanext_project_show?id=my-project',
# status=200, extra_environ={'REMOTE_USER': str(user['name'])})
#
# json_response = json.loads(response.body)
#
# nosetools.assert_true(json_response['success'])
#
# def test_auth_project_show_sysadmin_can_access(self):
# '''
# A sysadmin can request project show.
# '''
# user = factories.Sysadmin()
# app = self._get_test_app()
#
# factories.Dataset(type='project', name='my-project')
#
# response = app.get('/api/3/action/ckanext_project_show?id=my-project',
# status=200, extra_environ={'REMOTE_USER': str(user['name'])})
#
# json_response = json.loads(response.body)
#
# nosetools.assert_true(json_response['success'])
#
#
# class TestprojectAuthCreate(helpers.FunctionalTestBase):
#
# def test_auth_anon_user_cant_view_create_project(self):
# '''
# An anon (not logged in) user can't access the create project page.
# '''
# app = self._get_test_app()
# app.get("/project/new", status=302)
#
# def test_auth_logged_in_user_cant_view_create_project_page(self):
# '''
# A logged in user can't access the create project page.
# '''
# app = self._get_test_app()
# user = factories.User()
# app.get("/project/new", status=401,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_sysadmin_can_view_create_project_page(self):
# '''
# A sysadmin can access the create project page.
# '''
# app = self._get_test_app()
# user = factories.Sysadmin()
# app.get("/project/new", status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
#
# class TestprojectAuthList(helpers.FunctionalTestBase):
#
# def test_auth_project_list_anon_can_access(self):
# '''
# Anon user can request project list.
# '''
# app = self._get_test_app()
#
# factories.Dataset(type='project', name='my-project')
#
# response = app.get('/api/3/action/ckanext_project_list',
# status=200)
#
# json_response = json.loads(response.body)
#
# nosetools.assert_true(json_response['success'])
#
# def test_auth_project_list_normal_user_can_access(self):
# '''
# Normal logged in user can request project list.
# '''
# user = factories.User()
# app = self._get_test_app()
#
# factories.Dataset(type='project', name='my-project')
#
# response = app.get('/api/3/action/ckanext_project_list',
# status=200, extra_environ={'REMOTE_USER': str(user['name'])})
#
# json_response = json.loads(response.body)
#
# nosetools.assert_true(json_response['success'])
#
# def test_auth_project_list_sysadmin_can_access(self):
# '''
# A sysadmin can request project list.
# '''
# user = factories.Sysadmin()
# app = self._get_test_app()
#
# factories.Dataset(type='project', name='my-project')
#
# response = app.get('/api/3/action/ckanext_project_list',
# status=200, extra_environ={'REMOTE_USER': str(user['name'])})
#
# json_response = json.loads(response.body)
#
# nosetools.assert_true(json_response['success'])
#
#
# class TestprojectAuthEdit(helpers.FunctionalTestBase):
#
# def test_auth_anon_user_cant_view_edit_project_page(self):
# '''
# An anon (not logged in) user can't access the project edit page.
# '''
# app = self._get_test_app()
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/edit/my-project', status=302)
#
# def test_auth_logged_in_user_cant_view_edit_project_page(self):
# '''
# A logged in user can't access the project edit page.
# '''
# app = self._get_test_app()
# user = factories.User()
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/edit/my-project', status=401,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_sysadmin_can_view_edit_project_page(self):
# '''
# A sysadmin can access the project edit page.
# '''
# app = self._get_test_app()
# user = factories.Sysadmin()
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/edit/my-project', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_project_admin_can_view_edit_project_page(self):
# '''
# A project admin can access the project edit page.
# '''
# app = self._get_test_app()
# user = factories.User()
#
# # Make user a project admin
# helpers.call_action('ckanext_project_admin_add', context={},
# username=user['name'])
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/edit/my-project', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_anon_user_cant_view_manage_datasets(self):
# '''
# An anon (not logged in) user can't access the project manage datasets page.
# '''
# app = self._get_test_app()
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/manage_datasets/my-project', status=302)
#
# def test_auth_logged_in_user_cant_view_manage_datasets(self):
# '''
# A logged in user (not sysadmin) can't access the project manage datasets page.
# '''
# app = self._get_test_app()
# user = factories.User()
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/manage_datasets/my-project', status=401,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_sysadmin_can_view_manage_datasets(self):
# '''
# A sysadmin can access the project manage datasets page.
# '''
# app = self._get_test_app()
# user = factories.Sysadmin()
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/manage_datasets/my-project', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_project_admin_can_view_manage_datasets(self):
# '''
# A project admin can access the project manage datasets page.
# '''
# app = self._get_test_app()
# user = factories.User()
#
# # Make user a project admin
# helpers.call_action('ckanext_project_admin_add', context={},
# username=user['name'])
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/manage_datasets/my-project', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_anon_user_cant_view_delete_project_page(self):
# '''
# An anon (not logged in) user can't access the project delete page.
# '''
# app = self._get_test_app()
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/delete/my-project', status=302)
#
# def test_auth_logged_in_user_cant_view_delete_project_page(self):
# '''
# A logged in user can't access the project delete page.
# '''
# app = self._get_test_app()
# user = factories.User()
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/delete/my-project', status=401,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_sysadmin_can_view_delete_project_page(self):
# '''
# A sysadmin can access the project delete page.
# '''
# app = self._get_test_app()
# user = factories.Sysadmin()
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/delete/my-project', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_project_admin_can_view_delete_project_page(self):
# '''
# A project admin can access the project delete page.
# '''
# app = self._get_test_app()
# user = factories.User()
#
# # Make user a project admin
# helpers.call_action('ckanext_project_admin_add', context={},
# username=user['name'])
#
# factories.Dataset(type='project', name='my-project')
#
# app.get('/project/delete/my-project', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_anon_user_cant_view_addtoproject_dropdown_dataset_project_list(self):
# '''
# An anonymous user can't view the 'Add to project' dropdown selector
# from a datasets project list page.
# '''
# app = self._get_test_app()
#
# factories.Dataset(name='my-project', type='project')
# factories.Dataset(name='my-dataset')
#
# project_list_response = app.get('/dataset/projects/my-dataset', status=200)
#
# nosetools.assert_false('project-add' in project_list_response.forms)
#
# def test_auth_normal_user_cant_view_addtoproject_dropdown_dataset_project_list(self):
# '''
# A normal (logged in) user can't view the 'Add to project' dropdown
# selector from a datasets project list page.
# '''
# user = factories.User()
# app = self._get_test_app()
#
# factories.Dataset(name='my-project', type='project')
# factories.Dataset(name='my-dataset')
#
# project_list_response = app.get('/dataset/projects/my-dataset', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# nosetools.assert_false('project-add' in project_list_response.forms)
#
# def test_auth_sysadmin_can_view_addtoproject_dropdown_dataset_project_list(self):
# '''
# A sysadmin can view the 'Add to project' dropdown selector from a
# datasets project list page.
# '''
# user = factories.Sysadmin()
# app = self._get_test_app()
#
# factories.Dataset(name='my-project', type='project')
# factories.Dataset(name='my-dataset')
#
# project_list_response = app.get('/dataset/projects/my-dataset', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# nosetools.assert_true('project-add' in project_list_response.forms)
#
# def test_auth_project_admin_can_view_addtoproject_dropdown_dataset_project_list(self):
# '''
# A project admin can view the 'Add to project' dropdown selector from
# a datasets project list page.
# '''
# app = self._get_test_app()
# user = factories.User()
#
# # Make user a project admin
# helpers.call_action('ckanext_project_admin_add', context={},
# username=user['name'])
#
# factories.Dataset(name='my-project', type='project')
# factories.Dataset(name='my-dataset')
#
# project_list_response = app.get('/dataset/projects/my-dataset', status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# nosetools.assert_true('project-add' in project_list_response.forms)
#
#
# class TestprojectPackageAssociationCreate(helpers.FunctionalTestBase):
#
# def test_project_package_association_create_no_user(self):
# '''
# Calling project package association create with no user raises
# NotAuthorized.
# '''
#
# context = {'user': None, 'model': None}
# nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
# 'ckanext_project_package_association_create',
# context=context)
#
# def test_project_package_association_create_sysadmin(self):
# '''
# Calling project package association create by a sysadmin doesn't
# raise NotAuthorized.
# '''
# a_sysadmin = factories.Sysadmin()
# context = {'user': a_sysadmin['name'], 'model': None}
# helpers.call_auth('ckanext_project_package_association_create',
# context=context)
#
# def test_project_package_association_create_project_admin(self):
# '''
# Calling project package association create by a project admin
# doesn't raise NotAuthorized.
# '''
# project_admin = factories.User()
#
# # Make user a project admin
# helpers.call_action('ckanext_project_admin_add', context={},
# username=project_admin['name'])
#
# context = {'user': project_admin['name'], 'model': None}
# helpers.call_auth('ckanext_project_package_association_create',
# context=context)
#
# def test_project_package_association_create_unauthorized_creds(self):
# '''
# Calling project package association create with unauthorized user
# raises NotAuthorized.
# '''
# not_a_sysadmin = factories.User()
# context = {'user': not_a_sysadmin['name'], 'model': None}
# nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
# 'ckanext_project_package_association_create',
# context=context)
#
#
# class TestprojectPackageAssociationDelete(helpers.FunctionalTestBase):
#
# def test_project_package_association_delete_no_user(self):
# '''
# Calling project package association delete with no user raises
# NotAuthorized.
# '''
#
# context = {'user': None, 'model': None}
# nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
# 'ckanext_project_package_association_delete',
# context=context)
#
# def test_project_package_association_delete_sysadmin(self):
# '''
# Calling project package association delete by a sysadmin doesn't
# raise NotAuthorized.
# '''
# a_sysadmin = factories.Sysadmin()
# context = {'user': a_sysadmin['name'], 'model': None}
# helpers.call_auth('ckanext_project_package_association_delete',
# context=context)
#
# def test_project_package_association_delete_project_admin(self):
# '''
# Calling project package association delete by a project admin
# doesn't raise NotAuthorized.
# '''
# project_admin = factories.User()
#
# # Make user a project admin
# helpers.call_action('ckanext_project_admin_add', context={},
# username=project_admin['name'])
#
# context = {'user': project_admin['name'], 'model': None}
# helpers.call_auth('ckanext_project_package_association_delete',
# context=context)
#
# def test_project_package_association_delete_unauthorized_creds(self):
# '''
# Calling project package association delete with unauthorized user
# raises NotAuthorized.
# '''
# not_a_sysadmin = factories.User()
# context = {'user': not_a_sysadmin['name'], 'model': None}
# nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
# 'ckanext_project_package_association_delete',
# context=context)
#
#
# class TestprojectAdminAddAuth(helpers.FunctionalTestBase):
#
# def test_project_admin_add_no_user(self):
# '''
# Calling project admin add with no user raises NotAuthorized.
# '''
#
# context = {'user': None, 'model': None}
# nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
# 'ckanext_project_admin_add', context=context)
#
# def test_project_admin_add_correct_creds(self):
# '''
# Calling project admin add by a sysadmin doesn't raise
# NotAuthorized.
# '''
# a_sysadmin = factories.Sysadmin()
# context = {'user': a_sysadmin['name'], 'model': None}
# helpers.call_auth('ckanext_project_admin_add', context=context)
#
# def test_project_admin_add_unauthorized_creds(self):
# '''
# Calling project admin add with unauthorized user raises
# NotAuthorized.
# '''
# not_a_sysadmin = factories.User()
# context = {'user': not_a_sysadmin['name'], 'model': None}
# nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
# 'ckanext_project_admin_add', context=context)
#
#
# class TestprojectAdminRemoveAuth(helpers.FunctionalTestBase):
#
# def test_project_admin_remove_no_user(self):
# '''
# Calling project admin remove with no user raises NotAuthorized.
# '''
#
# context = {'user': None, 'model': None}
# nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
# 'ckanext_project_admin_remove', context=context)
#
# def test_project_admin_remove_correct_creds(self):
# '''
# Calling project admin remove by a sysadmin doesn't raise
# NotAuthorized.
# '''
# a_sysadmin = factories.Sysadmin()
# context = {'user': a_sysadmin['name'], 'model': None}
# helpers.call_auth('ckanext_project_admin_remove', context=context)
#
# def test_project_admin_remove_unauthorized_creds(self):
# '''
# Calling project admin remove with unauthorized user raises
# NotAuthorized.
# '''
# not_a_sysadmin = factories.User()
# context = {'user': not_a_sysadmin['name'], 'model': None}
# nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
# 'ckanext_project_admin_remove', context=context)
#
#
# class TestprojectAdminListAuth(helpers.FunctionalTestBase):
#
# def test_project_admin_list_no_user(self):
# '''
# Calling project admin list with no user raises NotAuthorized.
# '''
#
# context = {'user': None, 'model': None}
# nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
# 'ckanext_project_admin_list', context=context)
#
# def test_project_admin_list_correct_creds(self):
# '''
# Calling project admin list by a sysadmin doesn't raise
# NotAuthorized.
# '''
# a_sysadmin = factories.Sysadmin()
# context = {'user': a_sysadmin['name'], 'model': None}
# helpers.call_auth('ckanext_project_admin_list', context=context)
#
# def test_project_admin_list_unauthorized_creds(self):
# '''
# Calling project admin list with unauthorized user raises
# NotAuthorized.
# '''
# not_a_sysadmin = factories.User()
# context = {'user': not_a_sysadmin['name'], 'model': None}
# nosetools.assert_raises(toolkit.NotAuthorized, helpers.call_auth,
# 'ckanext_project_admin_list', context=context)
#
#
# class TestprojectAuthManageprojectAdmins(helpers.FunctionalTestBase):
#
# def test_auth_anon_user_cant_view_project_admin_manage_page(self):
# '''
# An anon (not logged in) user can't access the manage project admin
# page.
# '''
# app = self._get_test_app()
# app.get("/project/new", status=302)
#
# def test_auth_logged_in_user_cant_view_project_admin_manage_page(self):
# '''
# A logged in user can't access the manage project admin page.
# '''
# app = self._get_test_app()
# user = factories.User()
# app.get("/project/new", status=401,
# extra_environ={'REMOTE_USER': str(user['name'])})
#
# def test_auth_sysadmin_can_view_project_admin_manage_page(self):
# '''
# A sysadmin can access the manage project admin page.
# '''
# app = self._get_test_app()
# user = factories.Sysadmin()
# app.get("/project/new", status=200,
# extra_environ={'REMOTE_USER': str(user['name'])})
"""This module tests SyntaxErrors.
Here's an example of the sort of thing that is tested.
>>> def f(x):
... global x
Traceback (most recent call last):
SyntaxError: name 'x' is parameter and global
The tests all raise SyntaxErrors. They were created by checking
each C call that raises SyntaxError. There are several modules that
raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and
symtable.c.
The parser itself outlaws a lot of invalid syntax. None of these
errors are tested here at the moment. We should add some tests; since
there are infinitely many programs with invalid syntax, we would need
to be judicious in selecting some.
The compiler generates a synthetic module name for code executed by
doctest. Since all the code comes from the same module, a suffix like
[1] is appended to the module name. As a consequence, changing the
order of tests in this module means renumbering all the errors after
it. (Maybe we should enable the ellipsis option for these tests.)
In ast.c, syntax errors are raised by calling ast_error().
Errors from set_context():
>>> obj.None = 1
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> None = 1
Traceback (most recent call last):
SyntaxError: assignment to keyword
It's a syntax error to assign to the empty tuple. Why isn't it an
error to assign to the empty list? It will always raise some error at
runtime.
>>> () = 1
Traceback (most recent call last):
SyntaxError: can't assign to ()
>>> f() = 1
Traceback (most recent call last):
SyntaxError: can't assign to function call
>>> del f()
Traceback (most recent call last):
SyntaxError: can't delete function call
>>> a + 1 = 2
Traceback (most recent call last):
SyntaxError: can't assign to operator
>>> (x for x in x) = 1
Traceback (most recent call last):
SyntaxError: can't assign to generator expression
>>> 1 = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> "abc" = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> b"" = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> `1` = 1
Traceback (most recent call last):
SyntaxError: invalid syntax
If the left-hand side of an assignment is a list or tuple, an illegal
expression inside that container should still cause a syntax error.
This test just checks a couple of cases rather than enumerating all of
them.
>>> (a, "b", c) = (1, 2, 3)
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> [a, b, c + 1] = [1, 2, 3]
Traceback (most recent call last):
SyntaxError: can't assign to operator
>>> a if 1 else b = 1
Traceback (most recent call last):
SyntaxError: can't assign to conditional expression
From compiler_complex_args():
>>> def f(None=1):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_arguments():
>>> def f(x, y=1, z):
... pass
Traceback (most recent call last):
SyntaxError: non-default argument follows default argument
>>> def f(x, None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> def f(*None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> def f(**None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_funcdef():
>>> def None(x):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_call():
>>> def f(it, *varargs):
... return list(it)
>>> L = range(10)
>>> f(x for x in L)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(x for x in L, 1)
Traceback (most recent call last):
SyntaxError: Generator expression must be parenthesized if not sole argument
>>> f((x for x in L), 1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... i244, i245, i246, i247, i248, i249, i250, i251, i252,
... i253, i254, i255)
Traceback (most recent call last):
SyntaxError: more than 255 arguments
The actual error check counts positional arguments, keyword arguments,
and generator expression arguments separately. This test combines the
three.
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... (x for x in i244), i245, i246, i247, i248, i249, i250, i251,
... i252=1, i253=1, i254=1, i255=1)
Traceback (most recent call last):
SyntaxError: more than 255 arguments
>>> f(lambda x: x[0] = 3)
Traceback (most recent call last):
SyntaxError: lambda cannot contain assignment
The grammar accepts any test (basically, any expression) in the
keyword slot of a call site. Test a few different options.
>>> f(x()=2)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
>>> f(a or b=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
>>> f(x.y=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
More set_context():
>>> (x for x in x) += 1
Traceback (most recent call last):
SyntaxError: can't assign to generator expression
>>> None += 1
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> f() += 1
Traceback (most recent call last):
SyntaxError: can't assign to function call
Test continue in finally in weird combinations.
continue in for loop under finally should be ok.
>>> def test():
... try:
... pass
... finally:
... for abc in range(10):
... continue
... print(abc)
>>> test()
9
Start simple, a continue in a finally should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
This is essentially a continue in a finally which should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... try:
... continue
... except:
... pass
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... try:
... continue
... finally:
... pass
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try: pass
... finally:
... try:
... pass
... except:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
There is one test for a break that is not in a loop. The compiler
uses a single data structure to keep track of try-finally and loops,
so we need to be sure that a break is actually inside a loop. If it
isn't, there should be a syntax error.
>>> try:
... print(1)
... break
... print(2)
... finally:
... print(3)
Traceback (most recent call last):
...
SyntaxError: 'break' outside loop
This should probably raise a better error than a SystemError (or none at all).
In 2.5 there was a missing exception and an assert was triggered in a debug
build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514
>>> while 1:
... while 2:
... while 3:
... while 4:
... while 5:
... while 6:
... while 8:
... while 9:
... while 10:
... while 11:
... while 12:
... while 13:
... while 14:
... while 15:
... while 16:
... while 17:
... while 18:
... while 19:
... while 20:
... while 21:
... while 22:
... break
Traceback (most recent call last):
...
SystemError: too many statically nested blocks
Misuse of the nonlocal statement can lead to a few unique syntax errors.
>>> def f(x):
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: name 'x' is parameter and nonlocal
>>> def f():
... global x
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: name 'x' is nonlocal and global
>>> def f():
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: no binding for nonlocal 'x' found
From SF bug #1705365
>>> nonlocal x
Traceback (most recent call last):
...
SyntaxError: nonlocal declaration not allowed at module level
TODO(jhylton): Figure out how to test SyntaxWarning with doctest.
## >>> def f(x):
## ... def f():
## ... print(x)
## ... nonlocal x
## Traceback (most recent call last):
## ...
## SyntaxWarning: name 'x' is assigned to before nonlocal declaration
## >>> def f():
## ... x = 1
## ... nonlocal x
## Traceback (most recent call last):
## ...
## SyntaxWarning: name 'x' is assigned to before nonlocal declaration
This tests assignment-context; there was a bug in Python 2.5 where compiling
a complex 'if' (one with 'elif') would fail to notice an invalid suite,
leading to spurious errors.
>>> if 1:
... x() = 1
... elif 1:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... x() = 1
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... x() = 1
... elif 1:
... pass
... else:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... x() = 1
... else:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... pass
... else:
... x() = 1
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
Make sure that the old "raise X, Y[, Z]" form is gone:
>>> raise X, Y
Traceback (most recent call last):
...
SyntaxError: invalid syntax
>>> raise X, Y, Z
Traceback (most recent call last):
...
SyntaxError: invalid syntax
>>> f(a=23, a=234)
Traceback (most recent call last):
...
SyntaxError: keyword argument repeated
>>> del ()
Traceback (most recent call last):
SyntaxError: can't delete ()
>>> {1, 2, 3} = 42
Traceback (most recent call last):
SyntaxError: can't assign to literal
Corner-cases that used to fail to raise the correct error:
>>> def f(*, x=lambda __debug__:0): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(*args:(lambda __debug__:0)): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(**kwargs:(lambda __debug__:0)): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> with (lambda *:0): pass
Traceback (most recent call last):
SyntaxError: named arguments must follow bare *
Corner-cases that used to crash:
>>> def f(**__debug__): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(*xx, __debug__): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
""" |
# FIXME: to be fixed... does not work as of today
# import unittest
# from datetime import datetime, timedelta
# from DIRAC.ResourceStatusSystem.Utilities.mock import Mock
# from DIRAC.Core.LCG.GOCDBClient import GOCDBClient
# from DIRAC.Core.LCG.SLSClient import *
# from DIRAC.Core.LCG.SAMResultsClient import *
# from DIRAC.Core.LCG.GGUSTicketsClient import GGUSTicketsClient
# #from DIRAC.ResourceStatusSystem.Utilities.Exceptions import *
# #from DIRAC.ResourceStatusSystem.Utilities.Utils import *
#
# #############################################################################
#
# class ClientsTestCase(unittest.TestCase):
# """ Base class for the clients test cases
# """
# def setUp(self):
#
# from DIRAC.Core.Base.Script import parseCommandLine
# parseCommandLine()
#
# self.mockRSS = Mock()
#
# self.GOCCli = GOCDBClient()
# self.SLSCli = SLSClient()
# self.SAMCli = SAMResultsClient()
# self.GGUSCli = GGUSTicketsClient()
#
# #############################################################################
#
# class GOCDBClientSuccess(ClientsTestCase):
#
# def test__downTimeXMLParsing(self):
# now = datetime.utcnow().replace(microsecond = 0, second = 0)
# tomorrow = datetime.utcnow().replace(microsecond = 0, second = 0) + timedelta(hours = 24)
# inAWeek = datetime.utcnow().replace(microsecond = 0, second = 0) + timedelta(days = 7)
#
# nowLess12h = str( now - timedelta(hours = 12) )[:-3]
# nowPlus8h = str( now + timedelta(hours = 8) )[:-3]
# nowPlus24h = str( now + timedelta(hours = 24) )[:-3]
# nowPlus40h = str( now + timedelta(hours = 40) )[:-3]
# nowPlus50h = str( now + timedelta(hours = 50) )[:-3]
# nowPlus60h = str( now + timedelta(hours = 60) )[:-3]
#
# XML_site_ongoing = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus24h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
# XML_node_ongoing = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505455" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><HOSTNAME>egse-cresco.portici.enea.it</HOSTNAME><HOSTED_BY>GRISU-ENEA-GRID</HOSTED_BY><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus24h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
# XML_nodesite_ongoing = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505455" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><HOSTNAME>egse-cresco.portici.enea.it</HOSTNAME><HOSTED_BY>GRISU-ENEA-GRID</HOSTED_BY><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus8h+'</FORMATED_END_DATE></DOWNTIME><DOWNTIME ID="78505456" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus24h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
#
# XML_site_startingIn8h = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus8h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus24h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
# XML_node_startingIn8h = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505455" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><HOSTNAME>egse-cresco.portici.enea.it</HOSTNAME><HOSTED_BY>GRISU-ENEA-GRID</HOSTED_BY><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus8h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus24h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
#
# XML_site_ongoing_and_site_starting_in_24_hours = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G1" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus8h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME><DOWNTIME ID="78505457" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE 2</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus24h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus40h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
#
# XML_site_startingIn24h_and_site_startingIn50h = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G1" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus24h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus40h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME><DOWNTIME ID="78505457" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus50h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus60h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
#
# XML_site_ongoing_and_other_site_starting_in_24_hours = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G1" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus8h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME><DOWNTIME ID="78505457" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>CERN-PROD</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE 2</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus24h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus40h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
# XML_node_ongoing_and_other_node_starting_in_24_hours = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G1" CLASSIFICATION="SCHEDULED"><HOSTNAME>egse-cresco.portici.enea.it</HOSTNAME><HOSTED_BY>GRISU-ENEA-GRID</HOSTED_BY><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems RESOURCE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus8h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME><DOWNTIME ID="78505457" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><HOSTNAME>ce112.cern.ch</HOSTNAME><HOSTED_BY>CERN-PROD</HOSTED_BY><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems RESOURCE 2</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus24h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus40h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
#
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing, 'Site')
# self.assertEqual(res.keys()[0], '28490G0 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G0 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
#
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing, 'Resource')
# self.assertEqual(res.keys()[0], '28490G0 egse-cresco.portici.enea.it')
# self.assertEqual(res['28490G0 egse-cresco.portici.enea.it']['HOSTNAME'], 'egse-cresco.portici.enea.it')
# self.assertEqual(res['28490G0 egse-cresco.portici.enea.it']['HOSTED_BY'], 'GRISU-ENEA-GRID')
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing, 'Resource')
# self.assertEqual(res, {})
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing, 'Site')
# self.assertEqual(res, {})
#
# res = self.GOCCli._downTimeXMLParsing(XML_nodesite_ongoing, 'Site')
# self.assertEqual(len(res), 1)
# self.assertEqual(res.keys()[0], '28490G0 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G0 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
#
# res = self.GOCCli._downTimeXMLParsing(XML_nodesite_ongoing, 'Resource')
# self.assertEqual(len(res), 1)
# self.assertEqual(res.keys()[0], '28490G0 egse-cresco.portici.enea.it')
# self.assertEqual(res['28490G0 egse-cresco.portici.enea.it']['HOSTNAME'], 'egse-cresco.portici.enea.it')
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_startingIn8h, 'Site', None, now)
# self.assertEqual(res, {})
# res = self.GOCCli._downTimeXMLParsing(XML_node_startingIn8h, 'Resource', None, now)
# self.assertEqual(res, {})
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_site_starting_in_24_hours, 'Site', None, now)
# self.assertEqual(res.keys()[0], '28490G1 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G1 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_site_starting_in_24_hours, 'Resource', None, now)
# self.assertEqual(res, {})
# res = self.GOCCli._downTimeXMLParsing(XML_site_startingIn24h_and_site_startingIn50h, 'Site', None, now)
# self.assertEqual(res, {})
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_startingIn24h_and_site_startingIn50h, 'Site', None, tomorrow)
# self.assertEqual(res.keys()[0], '28490G1 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G1 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', ['GRISU-ENEA-GRID'])
# self.assertEqual(res.keys()[0], '28490G1 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G1 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', ['GRISU-ENEA-GRID', 'CERN-PROD'])
# self.assert_('28490G1 GRISU-ENEA-GRID' in res.keys())
# self.assert_('28490G0 CERN-PROD' in res.keys())
# self.assertEqual(res['28490G1 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
# self.assertEqual(res['28490G0 CERN-PROD']['SITENAME'], 'CERN-PROD')
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', 'CERN-PROD')
# self.assertEqual(res.keys()[0], '28490G0 CERN-PROD')
# self.assertEqual(res['28490G0 CERN-PROD']['SITENAME'], 'CERN-PROD')
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', 'CNAF-T1')
# self.assertEqual(res, {})
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', ['GRISU-ENEA-GRID', 'CERN-PROD'], now)
# self.assertEqual(res.keys()[0], '28490G1 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G1 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', ['GRISU-ENEA-GRID', 'CERN-PROD'], inAWeek)
# self.assertEqual(res.keys()[0], '28490G0 CERN-PROD')
# self.assertEqual(res['28490G0 CERN-PROD']['SITENAME'], 'CERN-PROD')
#
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', ['egse-cresco.portici.enea.it'])
# self.assertEqual(res.keys()[0], '28490G1 egse-cresco.portici.enea.it')
# self.assertEqual(res['28490G1 egse-cresco.portici.enea.it']['HOSTNAME'], 'egse-cresco.portici.enea.it')
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', ['egse-cresco.portici.enea.it', 'ce112.cern.ch'])
# self.assert_('28490G1 egse-cresco.portici.enea.it' in res.keys())
# self.assert_('28490G0 ce112.cern.ch' in res.keys())
# self.assertEqual(res['28490G1 egse-cresco.portici.enea.it']['HOSTNAME'], 'egse-cresco.portici.enea.it')
# self.assertEqual(res['28490G0 ce112.cern.ch']['HOSTNAME'], 'ce112.cern.ch')
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', 'ce112.cern.ch')
# self.assertEqual(res.keys()[0], '28490G0 ce112.cern.ch')
# self.assertEqual(res['28490G0 ce112.cern.ch']['HOSTNAME'], 'ce112.cern.ch')
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', 'grid0.fe.infn.it')
# self.assertEqual(res, {})
#
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', ['egse-cresco.portici.enea.it', 'ce112.cern.ch'], now)
# self.assert_('28490G1 egse-cresco.portici.enea.it' in res.keys())
# self.assertEqual(res['28490G1 egse-cresco.portici.enea.it']['HOSTNAME'], 'egse-cresco.portici.enea.it')
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', ['egse-cresco.portici.enea.it', 'ce112.cern.ch'], inAWeek)
# self.assertEqual(res.keys()[0], '28490G0 ce112.cern.ch')
# self.assertEqual(res['28490G0 ce112.cern.ch']['HOSTNAME'], 'ce112.cern.ch')
#
#
# def test_getStatus(self):
# for granularity in ('Site', 'Resource'):
# res = self.GOCCli.getStatus(granularity, 'XX')['Value']
# self.assertEqual(res, None)
# res = self.GOCCli.getStatus(granularity, 'XX', datetime.utcnow())['Value']
# self.assertEqual(res, None)
# res = self.GOCCli.getStatus(granularity, 'XX', datetime.utcnow(), 12)['Value']
# self.assertEqual(res, None)
#
# res = self.GOCCli.getStatus('Site', 'pic')['Value']
# self.assertEqual(res, None)
#
# def test_getServiceEndpointInfo(self):
# for granularity in ('hostname', 'sitename', 'roc',
# 'country', 'service_type', 'monitored'):
# res = self.GOCCli.getServiceEndpointInfo(granularity, 'XX')['Value']
# self.assertEqual(res, [])
#
# #############################################################################
#
# class SAMResultsClientSuccess(ClientsTestCase):
#
# def test_getStatus(self):
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA')['Value']
# self.assertEqual(res, {'SS':'ok'})
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA', ['ver'])['Value']
# self.assertEqual(res, {'ver':'ok'})
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA', ['LHCb CE-lhcb-os', 'PilotRole'])['Value']
# self.assertEqual(res, {'PilotRole':'ok', 'LHCb CE-lhcb-os':'ok'})
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA', ['wrong'])['Value']
# self.assertEqual(res, None)
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA', ['ver', 'wrong'])['Value']
# self.assertEqual(res, {'ver':'ok'})
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA')['Value']
# self.assertEqual(res, {'SS':'ok'})
#
# res = self.SAMCli.getStatus('Site', 'INFN-FERRARA')['Value']
# self.assertEqual(res, {'SiteStatus':'ok'})
#
# #############################################################################
#
# #class SAMResultsClientFailure(ClientsTestCase):
# #
# # def test_getStatus(self):
# # self.failUnlessRaises(NoSAMTests, self.SAMCli.getStatus, 'Resource', 'XX', 'INFN-FERRARA')
#
# #############################################################################
#
# class SLSClientSuccess(ClientsTestCase):
#
# def test_getAvailabilityStatus(self):
# res = self.SLSCli.getAvailabilityStatus('RAL-LHCb_FAILOVER')['Value']
# self.assertEqual(res, 100)
#
# def test_getServiceInfo(self):
# res = self.SLSCli.getServiceInfo('CASTORLHCB_LHCBMDST', ["Volume to be recallled GB"])['Value']
# self.assertEqual(res["Volume to be recallled GB"], 0.0)
#
# #############################################################################
#
# #class SLSClientFailure(ClientsTestCase):
# #
# # def test_getStatus(self):
# # self.failUnlessRaises(NoServiceException, self.SLSCli.getAvailabilityStatus, 'XX')
#
# #############################################################################
#
# class GGUSTicketsClientSuccess(ClientsTestCase):
#
# def test_getTicketsList(self):
# res = self.GGUSCli.getTicketsList('INFN-CAGLIARI')['Value']
# self.assertEqual(res[0]['open'], 0)
#
#
# #############################################################################
#
# if __name__ == '__main__':
# suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClientsTestCase)
# suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(GOCDBClientSuccess))
# # suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(GOCDBClient_Failure))
# suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(SAMResultsClientSuccess))
# # suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(SAMResultsClientFailure))
# suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(SLSClientSuccess))
# # suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(SLSClientFailure))
# suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(GGUSTicketsClientSuccess))
# testResult = unittest.TextTestRunner(verbosity=2).run(suite)
|
"""
This is a procedural interface to the matplotlib object-oriented
plotting library.
The following plotting commands are provided; the majority have
Matlab(TM) analogs and similar arguments.
_Plotting commands
acorr - plot the autocorrelation function
annotate - annotate something in the figure
arrow - add an arrow to the axes
axes - Create a new axes
axhline - draw a horizontal line across axes
axvline - draw a vertical line across axes
axhspan - draw a horizontal bar across axes
axvspan - draw a vertical bar across axes
axis - Set or return the current axis limits
bar - make a bar chart
barh - a horizontal bar chart
broken_barh - a set of horizontal bars with gaps
box - set the axes frame on/off state
boxplot - make a box and whisker plot
cla - clear current axes
clabel - label a contour plot
clf - clear a figure window
clim - adjust the color limits of the current image
close - close a figure window
colorbar - add a colorbar to the current figure
cohere - make a plot of coherence
contour - make a contour plot
contourf - make a filled contour plot
csd - make a plot of cross spectral density
delaxes - delete an axes from the current figure
draw - Force a redraw of the current figure
errorbar - make an errorbar graph
figlegend - make legend on the figure rather than the axes
figimage - make a figure image
figtext - add text in figure coords
figure - create or change active figure
fill - make filled polygons
findobj - recursively find all objects matching some criteria
gca - return the current axes
gcf - return the current figure
gci - get the current image, or None
getp - get a handle graphics property
grid - set whether gridding is on
hist - make a histogram
hold - set the axes hold state
ioff - turn interaction mode off
ion - turn interaction mode on
isinteractive - return True if interaction mode is on
imread - load image file into array
imshow - plot image data
ishold - return the hold state of the current axes
legend - make an axes legend
loglog - a log log plot
matshow - display a matrix in a new figure preserving aspect
pcolor - make a pseudocolor plot
pcolormesh - make a pseudocolor plot using a quadrilateral mesh
pie - make a pie chart
plot - make a line plot
plot_date - plot dates
plotfile - plot column data from an ASCII tab/space/comma delimited file
polar - make a polar plot on a PolarAxes
psd - make a plot of power spectral density
quiver - make a direction field (arrows) plot
rc - control the default params
rgrids - customize the radial grids and labels for polar
savefig - save the current figure
scatter - make a scatter plot
setp - set a handle graphics property
semilogx - log x axis
semilogy - log y axis
show - show the figures
specgram - a spectrogram plot
spy - plot sparsity pattern using markers or image
stem - make a stem plot
subplot - make a subplot (numrows, numcols, axesnum)
subplots_adjust - change the params controlling the subplot positions of current figure
subplot_tool - launch the subplot configuration tool
suptitle - add a figure title
table - add a table to the plot
text - add some text at location x,y to the current axes
thetagrids - customize the radial theta grids and labels for polar
title - add a title to the current axes
xcorr - plot the cross correlation of x and y
xlim - set/get the xlimits
ylim - set/get the ylimits
xticks - set/get the xticks
yticks - set/get the yticks
xlabel - add an xlabel to the current axes
ylabel - add a ylabel to the current axes
autumn - set the default colormap to autumn
bone - set the default colormap to bone
cool - set the default colormap to cool
copper - set the default colormap to copper
flag - set the default colormap to flag
gray - set the default colormap to gray
hot - set the default colormap to hot
hsv - set the default colormap to hsv
jet - set the default colormap to jet
pink - set the default colormap to pink
prism - set the default colormap to prism
spring - set the default colormap to spring
summer - set the default colormap to summer
winter - set the default colormap to winter
spectral - set the default colormap to spectral
_Event handling
connect - register an event handler
disconnect - remove a connected event handler
_Matrix commands
cumprod - the cumulative product along a dimension
cumsum - the cumulative sum along a dimension
detrend - remove the mean or best fit line from an array
diag - the k-th diagonal of matrix
diff - the n-th difference of an array
eig - the eigenvalues and eigen vectors of v
eye - a matrix where the k-th diagonal is ones, else zero
find - return the indices where a condition is nonzero
fliplr - flip a matrix left/right (reverses column order)
flipud - flip a matrix up/down (reverses row order)
linspace - a linear spaced vector of N values from min to max inclusive
logspace - a log spaced vector of N values from min to max inclusive
meshgrid - repeat x and y to make regular matrices
ones - an array of ones
rand - an array from the uniform distribution [0,1]
randn - an array from the normal distribution
rot90 - rotate matrix k*90 degrees counterclockwise
squeeze - squeeze an array removing any dimensions of length 1
tri - a triangular matrix
tril - a lower triangular matrix
triu - an upper triangular matrix
vander - the Vandermonde matrix of vector x
svd - singular value decomposition
zeros - a matrix of zeros
_Probability
levypdf - The levy probability density function from the char. func.
normpdf - The Gaussian probability density function
rand - random numbers from the uniform distribution
randn - random numbers from the normal distribution
_Statistics
corrcoef - correlation coefficient
cov - covariance matrix
amax - the maximum along dimension m
mean - the mean along dimension m
median - the median along dimension m
amin - the minimum along dimension m
norm - the norm of vector x
prod - the product along dimension m
ptp - the max-min along dimension m
std - the standard deviation along dimension m
asum - the sum along dimension m
_Time series analysis
bartlett - M-point Bartlett window
blackman - M-point Blackman window
cohere - the coherence using average periodogram
csd - the cross spectral density using average periodogram
fft - the fast Fourier transform of vector x
hamming - M-point Hamming window
hanning - M-point Hanning window
hist - compute the histogram of x
kaiser - M length Kaiser window
psd - the power spectral density using average periodogram
sinc - the sinc function of array x
_Dates
date2num - convert python datetimes to numeric representation
drange - create an array of numbers for date plots
num2date - convert numeric type (float days since 0001) to datetime
_Other
angle - the angle of a complex array
griddata - interpolate irregularly distributed data to a regular grid
load - load ASCII data into array
polyfit - fit x, y to an n-th order polynomial
polyval - evaluate an n-th order polynomial
roots - the roots of the polynomial coefficients in p
save - save an array to an ASCII file
trapz - trapezoidal integration
__end
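As a brief illustration of the procedural style described above, a minimal
sketch (an illustration only; assumes an interactive matplotlib backend is
available): ::

    from pylab import plot, xlabel, ylabel, title, show

    plot([1, 2, 3], [4, 2, 5])       # line plot of y versus x
    xlabel('x')
    ylabel('y')
    title('a minimal pylab figure')
    show()                           # display the figure window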
""" |
#
# XML-RPC CLIENT LIBRARY
# $Id$
#
# an XML-RPC client interface for Python.
#
# the marshalling and response parser code can also be used to
# implement XML-RPC servers.
#
# Notes:
# this version is designed to work with Python 2.1 or newer.
#
# History:
# 1999-01-14 fl Created
# 1999-01-15 fl Changed dateTime to use localtime
# 1999-01-16 fl Added Binary/base64 element, default to RPC2 service
# 1999-01-19 fl Fixed array data element (from Skip Montanaro)
# 1999-01-21 fl Fixed dateTime constructor, etc.
# 1999-02-02 fl Added fault handling, handle empty sequences, etc.
# 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro)
# 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8)
# 2000-11-28 fl Changed boolean to check the truth value of its argument
# 2001-02-24 fl Added encoding/Unicode/SafeTransport patches
# 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1)
# 2001-03-28 fl Make sure response tuple is a singleton
# 2001-03-29 fl Don't require empty params element (from NAME)
# 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2)
# 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME)
# 2001-09-03 fl Allow Transport subclass to override getparser
# 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup)
# 2001-10-01 fl Remove containers from memo cache when done with them
# 2001-10-01 fl Use faster escape method (80% dumps speedup)
# 2001-10-02 fl More dumps microtuning
# 2001-10-04 fl Make sure import expat gets a parser (from NAME)
# 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow
# 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems)
# 2001-11-12 fl Use repr() to marshal doubles (from NAME)
# 2002-03-17 fl Avoid buffered read when possible (from NAME)
# 2002-04-07 fl Added pythondoc comments
# 2002-04-16 fl Added __str__ methods to datetime/binary wrappers
# 2002-05-15 fl Added error constants (from NAME)
# 2002-06-27 fl Merged with Python CVS version
# 2002-10-22 fl Added basic authentication (based on code from NAME)
# 2003-01-22 sm Add support for the bool type
# 2003-02-27 gvr Remove apply calls
# 2003-04-24 sm Use cStringIO if available
# 2003-04-25 ak Add support for nil
# 2003-06-15 gn Add support for time.struct_time
# 2003-07-12 gp Correct marshalling of Faults
# 2003-10-31 mvl Add multicall support
# 2004-08-20 mvl Bump minimum supported Python version to 2.1
#
# Copyright (c) 1999-2002 by Secret Labs AB.
# Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com
#
# --------------------------------------------------------------------
# The XML-RPC client interface is
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
|
"""
Filename: ddp.py
Author: NAME
Module for solving dynamic programs (also known as Markov decision
processes) with finite states and actions.
Discrete Dynamic Programming
----------------------------
A discrete dynamic program consists of the following components:
* finite set of states :math:`S = \{0, \ldots, n-1\}`;
* finite set of available actions :math:`A(s)` for each state :math:`s
\in S` with :math:`A = \bigcup_{s \in S} A(s) = \{0, \ldots, m-1\}`,
where :math:`\mathit{SA} = \{(s, a) \in S \times A \mid a \in A(s)\}`
is the set of feasible state-action pairs;
* reward function :math:`r\colon \mathit{SA} \to \mathbb{R}`, where
:math:`r(s, a)` is the reward when the current state is :math:`s` and
the action chosen is :math:`a`;
* transition probability function :math:`q\colon \mathit{SA} \to
\Delta(S)`, where :math:`q(s'|s, a)` is the probability that the state
in the next period is :math:`s'` when the current state is :math:`s`
and the action chosen is :math:`a`; and
* discount factor :math:`\beta \in [0, 1)`.
For a policy function :math:`\sigma`, let :math:`r_{\sigma}` and
:math:`Q_{\sigma}` be the reward vector and the transition probability
matrix for :math:`\sigma`, which are defined by :math:`r_{\sigma}(s) =
r(s, \sigma(s))` and :math:`Q_{\sigma}(s, s') = q(s'|s, \sigma(s))`,
respectively. The policy value function :math:`v_{\sigma}` for
:math:`\sigma` is defined by
.. math::
v_{\sigma}(s) = \sum_{t=0}^{\infty}
\beta^t (Q_{\sigma}^t r_{\sigma})(s)
\quad (s \in S).
The *optimal value function* :math:`v^*` is the function such that
:math:`v^*(s) = \max_{\sigma} v_{\sigma}(s)` for all :math:`s \in S`. A
policy function :math:`\sigma^*` is *optimal* if :math:`v_{\sigma^*}(s) =
v^*(s)` for all :math:`s \in S`.
The *Bellman equation* is written as
.. math::
v(s) = \max_{a \in A(s)} r(s, a)
+ \beta \sum_{s' \in S} q(s'|s, a) v(s') \quad (s \in S).
The *Bellman operator* :math:`T` is defined by the right hand side of
the Bellman equation:
.. math::
(T v)(s) = \max_{a \in A(s)} r(s, a)
+ \beta \sum_{s' \in S} q(s'|s, a) v(s') \quad (s \in S).
For a policy function :math:`\sigma`, the operator :math:`T_{\sigma}` is
defined by
.. math::
(T_{\sigma} v)(s) = r(s, \sigma(s))
+ \beta \sum_{s' \in S} q(s'|s, \sigma(s)) v(s')
\quad (s \in S),
or :math:`T_{\sigma} v = r_{\sigma} + \beta Q_{\sigma} v`.
The main result of the theory of dynamic programming states that the
optimal value function :math:`v^*` is the unique solution to the Bellman
equation, or the unique fixed point of the Bellman operator, and that
:math:`\sigma^*` is an optimal policy function if and only if it is
:math:`v^*`-greedy, i.e., it satisfies :math:`T v^* = T_{\sigma^*} v^*`.
Solution Algorithms
-------------------
The `DiscreteDP` class currently implements the following solution
algorithms:
* value iteration;
* policy iteration;
* modified policy iteration.
Policy iteration computes an exact optimal policy in finitely many
iterations, while value iteration and modified policy iteration return
an :math:`\varepsilon`-optimal policy and an
:math:`\varepsilon/2`-approximation of the optimal value function for a
prespecified value of :math:`\varepsilon`.
Our implementations of value iteration and modified policy iteration
employ the norm-based and span-based termination rules, respectively.
* Value iteration is terminated when the condition :math:`\lVert T v - v
\rVert < [(1 - \beta) / (2\beta)] \varepsilon` is satisfied.
* Modified policy iteration is terminated when the condition
:math:`\mathrm{span}(T v - v) < [(1 - \beta) / \beta] \varepsilon` is
satisfied, where :math:`\mathrm{span}(z) = \max(z) - \min(z)`.
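For concreteness, a minimal sketch of value iteration under the norm-based
rule above (an illustration only, not the `DiscreteDP` implementation;
assumes a dense formulation with ``R`` an ``(n, m)`` reward array using
``-np.inf`` for infeasible state-action pairs, ``Q`` an ``(n, m, n)`` array
of transition probabilities, and :math:`0 < \beta < 1`): ::

    import numpy as np

    def value_iteration(R, Q, beta, epsilon=1e-6, max_iter=10000):
        # Hypothetical helper; terminates when ||Tv - v|| falls below
        # [(1 - beta) / (2 beta)] * epsilon, the norm-based rule above.
        v = np.zeros(R.shape[0])
        tol = epsilon * (1 - beta) / (2 * beta)
        for _ in range(max_iter):
            Tv = (R + beta * Q.dot(v)).max(axis=1)   # Bellman operator T
            if np.max(np.abs(Tv - v)) < tol:
                v = Tv
                break
            v = Tv
        sigma = (R + beta * Q.dot(v)).argmax(axis=1)  # v-greedy policy
        return v, sigma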
References
----------
M. NAME Markov Decision Processes: Discrete Stochastic Dynamic
Programming, Wiley-Interscience, 2005.
""" |
"""
>>> from mat import Mat
>>> from vec import Vec
>>> from GF2 import one
No operations should mutate the input matrices, except setitem.
For getitem(M,k):
>>> M = Mat(({1,3,5}, {'a'}), {(1,'a'):4, (5,'a'): 2})
>>> M[1,'a']
4
>>> M[3,'a']
0
Make sure your operations work on other fields, like GF(2).
>>> M = Mat((set(range(1000)), {'e',' '}), {(500, ' '): one, (255, 'e'): 0})
>>> M[500, ' ']
one
>>> M[500, 'e']
0
>>> M[255, 'e']
0
>>> M == Mat((set(range(1000)), {'e',' '}), {(500, ' '): one, (255, 'e'): 0})
True
For setitem(M,k,val)
>>> M = Mat(({'a','b','c'}, {5}), {('a', 5):3, ('b', 5):7})
>>> M['b', 5] = 9
>>> M['c', 5] = 13
>>> M == Mat(({'a','b','c'}, {5}), {('a', 5):3, ('b', 5):9, ('c',5):13})
True
Make sure your operations work with bizarre and unordered keys.
>>> N = Mat(({((),), 7}, {True, False}), {})
>>> N[(7, False)] = 1
>>> N[(((),), True)] = 2
>>> N == Mat(({((),), 7}, {True, False}), {(7,False):1, (((),), True):2})
True
For add(A, B):
>>> A1 = Mat(({3, 6}, {'x','y'}), {(3,'x'):-2, (6,'y'):3})
>>> A2 = Mat(({3, 6}, {'x','y'}), {(3,'y'):4})
>>> B = Mat(({3, 6}, {'x','y'}), {(3,'x'):-2, (3,'y'):4, (6,'y'):3})
>>> A1 + A2 == B
True
>>> A2 + A1 == B
True
>>> A1 == Mat(({3, 6}, {'x','y'}), {(3,'x'):-2, (6,'y'):3})
True
>>> zero = Mat(({3,6}, {'x','y'}), {})
>>> B + zero == B
True
>>> C1 = Mat(({1,3}, {2,4}), {(1,2):2, (3,4):3})
>>> C2 = Mat(({1,3}, {2,4}), {(1,4):1, (1,2):4})
>>> D = Mat(({1,3}, {2,4}), {(1,2):6, (1,4):1, (3,4):3})
>>> C1 + C2 == D
True
For scalar_mul(M, x):
>>> M = Mat(({1,3,5}, {2,4}), {(1,2):4, (5,4):2, (3,4):3})
>>> 0*M == Mat(({1, 3, 5}, {2, 4}), {})
True
>>> 1*M == M
True
>>> 0.25*M == Mat(({1,3,5}, {2,4}), {(1,2):1.0, (5,4):0.5, (3,4):0.75})
True
>>> M = Mat(({1,2,3},{4,5,6}), {(1,4):one, (3,5):one, (2,5): 0})
>>> one * M == Mat(({1,2,3},{4,5,6}), {(1,4):one, (3,5):one, (2,5): 0})
True
>>> 0 * M == Mat(({1,2,3},{4,5,6}), {})
True
For equal(A, B):
>>> Mat(({'a','b'}, {0,1}), {('a',1):0}) == Mat(({'a','b'}, {0,1}), {('b',1):0})
True
>>> A = Mat(({'a','b'}, {0,1}), {('a',1):2, ('b',0):1})
>>> B = Mat(({'a','b'}, {0,1}), {('a',1):2, ('b',0):1, ('b',1):0})
>>> C = Mat(({'a','b'}, {0,1}), {('a',1):2, ('b',0):1, ('b',1):5})
>>> A == B
True
>>> A == C
False
>>> A == Mat(({'a','b'}, {0,1}), {('a',1):2, ('b',0):1})
True
For transpose(M):
>>> M = Mat(({0,1}, {0,1}), {(0,1):3, (1,0):2, (1,1):4})
>>> M.transpose() == Mat(({0,1}, {0,1}), {(0,1):2, (1,0):3, (1,1):4})
True
>>> M = Mat(({'x','y','z'}, {2,4}), {('x',4):3, ('x',2):2, ('y',4):4, ('z',4):5})
>>> Mt = Mat(({2,4}, {'x','y','z'}), {(4,'x'):3, (2,'x'):2, (4,'y'):4, (4,'z'):5})
>>> M.transpose() == Mt
True
For vector_matrix_mul(v, M):
>>> v1 = Vec({1, 2, 3}, {1: 1, 2: 8})
>>> M1 = Mat(({1, 2, 3}, {1, 2, 3}), {(1, 2): 2, (2, 1):-1, (3, 1): 1, (3, 3): 7})
>>> v1*M1 == Vec({1, 2, 3},{1: -8, 2: 2, 3: 0})
True
>>> v1 == Vec({1, 2, 3}, {1: 1, 2: 8})
True
>>> M1 == Mat(({1, 2, 3}, {1, 2, 3}), {(1, 2): 2, (2, 1):-1, (3, 1): 1, (3, 3): 7})
True
>>> v2 = Vec({'a','b'}, {})
>>> M2 = Mat(({'a','b'}, {0, 2, 4, 6, 7}), {})
>>> v2*M2 == Vec({0, 2, 4, 6, 7},{})
True
For matrix_vector_mul(M, v):
>>> N1 = Mat(({1, 3, 5, 7}, {'a', 'b'}), {(1, 'a'): -1, (1, 'b'): 2, (3, 'a'): 1, (3, 'b'):4, (7, 'a'): 3, (5, 'b'):-1})
>>> u1 = Vec({'a', 'b'}, {'a': 1, 'b': 2})
>>> N1*u1 == Vec({1, 3, 5, 7},{1: 3, 3: 9, 5: -2, 7: 3})
True
>>> N1 == Mat(({1, 3, 5, 7}, {'a', 'b'}), {(1, 'a'): -1, (1, 'b'): 2, (3, 'a'): 1, (3, 'b'):4, (7, 'a'): 3, (5, 'b'):-1})
True
>>> u1 == Vec({'a', 'b'}, {'a': 1, 'b': 2})
True
>>> N2 = Mat(({('a', 'b'), ('c', 'd')}, {1, 2, 3, 5, 8}), {})
>>> u2 = Vec({1, 2, 3, 5, 8}, {})
>>> N2*u2 == Vec({('a', 'b'), ('c', 'd')},{})
True
For matrix_matrix_mul(A, B):
>>> A = Mat(({0,1,2}, {0,1,2}), {(1,1):4, (0,0):0, (1,2):1, (1,0):5, (0,1):3, (0,2):2})
>>> B = Mat(({0,1,2}, {0,1,2}), {(1,0):5, (2,1):3, (1,1):2, (2,0):0, (0,0):1, (0,1):4})
>>> A*B == Mat(({0,1,2}, {0,1,2}), {(0,0):15, (0,1):12, (1,0):25, (1,1):31})
True
>>> C = Mat(({0,1,2}, {'a','b'}), {(0,'a'):4, (0,'b'):-3, (1,'a'):1, (2,'a'):1, (2,'b'):-2})
>>> D = Mat(({'a','b'}, {'x','y'}), {('a','x'):3, ('a','y'):-2, ('b','x'):4, ('b','y'):-1})
>>> C*D == Mat(({0,1,2}, {'x','y'}), {(0,'y'):-5, (1,'x'):3, (1,'y'):-2, (2,'x'):-5})
True
>>> M = Mat(({0, 1}, {'a', 'c', 'b'}), {})
>>> N = Mat(({'a', 'c', 'b'}, {(1, 1), (2, 2)}), {})
>>> M*N == Mat(({0,1}, {(1,1), (2,2)}), {})
True
""" |
#!/usr/bin/env python
# ***** BEGIN LICENSE BLOCK *****
# Version: MPL 1.1/GPL 2.0/LGPL 2.1
#
# The contents of this file are subject to the Mozilla Public License Version
# 1.1 (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
# http://www.mozilla.org/MPL/
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
# for the specific language governing rights and limitations under the
# License.
#
# The Original Code is font utility code.
#
# The Initial Developer of the Original Code is Mozilla Corporation.
# Portions created by the Initial Developer are Copyright (C) 2009
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# NAME <EMAIL>
#
# Alternatively, the contents of this file may be used under the terms of
# either the GNU General Public License Version 2 or later (the "GPL"), or
# the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
# in which case the provisions of the GPL or the LGPL are applicable instead
# of those above. If you wish to allow use of your version of this file only
# under the terms of either the GPL or the LGPL, and not to allow others to
# use your version of this file under the terms of the MPL, indicate your
# decision by deleting the provisions above and replace them with the notice
# and other provisions required by the GPL or the LGPL. If you do not delete
# the provisions above, a recipient may use your version of this file under
# the terms of any one of the MPL, the GPL or the LGPL.
#
# ***** END LICENSE BLOCK *****
# eotlitetool.py - create EOT version of OpenType font for use with IE
#
# Usage: eotlitetool.py [-o output-filename] font1 [font2 ...]
#
# OpenType file structure
# http://www.microsoft.com/typography/otspec/otff.htm
#
# Types:
#
# BYTE 8-bit unsigned integer.
# CHAR 8-bit signed integer.
# USHORT 16-bit unsigned integer.
# SHORT 16-bit signed integer.
# ULONG 32-bit unsigned integer.
# Fixed 32-bit signed fixed-point number (16.16)
# LONGDATETIME Date represented in number of seconds since 12:00 midnight, January 1, 1904. The value is represented as a signed 64-bit integer.
#
# SFNT Header
#
# Fixed sfnt version // 0x00010000 for version 1.0.
# USHORT numTables // Number of tables.
# USHORT searchRange // (Maximum power of 2 <= numTables) x 16.
# USHORT entrySelector // Log2(maximum power of 2 <= numTables).
# USHORT rangeShift // NumTables x 16-searchRange.
#
# Table Directory
#
# ULONG tag // 4-byte identifier.
# ULONG checkSum // CheckSum for this table.
# ULONG offset // Offset from beginning of TrueType font file.
# ULONG length // Length of this table.
#
# OS/2 Table (Version 4)
#
# USHORT version // 0x0004
# SHORT xAvgCharWidth
# USHORT usWeightClass
# USHORT usWidthClass
# USHORT fsType
# SHORT ySubscriptXSize
# SHORT ySubscriptYSize
# SHORT ySubscriptXOffset
# SHORT ySubscriptYOffset
# SHORT ySuperscriptXSize
# SHORT ySuperscriptYSize
# SHORT ySuperscriptXOffset
# SHORT ySuperscriptYOffset
# SHORT yStrikeoutSize
# SHORT yStrikeoutPosition
# SHORT sFamilyClass
# BYTE panose[10]
# ULONG ulUnicodeRange1 // Bits 0-31
# ULONG ulUnicodeRange2 // Bits 32-63
# ULONG ulUnicodeRange3 // Bits 64-95
# ULONG ulUnicodeRange4 // Bits 96-127
# CHAR achVendID[4]
# USHORT fsSelection
# USHORT usFirstCharIndex
# USHORT usLastCharIndex
# SHORT sTypoAscender
# SHORT sTypoDescender
# SHORT sTypoLineGap
# USHORT usWinAscent
# USHORT usWinDescent
# ULONG ulCodePageRange1 // Bits 0-31
# ULONG ulCodePageRange2 // Bits 32-63
# SHORT sxHeight
# SHORT sCapHeight
# USHORT usDefaultChar
# USHORT usBreakChar
# USHORT usMaxContext
#
#
# The Naming Table is organized as follows:
#
# [name table header]
# [name records]
# [string data]
#
# Name Table Header
#
# USHORT format // Format selector (=0).
# USHORT count // Number of name records.
# USHORT stringOffset // Offset to start of string storage (from start of table).
#
# Name Record
#
# USHORT platformID // Platform ID.
# USHORT encodingID // Platform-specific encoding ID.
# USHORT languageID // Language ID.
# USHORT nameID // Name ID.
# USHORT length // String length (in bytes).
# USHORT offset // String offset from start of storage area (in bytes).
#
# head Table
#
# Fixed tableVersion // Table version number 0x00010000 for version 1.0.
# Fixed fontRevision // Set by font manufacturer.
# ULONG checkSumAdjustment // To compute: set it to 0, sum the entire font as ULONG, then store 0xB1B0AFBA - sum.
# ULONG magicNumber // Set to 0x5F0F3CF5.
# USHORT flags
# USHORT unitsPerEm // Valid range is from 16 to 16384. This value should be a power of 2 for fonts that have TrueType outlines.
# LONGDATETIME created // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# LONGDATETIME modified // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# SHORT xMin // For all glyph bounding boxes.
# SHORT yMin
# SHORT xMax
# SHORT yMax
# USHORT macStyle
# USHORT lowestRecPPEM // Smallest readable size in pixels.
# SHORT fontDirectionHint
# SHORT indexToLocFormat // 0 for short offsets, 1 for long.
# SHORT glyphDataFormat // 0 for current format.
#
#
#
# Embedded OpenType (EOT) file format
# http://www.w3.org/Submission/EOT/
#
# EOT version 0x00020001
#
# An EOT font consists of a header with the original OpenType font
# appended at the end. Most of the data in the EOT header is simply a
# copy of data from specific tables within the font data. The exceptions
# are the 'Flags' field and the root string name field. The root string
# is a set of names indicating domains for which the font data can be
# used. A null root string implies the font data can be used anywhere.
# The EOT header is in little-endian byte order but the font data remains
# in big-endian order as specified by the OpenType spec.
#
# Overall structure:
#
# [EOT header]
# [EOT name records]
# [font data]
#
# EOT header
#
# ULONG eotSize // Total structure length in bytes (including string and font data)
# ULONG fontDataSize // Length of the OpenType font (FontData) in bytes
# ULONG version // Version number of this format - 0x00020001
# ULONG flags // Processing Flags (0 == no special processing)
# BYTE fontPANOSE[10] // OS/2 Table panose
# BYTE charset // DEFAULT_CHARSET (0x01)
# BYTE italic // 0x01 if ITALIC in OS/2 Table fsSelection is set, 0 otherwise
# ULONG weight // OS/2 Table usWeightClass
# USHORT fsType // OS/2 Table fsType (specifies embedding permission flags)
# USHORT magicNumber // Magic number for EOT file - 0x504C.
# ULONG unicodeRange1 // OS/2 Table ulUnicodeRange1
# ULONG unicodeRange2 // OS/2 Table ulUnicodeRange2
# ULONG unicodeRange3 // OS/2 Table ulUnicodeRange3
# ULONG unicodeRange4 // OS/2 Table ulUnicodeRange4
# ULONG codePageRange1 // OS/2 Table ulCodePageRange1
# ULONG codePageRange2 // OS/2 Table ulCodePageRange2
# ULONG checkSumAdjustment // head Table CheckSumAdjustment
# ULONG reserved[4] // Reserved - must be 0
# USHORT padding1 // Padding - must be 0
#
# EOT name records
#
# USHORT FamilyNameSize // Font family name size in bytes
# BYTE FamilyName[FamilyNameSize] // Font family name (name ID = 1), little-endian UTF-16
# USHORT Padding2 // Padding - must be 0
#
# USHORT StyleNameSize // Style name size in bytes
# BYTE StyleName[StyleNameSize] // Style name (name ID = 2), little-endian UTF-16
# USHORT Padding3 // Padding - must be 0
#
# USHORT VersionNameSize // Version name size in bytes
# bytes VersionName[VersionNameSize] // Version name (name ID = 5), little-endian UTF-16
# USHORT Padding4 // Padding - must be 0
#
# USHORT FullNameSize // Full name size in bytes
# BYTE FullName[FullNameSize] // Full name (name ID = 4), little-endian UTF-16
# USHORT Padding5 // Padding - must be 0
#
# USHORT RootStringSize // Root string size in bytes
# BYTE RootString[RootStringSize] // Root string, little-endian UTF-16
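#
# As a rough illustration of the structures above (a sketch only; the helper
# names read_sfnt_directory and pack_eot_name are hypothetical, not part of
# the original tool), the big-endian SFNT directory can be read and a
# little-endian EOT name record packed with the struct module:

import struct

def read_sfnt_directory(data):
    # SFNT header: Fixed version plus four USHORT fields, big-endian.
    version, num_tables, search_range, entry_selector, range_shift = \
        struct.unpack('>I4H', data[:12])
    tables = {}
    for i in range(num_tables):
        # Table directory entry: tag, checkSum, offset, length (16 bytes each).
        tag, checksum, offset, length = struct.unpack(
            '>4sLLL', data[12 + i * 16:28 + i * 16])
        tables[tag] = (checksum, offset, length)
    return version, tables

def pack_eot_name(text):
    # EOT name record: USHORT size, UTF-16LE string data, USHORT padding (0).
    raw = text.encode('utf-16-le')
    return struct.pack('<H', len(raw)) + raw + struct.pack('<H', 0)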
|
"""
=============
Miscellaneous
=============
IEEE 754 Floating Point Special Values:
-----------------------------------------------
Special values defined in numpy: nan, inf.
NaNs can be used as a poor-man's mask (if you don't care what the
original value was)
Note: cannot use equality to test NaNs. E.g.: ::
>>> myarr = np.array([1., 0., np.nan, 3.])
>>> np.where(myarr == np.nan)
>>> np.nan == np.nan # is always False! Use special numpy functions instead.
False
>>> myarr[myarr == np.nan] = 0. # doesn't work
>>> myarr
array([ 1., 0., NaN, 3.])
>>> myarr[np.isnan(myarr)] = 0. # use this instead
>>> myarr
array([ 1., 0., 0., 3.])
Other related special value functions: ::
isinf(): True if value is inf
isfinite(): True if not nan or inf
nan_to_num(): Map nan to 0, inf to max float, -inf to min float
The following corresponds to the usual functions except that nans are excluded
from the results: ::
nansum()
nanmax()
nanmin()
nanargmax()
nanargmin()
>>> x = np.arange(10.)
>>> x[3] = np.nan
>>> x.sum()
nan
>>> np.nansum(x)
42.0
How numpy handles numerical exceptions:
------------------------------------------
The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow``
and ``'ignore'`` for ``underflow``. But this can be changed, and it can be
set individually for different kinds of exceptions. The different behaviors
are:
- 'ignore' : Take no action when the exception occurs.
- 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module).
- 'raise' : Raise a `FloatingPointError`.
- 'call' : Call a function specified using the `seterrcall` function.
- 'print' : Print a warning directly to ``stdout``.
- 'log' : Record error in a Log object specified by `seterrcall`.
These behaviors can be set for all kinds of errors or specific ones:
- all : apply to all numeric exceptions
- invalid : when NaNs are generated
- divide : divide by zero (for integers as well!)
- overflow : floating point overflows
- underflow : floating point underflows
Note that integer divide-by-zero is handled by the same machinery.
These behaviors are set on a per-thread basis.
Examples:
------------
::
>>> oldsettings = np.seterr(all='warn')
>>> np.zeros(5,dtype=np.float32)/0.
invalid value encountered in divide
>>> j = np.seterr(under='ignore')
>>> np.array([1.e-100])**10
>>> j = np.seterr(invalid='raise')
>>> np.sqrt(np.array([-1.]))
FloatingPointError: invalid value encountered in sqrt
>>> def errorhandler(errstr, errflag):
... print "saw stupid error!"
>>> np.seterrcall(errorhandler)
<function errorhandler at 0x...>
>>> j = np.seterr(all='call')
>>> np.zeros(5, dtype=np.int32)/0
FloatingPointError: invalid value encountered in divide
saw stupid error!
>>> j = np.seterr(**oldsettings) # restore previous
... # error-handling settings
Interfacing to C:
-----------------
Only a survey of the choices. Little detail on how each works.
1) Bare metal, wrap your own C-code manually.
- Plusses:
- Efficient
- No dependencies on other tools
- Minuses:
- Lots of learning overhead:
- need to learn basics of Python C API
- need to learn basics of numpy C API
- need to learn how to handle reference counting and love it.
- Reference counting often difficult to get right.
- getting it wrong leads to memory leaks, and worse, segfaults
- API will change for Python 3.0!
2) pyrex
- Plusses:
- avoid learning C API's
- no dealing with reference counting
- can code in pseudo Python and generate C code
- can also interface to existing C code
- should shield you from changes to Python C api
- has become pretty popular within the Python community
- Minuses:
- Can write code in non-standard form which may become obsolete
- Not as flexible as manual wrapping
- Maintainers not easily adaptable to new features
Thus:
3) cython - fork of pyrex to allow needed features for SAGE
- being considered as the standard scipy/numpy wrapping tool
- fast indexing support for arrays
4) ctypes
- Plusses:
- part of Python standard library
- good for interfacing to existing sharable libraries, particularly
Windows DLLs
- avoids API/reference counting issues
- good numpy support: arrays have all these in their ctypes
attribute (a short usage sketch follows this survey): ::
a.ctypes.data a.ctypes.get_strides
a.ctypes.data_as a.ctypes.shape
a.ctypes.get_as_parameter a.ctypes.shape_as
a.ctypes.get_data a.ctypes.strides
a.ctypes.get_shape a.ctypes.strides_as
- Minuses:
- can't use for writing code to be turned into C extensions, only a wrapper
tool.
5) SWIG (automatic wrapper generator)
- Plusses:
- around a long time
- multiple scripting language support
- C++ support
- Good for wrapping large (many functions) existing C libraries
- Minuses:
- generates lots of code between Python and the C code
- can cause performance problems that are nearly impossible to optimize
out
- interface files can be hard to write
- doesn't necessarily avoid reference counting issues or needing to know
API's
6) Weave
- Plusses:
- Phenomenal tool
- can turn many numpy expressions into C code
- dynamic compiling and loading of generated C code
- can embed pure C code in Python module and have weave extract, generate
interfaces and compile, etc.
- Minuses:
- Future uncertain--lacks a champion
7) Psyco
- Plusses:
- Turns pure python into efficient machine code through jit-like
optimizations
- very fast when it optimizes well
- Minuses:
- Only on intel (windows?)
- Doesn't do much for numpy?
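As a brief illustration of the ctypes route surveyed in 4) above, a minimal
sketch (the shared library ``libdemo.so`` and the C function
``scale(double *data, size_t n, double factor)`` are hypothetical): ::

    import ctypes
    import numpy as np

    lib = ctypes.CDLL('libdemo.so')   # hypothetical shared library
    a = np.arange(5, dtype=np.float64)
    # Pass the array's buffer to C without copying.
    lib.scale(a.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
              ctypes.c_size_t(a.size), ctypes.c_double(2.0))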
Interfacing to Fortran:
-----------------------
Fortran: Clear choice is f2py. (Pyfort is an older alternative, but not
supported any longer)
Interfacing to C++:
-------------------
1) CXX
2) Boost.python
3) SWIG
4) Sage has used cython to wrap C++ (not pretty, but it can be done)
5) SIP (used mainly in PyQT)
""" |
"""
=====================================
Structured Arrays (and Record Arrays)
=====================================
Introduction
============
Numpy provides powerful capabilities to create arrays of structs or records.
These arrays permit one to manipulate the data by the structs or by fields of
the struct. A simple example will show what is meant: ::
>>> x = np.zeros((2,),dtype=('i4,f4,a10'))
>>> x[:] = [(1,2.,'Hello'),(2,3.,"World")]
>>> x
array([(1, 2.0, 'Hello'), (2, 3.0, 'World')],
dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
Here we have created a one-dimensional array of length 2. Each element of
this array is a record that contains three items, a 32-bit integer, a 32-bit
float, and a string of length 10 or less. If we index this array at the second
position we get the second record: ::
>>> x[1]
(2,3.,"World")
Conveniently, one can access any field of the array by indexing using the
string that names that field. In this case the fields have received the
default names 'f0', 'f1' and 'f2'. ::
>>> y = x['f1']
>>> y
array([ 2., 3.], dtype=float32)
>>> y[:] = 2*y
>>> y
array([ 4., 6.], dtype=float32)
>>> x
array([(1, 4.0, 'Hello'), (2, 6.0, 'World')],
dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
In these examples, y is a simple float array consisting of the 2nd field
in the record. But, rather than being a copy of the data in the structured
array, it is a view, i.e., it shares exactly the same memory locations.
Thus, when we updated this array by doubling its values, the structured
array shows the corresponding values as doubled as well. Likewise, if one
changes the record, the field view also changes: ::
>>> x[1] = (-1,-1.,"Master")
>>> x
array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')],
dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
>>> y
array([ 4., -1.], dtype=float32)
Defining Structured Arrays
==========================
One defines a structured array through the dtype object. There are
**several** alternative ways to define the fields of a record. Some of
these variants provide backward compatibility with Numeric, numarray, or
another module, and should not be used except for such purposes. These
will be so noted. One specifies record structure in
one of four alternative ways, using an argument (as supplied to a dtype
function keyword or a dtype object constructor itself). This
argument must be one of the following: 1) string, 2) tuple, 3) list, or
4) dictionary. Each of these is briefly described below.
1) String argument (as used in the above examples).
In this case, the constructor expects a comma-separated list of type
specifiers, optionally with extra shape information.
The type specifiers can take 4 different forms: ::
a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f2, f4, f8, c8, c16, a<n>
(representing bytes, ints, unsigned ints, floats, complex and
fixed length strings of specified byte lengths)
b) int8,...,uint8,...,float16, float32, float64, complex64, complex128
(this time with bit sizes)
c) older Numeric/numarray type specifications (e.g. Float32).
Don't use these in new code!
d) Single character type specifiers (e.g. H for unsigned short ints).
Avoid using these unless you must. Details can be found in the
Numpy book
These different styles can be mixed within the same string (but why would you
want to do that?). Furthermore, each type specifier can be prefixed
with a repetition number, or a shape. In these cases an array
element is created, i.e., an array within a record. That array
is still referred to as a single field. An example: ::
>>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64')
>>> x
array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])],
dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))])
Using strings to define the record structure precludes naming the
fields in the original definition. The names can, however, be changed
later, as shown below.
2) Tuple argument: The only relevant tuple case that applies to record
structures is when a structure is mapped to an existing data type. This
is done by pairing in a tuple, the existing data type with a matching
dtype definition (using any of the variants being described here). As
an example (using a definition using a list, so see 3) for further
details): ::
>>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')]))
>>> x
array([0, 0, 0])
>>> x['r']
array([0, 0, 0], dtype=uint8)
In this case, an array is produced that looks and acts like a simple int32 array,
but also has definitions for fields that use only one byte of the int32 (a bit
like Fortran equivalencing).
3) List argument: In this case the record structure is defined with a list of
tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field
('' is permitted), 2) the type of the field, and 3) the shape (optional).
For example::
>>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
>>> x
array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])],
dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))])
4) Dictionary argument: two different forms are permitted. The first consists
of a dictionary with two required keys ('names' and 'formats'), each having an
equal sized list of values. The format list contains any type/shape specifier
allowed in other contexts. The names must be strings. There are two optional
keys: 'offsets' and 'titles'. Each must be a correspondingly matching list to
the required two where offsets contain integer offsets for each field, and
titles are objects containing metadata for each field (these do not have
to be strings), where the value of None is permitted. As an example: ::
>>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[('col1', '>i4'), ('col2', '>f4')])
The other dictionary form permitted is a dictionary of name keys with tuple
values specifying type, offset, and an optional title. ::
>>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')])
Accessing and modifying field names
===================================
The field names are an attribute of the dtype object defining the record structure.
For the last example: ::
>>> x.dtype.names
('col1', 'col2')
>>> x.dtype.names = ('x', 'y')
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')])
>>> x.dtype.names = ('x', 'y', 'z') # wrong number of names
<type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2
Accessing field titles
====================================
The field titles provide a standard place to put associated info for fields.
They do not have to be strings. ::
>>> x.dtype.fields['x'][2]
'title 1'
Accessing multiple fields at once
====================================
You can access multiple fields at once using a list of field names: ::
>>> x = np.array([(1.5,2.5,(1.0,2.0)),(3.,4.,(4.,5.)),(1.,3.,(2.,6.))],
dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
Notice that `x` is created with a list of tuples. ::
>>> x[['x','y']]
array([(1.5, 2.5), (3.0, 4.0), (1.0, 3.0)],
dtype=[('x', '<f4'), ('y', '<f4')])
>>> x[['x','value']]
array([(1.5, [[1.0, 2.0], [1.0, 2.0]]), (3.0, [[4.0, 5.0], [4.0, 5.0]]),
(1.0, [[2.0, 6.0], [2.0, 6.0]])],
dtype=[('x', '<f4'), ('value', '<f4', (2, 2))])
The fields are returned in the order they are asked for.::
>>> x[['y','x']]
array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)],
dtype=[('y', '<f4'), ('x', '<f4')])
Filling structured arrays
=========================
Structured arrays can be filled by field or row by row. ::
>>> arr = np.zeros((5,), dtype=[('var1','f8'),('var2','f8')])
>>> arr['var1'] = np.arange(5)
If you fill it in row by row, it takes a tuple
(but not a list or array!)::
>>> arr[0] = (10,20)
>>> arr
array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)],
dtype=[('var1', '<f8'), ('var2', '<f8')])
More information
====================================
You can find some more information on recarrays and structured arrays
(including the difference between the two) `here
<http://www.scipy.org/Cookbook/Recarray>`_.
""" |
"""
==============
Array Creation
==============
Introduction
============
There are 5 general mechanisms for creating arrays:
1) Conversion from other Python structures (e.g., lists, tuples)
2) Intrinsic numpy array array creation objects (e.g., arange, ones, zeros,
etc.)
3) Reading arrays from disk, either from standard or custom formats
4) Creating arrays from raw bytes through the use of strings or buffers
5) Use of special library functions (e.g., random)
This section will not cover means of replicating, joining, or otherwise
expanding or mutating existing arrays. Nor will it cover creating object
arrays or record arrays. Both of those are covered in their own sections.
Converting Python array_like Objects to Numpy Arrays
====================================================
In general, numerical data arranged in an array-like structure in Python can
be converted to arrays through the use of the array() function. The most
obvious examples are lists and tuples. See the documentation for array() for
details for its use. Some objects may support the array-protocol and allow
conversion to arrays this way. A simple way to find out if the object can be
converted to a numpy array using array() is simply to try it interactively and
see if it works! (The Python Way).
Examples: ::
>>> x = np.array([2,3,1,0])
>>> x = np.array([2, 3, 1, 0])
>>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types
>>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]])
Intrinsic Numpy Array Creation
==============================
Numpy has built-in functions for creating arrays from scratch:
zeros(shape) will create an array filled with 0 values with the specified
shape. The default dtype is float64. ::
>>> np.zeros((2, 3))
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])
ones(shape) will create an array filled with 1 values. It is identical to
zeros in all other respects.
arange() will create arrays with regularly incrementing values. Check the
docstring for complete information on the various ways it can be used. A few
examples will be given here: ::
>>> np.arange(10)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> np.arange(2, 10, dtype=float)
array([ 2., 3., 4., 5., 6., 7., 8., 9.])
>>> np.arange(2, 3, 0.1)
array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9])
Note that there are some subtleties regarding the last usage that the user
should be aware of that are described in the arange docstring.
linspace() will create arrays with a specified number of elements,
spaced equally between the specified beginning and end values. For
example: ::
>>> np.linspace(1., 4., 6)
array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ])
The advantage of this creation function is that one can guarantee the
number of elements and the starting and end point, which arange()
generally will not do for arbitrary start, stop, and step values.
indices() will create a set of arrays (stacked as a one-higher dimensioned
array), one per dimension with each representing variation in that dimension.
An example illustrates much better than a verbal description: ::
>>> np.indices((3,3))
array([[[0, 0, 0],
        [1, 1, 1],
        [2, 2, 2]],
       [[0, 1, 2],
        [0, 1, 2],
        [0, 1, 2]]])
This is particularly useful for evaluating functions of multiple dimensions on
a regular grid.
Reading Arrays From Disk
========================
This is presumably the most common case of large array creation. The details,
of course, depend greatly on the format of data on disk and so this section
can only give general pointers on how to handle various formats.
Standard Binary Formats
-----------------------
Various fields have standard formats for array data. The following lists the
ones with known python libraries to read them and return numpy arrays (there
may be others for which it is possible to read and convert to numpy arrays,
so check the last section as well).
::
HDF5: PyTables
FITS: PyFITS
Examples of formats that cannot be read directly, but for which conversion
is not hard, are those supported by libraries like PIL (able to read and
write many image formats such as jpg, png, etc.).
Common ASCII Formats
------------------------
Comma Separated Value files (CSV) are widely used (and an export and import
option for programs like Excel). There are a number of ways of reading these
files in Python. There are CSV functions in Python and functions in pylab
(part of matplotlib).
More generic ASCII files can be read using the io package in scipy.
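For example, NumPy itself can pull a simple numeric CSV file straight into
an array (a minimal sketch; 'data.csv' is a hypothetical file name): ::
>>> arr = np.genfromtxt('data.csv', delimiter=',')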
Custom Binary Formats
---------------------
There are a variety of approaches one can use. If the file has a relatively
simple format then one can write a simple I/O library and use the numpy
fromfile() function and .tofile() method to read and write numpy arrays
directly (mind your byteorder though!). If a good C or C++ library exists
that reads the data, one can wrap that library with a variety of
techniques, though that certainly is much more work and requires
significantly more advanced knowledge to interface with C or C++.
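As a minimal round-trip sketch (the file name is illustrative; note that
the raw file stores neither shape nor dtype, so both must be known when
reading it back): ::
>>> a = np.arange(5, dtype='<i4') # explicit little-endian int32
>>> a.tofile('data.raw')
>>> np.fromfile('data.raw', dtype='<i4')
array([0, 1, 2, 3, 4], dtype=int32)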
Use of Special Libraries
------------------------
There are libraries that can be used to generate arrays for special purposes
and it isn't possible to enumerate all of them. The most common uses are use
of the many array generation functions in random that can generate arrays of
random values, and some utility functions to generate special matrices (e.g.
diagonal).
""" |
# -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
# SKR03
# =====
# This module provides a German chart of accounts based on the SKR03.
# According to the current settings, the company is not subject to VAT.
# This default setting is very easy to change and, as a rule, requires an
# initial assignment of tax accounts to products and/or G/L accounts or to
# partners.
# The output taxes (full rate, reduced rate, and tax-exempt) should be
# stored in the product master data (depending on the applicable tax
# rules). The assignment is made on the Accounting tab (category:
# Umsatzsteuer / output tax).
# The input taxes (full rate, reduced rate, and tax-exempt) should likewise
# be stored in the product master data (depending on the applicable tax
# rules). The assignment is made on the Accounting tab (category:
# Vorsteuer / input tax).
# The taxes for imports from and exports to EU countries, as well as for
# purchases from and sales to third countries, should be stored on the
# partner (supplier/customer), depending on the supplier's or customer's
# country of origin. The assignment on the customer takes precedence over
# the assignment on products and overrides it in individual cases.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g., mapping
# 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU') so that this
# mapping can be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the input-tax base amount (e.g., input-tax base amount at the full
# rate of 19%).
# The tax amount appears under the category 'Vorsteuern' (input taxes),
# e.g., Vorsteuer 19%. Multidimensional hierarchies allow different
# positions to be aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the output-tax base amount (e.g., output-tax base amount at the full
# rate of 19%).
# The tax amount appears under the category 'Umsatzsteuer' (output taxes),
# e.g., Umsatzsteuer 19%. Multidimensional hierarchies allow different
# positions to be aggregated.
# The assigned tax codes can be reviewed at the level of each individual
# invoice (incoming and outgoing) and adjusted there if necessary.
# Credit notes result in a correction (offsetting entry) of the tax
# posting, in the form of a mirror-image posting.
# SKR04
# =====
# This module provides a German chart of accounts based on the SKR04.
# According to the current settings, the company is not subject to VAT,
# i.e., by default there is no assignment of products and G/L accounts to
# tax codes.
# This default setting is very easy to change and, as a rule, requires an
# initial assignment of tax codes to products and/or G/L accounts or to
# partners.
# The output taxes (full rate, reduced rate, and tax-exempt) should be
# stored in the product master data (depending on the applicable tax
# rules). The assignment is made on the Accounting tab (category:
# Umsatzsteuer / output tax).
# The input taxes (full rate, reduced rate, and tax-exempt) should likewise
# be stored in the product master data (depending on the applicable tax
# rules). The assignment is made on the Accounting tab (category:
# Vorsteuer / input tax).
# The taxes for imports from and exports to EU countries, as well as for
# purchases from and sales to third countries, should be stored on the
# partner (supplier/customer), depending on the supplier's or customer's
# country of origin. The assignment on the customer takes precedence over
# the assignment on products and overrides it in individual cases.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g., mapping
# 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU') so that this
# mapping can be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the input-tax base amount (e.g., input-tax base amount at the full
# rate of 19%).
# The tax amount appears under the category 'Vorsteuern' (input taxes),
# e.g., Vorsteuer 19%. Multidimensional hierarchies allow different
# positions to be aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the output-tax base amount (e.g., output-tax base amount at the full
# rate of 19%).
# The tax amount appears under the category 'Umsatzsteuer' (output taxes),
# e.g., Umsatzsteuer 19%. Multidimensional hierarchies allow different
# positions to be aggregated.
# The assigned tax codes can be reviewed at the level of each individual
# invoice (incoming and outgoing) and adjusted there if necessary.
# Credit notes result in a correction (offsetting entry) of the tax
# posting, in the form of a mirror-image posting.
|
"""Generic socket server classes.
This module tries to capture the various aspects of defining a server:
For socket-based servers:
- address family:
- AF_INET{,6}: IP (Internet Protocol) sockets (default)
- AF_UNIX: Unix domain sockets
- others, e.g. AF_DECNET are conceivable (see <socket.h>
- socket type:
- SOCK_STREAM (reliable stream, e.g. TCP)
- SOCK_DGRAM (datagrams, e.g. UDP)
For request-based servers (including socket-based):
- client address verification before further looking at the request
(This is actually a hook for any processing that needs to look
at the request before anything else, e.g. logging)
- how to handle multiple requests:
- synchronous (one request is handled at a time)
- forking (each request is handled by a new process)
- threading (each request is handled by a new thread)
The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server. This is bad class design, but it
saves some typing. (There's also the issue that a deep class hierarchy
slows down method lookups.)
There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:
+------------+
| BaseServer |
+------------+
|
v
+-----------+ +------------------+
| TCPServer |------->| UnixStreamServer |
+-----------+ +------------------+
|
v
+-----------+ +--------------------+
| UDPServer |------->| UnixDatagramServer |
+-----------+ +--------------------+
Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.
Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes. For
instance, a threading UDP server class is created as follows:
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.
To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method. You can then run
various versions of the service by combining one of the server classes
with your request handler class.
The request handler class must be different for datagram or stream
services. This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
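For illustration, a minimal stream echo service might look like this (a
sketch, assuming the Python 3 module name socketserver; the host, port,
and class names are illustrative):

    import socketserver

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # rfile/wfile are file-like objects wrapping the stream socket
            line = self.rfile.readline()
            self.wfile.write(line)

    server = socketserver.TCPServer(('localhost', 9999), EchoHandler)
    server.serve_forever()  # synchronous: one request at a time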
Of course, you still have to use your head!
For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child). In this case,
you can use a threading server, but you will probably have to use
locks to prevent two requests that arrive nearly simultaneously from
applying conflicting changes to the server state.
On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested. Here a threading or forking
server is appropriate.
In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data. This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.
Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request). This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
and encryption schemes
- Standard framework for select-based multiplexing
XXX Open problems:
- What to do with out-of-band data?
BaseServer:
- split generic "request" functionality out into BaseServer class.
Copyright (C) 2000 NAME <EMAIL>
example: read entries from a SQL database (requires overriding
get_request() to return a table entry from the database).
entry is processed by a RequestHandlerClass.
""" |
#!/usr/bin/env python
# (c) 2013, NAME <EMAIL>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
#
# Author: NAME <EMAIL>
#
# Description:
# This module queries local or remote Docker daemons and generates
# inventory information.
#
# This plugin does not support targeting of specific hosts using the --host
# flag. Instead, it queries the Docker API for each container, running
# or not, and returns this data all at once.
#
# The plugin returns the following custom attributes on Docker containers:
# docker_args
# docker_config
# docker_created
# docker_driver
# docker_exec_driver
# docker_host_config
# docker_hostname_path
# docker_hosts_path
# docker_id
# docker_image
# docker_name
# docker_network_settings
# docker_path
# docker_resolv_conf_path
# docker_state
# docker_volumes
# docker_volumes_rw
#
# Requirements:
# The docker-py module: https://github.com/dotcloud/docker-py
#
# Notes:
# A config file can be used to configure this inventory module, and there
# are several environment variables that can be set to modify the behavior
# of the plugin at runtime:
# DOCKER_CONFIG_FILE
# DOCKER_HOST
# DOCKER_VERSION
# DOCKER_TIMEOUT
# DOCKER_PRIVATE_SSH_PORT
# DOCKER_DEFAULT_IP
#
# Environment Variables:
# environment variable: DOCKER_CONFIG_FILE
# description:
# - A path to a Docker inventory hosts/defaults file in YAML format
# - A sample file has been provided, colocated with the inventory
# file called 'docker.yml'
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_HOST
# description:
# - The socket on which to connect to a Docker daemon API
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_VERSION
# description:
# - Version of the Docker API to use
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_TIMEOUT
# description:
# - Timeout in seconds for connections to Docker daemon API
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_PRIVATE_SSH_PORT
# description:
# - The private port (container port) on which SSH is listening
# for connections
# default: 22
# required: false
# environment variable: DOCKER_DEFAULT_IP
# description:
# - This environment variable overrides the container SSH connection
# IP address (aka, 'ansible_ssh_host')
#
# This option allows one to override the ansible_ssh_host whenever
# Docker has exercised its default behavior of binding private ports
# to all interfaces of the Docker host. This behavior, when dealing
# with remote Docker hosts, does not allow Ansible to determine
# a proper host IP address on which to connect via SSH to containers.
# By default, this inventory module assumes all IP_ADDRESS-exposed
# ports to be bound to localhost:<port>. To override this
# behavior, for example, to bind a container's SSH port to the public
# interface of its host, one must manually set this IP.
#
# It is preferable to begin launching Docker containers with
# ports exposed on publicly accessible IP addresses, particularly
# if the containers are to be targeted by Ansible for remote
# configuration, not accessible via localhost SSH connections.
#
# Docker containers can be explicitly exposed on IP addresses by
# a) starting the daemon with the --ip argument
# b) running containers with the -P/--publish ip::containerPort
# argument
# default: IP_ADDRESS if port exposed on IP_ADDRESS by Docker
# required: false
#
# Examples:
# Use the config file:
# DOCKER_CONFIG_FILE=./docker.yml docker.py --list
#
# Connect to docker instance on localhost port 4243
# DOCKER_HOST=tcp://localhost:4243 docker.py --list
#
# Any container's ssh port exposed on IP_ADDRESS will be mapped to
# another IP address (where Ansible will attempt to connect via SSH)
# DOCKER_DEFAULT_IP=IP_ADDRESS docker.py --list
|
"""
Introduction
============
SqlSoup provides a convenient way to access existing database tables without
having to declare table or mapper classes ahead of time. It is built on top of
the SQLAlchemy ORM and provides a super-minimalistic interface to an existing
database.
SqlSoup effectively provides a coarse grained, alternative interface to
working with the SQLAlchemy ORM, providing a "self configuring" interface
for extremely rudimental operations. It's somewhat akin to a
"super novice mode" version of the ORM. While SqlSoup can be very handy,
users are strongly encouraged to use the full ORM for non-trivial applications.
Suppose we have a database with users, books, and loans tables
(corresponding to the PyWebOff dataset, if you're curious).
Creating a SqlSoup gateway is just like creating an SQLAlchemy
engine::
>>> from sqlalchemy.ext.sqlsoup import SqlSoup
>>> db = SqlSoup('sqlite:///:memory:')
or, you can re-use an existing engine::
>>> db = SqlSoup(engine)
You can optionally specify a schema within the database for your
SqlSoup::
>>> db.schema = myschemaname
Loading objects
===============
Loading objects is as easy as this::
>>> users = db.users.all()
>>> users.sort()
>>> users
[MappedUsers(name=u'Joe NAME MappedUsers(name=u'Bhargan Basepair',email=u'EMAIL',password=u'basepair',classname=None,admin=1)]
Of course, letting the database do the sort is better::
>>> db.users.order_by(db.users.name).all()
[MappedUsers(name=u'Bhargan Basepair',email=u'EMAIL',password=u'basepair',classname=None,admin=1), MappedUsers(name=u'Joe NAME)]
Field access is intuitive::
>>> users[0].email
u'EMAIL'
Of course, you don't want to load all users very often. Let's add a
WHERE clause. Let's also switch the order_by to DESC while we're at
it::
>>> from sqlalchemy import or_, and_, desc
>>> where = or_(db.users.name=='Bhargan Basepair', db.users.email=='EMAIL')
>>> db.users.filter(where).order_by(desc(db.users.name)).all()
[MappedUsers(name=u'Joe NAME), MappedUsers(name=u'Bhargan NAME)]
You can also use .first() (to retrieve only the first object from a query) or
.one() (like .first when you expect exactly one user -- it will raise an
exception if more were returned)::
>>> db.users.filter(db.users.name=='Bhargan NAME').one()
MappedUsers(name=u'Bhargan NAME)
Since name is the primary key, this is equivalent to
>>> db.users.get('Bhargan NAME')
MappedUsers(name=u'Bhargan NAME)
This is also equivalent to
>>> db.users.filter_by(name='Bhargan NAME').one()
MappedUsers(name=u'Bhargan NAME)
filter_by is like filter, but takes kwargs instead of full clause expressions.
This makes it more concise for simple queries like this, but you can't do
complex queries like the or\_ above or non-equality based comparisons this way.
Full query documentation
------------------------
Get, filter, filter_by, order_by, limit, and the rest of the
query methods are explained in detail in :ref:`ormtutorial_querying`.
Modifying objects
=================
Modifying objects is intuitive::
>>> user = _
>>> user.email = 'EMAIL'
>>> db.commit()
(SqlSoup leverages the sophisticated SQLAlchemy unit-of-work code, so
multiple updates to a single object will be turned into a single
``UPDATE`` statement when you commit.)
To finish covering the basics, let's insert a new loan, then delete
it::
>>> book_id = db.books.filter_by(title='Regional Variation in Moss').first().id
>>> db.loans.insert(book_id=book_id, user_name=user.name)
MappedLoans(book_id=2,user_name=u'Bhargan NAME
>>> loan = db.loans.filter_by(book_id=2, user_name='Bhargan NAME').one()
>>> db.delete(loan)
>>> db.commit()
You can also delete rows that have not been loaded as objects. Let's
do our insert/delete cycle once more, this time using the loans
table's delete method. (For SQLAlchemy experts: note that no flush()
call is required since this delete acts at the SQL level, not at the
Mapper level.) The same where-clause construction rules apply here as
to the select methods.
::
>>> db.loans.insert(book_id=book_id, user_name=user.name)
MappedLoans(book_id=2,user_name=u'Bhargan NAME
>>> db.loans.delete(db.loans.book_id==2)
You can similarly update multiple rows at once. This will change the
book_id to 1 in all loans whose book_id is 2::
>>> db.loans.update(db.loans.book_id==2, book_id=1)
>>> db.loans.filter_by(book_id=1).all()
[MappedLoans(book_id=1,user_name=u'Joe NAME 7, 12, 0, 0))]
Joins
=====
Occasionally, you will want to pull out a lot of data from related
tables all at once. In this situation, it is far more efficient to
have the database perform the necessary join. (Here we do not have *a
lot of data* but hopefully the concept is still clear.) SQLAlchemy is
smart enough to recognize that loans has a foreign key to users, and
uses that as the join condition automatically.
::
>>> join1 = db.join(db.users, db.loans, isouter=True)
>>> join1.filter_by(name='Joe NAME').all()
[MappedJoin(name=u'Joe NAME0,book_id=1,user_name=u'Joe NAME 7, 12, 0, 0))]
If you're unfortunate enough to be using MySQL with the default MyISAM
storage engine, you'll have to specify the join condition manually,
since MyISAM does not store foreign keys. Here's the same join again,
with the join condition explicitly specified::
>>> db.join(db.users, db.loans, db.users.name==db.loans.user_name, isouter=True)
<class 'sqlalchemy.ext.sqlsoup.MappedJoin'>
You can compose arbitrarily complex joins by combining Join objects
with tables or other joins. Here we combine our first join with the
books table::
>>> join2 = db.join(join1, db.books)
>>> join2.all()
[MappedJoin(name=u'Joe NAME0,book_id=1,user_name=u'Joe NAME 7, 12, 0, 0),id=1,title=u'Mustards I Have Known',published_year=u'1989',authors=u'Jones')]
If you join tables that have an identical column name, wrap your join
with `with_labels`, to disambiguate columns with their table name
(.c is short for .columns)::
>>> db.with_labels(join1).c.keys()
[u'users_name', u'users_email', u'users_password', u'users_classname', u'users_admin', u'loans_book_id', u'loans_user_name', u'loans_loan_date']
You can also join directly to a labeled object::
>>> labeled_loans = db.with_labels(db.loans)
>>> db.join(db.users, labeled_loans, isouter=True).c.keys()
[u'name', u'email', u'password', u'classname', u'admin', u'loans_book_id', u'loans_user_name', u'loans_loan_date']
Relationships
=============
You can define relationships on SqlSoup classes:
>>> db.users.relate('loans', db.loans)
These can then be used like a normal SA property:
>>> db.users.get('Joe Student').loans
[MappedLoans(book_id=1,user_name=u'Joe NAME 7, 12, 0, 0))]
>>> db.users.filter(~db.users.loans.any()).all()
[MappedUsers(name=u'Bhargan NAME)]
relate() can take any options that the relationship function accepts in normal mapper definition:
>>> del db._cache['users']
>>> db.users.relate('loans', db.loans, order_by=db.loans.loan_date, cascade='all, delete-orphan')
Advanced Use
============
Sessions, Transactions and Application Integration
-------------------------------------------------
**Note:** please read and understand this section thoroughly before using SqlSoup in any web application.
SqlSoup uses a ScopedSession to provide thread-local sessions. You
can get a reference to the current one like this::
>>> session = db.session
The default session is available at the module level in SQLSoup, via::
>>> from sqlalchemy.ext.sqlsoup import Session
The configuration of this session is ``autoflush=True``, ``autocommit=False``.
This means when you work with the SqlSoup object, you need to call ``db.commit()``
in order to have changes persisted. You may also call ``db.rollback()`` to
roll things back.
Since the SqlSoup object's Session automatically enters into a transaction as soon
as it's used, it is *essential* that you call ``commit()`` or ``rollback()``
on it when the work within a thread completes. This means all the guidelines
for web application integration at :ref:`session_lifespan` must be followed.
The SqlSoup object can have any session or scoped session configured onto it.
This is of key importance when integrating with existing code or frameworks
such as Pylons. If your application already has a ``Session`` configured,
pass it to your SqlSoup object::
>>> from myapplication import Session
>>> db = SqlSoup(session=Session)
If the ``Session`` is configured with ``autocommit=True``, use ``flush()``
instead of ``commit()`` to persist changes - in this case, the ``Session``
closes out its transaction immediately and no external management is needed. ``rollback()`` is also not available. Configuring a new SQLSoup object in "autocommit" mode looks like::
>>> from sqlalchemy.orm import scoped_session, sessionmaker
>>> db = SqlSoup('sqlite://', session=scoped_session(sessionmaker(autoflush=False, expire_on_commit=False, autocommit=True)))
Mapping arbitrary Selectables
-----------------------------
SqlSoup can map any SQLAlchemy ``Selectable`` with the map
method. Let's map a ``Select`` object that uses an aggregate function;
we'll use the SQLAlchemy ``Table`` that SqlSoup introspected as the
basis. (Since we're not mapping to a simple table or join, we need to
tell SQLAlchemy how to find the *primary key* which just needs to be
unique within the select, and not necessarily correspond to a *real*
PK in the database.)
::
>>> from sqlalchemy import select, func
>>> b = db.books._table
>>> s = select([b.c.published_year, func.count('*').label('n')], from_obj=[b], group_by=[b.c.published_year])
>>> s = s.alias('years_with_count')
>>> years_with_count = db.map(s, primary_key=[s.c.published_year])
>>> years_with_count.filter_by(published_year='1989').all()
[MappedBooks(published_year=u'1989',n=1)]
Obviously if we just wanted to get a list of counts associated with
book years once, raw SQL is going to be less work. The advantage of
mapping a Select is reusability, both standalone and in Joins. (And if
you go to full SQLAlchemy, you can perform mappings like this directly
to your object models.)
An easy way to save mapped selectables like this is to just hang them on
your db object::
>>> db.years_with_count = years_with_count
Python is flexible like that!
Raw SQL
-------
SqlSoup works fine with SQLAlchemy's text construct, described in :ref:`sqlexpression_text`.
You can also execute textual SQL directly using the `execute()` method,
which corresponds to the `execute()` method on the underlying `Session`.
Expressions here are expressed like ``text()`` constructs, using named parameters
with colons::
>>> rp = db.execute('select name, email from users where name like :name order by name', name='%Bhargan%')
>>> for name, email in rp.fetchall(): print name, email
Bhargan Basepair EMAIL
You can also get at the current transaction's connection using `connection()`. This is the
raw connection object which can accept any sort of SQL expression or raw SQL string passed to the database::
>>> conn = db.connection()
>>> conn.execute("select name, email from users where name like ? order by name", '%Bhargan%')
Dynamic table names
-------------------
You can load a table whose name is specified at runtime with the entity() method:
>>> tablename = 'loans'
>>> db.entity(tablename) == db.loans
True
entity() also takes an optional schema argument. If none is specified, the
default schema is used.
""" |
"""
This module contains generic generator functions for traversing tree
(and DAG) structures. It is agnostic to the underlying data structure
and implementation of the tree object. It does this through dependency
injection of the tree's accessor functions: get_parents and
get_children.
The following depth-first traversal methods are implemented:
* Pre-order: Parent yielded before children; child with multiple
parents is yielded when first encountered.
Example use cases (when DAGs are *not* supported):
1. User access. If computing a user's access to a node relies
on the user's access to the node's parents, access to the
parent has to be computed before access to the child can
be determined. To support access chains, a user's access on
a node is actually an accumulation of accesses down from the
root node through the ancestor chain to the actual node.
2. Field value percolated down. If a value for a field is
dependent on a combination of the child's and the parent's
value, the parent's value should be computed before that of
the child's. Similar to "User access", the value would be
percolated down through the entire ancestor chain.
Example: Start Date is
max(node's start date, start date of each ancestor)
This takes the most restrictive value.
3. Depth. When computing the depth of a tree, since a child's
depth value is 1 + the parent's depth value, the parent's
value should be computed before the child's.
4. Fast Subtree Deletion. If the tree is to be pruned during
traversal, an entire subtree can be deleted, without
traversing the children, as soon as the parent is determined
to be deleted.
* Topological: Parent yielded before children; child with multiple
parents yielded only after all its parents are visited.
Example use cases (when DAGs *are* supported):
1. User access. Similar to pre-order, except a user's access
is now determined by taking a *union* of the percolated
access value from each of the node's parents combined with
its own access.
2. Field value percolated down. Similar to pre-order, except the
value for a node is calculated from the array of
percolated values from each of its parents combined
with its own.
Example: Start Date is
max(node's start date, min(max(ancestry of each parent)))
This takes the most permissive from all ancestry chains.
3. Depth. Similar to pre-order, except the depth of a node will
be 1 + the minimum (or the maximum depending on semantics)
of the depth of all its parents.
4. Deletion. Deletion of subtrees are not as fast as they are
for pre-order since a node can be accessed through multiple
parents.
* Post-order: Children yielded before its parents.
Example use cases:
1. Counting. When each node wants to count the number of nodes
within its sub-structure, the count for each child has to be
calculated before its parents, since a parent's value
depends on its children.
2. Map function (when order doesn't matter). If a function
needs to be evaluated for each node in a DAG and the order
that the nodes are iterated doesn't matter, then use
post-order since it is faster than topological for DAGs.
3. Field value percolated up. If a value for a field is based
on the values from its children, the children's values need
to be computed before their parents.
Example: Minimum Due Date of all nodes within the
sub-structure.
Note: In-order traversal is not implemented as of yet. We can do so
if/when needed.
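To make the dependency-injection idea concrete, here is a minimal sketch
(illustrative only, not this module's actual implementation; the names
preorder_traversal and filter_func are hypothetical) of a pre-order
generator that prunes subtrees via a filter function:

    def preorder_traversal(root, get_children, filter_func=lambda node: True):
        # Depth-first, parent yielded before children; a node with multiple
        # parents is yielded only when first encountered (tracked via id()).
        stack = [root]
        visited = set()
        while stack:
            node = stack.pop()
            if id(node) in visited:
                continue
            visited.add(id(node))
            if not filter_func(node):
                continue  # fast subtree skip: children are never pushed
            yield node
            # push children in reverse so the leftmost child is yielded first
            stack.extend(reversed(list(get_children(node))))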
Optimization once DAGs are no longer supported:
Supporting Directed Acyclic Graphs (DAGs) requires us to use
topological sort, which has the following negative performance
implications:
* For a simple tree, we can immediately skip over traversing
descendants, once it is determined that a parent is not to be yielded
(based on the return value from the 'filter_func' function). However,
since we support DAGs, we cannot simply skip over descendants since
they may still be accessible through a different ancestry chain and
need to be revisited once all their parents are visited.
* For topological sort, we need the get_parents accessor function in
order to determine whether all of a node's parents have been visited.
This means the underlying implementation of the graph needs to have
an efficient way to get a node's parents, perhaps with back pointers
to each node's parents. This requires additional storage space, which
could be eliminated if DAGs are not supported.
""" |
"""
Beta diversity measures (:mod:`skbio.diversity.beta`)
=====================================================
.. currentmodule:: skbio.diversity.beta
This package contains helper functions for working with scipy's pairwise
distance (``pdist``) functions in scikit-bio, and will eventually be expanded
to contain pairwise distance/dissimilarity methods that are not implemented
(or planned to be implemented) in scipy.
The functions in this package currently support applying ``pdist`` functions
to all pairs of samples in a sample by observation count or abundance matrix
and returning an ``skbio.DistanceMatrix`` object. This application is
illustrated below for a few different forms of input.
Functions
---------
.. autosummary::
:toctree: generated/
pw_distances
pw_distances_from_table
Examples
--------
Create a table containing 7 OTUs and 6 samples:
.. plot::
:context:
>>> from skbio.diversity.beta import pw_distances
>>> import numpy as np
>>> data = [[23, 64, 14, 0, 0, 3, 1],
... [0, 3, 35, 42, 0, 12, 1],
... [0, 5, 5, 0, 40, 40, 0],
... [44, 35, 9, 0, 1, 0, 0],
... [0, 2, 8, 0, 35, 45, 1],
... [0, 0, 25, 35, 0, 19, 0]]
>>> ids = list('ABCDEF')
Compute Bray-Curtis distances between all pairs of samples and return a
``DistanceMatrix`` object:
>>> bc_dm = pw_distances(data, ids, "braycurtis")
>>> print(bc_dm)
6x6 distance matrix
IDs:
'A', 'B', 'C', 'D', 'E', 'F'
Data:
[[ 0. 0.78787879 0.86666667 0.30927835 0.85714286 0.81521739]
[ 0.78787879 0. 0.78142077 0.86813187 0.75 0.1627907 ]
[ 0.86666667 0.78142077 0. 0.87709497 0.09392265 0.71597633]
[ 0.30927835 0.86813187 0.87709497 0. 0.87777778 0.89285714]
[ 0.85714286 0.75 0.09392265 0.87777778 0. 0.68235294]
[ 0.81521739 0.1627907 0.71597633 0.89285714 0.68235294 0. ]]
Compute Jaccard distances between all pairs of samples and return a
``DistanceMatrix`` object:
>>> j_dm = pw_distances(data, ids, "jaccard")
>>> print(j_dm)
6x6 distance matrix
IDs:
'A', 'B', 'C', 'D', 'E', 'F'
Data:
[[ 0. 0.83333333 1. 1. 0.83333333 1. ]
[ 0.83333333 0. 1. 1. 0.83333333 1. ]
[ 1. 1. 0. 1. 1. 1. ]
[ 1. 1. 1. 0. 1. 1. ]
[ 0.83333333 0.83333333 1. 1. 0. 1. ]
[ 1. 1. 1. 1. 1. 0. ]]
Determine if the resulting distance matrices are significantly correlated
by computing the Mantel correlation between them. Then determine if the
p-value is significant based on an alpha of 0.05:
>>> from skbio.stats.distance import mantel
>>> r, p_value, n = mantel(j_dm, bc_dm)
>>> print(r)
-0.209362157621
>>> print(p_value < 0.05)
False
Compute PCoA for both distance matrices, and then find the Procrustes
M-squared value that results from comparing the coordinate matrices.
>>> from skbio.stats.ordination import PCoA
>>> bc_pc = PCoA(bc_dm).scores()
>>> j_pc = PCoA(j_dm).scores()
>>> from skbio.stats.spatial import procrustes
>>> print(procrustes(bc_pc.site, j_pc.site)[2])
0.466134984787
All of this only gets interesting in the context of sample metadata, so
let's define some:
>>> import pandas as pd
>>> try:
... # not necessary for normal use
... pd.set_option('show_dimensions', True)
... except KeyError:
... pass
>>> sample_md = {
... 'A': {'body_site': 'gut', 'subject': 's1'},
... 'B': {'body_site': 'skin', 'subject': 's1'},
... 'C': {'body_site': 'tongue', 'subject': 's1'},
... 'D': {'body_site': 'gut', 'subject': 's2'},
... 'E': {'body_site': 'tongue', 'subject': 's2'},
... 'F': {'body_site': 'skin', 'subject': 's2'}}
>>> sample_md = pd.DataFrame.from_dict(sample_md, orient='index')
>>> sample_md
subject body_site
A s1 gut
B s1 skin
C s1 tongue
D s2 gut
E s2 tongue
F s2 skin
<BLANKLINE>
[6 rows x 2 columns]
Now let's plot our PCoA results, coloring each sample by the subject it
was taken from:
>>> fig = bc_pc.plot(sample_md, 'subject',
... axis_labels=('PC 1', 'PC 2', 'PC 3'),
... title='Samples colored by subject', cmap='jet', s=50)
.. plot::
:context:
We don't see any clustering/grouping of samples. If we were to instead color
the samples by the body site they were taken from, we see that the samples
form three separate groups:
>>> import matplotlib.pyplot as plt
>>> plt.close('all') # not necessary for normal use
>>> fig = bc_pc.plot(sample_md, 'body_site',
... axis_labels=('PC 1', 'PC 2', 'PC 3'),
... title='Samples colored by body site', cmap='jet', s=50)
Ordination techniques, such as PCoA, are useful for exploratory analysis. The
next step is to quantify the strength of the grouping/clustering that we see in
ordination plots. There are many statistical methods available to accomplish
this; many operate on distance matrices. Let's use ANOSIM to quantify the
strength of the clustering we see in the ordination plots above, using our
Bray-Curtis distance matrix and sample metadata.
First test the grouping of samples by subject:
>>> from skbio.stats.distance import anosim
>>> results = anosim(bc_dm, sample_md, column='subject', permutations=999)
>>> results['test statistic']
-0.4074074074074075
>>> results['p-value'] < 0.1
False
The negative value of ANOSIM's R statistic indicates anti-clustering and the
p-value is insignificant at an alpha of 0.1.
Now let's test the grouping of samples by body site:
>>> results = anosim(bc_dm, sample_md, column='body_site', permutations=999)
>>> results['test statistic']
1.0
>>> results['p-value'] < 0.1
True
The R statistic of 1.0 indicates strong separation of samples based on body
site. The p-value is significant at an alpha of 0.1.
""" |
"""
Binary serialization
NPY format
==========
A simple format for saving numpy arrays to disk with the full
information about them.
The ``.npy`` format is the standard binary file format in NumPy for
persisting a *single* arbitrary NumPy array on disk. The format stores all
of the shape and dtype information necessary to reconstruct the array
correctly even on another machine with a different architecture.
The format is designed to be as simple as possible while achieving
its limited goals.
The ``.npz`` format is the standard format for persisting *multiple* NumPy
arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy``
files, one for each array.
Capabilities
------------
- Can represent all NumPy arrays including nested record arrays and
object arrays.
- Represents the data in its native binary form.
- Supports Fortran-contiguous arrays directly.
- Stores all of the necessary information to reconstruct the array
including shape and dtype on a machine of a different
architecture. Both little-endian and big-endian arrays are
supported, and a file with little-endian numbers will yield
a little-endian array on any machine reading the file. The
types are described in terms of their actual sizes. For example,
if a machine with a 64-bit C "long int" writes out an array with
"long ints", a reading machine with 32-bit C "long ints" will yield
an array with 64-bit integers.
- Is straightforward to reverse engineer. Datasets often live longer than
the programs that created them. A competent developer should be
able to create a solution in their preferred programming language to
read most ``.npy`` files that they have been given without much
documentation.
- Allows memory-mapping of the data. See `open_memmap`.
- Can be read from a filelike stream object instead of an actual file.
- Stores object arrays, i.e. arrays containing elements that are arbitrary
Python objects. Files with object arrays are not mmappable, but
can be read and written to disk.
Limitations
-----------
- Arbitrary subclasses of numpy.ndarray are not completely preserved.
Subclasses will be accepted for writing, but only the array data will
be written out. A regular numpy.ndarray object will be created
upon reading the file.
.. warning::
Due to limitations in the interpretation of structured dtypes, dtypes
with fields with empty names will have the names replaced by 'f0', 'f1',
etc. Such arrays will not round-trip through the format entirely
accurately. The data is intact; only the field names will differ. We are
working on a fix for this. This fix will not require a change in the
file format. The arrays with such structures can still be saved and
restored, and the correct dtype may be restored by using the
``loadedarray.view(correct_dtype)`` method.
File extensions
---------------
We recommend using the ``.npy`` and ``.npz`` extensions for files saved
in this format. This is by no means a requirement; applications may wish
to use these file formats but use an extension specific to the
application. In the absence of an obvious alternative, however,
we suggest using ``.npy`` and ``.npz``.
Version numbering
-----------------
The version numbering of these formats is independent of NumPy version
numbering. If the format is upgraded, the code in `numpy.io` will still
be able to read and write Version 1.0 files.
Format Version 1.0
------------------
The first 6 bytes are a magic string: exactly ``\\x93NUMPY``.
The next 1 byte is an unsigned byte: the major version number of the file
format, e.g. ``\\x01``.
The next 1 byte is an unsigned byte: the minor version number of the file
format, e.g. ``\\x00``. Note: the version of the file format is not tied
to the version of the numpy package.
The next 2 bytes form a little-endian unsigned short int: the length of
the header data HEADER_LEN.
The next HEADER_LEN bytes form the header data describing the array's
format. It is an ASCII string which contains a Python literal expression
of a dictionary. It is terminated by a newline (``\\n``) and padded with
spaces (``\\x20``) to make the total of
``len(magic string) + 2 + len(length) + HEADER_LEN`` be evenly divisible
by 64 for alignment purposes.
The dictionary contains three keys:
"descr" : dtype.descr
An object that can be passed as an argument to the `numpy.dtype`
constructor to create the array's dtype.
"fortran_order" : bool
Whether the array data is Fortran-contiguous or not. Since
Fortran-contiguous arrays are a common form of non-C-contiguity,
we allow them to be written directly to disk for efficiency.
"shape" : tuple of int
The shape of the array.
For repeatability and readability, the dictionary keys are sorted in
alphabetic order. This is for convenience only. A writer SHOULD implement
this if possible. A reader MUST NOT depend on this.
Following the header comes the array data. If the dtype contains Python
objects (i.e. ``dtype.hasobject is True``), then the data is a Python
pickle of the array. Otherwise the data is the contiguous (either C-
or Fortran-, depending on ``fortran_order``) bytes of the array.
Consumers can figure out the number of bytes by multiplying the number
of elements given by the shape (noting that ``shape=()`` means there is
1 element) by ``dtype.itemsize``.
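As an illustration of this layout (a minimal sketch following the rules
above, not the reference reader; see ``numpy.lib.format`` for the real
implementation), a version 1.0 header can be parsed like so: ::
import ast
import struct

def read_npy_header_v1(f):
    # f is a binary file-like object positioned at the start of the file
    assert f.read(6) == b'\\x93NUMPY'          # magic string
    major, minor = f.read(1)[0], f.read(1)[0]  # format version bytes
    assert (major, minor) == (1, 0)
    (header_len,) = struct.unpack('<H', f.read(2))  # little-endian ushort
    header = f.read(header_len).decode('ascii')
    # the header is a Python literal dict with keys 'descr',
    # 'fortran_order', and 'shape'
    return ast.literal_eval(header)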
Format Version 2.0
------------------
The version 1.0 format only allowed the array header to have a total size of
65535 bytes. This can be exceeded by structured arrays with a large number of
columns. The version 2.0 format extends the header size to 4 GiB.
`numpy.save` will automatically save in 2.0 format if the data requires it,
else it will always use the more compatible 1.0 format.
The description of the fourth element of the header therefore has become:
"The next 4 bytes form a little-endian unsigned int: the length of the header
data HEADER_LEN."
Notes
-----
The ``.npy`` format, including motivation for creating it and a comparison of
alternatives, is described in the `"npy-format" NEP
<https://www.numpy.org/neps/nep-0001-npy-format.html>`_, however details have
evolved with time and this document is more current.
""" |
"""
Airy Functions
--------------
* airy -- Airy functions and their derivatives.
* airye -- Exponentially scaled Airy functions
* ai_zeros -- [+]Zeros of Airy functions Ai(x) and Ai'(x)
* bi_zeros -- [+]Zeros of Airy functions Bi(x) and Bi'(x)
Elliptic Functions and Integrals
--------------------------------
* ellipj -- Jacobian elliptic functions
* ellipk -- Complete elliptic integral of the first kind.
* ellipkinc -- Incomplete elliptic integral of the first kind.
* ellipe -- Complete elliptic integral of the second kind.
* ellipeinc -- Incomplete elliptic integral of the second kind.
Bessel Functions
----------------
* jn -- Bessel function of integer order and real argument.
* jv -- Bessel function of real-valued order and complex argument.
* jve -- Exponentially scaled Bessel function.
* yn -- Bessel function of second kind (integer order).
* yv -- Bessel function of the second kind (real-valued order).
* yve -- Exponentially scaled Bessel function of the second kind.
* kn -- Modified Bessel function of the second kind (integer order).
* kv -- Modified Bessel function of the second kind (real order).
* kve -- Exponentially scaled modified Bessel function of the second kind.
* iv -- Modified Bessel function.
* ive -- Exponentially scaled modified Bessel function.
* hankel1 -- Hankel function of the first kind.
* hankel1e -- Exponentially scaled Hankel function of the first kind.
* hankel2 -- Hankel function of the second kind.
* hankel2e -- Exponentially scaled Hankel function of the second kind.
* lmbda -- [+]Sequence of lambda functions with arbitrary order v.
Zeros of Bessel Functions
.........................
* jnjnp_zeros -- [+]Zeros of integer-order Bessel functions and derivatives sorted in order.
* jnyn_zeros -- [+]Zeros of integer-order Bessel functions and derivatives as separate arrays.
* jn_zeros -- [+]Zeros of Jn(x)
* jnp_zeros -- [+]Zeros of Jn'(x)
* yn_zeros -- [+]Zeros of Yn(x)
* ynp_zeros -- [+]Zeros of Yn'(x)
* y0_zeros -- [+]Complex zeros: Y0(z0)=0 and values of Y0'(z0)
* y1_zeros -- [+]Complex zeros: Y1(z1)=0 and values of Y1'(z1)
* y1p_zeros -- [+]Complex zeros of Y1'(z1')=0 and values of Y1(z1')
Faster versions of common Bessel Functions
..........................................
* j0 -- Bessel function of order 0.
* j1 -- Bessel function of order 1.
* y0 -- Bessel function of second kind of order 0.
* y1 -- Bessel function of second kind of order 1.
* i0 -- Modified Bessel function of order 0.
* i0e -- Exponentially scaled modified Bessel function of order 0.
* i1 -- Modified Bessel function of order 1.
* i1e -- Exponentially scaled modified Bessel function of order 1.
* k0 -- Modified Bessel function of the second kind of order 0.
* k0e -- Exponentially scaled modified Bessel function of the second kind of order 0.
* k1 -- Modified Bessel function of the second kind of order 1.
* k1e -- Exponentially scaled modified Bessel function of the second kind of order 1.
Integrals of Bessel Functions
.............................
* itj0y0 -- Basic integrals of j0 and y0 from 0 to x.
* it2j0y0 -- Integrals of (1-j0(t))/t from 0 to x and y0(t)/t from x to inf.
* iti0k0 -- Basic integrals of i0 and k0 from 0 to x.
* it2i0k0 -- Integrals of (i0(t)-1)/t from 0 to x and k0(t)/t from x to inf.
* besselpoly -- Integral of a Bessel function: Jv(2*a*x) * x**lambda from x=0 to 1.
Derivatives of Bessel Functions
...............................
* jvp -- Nth derivative of Jv(v,z)
* yvp -- Nth derivative of Yv(v,z)
* kvp -- Nth derivative of Kv(v,z)
* ivp -- Nth derivative of Iv(v,z)
* h1vp -- Nth derivative of H1v(v,z)
* h2vp -- Nth derivative of H2v(v,z)
Spherical Bessel Functions
..........................
* sph_jn -- [+]Sequence of spherical Bessel functions, jn(z)
* sph_yn -- [+]Sequence of spherical Bessel functions, yn(z)
* sph_jnyn -- [+]Sequence of spherical Bessel functions, jn(z) and yn(z)
* sph_in -- [+]Sequence of spherical Bessel functions, in(z)
* sph_kn -- [+]Sequence of spherical Bessel functions, kn(z)
* sph_inkn -- [+]Sequence of spherical Bessel functions, in(z) and kn(z)
Riccati-Bessel Functions
........................
* riccati_jn -- [+]Sequence of Riccati-Bessel functions of first kind.
* riccati_yn -- [+]Sequence of Riccati-Bessel functions of second kind.
Struve Functions
----------------
* struve -- Struve function --- Hv(x)
* modstruve -- Modified Struve function --- Lv(x)
* itstruve0 -- Integral of H0(t) from 0 to x
* it2struve0 -- Integral of H0(t)/t from x to Inf.
* itmodstruve0 -- Integral of L0(t) from 0 to x.
Raw Statistical Functions (Friendly versions in scipy.stats)
------------------------------------------------------------
* bdtr -- Sum of terms 0 through k of the binomial pdf.
* bdtrc -- Sum of terms k+1 through n of the binomial pdf.
* bdtri -- Inverse of bdtr
* btdtr -- Integral from 0 to x of beta pdf.
* btdtri -- Quantiles of beta distribution
* fdtr -- Integral from 0 to x of F pdf.
* fdtrc -- Integral from x to infinity under F pdf.
* fdtri -- Inverse of fdtrc
* gdtr -- Integral from 0 to x of gamma pdf.
* gdtrc -- Integral from x to infinity under gamma pdf.
* gdtria --
* gdtrib --
* gdtrix --
* nbdtr -- Sum of terms 0 through k of the negative binomial pdf.
* nbdtrc -- Sum of terms k+1 to infinity under negative binomial pdf.
* nbdtri -- Inverse of nbdtr
* pdtr -- Sum of terms 0 through k of the Poisson pdf.
* pdtrc -- Sum of terms k+1 to infinity of the Poisson pdf.
* pdtri -- Inverse of pdtr
* stdtr -- Integral from -infinity to t of the Student-t pdf.
* stdtridf --
* stdtrit --
* chdtr -- Integral from 0 to x of the Chi-square pdf.
* chdtrc -- Integral from x to infinity of Chi-square pdf.
* chdtri -- Inverse of chdtrc.
* ndtr -- Integral from -infinity to x of standard normal pdf
* ndtri -- Inverse of ndtr (quantiles)
* smirnov -- Kolmogorov-Smirnov complementary CDF for one-sided test statistic (Dn+ or Dn-)
* smirnovi -- Inverse of smirnov.
* kolmogorov -- The complementary CDF of the (scaled) two-sided test statistic (Kn*) valid for large n.
* kolmogi -- Inverse of kolmogorov
* tklmbda -- Tukey-Lambda CDF
Gamma and Related Functions
---------------------------
* gamma -- Gamma function.
* gammaln -- Log of the absolute value of the gamma function.
* gammainc -- Incomplete gamma integral.
* gammaincinv -- Inverse of gammainc.
* gammaincc -- Complemented incomplete gamma integral.
* gammainccinv -- Inverse of gammaincc.
* beta -- Beta function.
* betaln -- Log of the absolute value of the beta function.
* betainc -- Incomplete beta integral.
* betaincinv -- Inverse of betainc.
* psi(digamma) -- Logarithmic derivative of the gamma function.
* rgamma -- One divided by the gamma function.
* polygamma -- Nth derivative of psi function.
Error Function and Fresnel Integrals
------------------------------------
* erf -- Error function.
* erfc -- Complemented error function (1- erf(x))
* erfinv -- Inverse of error function
* erfcinv -- Inverse of erfc
* erf_zeros -- [+]Complex zeros of erf(z)
* fresnel -- Fresnel sine and cosine integrals.
* fresnel_zeros -- Complex zeros of both Fresnel integrals
* fresnelc_zeros -- [+]Complex zeros of fresnel cosine integrals
* fresnels_zeros -- [+]Complex zeros of fresnel sine integrals
* modfresnelp -- Modified Fresnel integrals F_+(x) and K_+(x)
* modfresnelm -- Modified Fresnel integrals F_-(x) and K_-(x)
Legendre Functions
------------------
* lpn -- [+]Legendre Functions (polynomials) of the first kind
* lqn -- [+]Legendre Functions of the second kind.
* lpmn -- [+]Associated Legendre Function of the first kind.
* lqmn -- [+]Associated Legendre Function of the second kind.
* lpmv -- Associated Legendre Function of arbitrary non-negative degree v.
* sph_harm -- Spherical Harmonics (complex-valued) Y^m_n(theta,phi)
Orthogonal polynomials --- 15 types
These functions all return a polynomial class which can then be
evaluated: vals = chebyt(n)(x)
This class also has an attribute 'weights', which holds the roots,
weights, and total weights for the appropriate form of Gaussian
quadrature. These are returned in an n x 3 array, with roots in the
first column, weights in the second column, and total weights in the
final column (see the usage sketch after this list).
* legendre -- [+]Legendre polynomial P_n(x) (lpn -- for function).
* chebyt -- [+]Chebyshev polynomial T_n(x)
* chebyu -- [+]Chebyshev polynomial U_n(x)
* chebyc -- [+]Chebyshev polynomial C_n(x)
* chebys -- [+]Chebyshev polynomial S_n(x)
* jacobi -- [+]Jacobi polynomial P^(alpha,beta)_n(x)
* laguerre -- [+]Laguerre polynomial, L_n(x)
* genlaguerre -- [+]Generalized (Associated) Laguerre polynomial, L^alpha_n(x)
* hermite -- [+]Hermite polynomial H_n(x)
* hermitenorm -- [+]Normalized Hermite polynomial, He_n(x)
* gegenbauer -- [+]Gegenbauer (Ultraspherical) polynomials, C^(alpha)_n(x)
* sh_legendre -- [+]shifted Legendre polynomial, P*_n(x)
* sh_chebyt -- [+]shifted Chebyshev polynomial, T*_n(x)
* sh_chebyu -- [+]shifted Chebyshev polynomial, U*_n(x)
* sh_jacobi -- [+]shifted Jacobi polynomial, J*_n(x) = G^(p,q)_n(x)
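For instance (a small usage sketch, assuming scipy.special's chebyt
returns an evaluable polynomial object as described above):

    import numpy as np
    from scipy.special import chebyt

    T3 = chebyt(3)                 # Chebyshev polynomial T_3(x)
    x = np.linspace(-1.0, 1.0, 5)
    vals = T3(x)                   # evaluate T_3 at the sample points
    rw = T3.weights                # n x 3 array: roots, weights, total weights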
HyperGeometric Functions
------------------------
* hyp2f1 -- Gauss hypergeometric function (2F1)
* hyp1f1 -- Confluent hypergeometric function (1F1)
* hyperu -- Confluent hypergeometric function (U)
* hyp0f1 -- Confluent hypergeometric limit function (0F1)
* hyp2f0 -- Hypergeometric function (2F0)
* hyp1f2 -- Hypergeometric function (1F2)
* hyp3f0 -- Hypergeometric function (3F0)
Parabolic Cylinder Functions
----------------------------
* pbdv -- Parabolic cylinder function Dv(x) and derivative.
* pbvv -- Parabolic cylinder function Vv(x) and derivative.
* pbwa -- Parabolic cylinder function W(a,x) and derivative.
* pbdv_seq -- [+]Sequence of parabolic cylinder functions Dv(x)
* pbvv_seq -- [+]Sequence of parabolic cylinder functions Vv(x)
* pbdn_seq -- [+]Sequence of parabolic cylinder functions Dn(z), complex z
Mathieu and Related Functions (and derivatives)
-----------------------------------------------
* mathieu_a -- Characteristic values for even solution (ce_m)
* mathieu_b -- Characteristic values for odd solution (se_m)
* mathieu_even_coef -- [+]sequence of expansion coefficients for even solution
* mathieu_odd_coef -- [+]sequence of expansion coefficients for odd solution
**All the following return both function and first derivative**
* mathieu_cem -- Even Mathieu function
* mathieu_sem -- Odd Mathieu function
* mathieu_modcem1 -- Even modified Mathieu function of the first kind
* mathieu_modcem2 -- Even modified Mathieu function of the second kind
* mathieu_modsem1 -- Odd modified Mathieu function of the first kind
* mathieu_modsem2 -- Odd modified Mathieu function of the second kind
Spheroidal Wave Functions
-------------------------
* pro_ang1 -- Prolate spheroidal angular function of the first kind
* pro_rad1 -- Prolate spheroidal radial function of the first kind
* pro_rad2 -- Prolate spheroidal radial function of the second kind
* obl_ang1 -- Oblate spheroidal angular function of the first kind
* obl_rad1 -- Oblate spheroidal radial function of the first kind
* obl_rad2 -- Oblate spheroidal radial function of the second kind
* pro_cv -- Compute characteristic value for prolate functions
* obl_cv -- Compute characteristic value for oblate functions
* pro_cv_seq -- Compute sequence of prolate characteristic values
* obl_cv_seq -- Compute sequence of oblate characteristic values
**The following functions require pre-computed characteristic values**
* pro_ang1_cv -- Prolate spheroidal angular function of the first kind
* pro_rad1_cv -- Prolate spheroidal radial function of the first kind
* pro_rad2_cv -- Prolate spheroidal radial function of the second kind
* obl_ang1_cv -- Oblate spheroidal angular function of the first kind
* obl_rad1_cv -- Oblate spheroidal radial function of the first kind
* obl_rad2_cv -- Oblate spheroidal radial function of the second kind
Kelvin Functions
----------------
* kelvin -- All Kelvin functions (order 0) and derivatives.
* kelvin_zeros -- [+]Zeros of all Kelvin functions (order 0) and derivatives
* ber -- Kelvin function ber x
* bei -- Kelvin function bei x
* berp -- Derivative of Kelvin function ber x
* beip -- Derivative of Kelvin function bei x
* ker -- Kelvin function ker x
* kei -- Kelvin function kei x
* kerp -- Derivative of Kelvin function ker x
* keip -- Derivative of Kelvin function kei x
* ber_zeros -- [+]Zeros of Kelvin function ber x
* bei_zeros -- [+]Zeros of Kelvin function bei x
* berp_zeros -- [+]Zeros of derivative of Kelvin function ber x
* beip_zeros -- [+]Zeros of derivative of Kelvin function bei x
* ker_zeros -- [+]Zeros of Kelvin function ker x
* kei_zeros -- [+]Zeros of Kelvin function kei x
* kerp_zeros -- [+]Zeros of derivative of Kelvin function ker x
* keip_zeros -- [+]Zeros of derivative of Kelvin function kei x
Other Special Functions
-----------------------
* expn -- Exponential integral.
* exp1 -- Exponential integral of order 1 (for complex argument)
* expi -- Another exponential integral -- Ei(x)
* wofz -- Faddeeva function.
* dawsn -- Dawson's integral.
* shichi -- Hyperbolic sine and cosine integrals.
* sici -- Sine and cosine integrals (integrals of the sinc and "cosinc" functions).
* spence -- Dilogarithm integral.
* zeta -- Hurwitz zeta function (Riemann zeta function of two arguments).
* zetac -- 1.0 - standard Riemann zeta function.
Convenience Functions
---------------------
* cbrt -- Cube root.
* exp10 -- 10 raised to the x power.
* exp2 -- 2 raised to the x power.
* radian -- radian angle given degrees, minutes, and seconds.
* cosdg -- cosine of the angle given in degrees.
* sindg -- sine of the angle given in degrees.
* tandg -- tangent of the angle given in degrees.
* cotdg -- cotangent of the angle given in degrees.
* log1p -- log(1+x)
* expm1 -- exp(x)-1
* cosm1 -- cos(x)-1
* round -- round the argument to the nearest integer. If argument ends in 0.5 exactly, pick the nearest even integer.
-------
[+] in the description indicates a function which is not a universal
function and does not follow broadcasting and automatic
array-looping rules.
Error handling
--------------
Errors are handled by returning NaNs or other appropriate values.
Some of the special function routines can print an error message
when an error occurs. By default this printing is disabled; to
enable such messages use errprint(1), and to disable them use
errprint(0).
Example:
>>> print scipy.special.bdtr(-1,10,0.3)
>>> scipy.special.errprint(1)
>>> print scipy.special.bdtr(-1,10,0.3)
""" |
# -*- coding: utf-8 -*-
# This file is part of ranger, the console file manager.
# This configuration file is licensed under the same terms as ranger.
# ===================================================================
#
# NOTE: If you copied this file to ~/.config/ranger/commands_full.py,
# then it will NOT be loaded by ranger, and only serve as a reference.
#
# ===================================================================
# This file contains ranger's commands.
# It's all in python; lines beginning with # are comments.
#
# Note that additional commands are automatically generated from the methods
# of the class ranger.core.actions.Actions.
#
# You can customize commands in the file ~/.config/ranger/commands.py.
# It has the same syntax as this file. In fact, you can just copy this
# file there with `ranger --copy-config=commands' and make your modifications.
# But make sure you update your configs when you update ranger.
#
# ===================================================================
# Every class defined here which is a subclass of `Command' will be used as a
# command in ranger. Several methods are defined to interface with ranger:
# execute(): called when the command is executed.
# cancel(): called when closing the console.
# tab(): called when <TAB> is pressed.
# quick(): called after each keypress.
#
# The return values for tab() can be either:
# None: There is no tab completion
# A string: Change the console to this string
# A list/tuple/generator: cycle through every item in it
#
# The return value for quick() can be:
# False: Nothing happens
# True: Execute the command afterwards
#
# The return value for execute() and cancel() doesn't matter.
#
# ===================================================================
# Commands have certain attributes and methods that facilitate parsing of
# the arguments:
#
# self.line: The whole line that was written in the console.
# self.args: A list of all (space-separated) arguments to the command.
# self.quantifier: If this command was mapped to the key "X" and
# the user pressed 6X, self.quantifier will be 6.
# self.arg(n): The n-th argument, or an empty string if it doesn't exist.
# self.rest(n): The n-th argument plus everything that followed. For example,
# if the command was "search foo bar a b c", rest(2) will be "bar a b c"
# self.start(n): Anything before the n-th argument. For example, if the
# command was "search foo bar a b c", start(2) will be "search foo"
#
# ===================================================================
# And this is a little reference for common ranger functions and objects:
#
# self.fm: A reference to the "fm" object which contains most information
# about ranger.
# self.fm.notify(string): Print the given string on the screen.
# self.fm.notify(string, bad=True): Print the given string in RED.
# self.fm.reload_cwd(): Reload the current working directory.
# self.fm.thisdir: The current working directory. (A File object.)
# self.fm.thisfile: The current file. (A File object too.)
# self.fm.thistab.get_selection(): A list of all selected files.
# self.fm.execute_console(string): Execute the string as a ranger command.
# self.fm.open_console(string): Open the console with the given string
# already typed in for you.
# self.fm.move(direction): Moves the cursor in the given direction, which
# can be something like down=3, up=5, right=1, left=1, to=6, ...
#
# File objects (for example self.fm.thisfile) have these useful attributes and
# methods:
#
# cf.path: The path to the file.
# cf.basename: The base name only.
# cf.load_content(): Force a loading of the directory's content (which
# obviously works with directories only)
# cf.is_directory: True/False depending on whether it's a directory.
#
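# Putting these pieces together, a minimal custom command might look like
# this (a sketch only; the command name `mkfile` is hypothetical):
#
#   class mkfile(Command):
#       """:mkfile <filename>
#       Create an empty file in the current directory."""
#       def execute(self):
#           import os
#           fname = self.rest(1)   # everything after the command name
#           if not fname:
#               self.fm.notify("Usage: mkfile <filename>", bad=True)
#               return
#           open(os.path.join(self.fm.thisdir.path, fname), 'a').close()
#           self.fm.reload_cwd()   # refresh the directory listing
#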
# For advanced commands it is unavoidable to dive a bit into the source code
# of ranger.
# ===================================================================
"""This module tests SyntaxErrors.
Here's an example of the sort of thing that is tested.
>>> def f(x):
... global x
Traceback (most recent call last):
SyntaxError: name 'x' is local and global (<doctest test.test_syntax[0]>, line 1)
The tests all raise SyntaxErrors. They were created by checking
each C call that raises SyntaxError. There are several modules that
raise these exceptions -- ast.c, compile.c, future.c, pythonrun.c, and
symtable.c.
The parser itself outlaws a lot of invalid syntax. None of these
errors are tested here at the moment. We should add some tests; since
there are infinitely many programs with invalid syntax, we would need
to be judicious in selecting some.
The compiler generates a synthetic module name for code executed by
doctest. Since all the code comes from the same module, a suffix like
[1] is appended to the module name. As a consequence, changing the
order of tests in this module means renumbering all the errors after
it. (Maybe we should enable the ellipsis option for these tests.)
In ast.c, syntax errors are raised by calling ast_error().
Errors from set_context():
TODO(jhylton): "assignment to None" is inconsistent with other messages
>>> obj.None = 1
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[1]>, line 1)
>>> None = 1
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[2]>, line 1)
It's a syntax error to assign to the empty tuple. Why isn't it an
error to assign to the empty list? It will always raise some error at
runtime.
>>> () = 1
Traceback (most recent call last):
SyntaxError: can't assign to () (<doctest test.test_syntax[3]>, line 1)
>>> f() = 1
Traceback (most recent call last):
SyntaxError: can't assign to function call (<doctest test.test_syntax[4]>, line 1)
>>> del f()
Traceback (most recent call last):
SyntaxError: can't delete function call (<doctest test.test_syntax[5]>, line 1)
>>> a + 1 = 2
Traceback (most recent call last):
SyntaxError: can't assign to operator (<doctest test.test_syntax[6]>, line 1)
>>> (x for x in x) = 1
Traceback (most recent call last):
SyntaxError: can't assign to generator expression (<doctest test.test_syntax[7]>, line 1)
>>> 1 = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal (<doctest test.test_syntax[8]>, line 1)
>>> "abc" = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal (<doctest test.test_syntax[9]>, line 1)
>>> `1` = 1
Traceback (most recent call last):
SyntaxError: can't assign to repr (<doctest test.test_syntax[10]>, line 1)
If the left-hand side of an assignment is a list or tuple, an illegal
expression inside that container should still cause a syntax error.
This test just checks a couple of cases rather than enumerating all of
them.
>>> (a, "b", c) = (1, 2, 3)
Traceback (most recent call last):
SyntaxError: can't assign to literal (<doctest test.test_syntax[11]>, line 1)
>>> [a, b, c + 1] = [1, 2, 3]
Traceback (most recent call last):
SyntaxError: can't assign to operator (<doctest test.test_syntax[12]>, line 1)
>>> a if 1 else b = 1
Traceback (most recent call last):
SyntaxError: can't assign to conditional expression (<doctest test.test_syntax[13]>, line 1)
From compiler_complex_args():
>>> def f(None=1):
... pass
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[14]>, line 1)
From ast_for_arguments():
>>> def f(x, y=1, z):
... pass
Traceback (most recent call last):
SyntaxError: non-default argument follows default argument (<doctest test.test_syntax[15]>, line 1)
>>> def f(x, None):
... pass
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[16]>, line 1)
>>> def f(*None):
... pass
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[17]>, line 1)
>>> def f(**None):
... pass
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[18]>, line 1)
From ast_for_funcdef():
>>> def None(x):
... pass
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[19]>, line 1)
From ast_for_call():
>>> def f(it, *varargs):
... return list(it)
>>> L = range(10)
>>> f(x for x in L)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(x for x in L, 1)
Traceback (most recent call last):
SyntaxError: Generator expression must be parenthesized if not sole argument (<doctest test.test_syntax[23]>, line 1)
>>> f((x for x in L), 1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... i244, i245, i246, i247, i248, i249, i250, i251, i252,
... i253, i254, i255)
Traceback (most recent call last):
SyntaxError: more than 255 arguments (<doctest test.test_syntax[25]>, line 1)
The actual error case counts positional arguments, keyword arguments,
and generator expression arguments separately. This test combines the
three.
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... (x for x in i244), i245, i246, i247, i248, i249, i250, i251,
... i252=1, i253=1, i254=1, i255=1)
Traceback (most recent call last):
SyntaxError: more than 255 arguments (<doctest test.test_syntax[26]>, line 1)
>>> f(lambda x: x[0] = 3)
Traceback (most recent call last):
SyntaxError: lambda cannot contain assignment (<doctest test.test_syntax[27]>, line 1)
The grammar accepts any test (basically, any expression) in the
keyword slot of a call site. Test a few different options.
>>> f(x()=2)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression (<doctest test.test_syntax[28]>, line 1)
>>> f(a or b=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression (<doctest test.test_syntax[29]>, line 1)
>>> f(x.y=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression (<doctest test.test_syntax[30]>, line 1)
From ast_for_expr_stmt():
>>> (x for x in x) += 1
Traceback (most recent call last):
SyntaxError: augmented assignment to generator expression not possible (<doctest test.test_syntax[31]>, line 1)
>>> None += 1
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[32]>, line 1)
>>> f() += 1
Traceback (most recent call last):
SyntaxError: illegal expression for augmented assignment (<doctest test.test_syntax[33]>, line 1)
Test continue in finally in weird combinations.
A continue in a for loop under finally should be ok.
>>> def test():
... try:
... pass
... finally:
... for abc in range(10):
... continue
... print abc
>>> test()
9
Start simple: a continue in a finally should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[36]>, line 6)
This is essentially a continue in a finally which should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... try:
... continue
... except:
... pass
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[37]>, line 7)
>>> def foo():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[38]>, line 5)
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[39]>, line 6)
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... try:
... continue
... finally:
... pass
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[40]>, line 7)
>>> def foo():
... for a in ():
... try: pass
... finally:
... try:
... pass
... except:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause (<doctest test.test_syntax[41]>, line 8)
There is one test for a break that is not in a loop. The compiler
uses a single data structure to keep track of try-finally and loops,
so we need to be sure that a break is actually inside a loop. If it
isn't, there should be a syntax error.
>>> try:
... print 1
... break
... print 2
... finally:
... print 3
Traceback (most recent call last):
...
SyntaxError: 'break' outside loop (<doctest test.test_syntax[42]>, line 3)
This should probably raise a better error than a SystemError (or none at all).
In 2.5 there was a missing exception and an assert was triggered in a debug
build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514
>>> while 1:
... while 2:
... while 3:
... while 4:
... while 5:
... while 6:
... while 8:
... while 9:
... while 10:
... while 11:
... while 12:
... while 13:
... while 14:
... while 15:
... while 16:
... while 17:
... while 18:
... while 19:
... while 20:
... while 21:
... while 22:
... break
Traceback (most recent call last):
...
SystemError: too many statically nested blocks
This tests assignment-context; there was a bug in Python 2.5 where compiling
a complex 'if' (one with 'elif') would fail to notice an invalid suite,
leading to spurious errors.
>>> if 1:
... x() = 1
... elif 1:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call (<doctest test.test_syntax[44]>, line 2)
>>> if 1:
... pass
... elif 1:
... x() = 1
Traceback (most recent call last):
...
SyntaxError: can't assign to function call (<doctest test.test_syntax[45]>, line 4)
>>> if 1:
... x() = 1
... elif 1:
... pass
... else:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call (<doctest test.test_syntax[46]>, line 2)
>>> if 1:
... pass
... elif 1:
... x() = 1
... else:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call (<doctest test.test_syntax[47]>, line 4)
>>> if 1:
... pass
... elif 1:
... pass
... else:
... x() = 1
Traceback (most recent call last):
...
SyntaxError: can't assign to function call (<doctest test.test_syntax[48]>, line 6)
>>> f(a=23, a=234)
Traceback (most recent call last):
...
SyntaxError: keyword argument repeated (<doctest test.test_syntax[49]>, line 1)
""" |
"""
This is a procedural interface to the matplotlib object-oriented
plotting library.
The following plotting commands are provided; the majority have
Matlab(TM) analogs and similar arguments. (A short usage sketch follows
the plotting command list below.)
_Plotting commands
acorr - plot the autocorrelation function
annotate - annotate something in the figure
arrow - add an arrow to the axes
axes - Create a new axes
axhline - draw a horizontal line across axes
axvline - draw a vertical line across axes
axhspan - draw a horizontal bar across axes
axvspan - draw a vertical bar across axes
axis - Set or return the current axis limits
bar - make a bar chart
barh - a horizontal bar chart
broken_barh - a set of horizontal bars with gaps
box - set the axes frame on/off state
boxplot - make a box and whisker plot
cla - clear current axes
clabel - label a contour plot
clf - clear a figure window
clim - adjust the color limits of the current image
close - close a figure window
colorbar - add a colorbar to the current figure
cohere - make a plot of coherence
contour - make a contour plot
contourf - make a filled contour plot
csd - make a plot of cross spectral density
delaxes - delete an axes from the current figure
draw - Force a redraw of the current figure
errorbar - make an errorbar graph
figlegend - make legend on the figure rather than the axes
figimage - make a figure image
figtext - add text in figure coords
figure - create or change active figure
fill - make filled polygons
findobj - recursively find all objects matching some criteria
gca - return the current axes
gcf - return the current figure
gci - get the current image, or None
getp - get a handle graphics property
grid - set whether gridding is on
hist - make a histogram
hold - set the axes hold state
ioff - turn interaction mode off
ion - turn interaction mode on
isinteractive - return True if interaction mode is on
imread - load image file into array
imshow - plot image data
ishold - return the hold state of the current axes
legend - make an axes legend
loglog - a log log plot
matshow - display a matrix in a new figure preserving aspect
pcolor - make a pseudocolor plot
pcolormesh - make a pseudocolor plot using a quadrilateral mesh
pie - make a pie chart
plot - make a line plot
plot_date - plot dates
plotfile - plot column data from an ASCII tab/space/comma delimited file
polar - make a polar plot on a PolarAxes
psd - make a plot of power spectral density
quiver - make a direction field (arrows) plot
rc - control the default params
rgrids - customize the radial grids and labels for polar
savefig - save the current figure
scatter - make a scatter plot
setp - set a handle graphics property
semilogx - log x axis
semilogy - log y axis
show - show the figures
specgram - a spectrogram plot
spy - plot sparsity pattern using markers or image
stem - make a stem plot
subplot - make a subplot (numrows, numcols, axesnum)
subplots_adjust - change the params controlling the subplot positions of current figure
subplot_tool - launch the subplot configuration tool
suptitle - add a figure title
table - add a table to the plot
text - add some text at location x,y to the current axes
thetagrids - customize the radial theta grids and labels for polar
title - add a title to the current axes
xcorr - plot the cross-correlation of x and y
xlim - set/get the xlimits
ylim - set/get the ylimits
xticks - set/get the xticks
yticks - set/get the yticks
xlabel - add an xlabel to the current axes
ylabel - add a ylabel to the current axes
autumn - set the default colormap to autumn
bone - set the default colormap to bone
cool - set the default colormap to cool
copper - set the default colormap to copper
flag - set the default colormap to flag
gray - set the default colormap to gray
hot - set the default colormap to hot
hsv - set the default colormap to hsv
jet - set the default colormap to jet
pink - set the default colormap to pink
prism - set the default colormap to prism
spring - set the default colormap to spring
summer - set the default colormap to summer
winter - set the default colormap to winter
spectral - set the default colormap to spectral
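A minimal usage sketch combining a few of the commands above (assumes
matplotlib is installed; `show` blocks until the figure window closes):

    from pylab import *

    x = linspace(0, 2*pi, 200)
    plot(x, sin(x))             # line plot
    xlabel('x')
    ylabel('sin(x)')
    title('A first pylab plot')
    grid(True)
    show()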
_Event handling
connect - register an event handler
disconnect - remove a connected event handler
_Matrix commands
cumprod - the cumulative product along a dimension
cumsum - the cumulative sum along a dimension
detrend - remove the mean or best fit line from an array
diag - the k-th diagonal of matrix
diff - the n-th difference of an array
eig - the eigenvalues and eigenvectors of v
eye - a matrix where the k-th diagonal is ones, else zero
find - return the indices where a condition is nonzero
fliplr - flip a matrix left/right (reverse the order of its columns)
flipud - flip a matrix up/down (reverse the order of its rows)
linspace - a linearly spaced vector of N values from min to max inclusive
logspace - a log spaced vector of N values from min to max inclusive
meshgrid - repeat x and y to make regular matrices
ones - an array of ones
rand - an array from the uniform distribution [0,1]
randn - an array from the normal distribution
rot90 - rotate matrix k*90 degrees counterclockwise
squeeze - squeeze an array removing any dimensions of length 1
tri - a triangular matrix
tril - a lower triangular matrix
triu - an upper triangular matrix
vander - the Vandermonde matrix of vector x
svd - singular value decomposition
zeros - a matrix of zeros
_Probability
levypdf - The Levy probability density function from the char. func.
normpdf - The Gaussian probability density function
rand - random numbers from the uniform distribution
randn - random numbers from the normal distribution
_Statistics
corrcoef - correlation coefficient
cov - covariance matrix
amax - the maximum along dimension m
mean - the mean along dimension m
median - the median along dimension m
amin - the minimum along dimension m
norm - the norm of vector x
prod - the product along dimension m
ptp - the max-min along dimension m
std - the standard deviation along dimension m
asum - the sum along dimension m
_Time series analysis
bartlett - M-point Bartlett window
blackman - M-point Blackman window
cohere - the coherence using average periodogram
csd - the cross spectral density using average periodogram
fft - the fast Fourier transform of vector x
hamming - M-point Hamming window
hanning - M-point Hanning window
hist - compute the histogram of x
kaiser - M length Kaiser window
psd - the power spectral density using average periodiogram
sinc - the sinc function of array x
_Dates
date2num - convert python datetimes to numeric representation
drange - create an array of numbers for date plots
num2date - convert numeric type (float days since 0001) to datetime
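A small sketch of the date helpers above (datetime values are converted
to and from float days since year 0001):

    from pylab import date2num, num2date
    import datetime

    d = datetime.datetime(2020, 1, 1)
    n = date2num(d)        # float days since 0001
    d2 = num2date(n)       # back to a (timezone-aware) datetime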
_Other
angle - the angle of a complex array
griddata - interpolate irregularly distributed data to a regular grid
load - load ASCII data into array
polyfit - fit x, y to an n-th order polynomial
polyval - evaluate an n-th order polynomial
roots - the roots of the polynomial coefficients in p
save - save an array to an ASCII file
trapz - trapezoidal integration
__end
""" |
"""
Overriding descriptor (a.k.a. data descriptor or enforced descriptor):
# BEGIN DESCR_KINDS_DEMO1
>>> obj = Managed() # <1>
>>> obj.over # <2>
-> Overriding.__get__(<Overriding object>, <Managed object>, <class Managed>)
>>> Managed.over # <3>
-> Overriding.__get__(<Overriding object>, None, <class Managed>)
>>> obj.over = 7 # <4>
-> Overriding.__set__(<Overriding object>, <Managed object>, 7)
>>> obj.over # <5>
-> Overriding.__get__(<Overriding object>, <Managed object>, <class Managed>)
>>> obj.__dict__['over'] = 8 # <6>
>>> vars(obj) # <7>
{'over': 8}
>>> obj.over # <8>
-> Overriding.__get__(<Overriding object>, <Managed object>, <class Managed>)
# END DESCR_KINDS_DEMO1
Overriding descriptor without ``__get__``:
(these tests are reproduced below without +ELLIPSIS directives for inclusion in the book;
look for DESCR_KINDS_DEMO2)
>>> obj.over_no_get # doctest: +ELLIPSIS
<descriptorkinds.OverridingNoGet object at 0x...>
>>> Managed.over_no_get # doctest: +ELLIPSIS
<descriptorkinds.OverridingNoGet object at 0x...>
>>> obj.over_no_get = 7
-> OverridingNoGet.__set__(<OverridingNoGet object>, <Managed object>, 7)
>>> obj.over_no_get # doctest: +ELLIPSIS
<descriptorkinds.OverridingNoGet object at 0x...>
>>> obj.__dict__['over_no_get'] = 9
>>> obj.over_no_get
9
>>> obj.over_no_get = 7
-> OverridingNoGet.__set__(<OverridingNoGet object>, <Managed object>, 7)
>>> obj.over_no_get
9
Non-overriding descriptor (a.k.a. non-data descriptor or shadowable descriptor):
# BEGIN DESCR_KINDS_DEMO3
>>> obj = Managed()
>>> obj.non_over # <1>
-> NonOverriding.__get__(<NonOverriding object>, <Managed object>, <class Managed>)
>>> obj.non_over = 7 # <2>
>>> obj.non_over # <3>
7
>>> Managed.non_over # <4>
-> NonOverriding.__get__(<NonOverriding object>, None, <class Managed>)
>>> del obj.non_over # <5>
>>> obj.non_over # <6>
-> NonOverriding.__get__(<NonOverriding object>, <Managed object>, <class Managed>)
# END DESCR_KINDS_DEMO3
No descriptor type survives being overwritten on the class itself:
# BEGIN DESCR_KINDS_DEMO4
>>> obj = Managed() # <1>
>>> Managed.over = 1 # <2>
>>> Managed.over_no_get = 2
>>> Managed.non_over = 3
>>> obj.over, obj.over_no_get, obj.non_over # <3>
(1, 2, 3)
# END DESCR_KINDS_DEMO4
Methods are non-overriding descriptors:
>>> obj.spam # doctest: +ELLIPSIS
<bound method Managed.spam of <descriptorkinds.Managed object at 0x...>>
>>> Managed.spam # doctest: +ELLIPSIS
<function Managed.spam at 0x...>
>>> obj.spam()
-> Managed.spam(<Managed object>)
>>> Managed.spam()
Traceback (most recent call last):
...
TypeError: spam() missing 1 required positional argument: 'self'
>>> Managed.spam(obj)
-> Managed.spam(<Managed object>)
>>> Managed.spam.__get__(obj) # doctest: +ELLIPSIS
<bound method Managed.spam of <descriptorkinds.Managed object at 0x...>>
>>> obj.spam.__func__ is Managed.spam
True
>>> obj.spam = 7
>>> obj.spam
7
""" |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# cell_metadata_filter: all
# notebook_metadata_filter: all,-language_info
# text_representation:
# extension: .py
# format_name: percent
# format_version: '1.2'
# jupytext_version: 1.2.1
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# toc:
# base_numbering: 1
# nav_menu: {}
# number_sections: true
# sideBar: true
# skip_h1_title: false
# title_cell: Table of Contents
# title_sidebar: Contents
# toc_cell: true
# toc_position: {}
# toc_section_display: true
# toc_window_display: true
# ---
# %% [markdown] {"toc": true}
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Functions" data-toc-modified-id="Functions-1"><span class="toc-item-num">1 </span>Functions</a></span><ul class="toc-item"><li><span><a href="#User-defined-functions" data-toc-modified-id="User-defined-functions-1.1"><span class="toc-item-num">1.1 </span>User-defined functions</a></span><ul class="toc-item"><li><span><a href="#Looping-over-arrays-in-user-defined-functions" data-toc-modified-id="Looping-over-arrays-in-user-defined-functions-1.1.1"><span class="toc-item-num">1.1.1 </span>Looping over arrays in user-defined functions</a></span></li><li><span><a href="#Fast-array-processing-in-user-defined-functions" data-toc-modified-id="Fast-array-processing-in-user-defined-functions-1.1.2"><span class="toc-item-num">1.1.2 </span>Fast array processing in user-defined functions</a></span></li><li><span><a href="#Functions-with-more-(or-less)-than-one-input-or-output" data-toc-modified-id="Functions-with-more-(or-less)-than-one-input-or-output-1.1.3"><span class="toc-item-num">1.1.3 </span>Functions with more (or less) than one input or output</a></span></li><li><span><a href="#Positional-and-keyword-arguments" data-toc-modified-id="Positional-and-keyword-arguments-1.1.4"><span class="toc-item-num">1.1.4 </span>Positional and keyword arguments</a></span></li><li><span><a href="#Variable-number-of-arguments" data-toc-modified-id="Variable-number-of-arguments-1.1.5"><span class="toc-item-num">1.1.5 </span>Variable number of arguments</a></span></li><li><span><a href="#Passing-data-to-and-from-functions" data-toc-modified-id="Passing-data-to-and-from-functions-1.1.6"><span class="toc-item-num">1.1.6 </span>Passing data to and from functions</a></span></li></ul></li><li><span><a href="#Methods-and-attributes" data-toc-modified-id="Methods-and-attributes-1.2"><span class="toc-item-num">1.2 </span>Methods and attributes</a></span></li><li><span><a href="#Example:-linear-least-squares-fitting" data-toc-modified-id="Example:-linear-least-squares-fitting-1.3"><span class="toc-item-num">1.3 </span>Example: linear least squares fitting</a></span><ul class="toc-item"><li><span><a href="#Linear-regression" data-toc-modified-id="Linear-regression-1.3.1"><span class="toc-item-num">1.3.1 </span>Linear regression</a></span></li><li><span><a href="#Linear-regression-with-weighting:-$\chi^2$" data-toc-modified-id="Linear-regression-with-weighting:-$\chi^2$-1.3.2"><span class="toc-item-num">1.3.2 </span>Linear regression with weighting: $\chi^2$</a></span></li></ul></li><li><span><a href="#Anonymous-functions-(lambda)" data-toc-modified-id="Anonymous-functions-(lambda)-1.4"><span class="toc-item-num">1.4 </span>Anonymous functions (lambda)</a></span></li><li><span><a href="#Exercises" data-toc-modified-id="Exercises-1.5"><span class="toc-item-num">1.5 </span>Exercises</a></span></li></ul></li></ul></div>
# %% [markdown]
#
# Functions
# =========
#
# As you develop more complex computer code, it becomes increasingly
# important to organize your code into modular blocks. One important means
# for doing so is *user-defined* Python functions. User-defined functions
# are a lot like built-in functions that we have encountered in core
# Python as well as in NumPy and Matplotlib. The main difference is that
# user-defined functions are written by you. The idea is to define
# functions to simplify your code and to allow you to reuse the same code
# in different contexts.
#
# The number of ways that functions are used in programming is so varied
# that we cannot possibly enumerate all the possibilities. As our use of
# Python functions in scientific programming is somewhat specialized, we
# introduce only a few of the possible uses of Python functions, ones that
# are the most common in scientific programming.
#
# single: functions; user defined
#
# User-defined functions
# ----------------------
#
# The NumPy package contains a plethora of mathematical functions. You can
# find a listing of the mathematical functions available through NumPy on
# the web page
# <http://docs.scipy.org/doc/numpy/reference/routines.math.html>. While
# the list may seem pretty exhaustive, you may nevertheless find that you
# need a function that is not available in the NumPy Python library. In
# those cases, you will want to write your own function.
#
# In studies of optics and signal processing one often runs into the sinc
# function, which is defined as
#
# $$\mathrm{sinc}\,x \equiv \frac{\sin x}{x} \;.$$
#
# Let's write a Python function for the sinc function. Here is our first
# attempt:
#
# ``` python
# def sinc(x):
# y = np.sin(x)/x
# return y
# ```
#
# Every function definition begins with the word `def` followed by the
# name you want to give to the function, `sinc` in this case, then a list
# of arguments enclosed in parentheses, and finally terminated with a
# colon. In this case there is only one argument, `x`, but in general
# there can be as many arguments as you want, including no arguments at
# all. For the moment, we will consider just the case of a single
# argument.
#
# The indented block of code following the first line defines what the
# function does. In this case, the first line calculates
# $\mathrm{sinc}\,x = \sin x/x$ and sets it equal to `y`. The `return`
# statement of the last line tells Python to return the value of `y` to
# the user.
#
# We can try it out in the IPython shell. First we type in the function
# definition.
#
# ``` ipython
# In [1]: def sinc(x):
# ...: y = sin(x)/x
# ...: return y
# ```
#
# Because we are doing this from the IPython shell, we don't need to
# import NumPy; it's preloaded. Now the function $\mathrm{sinc}\,x$ is
# available to be used from the IPython shell
#
# ``` ipython
# In [2]: sinc(4)
# Out[2]: -0.18920062382698205
#
# In [3]: a = sinc(1.2)
#
# In [4]: a
# Out[4]: 0.77669923830602194
#
# In [5]: sin(1.2)/1.2
# Out[5]: 0.77669923830602194
# ```
#
# Inputs and outputs 4 and 5 verify that the function does indeed give the
# same result as an explicit calculation of $\sin x/x$.
#
# You may have noticed that there is a problem with our definition of
# $\mathrm{sinc}\,x$ when `x=0.0`. Let's try it out and see what happens
#
# ``` ipython
# In [6]: sinc(0.0)
# Out[6]: nan
# ```
#
# IPython returns `nan` or "not a number", which occurs when Python
# attempts a division by zero, which is not defined. This is not the
# desired response as $\mathrm{sinc}\,x$ is, in fact, perfectly well
# defined for $x=0$. You can verify this using L'Hopital's rule, which you
# may have learned in your study of calculus, or you can ascertain the
# correct answer by calculating the Taylor series for $\mathrm{sinc}\,x$.
# Here is what we get
#
# $$\mathrm{sinc}\,x = \frac{\sin x}{x}
# = \frac{x - \frac{x^3}{3!} + \frac{x^5}{5!} + ...}{x}
# = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} + ... \;.$$
#
# From the Taylor series, it is clear that $\mathrm{sinc}\,x$ is
# well-defined at and near $x=0$ and that, in fact, $\mathrm{sinc}(0)=1$.
# Let's modify our function so that it gives the correct value for `x=0`.
#
# ``` ipython
# In [7]: def sinc(x):
# ...: if x==0.0:
# ...: y = 1.0
# ...: else:
# ...: y = sin(x)/x
# ...: return y
#
# In [8]: sinc(0)
# Out[8]: 1.0
#
# In [9]: sinc(1.2)
# Out[9]: 0.77669923830602194
# ```
#
# Now our function gives the correct value for `x=0` as well as for values
# different from zero.
#
# single: functions; looping over arrays
#
# ### Looping over arrays in user-defined functions
#
# The code for $\mathrm{sinc}\,x$ works just fine when the argument is a
# single number or a variable that represents a single number. However, if
# the argument is a NumPy array, we run into a problem, as illustrated
# below.
#
# ``` ipython
# In [10]: x = arange(0, 5., 0.5)
#
# In [11]: x
# Out[11]: array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5,
# 4. , 4.5])
#
# In [12]: sinc(x)
# ----------------------------------------------------------
# ValueError Traceback (most recent call last)
# ----> 1 sinc(x)
#
# 1 def sinc(x):
# ----> 2 if x==0.0:
# 3 y = 1.0
# 4 else:
# 5 y = np.sin(x)/x
#
# ValueError: The truth value of an array with more than one
# element is ambiguous.
# ```
#
# The `if` statement in Python is set up to evaluate the truth value of a
# single variable, not of multielement arrays. When Python is asked to
# evaluate the truth value for a multi-element array, it doesn't know what
# to do and therefore returns an error.
#
# An obvious way to handle this problem is to write the code so that it
# processes the array one element at a time, which you could do using a
# `for` loop, as illustrated below.
#
# ``` python
# def sinc(x):
# y = [] # creates an empty list to store results
# for xx in x: # loops over all elements in x array
# if xx==0.0: # adds result of 1.0 to y list if
# y += [1.0] # xx is zero
# else: # adds result of sin(xx)/xx to y list if
# y += [np.sin(xx)/xx] # xx is not zero
# return np.array(y) # converts y to array and returns array
#
# import numpy as np
# import matplotlib.pyplot as plt
#
# x = np.linspace(-10, 10, 256)
# y = sinc(x)
#
# plt.plot(x, y)
# plt.axhline(color="gray", zorder=-1)
# plt.axvline(color="gray", zorder=-1)
# plt.show()
# ```
#
# The `for` loop evaluates the elements of the `x` array one by one and
# appends the results to the list `y` one by one. When it is finished, it
# converts the list to an array and returns the array. The code following
# the function definition plots $\mathrm{sinc}\,x$ as a function of $x$.
#
# In the program above, you may have noticed that the NumPy library is
# imported *after* the `sinc(x)` function definition. As the function uses
# the NumPy functions `sin` and `array`, you may wonder how this program
# can work. Doesn't the `import numpy` statement have to be called before
# any NumPy functions are used? The answer is an emphatic "YES". What you
# need to understand is that the function definition is *not executed*
# when it is defined, nor can it be as it has no input `x` data to
# process. That part of the code is just a definition. The first time the
# code for the `sinc(x)` function is actually executed is when it is
# called on line 14 of the program, which occurs after the NumPy library
# is imported in line 10. The figure below shows the plot of the
# $\mathrm{sinc}\,x$ function generated by the above code.
#
# <figure>
# <img src="attachment:sinc.png" class="align-center" alt="" /><figcaption>Plot of user-defined <code>sinc(x)</code> function.</figcaption>
# </figure>
#
# single: functions; fast array processing single: conditionals; applied
# to arrays
#
# ### Fast array processing in user-defined functions
#
# While using loops to process arrays works just fine, it is usually not
# the best way to accomplish the task in Python. The reason is that loops
# in Python are executed rather slowly. To deal with this problem, the
# developers of NumPy introduced a number of functions designed to process
# arrays quickly and efficiently. For the present case, what we need is a
# conditional statement or function that can process arrays directly. The
# function we want is called `where` and it is a part of the NumPy
# library. The `where` function has the form
#
# ``` python
# where(condition, output if True, output if False)
# ```
#
# The first argument of the `where` function is a conditional statement
# involving an array. The `where` function applies the condition to the
# array element by element, and returns the second argument for those
# array elements for which the condition is `True`, and returns the third
# argument for those array elements that are `False`. We can apply it to
# the `sinc(x)` function as follows
#
# ``` python
# def sinc(x):
# z = np.where(x==0.0, 1.0, np.sin(x)/x)
# return z
# ```
#
# The `where` function creates the array `z` and sets the elements of `z`
# equal to 1.0 where the corresponding elements of `x` are zero, and
# otherwise sets the corresponding elements to `sin(x)/x`. This code
# executes much faster than the version using a `for` loop, typically 25
# to 100 times faster depending on the size of the array. Moreover, the
# new code is much
# simpler to write and read. An additional benefit of the `where` function
# is that it can handle single variables and arrays equally well. The code
# we wrote for the sinc function with the `for` loop cannot handle single
# variables. Of course we could rewrite the code so that it did, but the
# code becomes even more clunky. It's better just to use NumPy's `where`
# function.
#
# #### The moral of the story
#
# The moral of the story is that you should avoid using `for` and `while`
# loops to process arrays in Python programs whenever an array-processing
# method is available. As a beginning Python programmer, you may not
# always see how to avoid loops, and indeed, avoiding them is not always
# possible, but you should look for ways to avoid loops, especially loops
# that iterate a large number of times. As you become more experienced,
# you will find that using array-processing methods in Python becomes more
# natural. Using them can greatly speed up the execution of your code,
# especially when working with large arrays.
#
# single: functions; multiple inputs and/or outputs
#
# ### Functions with more (or less) than one input or output
#
# Python functions can have any number of input arguments and can return
# any number of variables. For example, suppose you want a function that
# outputs $n$ $(x,y)$ coordinates around a circle of radius $r$ centered
# at the point $(x_0,y_0)$. The inputs to the function would be $r$,
# $x_0$, $y_0$, and $n$. The outputs would be the $n$ $(x,y)$ coordinates.
# The following code implements this function.
#
# ``` python
# def circle(r, x0, y0, n):
# theta = np.linspace(0., 2.*np.pi, n, endpoint=False)
# x = r * np.cos(theta)
# y = r * np.sin(theta)
# return x0+x, y0+y
# ```
#
# This function has four inputs and two outputs. In this case, the four
# inputs are simple numeric variables and the two outputs are NumPy
# arrays. In general, the inputs and outputs can be any combination of
# data types: arrays, lists, strings, *etc*. Of course, the body of the
# function must be written to be consistent with the prescribed data
# types.
#
# Functions can also return nothing to the calling program but just
# perform some task. For example, here is a program that clears the
# terminal screen
#
# ``` python
# import subprocess
# import platform
#
# def clear():
# subprocess.Popen( "cls" if platform.system() ==
# "Windows" else "clear", shell=True)
# ```
#
# The function is invoked by typing `clear()`. It has no inputs and no
# outputs but it performs a useful task. This function uses two standard
# Python libraries, `subprocess` and `platform`, which are useful for
# performing computer system tasks. It's not important that you know
# anything about them at this point. We simply use them here to
# demonstrate a useful cross-platform function that has no inputs and
# returns no values.
#
# single: functions; arguments; keyword single: functions; arguments;
# positional
#
# ### Positional and keyword arguments
#
# It is often useful to have function arguments that have some default
# setting. This happens when you want an input to a function to have some
# standard value or setting most of the time, but you would like to
# reserve the possibility of giving it some value other than the default
# value.
#
# For example, in the program `circle` from the previous section, we might
# decide that under most circumstances, we want `n=12` points around the
# circle, like the points on a clock face, and we want the circle to be
# centered at the origin. In this case, we would rewrite the code to read
#
# ``` python
# def circle(r, x0=0.0, y0=0.0, n=12):
# theta = np.linspace(0., 2.*np.pi, n, endpoint=False)
# x = r * np.cos(theta)
# y = r * np.sin(theta)
# return x0+x, y0+y
# ```
#
# The default values of the arguments `x0`, `y0`, and `n` are specified in
# the argument of the function definition in the `def` line. Arguments
# whose default values are specified in this manner are called *keyword
# arguments*, and they can be omitted from the function call if the user
# is content using those values. For example, writing `circle(4)` is now a
# perfectly legal way to call the `circle` function and it would produce
# 12 $(x,y)$ coordinates centered about the origin $(x,y)=(0,0)$. On the
# other hand, if you want the values of `x0`, `y0`, and `n` to be
# something different from the default values, you can specify their
# values as you would have before.
#
# If you want to change only some of the keyword arguments, you can do so
# by using the keywords in the function call. For example, suppose you are
# content with having the circle centered on $(x,y)=(0,0)$ but you want only
# 6 points around the circle rather than 12. Then you would call the
# `circle` function as follows:
#
# ``` python
# circle(2, n=6)
# ```
#
# The unspecified keyword arguments keep their default values of zero but
# the number of points `n` around the circle is now 6 instead of the
# default value of 12.
#
# The normal arguments without keywords are called *positional arguments*;
# they have to appear *before* any keyword arguments and, when the
# function is called, must be supplied values in the same order as
# specified in the function definition. The keyword arguments, if
# supplied, can be supplied in any order provided they are supplied with
# their keywords. If supplied without their keywords, they too must be
# supplied in the order they appear in the function definition. The
# following function calls to `circle` both give the same output.
#
# ``` ipython
# In [13]: circle(3, n=3, y0=4, x0=-2)
# Out[13]: (array([ 1. , -3.5, -3.5]),
# array([ 4. , 6.59807621, 1.40192379]))
#
# In [14]: circle(3, -2, 4, 3) # w/o keywords, arguments
# # supplied in order
# Out[14]: (array([ 1. , -3.5, -3.5]), array([ 4. ,
# 6.59807621, 1.40192379]))
# ```
#
# By now you probably have noticed that we used the keyword argument
# `endpoint` in calling `linspace` in our definition of the `circle`
# function. The default value of `endpoint` is `True`, meaning that
# `linspace` includes the endpoint specified in the second argument of
# `linspace`. We set it equal to `False` so that the last point was not
# included. Do you see why?
#
# single: functions; arguments; variable number single: functions;
# arguments; \*args
#
# ### Variable number of arguments
#
# While it may seem odd, it is sometimes useful to leave the number of
# arguments unspecified. A simple example is a function that computes the
# product of an arbitrary number of numbers:
#
# ``` python
# def product(*args):
# print("args = {}".format(args))
# p = 1
# for num in args:
# p *= num
# return p
# ```
#
# ``` ipython
# In [15]: product(11., -2, 3)
# args = (11.0, -2, 3)
# Out[15]: -66.0
#
# In [16]: product(2.31, 7)
# args = (2.31, 7)
# Out[16]: 16.17
# ```
#
# The `print("args...)` statement in the function definition is not
# necessary, of course, but is put in to show that the argument `args` is
# a tuple inside the function. Here it is used because one does not know
# ahead of time how many numbers are to be multiplied together.
#
# The `*args` argument is also quite useful in another context: when
# passing the name of a function as an argument in another function. In
# many cases, the function name that is passed may have a number of
# parameters that must also be passed but aren't known ahead of time. If
# this all sounds a bit confusing---functions calling other functions---a
# concrete example will help you understand.
#
# Suppose we have the following function that numerically computes the
# value of the derivative of an arbitrary function $f(x)$:
#
# ``` python
# def deriv(f, x, h=1.e-9, *params):
# return (f(x+h, *params)-f(x-h, *params))/(2.*h)
# ```
#
# The argument `*params` is an optional positional argument. We begin by
# demonstrating the use of the function `deriv` without using the optional
# `*params` argument. Suppose we want to compute the derivative of the
# function $f_0(x)=4x^5$. First, we define the function
#
# ``` python
# def f0(x):
# return 4.*x**5
# ```
#
# Now let's find the derivative of $f_0(x)=4x^5$ at $x=3$ using the
# function `deriv`:
#
# ``` ipython
# In [17]: deriv(f0, 3)
# Out[17]: 1620.0001482502557
# ```
#
# The exact result, given by evaluating $f_0^\prime(x)=20x^4$ at $x=3$ is
# 1620, so our function to numerically calculate the derivative works
# pretty well.
#
# Suppose we had defined a more general function $f_1(x)=ax^p$ as follows:
#
# ``` python
# def f1(x, a, p):
# return a*x**p
# ```
#
# Suppose we want to calculate the derivative of this function for a
# particular set of parameters $a$ and $p$. Now we face a problem, because
# it might seem that there is no way to pass the parameters $a$ and $p$ to
# the `deriv` function. Moreover, this is a generic problem for functions
# such as `deriv` that use a function as an input, because different
# functions you want to use as inputs generally come with different
# parameters. Therefore, we would like to write our program `deriv` so
# that it works, irrespective of how many parameters are needed to specify
# a particular function.
#
# This is what the optional positional argument `*params` defined in
# `deriv` is for: to pass parameters of `f1`, like $a$ and $p$, through
# `deriv`. To see how this works, let's set $a$ and $p$ to be 4 and 5,
# respectively, the same values we used in the definition of `f0`, so that
# we can compare the results:
#
# ``` ipython
# In [16]: deriv(f1, 3, 1.e-9, 4, 5)
# Out[16]: 1620.0001482502557
# ```
#
# We get the same answer as before, but this time we have used `deriv`
# with a more general form of the function $f_1(x)=ax^p$.
#
# The order of the parameters is important. The function `deriv` uses `x`,
# the first argument of `f1`, as its principal argument, and then uses `a`
# and `p`, in the same order that they are defined in the function `f1`,
# to fill in the additional arguments---the parameters---of the function
# `f1`.
#
# single: functions; arguments; \*\*kwargs
#
# Optional arguments must appear after the regular positional and keyword
# arguments in a function call. The order of the arguments must adhere to
# the following convention:
#
# ``` python
# def func(pos1, pos2, ..., keywd1, keywd2, ..., *args, **kwargs):
# ```
#
# That is, the order of arguments is: positional arguments first, then
# keyword arguments, then optional positional arguments (`*args`), then
# optional keyword arguments (`**kwargs`). Note that to use the `*params`
# argument, we had to explicitly include the keyword argument `h` even
# though we didn't need to change it from its default value.
#
# Python also allows for a variable number of keyword
# arguments---`**kwargs`---in a function call. While `*args` is a tuple,
# `kwargs` is a dictionary, so that the value of an optional keyword
# argument is accessed through its dictionary key.
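#
# As a minimal sketch (the function `report` is hypothetical, not used
# elsewhere in this chapter), a function taking arbitrary keyword
# arguments might look like this:
#
# ``` python
# def report(**kwargs):
#     for key, value in kwargs.items():    # kwargs is a dictionary
#         print("{} = {}".format(key, value))
#
# report(mass=1.5, velocity=3.2)  # prints "mass = 1.5" then "velocity = 3.2"
# ```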
#
# ### Passing data to and from functions
#
# Functions are like mini-programs within the larger programs that call
# them. Each function has a set of variables with certain names that are
# to some degree or other isolated from the calling program. We shall get
# more specific about just how isolated those variables are below, but
# before we do, we introduce the concept of a *namespace*. Each function
# has its own namespace, which is essentially a mapping of variable names
# to objects, like numerics, strings, lists, and so forth. It's a kind of
# dictionary. The calling program has its own namespace, distinct from
# that of any functions it calls. The distinctiveness of these namespaces
# plays an important role in how functions work, as we shall see below.
#
# #### Variables and arrays created entirely within a function
#
# An important feature of functions is that variables and arrays created
# *entirely within* a function cannot be seen by the program that calls
# the function unless the variable or array is explicitly passed to the
# calling program in the `return` statement. This is important because it
# means you can create and manipulate variables and arrays, giving them
# any name you please, without affecting any variables or arrays outside
# the function, even if the variables and arrays inside and outside a
# function share the same name.
#
# To see how this works, let's rewrite our program to plot the sinc
# function using the sinc function definition that uses the `where`
# function.
#
# ``` python
# import numpy as np
# import matplotlib.pyplot as plt
#
# def sinc(x):
#     z = np.where(x == 0.0, 1.0, np.sin(x) / x)
#     return z
#
# x = np.linspace(-10, 10, 256)
# y = sinc(x)
#
# plt.plot(x, y)
# plt.axhline(color="gray", zorder=-1)
# plt.axvline(color="gray", zorder=-1)
# plt.show()
# ```
#
# Running this program produces a plot like the plot of sinc shown in the
# previous section. Notice that the array variable `z` is only defined
# within the function definition of sinc. If we run the program from the
# IPython terminal, it produces the plot, of course. Then if we ask
# IPython to print out the arrays, `x`, `y`, and `z`, we get some
# interesting and informative results, as shown below.
#
# ``` ipython
# In [15]: run sinc3.py
#
# In [16]: x
# Out[16]: array([-10. , -9.99969482, -9.99938964, ...,
# 9.99938964, 9.99969482, 10. ])
#
# In [17]: y
# Out[17]: array([-0.05440211, -0.05437816, -0.0543542 , ...,
# -0.0543542 , -0.05437816, -0.05440211])
#
# In [18]: z
# ---------------------------------------------------------
# NameError Traceback (most recent call last)
#
# NameError: name 'z' is not defined
# ```
#
# When we type in `x` at the `In [16]:` prompt, IPython prints out the
# array `x` (some of the output is suppressed because the array `x` has
# many elements); similarly for `y`. But when we type `z` at the
# `In [18]:` prompt, IPython returns a `NameError` because `z` is not
# defined. The IPython terminal is working in the same *namespace* as the
# program. But the namespace of the sinc function is isolated from the
# namespace of the program that calls it, and therefore isolated from
# IPython. This also means that when the sinc function ends with
# `return z`, it doesn't return the name `z`, but instead assigns the
# values in the array `z` to the array `y`, as directed by the statement
# `y = sinc(x)` in the main program.
#
# #### Passing variables and arrays to functions: mutable and immutable objects
#
# What happens to a variable or an array passed to a function when the
# variable or array is *changed* within the function? It turns out that
# the answers are different depending on whether the variable passed is a
# simple numeric variable, string, or tuple, or whether it is an array or
# list. The program below illustrates the different ways that Python
# handles single variables *vs* the way it handles lists and arrays.
#
# ``` python
# import numpy as np
#
# def test(s, v, t, l, a):
#     s = "I am doing fine"
#     v = np.pi**2
#     t = (1.1, 2.9)
#     l[-1] = 'end'
#     a[0] = 963.2
#     return s, v, t, l, a
#
# s = "How do you do?"
# v = 5.0
# t = (97.5, 82.9, 66.7)
# l = [3.9, 5.7, 7.5, 9.3]
# a = np.array(l)
#
# print('*************')
# print("s =", s)
# print("v = {0:5.2f}".format(v))
# print("t =", t)
# print("l =", l)
# print("a =", a)
# print('*************')
# print('*call "test"*')
#
# s1, v1, t1, l1, a1 = test(s, v, t, l, a)
#
# print('*************')
# print("s1 =", s1)
# print("v1 = {0:5.2f}".format(v1))
# print("t1 =", t1)
# print("l1 =", l1)
# print("a1 =", a1)
# print('*************')
# print("s =", s)
# print("v = {0:5.2f}".format(v))
# print("t =", t)
# print("l =", l)
# print("a =", a)
# print('*************')
# ```
#
# The function `test` has five arguments, a string `s`, a numerical
# variable `v`, a tuple `t`, a list `l`, and a NumPy array `a`. `test`
# modifies each of these arguments and then returns the modified `s`, `v`,
# `t`, `l`, `a`. Running the program produces the following output.
#
# ``` ipython
# In [17]: run passingVars.py
# *************
# s = How do you do?
# v = 5.00
# t = (97.5, 82.9, 66.7)
# l = [3.9, 5.7, 7.5, 9.3]
# a = [ 3.9 5.7 7.5 9.3]
# *************
# *call "test"*
# *************
# s1 = I am doing fine
# v1 = 9.87
# t1 = (1.1, 2.9)
# l1 = [3.9, 5.7, 7.5, 'end']
# a1 = [ 963.2 5.7 7.5 9.3]
# *************
# s = How do you do?
# v = 5.00
# t = (97.5, 82.9, 66.7)
# l = [3.9, 5.7, 7.5, 'end']
# a = [ 963.2 5.7 7.5 9.3]
# *************
# ```
#
# The program prints out three blocks of variables separated by asterisks.
# The first block merely verifies that the contents of `s`, `v`, `t`, `l`,
# and `a` are those assigned in the main program. Then the function `test` is
# called. The next block prints the output of the call to the function
# `test`, namely the variables `s1`, `v1`, `t1`, `l1`, and `a1`. The
# results verify that the function modified the inputs as directed by the
# `test` function.
#
# The third block prints out the variables `s`, `v`, `t`, `l`, and `a`
# from the calling program *after* the function `test` was called. These
# variables served as the inputs to the function `test`. Examining the
# output from the third printing block, we see that the values of the
# string `s`, the numeric variable `v`, and the contents of `t` are
# unchanged after the function call. This is probably what you would
# expect. On the other hand, we see that the list `l` and the array `a`
# are changed after the function call. This might surprise you! But these
# are important points to remember, so important that we summarize them in
# two bullet points here:
#
# > - Changes to string, variable, and tuple arguments of a function
# > within the function do not affect their values in the calling
# > program.
# > - Changes to values of elements in list and array arguments of a
# > function within the function are reflected in the values of the
# >   same list and array elements in the calling program.
#
# The point is that simple numerics, strings and tuples are immutable
# while lists and arrays are mutable. Because immutable objects can't be
# changed, changing them within a function creates new objects with the
# same name inside of the function, but the old immutable objects that
# were used as arguments in the function call remain unchanged in the
# calling program. On the other hand, if elements of mutable objects like
# those in lists or arrays are changed, then those elements that are
# changed inside the function are also changed in the calling program.
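#
# If you want a function like `test` to leave the caller's list and
# array untouched, one simple precaution (a usage sketch, not part of
# the original program) is to pass copies instead:
#
# ``` python
# s1, v1, t1, l1, a1 = test(s, v, t, list(l), a.copy())
# # l and a in the calling program now remain unchanged
# ```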
#
# Methods and attributes
# ----------------------
#
# You have already encountered quite a number of functions that are part
# of either NumPy or Python or Matplotlib. But there is another way in
# which Python implements things that act like functions. To understand
# what they are, you need to understand that variables, strings, arrays,
# lists, and other such data structures in Python are not merely the
# numbers or strings we have defined them to be. They are *objects*. In
# general, an object in Python has associated with it a number of
# *attributes* and a number of specialized functions called *methods* that
# act on the object. How attributes and methods work with objects is best
# illustrated by example.
#
# Let's start with the NumPy array. A NumPy array is a Python object and
# therefore has associated with it a number of attributes and methods.
# Suppose, for example, we write `a = np.random.random(10)`, which creates an
# array of 10 uniformly distributed random numbers between 0 and 1. An
# example of an attribute of an array is the size or number of elements in
# the array. An attribute of an object in Python is accessed by typing the
# object name followed by a period followed by the attribute name. The
# code below illustrates how to access two different attributes of an
# array: its size and its data type.
#
# ``` ipython
# In [18]: a = np.random.random(10)
#
# In [19]: a.size
# Out[19]: 10
#
# In [20]: a.dtype
# Out[20]: dtype('float64')
# ```
#
# Any object in Python can and in general does have a number of attributes
# that are accessed in just the way demonstrated above, with a period and
# the attribute name following the name of the particular object. In
# general, attributes involve properties of the object that are stored by
# Python with the object and require no computation. Python just looks up
# the attribute and returns its value.
#
# Objects in Python also have associated with them a number of specialized
# functions called *methods* that act on the object. In contrast to
# attributes, methods generally involve Python performing some kind of
# computation. Methods are accessed in a fashion similar to attributes, by
# appending a period followed by the method's name, which is followed by a
# pair of open-close parentheses, consistent with methods being a kind of
# function that acts on the object. Often methods are used with no
# arguments, as methods by default act on the object whose name they
# follow. In some cases, however, methods can take arguments. Examples of
# methods for NumPy arrays are sorting the array and calculating its mean
# or standard deviation. The code below illustrates a few array methods.
#
# ``` ipython
# In [21]: a
# Out[21]:
# array([ 0.859057 , 0.27228037, 0.87780026, 0.14341207,
# 0.05067356, 0.83490135, 0.54844515, 0.33583966,
# 0.31527767, 0.15868803])
#
# In [22]: a.sum() # sum
# Out[22]: 4.3963751104791005
#
# In [23]: a.mean() # mean or average
# Out[23]: 0.43963751104791005
#
# In [24]: a.var() # variance
# Out[24]: 0.090819477333711512
#
# In [25]: a.std() # standard deviation
# Out[25]: 0.30136270063448711
#
# In [26]: a.sort() # sort small to large
#
# In [27]: a
# Out[27]:
# array([ 0.05067356, 0.14341207, 0.15868803, 0.27228037,
# 0.31527767, 0.33583966, 0.54844515, 0.83490135,
# 0.859057 , 0.87780026])
#
# In [28]: a.clip(0.3, 0.8)
# Out[28]:
# array([ 0.3 , 0.3 , 0.3 , 0.3 ,
# 0.31527767, 0.33583966, 0.54844515, 0.8 ,
# 0.8 , 0.8 ])
# ```
#
# The `clip()` method provides an example of a method that takes
# arguments, in this case the lower and upper bounds; array elements
# falling outside this range are cut off and set to the nearest bound.
#
# Example: linear least squares fitting
# -------------------------------------
#
# In this section we illustrate how to use functions and methods in the
# context of modeling experimental data.
#
# In science and engineering we often have some theoretical curve or
# *fitting function* that we would like to fit to some experimental data.
# In general, the fitting function is of the form $f(x; a, b, c, ...)$,
# where $x$ is the independent variable and $a$, $b$, $c$, ... are
# parameters to be adjusted so that the function $f(x; a, b, c, ...)$ best
# fits the experimental data. For example, suppose we had some data of the
# velocity *vs* time for a falling mass. If the mass falls only a short
# distance such that its velocity remains well below its terminal
# velocity, we can ignore air resistance. In this case, we expect the
# acceleration to be constant and the velocity to change linearly in time
# according to the equation
#
# $$v(t) = v_{0} - g t \;,$$
#
# where $g$ is the local gravitational acceleration. We can fit the data
# graphically, say by plotting it as shown below in Fig.
# `4.6<fig:FallingMassDataPlot>` and then drawing a line through the data.
# When we draw a straight line through the data, we try to minimize the
# distance between the points and the line, globally averaged over the
# whole data set.
#
# <figure>
# <img src="attachment:VelocityVsTimePlot.png" class="align-center" alt="" /><figcaption>Velocity <em>vs</em> time for falling mass.</figcaption>
# </figure>
#
# While this can give a reasonable estimate of the best fit to the data,
# the procedure is rather *ad hoc*. We would prefer to have a more
# well-defined analytical method for determining what constitutes a "best
# fit". One way to do that is to consider the sum
#
# $$S = \sum_{i}^{n} [y_{i} - f(x_{i}; a, b, c, ...)]^2 \;,$$
#
# where $y_{i}$ and $f(x_{i}; a, b, c, ...)$ are the values of the
# experimental data and the fitting function, respectively, at $x_{i}$,
# and $S$ is the square of their difference summed over all $n$ data
# points. The quantity $S$ is a sort of global measure of how much the
# fit $f(x_{i}; a, b, c, ...)$ differs from the experimental data $y_{i}$.
#
# Notice that for a given set of data points $\{x_i, y_i\}$, $S$ is a
# function only of the fitting parameters $a, b, ...$, that is,
# $S=S(a, b, c, ...)$. One way of defining a *best* fit, then, is to find
# the values of the fitting parameters $a, b, ...$ that minimize $S$.
#
# In principle, finding the values of the fitting parameters $a, b, ...$
# that minimize $S$ is a simple matter. Just set the partial
# derivatives of $S$ with respect to the fitting parameter equal to zero
# and solve the resulting system of equations:
#
# $$\frac{\partial S}{\partial a} = 0 \;, \quad
# \frac{\partial S}{\partial b} = 0 \;, ...$$
#
# Because there are as many equations as there are fitting parameters, we
# should be able to solve the system of equations and find the values of
# the fitting parameters that minimize $S$. Solving those systems of
# equations is straightforward if the fitting function $f(x; a, b, ...)$
# is linear in the fitting parameters. Some examples of fitting functions
# linear in the fitting parameters are:
#
# $$\begin{aligned}
# f(x; a, b) &= a + bx \\
# f(x; a, b, c) &= a + bx + cx^2 \\
# f(x; a, b, c) &= a \sin x + b e^x + c e^{-x^2} \;.
# \end{aligned}$$
#
# For fitting functions such as these, taking the partial derivatives with
# respect to the fitting parameters, as proposed in `eq:sysSzero`, results
# in a set of algebraic equations that are linear in the fitting parameters
# $a, b, ...$. Because they are linear, these equations can be solved in a
# straightforward manner.
#
# For cases in which the fitting function is not linear in the fitting
# parameters, one can generally still find the values of the fitting
# parameters that minimize $S$ but finding them requires more work, which
# goes beyond our immediate interests here.
#
# ### Linear regression
#
# We start by considering the simplest case, fitting a straight line to a
# data set, such as the one shown in Fig. `4.6 <fig:FallingMassDataPlot>`
# above. Here the fitting function is $f(x) = a + bx$, which is linear in
# the fitting parameters $a$ and $b$. For a straight line, the sum in
# `eq:lsqrsum` becomes
#
# $$S(a,b) = \sum_{i} (y_{i} - a - bx_{i})^2 \;.$$
#
# Finding the best fit in this case corresponds to finding the values of
# the fitting parameters $a$ and $b$ for which $S(a,b)$ is a minimum. To
# find the minimum, we set the derivatives of $S(a,b)$ equal to zero:
#
# $$\begin{aligned}
# \frac{\partial S}{\partial a} &= \sum_{i}-2(y_{i}-a-bx_{i}) = 2 \left(na + b\sum_{i}x_{i} - \sum_{i}y_{i} \right) = 0 \\
# \frac{\partial S}{\partial b} &= \sum_{i}-2(y_{i}-a-bx_{i})\,x_{i} = 2 \left(a\sum_{i}x_{i} + b\sum_{i}x_{i}^2 - \sum_{i}x_{i}y_{i} \right) = 0
# \end{aligned}$$
#
# Dividing both equations by $2n$ leads to the equations
#
# $$\begin{aligned}
# a + b\bar{x} &= \bar{y}\\
# a\bar{x} + b\frac{1}{n}\sum_{i}x_{i}^2 &= \frac{1}{n}\sum_{i}x_{i}y_{i}
# \end{aligned}$$
#
# where
#
# $$\begin{aligned}
# \bar{x} &= \frac{1}{n}\sum_{i}x_{i}\\
# \bar{y} &= \frac{1}{n}\sum_{i}y_{i}\;.
# \end{aligned}$$
#
# Solving Eq. `eq:ablinreg` for the fitting parameters gives
#
# $$\begin{aligned}
# b &= \frac{\sum_{i}x_{i}y_{i} - n\bar{x}\bar{y}} {\sum_{i}x_{i}^2 - n \bar{x}^2}\\
# a &= \bar{y} - b\bar{x} \;.
# \end{aligned}$$
#
# Noting that $n\bar{y}=\sum_{i}y_{i}$ and $n\bar{x}=\sum_{i}x_{i}$, the results
# can be written as
#
# $$\begin{aligned}
# b &= \frac{\sum_{i}(x_{i}- \bar{x})\,y_{i}} {\sum_{i}(x_{i}- \bar{x})\,x_{i}} \\
# a &= \bar{y} - b\bar{x} \;.
# \end{aligned}$$
#
# While Eqs. `eq:b1` and `eq:b2` are equivalent analytically, Eq. `eq:b2`
# is preferred for numerical calculations because Eq. `eq:b2` is less
# sensitive to roundoff errors. Here is a Python function implementing
# this algorithm:
#
# ``` python
# def LineFit(x, y):
#     '''Returns slope and y-intercept of linear fit to (x, y)
#     data set.'''
#     xavg = x.mean()
#     slope = (y * (x - xavg)).sum() / (x * (x - xavg)).sum()
#     yint = y.mean() - slope * xavg
#     return slope, yint
# ```
#
# It's hard to imagine a simpler implementation of the linear regression
# algorithm.
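#
# As a quick sanity check, you might run `LineFit` on synthetic data
# (the data below are made up for illustration):
#
# ``` python
# import numpy as np
#
# rng = np.random.RandomState(0)            # reproducible noise
# x = np.linspace(0., 10., 50)
# y = 2.0 + 0.5 * x + 0.1 * rng.randn(50)   # a line plus small noise
#
# slope, yint = LineFit(x, y)
# print(slope, yint)                        # slope ~ 0.5, yint ~ 2.0
# ```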
#
# ### Linear regression with weighting: $\chi^2$
#
# The linear regression routine of the previous section weights all data
# points equally. That is fine if the absolute uncertainty is the same for
# all data points. In many cases, however, the uncertainty is different
# for different points in a data set. In such cases, we would like to
# weight the data that has smaller uncertainty more heavily than those
# data that have greater uncertainty. For this case, there is a standard
# method of weighting and fitting data that is known as $\chi^2$ (or
# *chi-squared*) fitting. In this method we suppose that associated with
# each $(x_{i},y_{i})$ data point is an uncertainty in the value of
# $y_{i}$ of $\pm\sigma_{i}$. In this case, the "best fit" is defined as
# the one with the set of fitting parameters that minimizes the sum
#
# $$\chi^2 = \sum_{i} \left(\frac{y_{i} - f(x_{i})} {\sigma_{i}}\right)^2 \;.$$
#
# Setting the uncertainties $\sigma_{i}=1$ for all data points yields the
# same sum $S$ we introduced in the previous section. In this case, all
# data points are weighted equally. However, if $\sigma_{i}$ varies from
# point to point, it is clear that those points with large $\sigma_{i}$
# contribute less to the sum than those with small $\sigma_{i}$. Thus,
# data points with large $\sigma_{i}$ are weighted less than those with
# small $\sigma_{i}$.
#
# To fit data to a straight line, we set $f(x) = a + bx$ and write
#
# $$\chi^2(a,b) = \sum_{i} \left(\frac{y_{i} - a -bx_{i}} {\sigma_{i}}\right)^2 \;.$$
#
# Finding the minimum for $\chi^2(a,b)$ follows the same procedure used
# for finding the minimum of $S(a,b)$ in the previous section. The result
# is
#
# $$\begin{aligned}
# b &= \frac{\sum_{i}(x_{i} - \hat{x})\,y_{i}/\sigma_{i}^2} {\sum_{i}(x_{i} - \hat{x})\,x_{i}/\sigma_{i}^2}\\
# a &= \hat{y} - b\hat{x} \;.
# \end{aligned}$$
#
# where
#
# $$\begin{aligned}
# \hat{x} &= \frac{\sum_{i}x_{i}/\sigma_{i}^2} {\sum_{i}1/\sigma_{i}^2}\\
# \hat{y} &= \frac{\sum_{i}y_{i}/\sigma_{i}^2} {\sum_{i}1/\sigma_{i}^2}\;.
# \end{aligned}$$
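#
# Translated directly into code, these formulas give a weighted analog
# of `LineFit` (a sketch of one possible implementation; `dy` holds the
# uncertainties $\sigma_i$):
#
# ``` python
# def LineFitWt(x, y, dy):
#     '''Chi-squared fit of a line to (x, y) data with uncertainties
#     dy in y; returns the slope and y-intercept.'''
#     w = 1.0 / dy**2                 # weights 1/sigma_i^2
#     xhat = (w * x).sum() / w.sum()
#     yhat = (w * y).sum() / w.sum()
#     slope = (w * (x - xhat) * y).sum() / (w * (x - xhat) * x).sum()
#     yint = yhat - slope * xhat
#     return slope, yint
# ```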
#
# For a fit to a straight line, the overall quality of the fit can be
# measured by the reduced chi-squared parameter
#
# $$\chi_{r}^2 = \frac{\chi^2}{n-2}$$
#
# where $\chi^2$ is given by Eq. `eq:chisq` evaluated at the optimal
# values of $a$ and $b$ given by Eq. `eq:abwchisq`. A good fit is
# characterized by $\chi_{r}^2 \approx 1$. This makes sense because if the
# uncertainties $\sigma_{i}$ have been properly estimated, then
# $[y_{i}-f(x_{i})]^2$ should on average be roughly equal to
# $\sigma_{i}^2$, so that the sum in Eq. `eq:chisq` should consist of $n$
# terms approximately equal to 1. Of course, if there were only 2 terms
# ($n=2$), then $\chi^2$ would be zero, as the best straight line fit to
# two points is a perfect fit. That is essentially why $\chi_{r}^2$ is
# normalized using $n-2$ instead of $n$.
# If $\chi_{r}^2$ is significantly greater than 1, this indicates a poor
# fit to the fitting function (or an underestimation of the uncertainties
# $\sigma_{i}$). If $\chi_{r}^2$ is significantly less than 1, then it
# indicates that the uncertainties were probably overestimated (the fit
# and fitting function may or may not be good).
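#
# In code, the reduced chi-squared is essentially a one-liner (a sketch;
# the function name is made up):
#
# ``` python
# def RedChiSqr(x, y, dy, slope, yint):
#     chisq = (((y - yint - slope * x) / dy)**2).sum()
#     return chisq / (x.size - 2)     # n-2 for a two-parameter fit
# ```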
#
# <figure>
# <img src="attachment:VelocityVsTimeFit.png" class="align-center" alt="" /><figcaption>Fit using <span class="math inline"><em>χ</em><sup>2</sup></span> least squares fitting routine with data weighted by error bars.</figcaption>
# </figure>
#
# We can also get estimates of the uncertainties in our determination of
# the fitting parameters $a$ and $b$, although deriving the formulas is a
# bit more involved than we want to get into here. Therefore, we just give
# the results:
#
# $$\begin{aligned}
# \sigma_{b}^2 &= \frac{1} {\sum_{i}(x_{i} - \hat{x})\,x_{i}/\sigma_{i}^2}\\
# \sigma_{a}^2 &= \sigma_{b}^2 \frac{\sum_{i}x_{i}^2/\sigma_{i}^2} {\sum_{i}1/\sigma_{i}^2}\;.
# \end{aligned}$$
#
# The estimates of uncertainties in the fitting parameters depend
# explicitly on $\{\sigma_{i}\}$ and will only be meaningful if (*i*)
# $\chi_{r}^2 \approx 1$ and (*ii*) the estimates of the uncertainties
# $\sigma_{i}$ are accurate.
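#
# These formulas are also straightforward to code (a sketch, assuming
# NumPy has been imported as `np`; the function name is made up):
#
# ``` python
# def FitErrors(x, dy):
#     '''Uncertainties sigma_a, sigma_b of the weighted linear fit.'''
#     w = 1.0 / dy**2
#     xhat = (w * x).sum() / w.sum()
#     sigb2 = 1.0 / (w * (x - xhat) * x).sum()
#     siga2 = sigb2 * (w * x * x).sum() / w.sum()
#     return np.sqrt(siga2), np.sqrt(sigb2)
# ```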
#
# You can find more information, including a derivation of Eq.
# `eq:absigma`, in *Data Reduction and Error Analysis for the Physical
# Sciences, 3rd ed* by NAME & NAME (NAME, New York, 2003).
#
# Anonymous functions (lambda)
# ----------------------------
#
# Python provides another way to generate functions called *lambda*
# expressions. A lambda expression is a kind of in-line function that can
# be generated on the fly to accomplish some small task. You can assign
# lambda functions a name, but you don't need to; hence, they are often
# called *anonymous* functions. A lambda uses the keyword `lambda` and has
# the general form
#
# ``` ipython
# lambda arg1, arg2, ... : output
# ```
#
# The arguments `arg1, arg2, ...` are inputs to a lambda, just as for a
# function, and the output is an expression using the arguments.
#
# While lambda expressions need not be named, we illustrate their use by
# comparing a conventional Python function definition to a lambda
# expression to which we give a name. First, we define a conventional
# Python function
#
# ``` ipython
# In [1]: def f(a, b):
# ...: return 3*a+b**2
#
# In [2]: f(2,3)
# Out[2]: 15
# ```
#
# Next, we define a lambda that does the same thing
#
# ``` ipython
# In [3]: g = lambda a, b : 3*a+b**2
#
# In [4]: g(2,3)
# Out[4]: 15
# ```
#
# The `lambda` defined by `g` does the same thing as the function `f`.
# Such `lambda` expressions are useful when you need a very short function
# definition, usually to be used locally only once or a few times.
#
# Sometimes lambda expressions are used in function arguments that call
# for a function *name*, as opposed to the function itself. Moreover, in
# cases where the function to be integrated is already defined but is a
# function of one independent variable and several parameters, a lambda
# expression can be a convenient way of fashioning a single-variable
# function. Don't worry if this doesn't quite make sense to you right now.
# You will see examples of lambda expressions used in just this way in the
# section `numericalIntegration`.
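#
# For example, if the two-parameter function $f_1(x) = ax^p$ from
# earlier has already been defined, a lambda can freeze the parameters
# to produce a one-variable function (a small sketch):
#
# ``` python
# f = lambda x: f1(x, 4, 5)   # f(x) = 4*x**5, a function of x alone
# f(3)                        # returns 972
# ```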
#
# There are also a number of nifty programming tricks that can be
# implemented using `lambda` expressions, but we will not go into them
# here. Look up `lambdas` on the web if you are curious about their more
# exotic uses.
#
# Exercises
# ---------
#
# 1. Write a function that can return each of the first three spherical
# Bessel functions $j_n(x)$:
#
# $$\begin{aligned}
# j_0(x) &= \frac{\sin x}{x}\\
# j_1(x) &= \frac{\sin x}{x^2} - \frac{\cos x}{x}\\
# j_2(x) &= \left(\frac{3}{x^2}-1\right)\frac{\sin x}{x} - \frac{3\cos x}{x^2}
# \end{aligned}$$
#
# Your function should take as arguments a NumPy array $x$ and the
# order $n$, and should return an array of the designated order $n$
# spherical Bessel function. Take care to make sure that your
# functions behave properly at $x=0$.
#
# Demonstrate the use of your function by writing a Python routine
# that plots the three Bessel functions for $0 \le x \le 20$. Your
# plot should look like the one below. Something to think about: You
# might note that $j_1(x)$ can be written in terms of $j_0(x)$, and
# that $j_2(x)$ can be written in terms of $j_1(x)$ and $j_0(x)$. Can
# you take advantage of this to write a more efficient function for
# the calculations of $j_1(x)$ and $j_2(x)$?
#
# <figure>
# <img src="attachment:besselSph.png" class="align-center" alt="" />
# </figure>
#
# 2. 1. Write a function that simulates the rolling of $n$ dice. Use the
# NumPy function `np.random.random_integers(6)`, which generates a
# random integer between 1 and 6 with equal probability (like
# rolling fair dice). The input of your function should be the
# number of dice thrown each roll and the output should be the sum
# of the $n$ dice.
# 2. "Roll" 2 dice 10,000 times keeping track of all the sums of each
# set of rolls in a list. Then use your program to generate a
# histogram summarizing the rolls of two dice 10,000 times. The
# result should look like the histogram plotted below. Use the
# MatPlotLib function `hist` (see
# <http://matplotlib.org/api/pyplot_summary.html>) and set the
# number of bins in the histogram equal to the number of different
# possible outcomes of a roll of your dice. For example, the sum
# of two dice can be anything between 2 and 12, which corresponds
# to 11 possible outcomes. You should get a histogram that looks
# like the one below.
# 3. "Repeat part (b) using 3 dice and plot the resulting histogram.
#
# <figure>
# <img src="attachment:diceRoll2.png" class="align-center" alt="" />
# </figure>
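#
# A starter sketch for part (b) might look like this (`roll` is a
# made-up helper name; `random_integers` is used as suggested above):
#
# ``` python
# import numpy as np
# import matplotlib.pyplot as plt
#
# def roll(n):
#     return np.random.random_integers(1, 6, n).sum()
#
# sums = [roll(2) for i in range(10000)]
# plt.hist(sums, bins=11, range=(1.5, 12.5))   # bins centered on 2..12
# plt.show()
# ```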
#
# 3. Write a function to draw a circular smiley face with eyes, a nose,
# and a mouth. One argument should set the overall size of the face
# (the circle radius). Optional arguments should allow the user to
# specify the $(x,y)$ position of the face, whether the face is
# smiling or frowning, and the color of the lines. The default should
# be a smiling blue face centered at $(0,0)$. Once you write your
# function, write a program that calls it several times to produce a
# plot like the one below (creative improvisation is encouraged!). In
# producing your plot, you may find the call
# `plt.axes().set_aspect(1)` useful so that circles appear as circles
# and not ovals. You should only use MatPlotLib functions introduced
# in this text. To create a circle you can create an array of angles
# that goes from 0 to $2\pi$ and then produce the $x$ and $y$ arrays
# for your circle by taking the cosine and sine, respectively, of the
# array. Hint: You can use the same $(x,y)$ arrays to make the smile
# and frown as you used to make the circle by plotting appropriate
# slices of those arrays. You do not need to create new arrays.
#
# <figure>
# <img src="attachment:smiley.png" class="align-center" alt="" />
# </figure>
#
# 4. In the section `linfitfunc`, we showed that the best fit of a line
# $y = a + bx$ to a set of data $\{(x_i,y_i)\}$ is obtained for the
# values of $a$ and $b$ given by Eq. `eq:b2`. Those formulas were
# obtained by finding the values of $a$ and $b$ that minimized the sum
# in Eq. `eq:linreg1`. This approach and these formulas are valid when
# the uncertainties in the data are the same for all data points. The
# Python function `LineFit(x, y)` in the section `linfitfunc`
# implements Eq. `eq:b2`.
#
# 1. Write a new fitting function `LineFitWt(x, y, dy)` that implements
# the formulas given in Eq. `eq:xychisq` that minimize the
# $\chi^2$ function given by Eq. `eq:chisqlin`. This more general
# approach is valid when the individual data points have different
# weightings *or* when they all have the same weighting. You
# should also write a function to calculate the reduced
# chi-squared $\chi_r^2$ defined above.
#
# 2. Write a Python program that reads in the data below, plots it,
# and fits it using the two fitting functions `LineFit(x, y)` and
# `LineFitWt(x, y)`. Your program should plot the data with error
# bars and with *both* fits with and without weighting, that is
# from `LineFit(x, y)` and `LineFitWt(x, y, dy)`. It should also
# report the results for both fits on the plot, similar to the
# output of the supplied program above, as well as the values of
# $\chi_r^2$, the reduced chi-squared value, for both fits. Explain
# why weighting the data gives a steeper or less steep slope than
# the fit without weighting.
#
# Velocity vs time data
# for a falling mass
# time (s) velocity (m/s) uncertainty (m/s)
# 2.23 139 16
# 4.78 123 16
# 7.21 115 4
# 9.37 96 9
# 11.64 62 17
# 14.23 54 17
# 16.55 10 12
# 18.70 -3 15
# 21.05 -13 18
# 23.21 -55 10
#
# 5. Modify the function `LineFitWt(x, y, dy)` you wrote in Exercise 4 above
# so that in addition to returning the fitting parameters $a$ and $b$,
# it also returns the uncertainties in the fitting parameters
# $\sigma_a$ and $\sigma_b$ using the formulas given by Eq.
# `eq:absigma`. Use your new fitting function to find the
# uncertainties in the fitted slope and $y$-intercept for the data
# provided with Exercise 4.
|
#
# XML-RPC CLIENT LIBRARY
# $Id$
#
# an XML-RPC client interface for Python.
#
# the marshalling and response parser code can also be used to
# implement XML-RPC servers.
#
# Notes:
# this version is designed to work with Python 2.1 or newer.
#
# History:
# 1999-01-14 fl Created
# 1999-01-15 fl Changed dateTime to use localtime
# 1999-01-16 fl Added Binary/base64 element, default to RPC2 service
# 1999-01-19 fl Fixed array data element (from Skip Montanaro)
# 1999-01-21 fl Fixed dateTime constructor, etc.
# 1999-02-02 fl Added fault handling, handle empty sequences, etc.
# 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro)
# 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8)
# 2000-11-28 fl Changed boolean to check the truth value of its argument
# 2001-02-24 fl Added encoding/Unicode/SafeTransport patches
# 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1)
# 2001-03-28 fl Make sure response tuple is a singleton
# 2001-03-29 fl Don't require empty params element (from NAME)
# 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2)
# 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME)
# 2001-09-03 fl Allow Transport subclass to override getparser
# 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup)
# 2001-10-01 fl Remove containers from memo cache when done with them
# 2001-10-01 fl Use faster escape method (80% dumps speedup)
# 2001-10-02 fl More dumps microtuning
# 2001-10-04 fl Make sure import expat gets a parser (from NAME)
# 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow
# 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems)
# 2001-11-12 fl Use repr() to marshal doubles (from NAME)
# 2002-03-17 fl Avoid buffered read when possible (from NAME)
# 2002-04-07 fl Added pythondoc comments
# 2002-04-16 fl Added __str__ methods to datetime/binary wrappers
# 2002-05-15 fl Added error constants (from NAME)
# 2002-06-27 fl Merged with Python CVS version
# 2002-10-22 fl Added basic authentication (based on code from NAME)
# 2003-01-22 sm Add support for the bool type
# 2003-02-27 gvr Remove apply calls
# 2003-04-24 sm Use cStringIO if available
# 2003-04-25 ak Add support for nil
# 2003-06-15 gn Add support for time.struct_time
# 2003-07-12 gp Correct marshalling of Faults
# 2003-10-31 mvl Add multicall support
# 2004-08-20 mvl Bump minimum supported Python version to 2.1
#
# Copyright (c) 1999-2002 by Secret Labs AB.
# Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com
#
# --------------------------------------------------------------------
# The XML-RPC client interface is
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
#
# things to look into some day:
# TODO: sort out True/False/boolean issues for Python 2.3
|
"""
===================
Universal Functions
===================
Ufuncs are, generally speaking, mathematical functions or operations that are
applied element-by-element to the contents of an array. That is, the result
in each output array element only depends on the value in the corresponding
input array (or arrays) and on no other array elements. Numpy comes with a
large suite of ufuncs, and scipy extends that suite substantially. The simplest
example is the addition operator: ::
>>> np.array([0,2,3,4]) + np.array([1,1,-1,2])
array([1, 3, 2, 6])
The ufunc module lists all the available ufuncs in numpy. Documentation on
the specific ufuncs may be found in those modules. This documentation is
intended to address the more general aspects of ufuncs common to most of
them. All of the ufuncs that make use of Python operators (e.g., +, -, etc.)
have equivalent functions defined (e.g., add() for +).
Type coercion
=============
What happens when a binary operator (e.g., +,-,\\*,/, etc) deals with arrays of
two different types? What is the type of the result? Typically, the result is
the higher of the two types. For example: ::
float32 + float64 -> float64
int8 + int32 -> int32
int16 + float32 -> float32
float32 + complex64 -> complex64
There are some less obvious cases generally involving mixes of types
(e.g. uints, ints and floats) where equal bit sizes for each are not
capable of saving all the information in a different type of equivalent
bit size. Some examples are int32 vs float32 or uint32 vs int32.
Generally, the result is the higher type of larger size than both
(if available). So: ::
int32 + float32 -> float64
uint32 + int32 -> int64
Finally, the type coercion behavior when expressions involve Python
scalars is different than that seen for arrays. Since Python has a
limited number of types, combining a Python int with a dtype=np.int8
array does not coerce to the higher type but instead, the type of the
array prevails. So the rule for Python scalars combined with arrays is
that the result will be the array-equivalent type of the Python scalar
if the Python scalar is of a higher 'kind' than the array (e.g., float
vs. int); otherwise the resultant type will be that of the array.
For example: ::
Python int + int8 -> int8
Python float + int8 -> float64
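For instance (an illustrative sketch of the default rules): ::
>>> a = np.arange(5, dtype=np.int8)
>>> (a + 3).dtype                          # Python int: array type prevails
dtype('int8')
>>> (a + np.zeros(5, np.float32)).dtype    # array of higher type wins
dtype('float32')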
ufunc methods
=============
Binary ufuncs support 4 methods.
**.reduce(arr)** applies the binary operator to elements of the array in
sequence. For example: ::
>>> np.add.reduce(np.arange(10)) # adds all elements of array
45
For multidimensional arrays, the first dimension is reduced by default: ::
>>> np.add.reduce(np.arange(10).reshape(2,5))
array([ 5, 7, 9, 11, 13])
The axis keyword can be used to specify different axes to reduce: ::
>>> np.add.reduce(np.arange(10).reshape(2,5),axis=1)
array([10, 35])
**.accumulate(arr)** applies the binary operator and generates an
equivalently shaped array that includes the accumulated amount for each
element of the array. A couple examples: ::
>>> np.add.accumulate(np.arange(10))
array([ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45])
>>> np.multiply.accumulate(np.arange(1,9))
array([ 1, 2, 6, 24, 120, 720, 5040, 40320])
The behavior for multidimensional arrays is the same as for .reduce(),
as is the use of the axis keyword.
**.reduceat(arr,indices)** allows one to apply reduce to selected parts
of an array. It is a difficult method to understand. See the reduceat
documentation for details.
**.outer(arr1,arr2)** generates an outer operation on the two arrays arr1 and
arr2. It will work on multidimensional arrays (the shape of the result is
the concatenation of the two input shapes): ::
>>> np.multiply.outer(np.arange(3),np.arange(4))
array([[0, 0, 0, 0],
[0, 1, 2, 3],
[0, 2, 4, 6]])
Output arguments
================
All ufuncs accept an optional output array. The array must be of the expected
output shape. Beware that if the type of the output array is of a different
(and lower) type than the output result, the results may be silently truncated
or otherwise corrupted in the downcast to the lower type. This usage is useful
when one wants to avoid creating large temporary arrays and instead allows one
to reuse the same array memory repeatedly (at the expense of not being able to
use more convenient operator notation in expressions). Note that when the
output argument is used, the ufunc still returns a reference to the result: ::
>>> x = np.arange(2)
>>> np.add(np.arange(2),np.arange(2.),x)
array([0, 2])
>>> x
array([0, 2])
and & or as ufuncs
==================
Invariably people try to use the python 'and' and 'or' as logical operators
(and quite understandably). But these operators do not behave as normal
operators since Python treats these quite differently. They cannot be
overloaded with array equivalents. Thus using 'and' or 'or' with an array
results in an error. There are two alternatives:
1) use the ufunc functions logical_and() and logical_or().
2) use the bitwise operators & and \\|. The drawback of these is that if
the arguments to these operators are not boolean arrays, the result is
likely incorrect. On the other hand, most usages of logical_and and
logical_or are with boolean arrays. As long as one is careful, this is
a convenient way to apply these operators.
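For example (illustrative output): ::
>>> a = np.array([1, 0, 2])
>>> b = np.array([1, 1, 0])
>>> np.logical_and(a > 0, b > 0)
array([ True, False, False])
>>> (a > 0) & (b > 0)
array([ True, False, False])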
""" |
"""Drag-and-drop support for Tkinter.
This is very preliminary. I currently only support dnd *within* one
application, between different windows (or within the same window).
I am trying to make this as generic as possible -- not dependent on
the use of a particular widget or icon type, etc. I also hope that
this will work with Pmw.
To enable an object to be dragged, you must create an event binding
for it that starts the drag-and-drop process. Typically, you should
bind <ButtonPress> to a callback function that you write. The function
should call Tkdnd.dnd_start(source, event), where 'source' is the
object to be dragged, and 'event' is the event that invoked the call
(the argument to your callback function). Even though this is a class
instantiation, the returned instance should not be stored -- it will
be kept alive automatically for the duration of the drag-and-drop.
When a drag-and-drop is already in process for the Tk interpreter, the
call is *ignored*; this normally averts starting multiple simultaneous
dnd processes, e.g. because different button callbacks all call
dnd_start().
The object is *not* necessarily a widget -- it can be any
application-specific object that is meaningful to potential
drag-and-drop targets.
Potential drag-and-drop targets are discovered as follows. Whenever
the mouse moves, and at the start and end of a drag-and-drop move, the
Tk widget directly under the mouse is inspected. This is the target
widget (not to be confused with the target object, yet to be
determined). If there is no target widget, there is no dnd target
object. If there is a target widget, and it has an attribute
dnd_accept, this should be a function (or any callable object). The
function is called as dnd_accept(source, event), where 'source' is the
object being dragged (the object passed to dnd_start() above), and
'event' is the most recent event object (generally a <Motion> event;
it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept()
function returns something other than None, this is the new dnd target
object. If dnd_accept() returns None, or if the target widget has no
dnd_accept attribute, the target widget's parent is considered as the
target widget, and the search for a target object is repeated from
there. If necessary, the search is repeated all the way up to the
root widget. If none of the target widgets can produce a target
object, there is no target object (the target object is None).
The target object thus produced, if any, is called the new target
object. It is compared with the old target object (or None, if there
was no old target widget). There are several cases ('source' is the
source object, and 'event' is the most recent event object):
- Both the old and new target objects are None. Nothing happens.
- The old and new target objects are the same object. Its method
dnd_motion(source, event) is called.
- The old target object was None, and the new target object is not
None. The new target object's method dnd_enter(source, event) is
called.
- The new target object is None, and the old target object is not
None. The old target object's method dnd_leave(source, event) is
called.
- The old and new target objects differ and neither is None. The old
target object's method dnd_leave(source, event), and then the new
target object's method dnd_enter(source, event) is called.
Once this is done, the new target object replaces the old one, and the
Tk mainloop proceeds. The return value of the methods mentioned above
is ignored; if they raise an exception, the normal exception handling
mechanisms take over.
The drag-and-drop processes can end in two ways: a final target object
is selected, or no final target object is selected. When a final
target object is selected, it will always have been notified of the
potential drop by a call to its dnd_enter() method, as described
above, and possibly one or more calls to its dnd_motion() method; its
dnd_leave() method has not been called since the last call to
dnd_enter(). The target is notified of the drop by a call to its
method dnd_commit(source, event).
If no final target object is selected, and there was an old target
object, its dnd_leave(source, event) method is called to complete the
dnd sequence.
Finally, the source object is notified that the drag-and-drop process
is over, by a call to source.dnd_end(target, event), specifying either
the selected target object, or None if no target object was selected.
The source object can use this to implement the commit action; this is
sometimes simpler than to do it in the target's dnd_commit(). The
target's dnd_commit() method could then simply be aliased to
dnd_leave().
At any time during a dnd sequence, the application can cancel the
sequence by calling the cancel() method on the object returned by
dnd_start(). This will call dnd_leave() if a target is currently
active; it will never call dnd_commit().
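As a minimal sketch (assuming the Python 3 module name tkinter.dnd; a
real source object would normally also manage an icon), a draggable
source needs little more than a dnd_end() method:
    import tkinter as tk
    from tkinter import dnd          # this module; Tkdnd in Python 2
    class Dragged:
        # application-specific source object passed to dnd_start()
        def dnd_end(self, target, event):
            print("drag ended; target:", target)
    root = tk.Tk()
    label = tk.Label(root, text="drag me")
    label.pack()
    label.bind("<ButtonPress>", lambda e: dnd.dnd_start(Dragged(), e))
    root.mainloop()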
""" |
# https://www.cl.cam.ac.uk/~mgk25/ucs/keysyms.txt
# Mapping of X11 keysyms to ISO 10646 / Unicode
#
# The "X11 Window System Protocol" standard (Release 6.4) defines in
# Appendix A the keysym codes. These 29-bit integer values identify
# characters or functions associated with each key (e.g., via the
# visible engraving) of a keyboard layout. In addition, mnemonic macro
# names are provided for the keysyms in the C header file
# <X11/keysymdef.h>. These are compiled (by xc/lib/X11/util/
# makekeys.c) into a hash table that can be accessed with X11 library
# functions such as XStringToKeysym() and XKeysymToString().
#
# The creation of the keysym codes predates ISO 10646 / Unicode, but
# they represent a similar attempt to merge several existing coded
# character sets (mostly early drafts of ISO 8859, as well as some --
# long since forgotten -- DEC font encodings). X.Org and XFree86 have
# agreed that for any future extension of the keysyms with characters
# already found in ISO 10646 / Unicode, the following algorithm will
# be used. The new keysym code position will simply be the character's
# Unicode number plus 0x01000000. The keysym codes in the range
# 0x01000100 to 0x0110ffff are now reserved to represent Unicode
# characters in the range U0100 to U10FFFF. (Note that the ISO 8859-1
# characters that make up Unicode positions below U0100 are excluded
# from this rule, as they are already covered by the keysyms of the
# same value.)
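#
# As an illustration of that algorithm, a client could compute keysyms
# from code points with a small helper (a sketch in Python; the helper
# name is made up):
#
#     def keysym_for_unicode(ucs):
#         # Latin-1 code points keep their legacy keysym values;
#         # U+0100..U+10FFFF map to codepoint + 0x01000000
#         return ucs if ucs < 0x100 else ucs + 0x01000000
#
#     keysym_for_unicode(0x20AC)   # -> 0x010020AC (EURO SIGN)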
#
# While most newer Unicode-based X11 clients do already accept
# Unicode-mapped keysyms in the range 0x01000100 to 0x0110ffff, it
# will remain necessary for clients -- in the interest of
# compatibility with existing servers -- to also understand the
# existing keysym values. Clients can use the table below to map the
# pre-Unicode keysym values (0x0100 to 0x20ff) to the corresponding
# Unicode characters for further processing.
#
# The following fields are used in this mapping table:
#
# 1 The hexadecimal X11 keysym number (as defined in Appendix A of
# the X11 protocol specification and as listed in <X11/keysymdef.h>)
#
# 2 The corresponding Unicode position
# (U0000 means that there is no equivalent Unicode character)
#
# 3 Status of this keysym and its Unicode mapping
#
# . regular -- This is a regular well-established keysym with
# a straightforward Unicode equivalent (e.g., any keysym
# derived from ISO 8859). There can be at most one regular
# keysym associated with each Unicode character.
#
# d duplicate -- This keysym has the same Unicode mapping as
# another one with status 'regular'. It represents a case
# where keysyms distinguish between several characters that
# Unicode has unified into a single one (examples are
# several APL symbols)
#
# o obsolete -- While it may be possible to find a Unicode of
# similar name, the exact semantics of this keysym are
# unclear, because the font or character set from which it
# came has never been widely used. Examples are various
# symbols from the DEC Publishing character set, which may
# have been used in a special font shipped with the
# DECwrite product. Where no similar Unicode character
# can be identified, U0000 is used in column 2.
#
# f function -- While it may be possible to find a Unicode
# of similar name, this keysym differs semantically
# substantially from the corresponding Unicode character,
# because it describes a particular function key or will
# first have to be processed by an input method that will
# translate it into a proper stream of Unicode characters.
#
# r remove -- This is a bogus keysym that was added in error,
# is not used in any known keyboard layout, and should be
# removed from both <X11/keysymdef.h> and the standard.
#
# u unicode-remap -- This keysym was added rather recently to
# the <X11/keysymdef.h> of XFree86, but into a number range
# reserved for future extensions of the standard by
# X.Org. It is not widely used at present, but its name
# appears to be sufficiently useful and it should therefore
# be directly mapped to Unicode in the 0x1xxxxxx range in
# future versions of <X11/keysymdef.h>. This way, the macro
# name will be preserved, but the standard will not have to
# be extended.
#
# Recommendations for using the keysym status:
#
# - All keysyms with status regular, duplicate, obsolete and
# function should be listed in Appendix A of the X11 protocol
# spec.
#
# - All keysyms except for those with status remove should be
# listed in <X11/keysymdef.h>.
#
# - Keysyms with status duplicate, obsolete, and remove should
# not be used in future keyboard layouts, as there are other
# keysyms with status regular, function and unicode-remap
# that give access to the same Unicode characters.
#
# - Keysym to Unicode conversion tables in clients should include
# all mappings except those with status function and those
# with U0000.
#
# # comment marker
#
# 4 the name of the X11 keysym macro without the leading XK_,
# as defined in <X11/keysymdef.h>
#
# The last columns may be followed by comments copied from <X11/keysymdef.h>.
# A keysym may be listed several times, if there are several macro names
# associated with it in <X11/keysymdef.h>.
#
# Author: NAME <http://www.cl.cam.ac.uk/~mgk25/>
# Date: 2004-08-08
#
# This table evolved out of an earlier one by NAME, TU Eindhoven.
|
"""CPStats, a package for collecting and reporting on program statistics.
Overview
========
Statistics about program operation are an invaluable monitoring and debugging
tool. Unfortunately, the gathering and reporting of these critical values is
usually ad-hoc. This package aims to add a centralized place for gathering
statistical performance data, a structure for recording that data which
provides for extrapolation of that data into more useful information,
and a method of serving that data to both human investigators and
monitoring software. Let's examine each of those in more detail.
Data Gathering
--------------
Just as Python's `logging` module provides a common importable for gathering
and sending messages, performance statistics would benefit from a similar
common mechanism, and one that does *not* require each package which wishes
to collect stats to import a third-party module. Therefore, we choose to
re-use the `logging` module by adding a `statistics` object to it.
That `logging.statistics` object is a nested dict. It is not a custom class,
because that would:
1. require libraries and applications to import a third-party module in
order to participate
2. inhibit innovation in extrapolation approaches and in reporting tools, and
3. be slow.
There are, however, some specifications regarding the structure of the dict: ::
{
+----"SQLAlchemy": {
| "Inserts": 4389745,
| "Inserts per Second":
| lambda s: s["Inserts"] / (time() - s["Start"]),
| C +---"Table Statistics": {
| o | "widgets": {-----------+
N | l | "Rows": 1.3M, | Record
a | l | "Inserts": 400, |
m | e | },---------------------+
e | c | "froobles": {
s | t | "Rows": 7845,
p | i | "Inserts": 0,
a | o | },
c | n +---},
e | "Slow Queries":
| [{"Query": "SELECT * FROM widgets;",
| "Processing Time": 47.840923343,
| },
| ],
+----},
}
The `logging.statistics` dict has four levels. The topmost level is nothing
more than a set of names to introduce modularity, usually along the lines of
package names. If the SQLAlchemy project wanted to participate, for example,
it might populate the item `logging.statistics['SQLAlchemy']`, whose value
would be a second-layer dict we call a "namespace". Namespaces help multiple
packages to avoid collisions over key names, and make reports easier to read,
to boot. The maintainers of SQLAlchemy should feel free to use more than one
namespace if needed (such as 'SQLAlchemy ORM'). Note that there are no case
or other syntax constraints on the namespace names; they should be chosen
to be maximally readable by humans (neither too short nor too long).
Each namespace, then, is a dict of named statistical values, such as
'Requests/sec' or 'Uptime'. You should choose names which will look
good on a report: spaces and capitalization are just fine.
In addition to scalars, values in a namespace MAY be a (third-layer)
dict, or a list, called a "collection". For example, the CherryPy
:class:`StatsTool` keeps track of what each request is doing (or has most
recently done) in a 'Requests' collection, where each key is a thread ID; each
value in the subdict MUST be a fourth dict (whew!) of statistical data about
each thread. We call each subdict in the collection a "record". Similarly,
the :class:`StatsTool` also keeps a list of slow queries, where each record
contains data about each slow query, in order.
Values in a namespace or record may also be functions, which brings us to:
Extrapolation
-------------
The collection of statistical data needs to be fast, as close to unnoticeable
as possible to the host program. That requires us to minimize I/O, for example,
but in Python it also means we need to minimize function calls. So when you
are designing your namespace and record values, try to insert the most basic
scalar values you already have on hand.
When it comes time to report on the gathered data, however, we usually have
much more freedom in what we can calculate. Therefore, whenever reporting
tools (like the provided :class:`StatsPage` CherryPy class) fetch the contents
of `logging.statistics` for reporting, they first call
`extrapolate_statistics` (passing the whole `statistics` dict as the only
argument). This makes a deep copy of the statistics dict so that the
reporting tool can both iterate over it and even change it without harming
the original. But it also expands any functions in the dict by calling them.
For example, you might have a 'Current Time' entry in the namespace with the
value "lambda scope: time.time()". The "scope" parameter is the current
namespace dict (or record, if we're currently expanding one of those
instead), allowing you access to existing static entries. If you're truly
evil, you can even modify more than one entry at a time.
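In sketch form, the expansion pass might look like this (a minimal
illustration of the contract described above, not the exact CherryPy
implementation)::
def extrapolate_statistics(scope):
    # Return an expanded copy of `scope`, calling any callable values.
    c = {}
    for k, v in list(scope.items()):
        if isinstance(v, dict):
            v = extrapolate_statistics(v)
        elif isinstance(v, (list, tuple)):
            v = [extrapolate_statistics(r) if isinstance(r, dict) else r
                 for r in v]
        elif callable(v):
            v = v(scope)  # 'scope' is the enclosing namespace or record
        c[k] = v
    return c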
However, don't try to calculate an entry and then use its value in further
extrapolations; the order in which the functions are called is not guaranteed.
This can lead to a certain amount of duplicated work (or a redesign of your
schema), but that's better than complicating the spec.
After the whole thing has been extrapolated, it's time for:
Reporting
---------
The :class:`StatsPage` class grabs the `logging.statistics` dict, extrapolates
it all, and then transforms it to HTML for easy viewing. Each namespace gets
its own header and attribute table, plus an extra table for each collection.
This is NOT part of the statistics specification; other tools can format how
they like.
You can control which columns are output and how they are formatted by updating
StatsPage.formatting, which is a dict that mirrors the keys and nesting of
`logging.statistics`. The difference is that, instead of data values, it has
formatting values. Use None for a given key to indicate to the StatsPage that a
given column should not be output. Use a string with formatting
(such as '%.3f') to interpolate the value(s), or use a callable (such as
lambda v: v.isoformat()) for more advanced formatting. Any entry which is not
mentioned in the formatting dict is output unchanged.
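Continuing the 'My Stuff' example from the Usage section below, the
formatting for a namespace might be customized like this (a sketch based
on the contract just described)::
import time
from cherrypy.lib import cpstats
cpstats.StatsPage.formatting['My Stuff'] = {
    'Start Time': lambda v: time.ctime(v),  # callable formatter
    'Events/Second': '%.3f',                # format-string interpolation
    'Enabled': None,                        # omit this column entirely
}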
Monitoring
----------
Although the HTML output takes pains to assign unique ids to each <td> with
statistical data, you're probably better off fetching /cpstats/data, which
outputs the whole (extrapolated) `logging.statistics` dict in JSON format.
That is probably easier to parse, and doesn't have any formatting controls,
so you get the "original" data in a consistently-serialized format.
Note: there's no treatment yet for datetime objects. Try time.time() instead
for now if you can. Nagios will probably thank you.
Turning Collection Off
----------------------
It is recommended each namespace have an "Enabled" item which, if False,
stops collection (but not reporting) of statistical data. Applications
SHOULD provide controls to pause and resume collection by setting these
entries to False or True, if present.
Usage
=====
To collect statistics on CherryPy applications::
from cherrypy.lib import cpstats
appconfig['/']['tools.cpstats.on'] = True
To collect statistics on your own code::
import logging
# Initialize the repository
if not hasattr(logging, 'statistics'): logging.statistics = {}
# Initialize my namespace
mystats = logging.statistics.setdefault('My Stuff', {})
# Initialize my namespace's scalars and collections
mystats.update({
'Enabled': True,
'Start Time': time.time(),
'Important Events': 0,
'Events/Second': lambda s: (
(s['Important Events'] / (time.time() - s['Start Time']))),
})
...
for event in events:
...
# Collect stats
if mystats.get('Enabled', False):
mystats['Important Events'] += 1
To report statistics::
root.cpstats = cpstats.StatsPage()
To format statistics reports::
See 'Reporting', above.
""" |
# Copyright 2011,2012 NAME Copyright 2008 (C) Nicira, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file is derived from the packet library in NOX, which was
# developed by Nicira, Inc.
#======================================================================
#
# DNS Message Format
#
# 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | ID |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |QR| Opcode |AA|TC|RD|RA|Z |AD|CD| RCODE |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | Total Questions |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# |                Total Answer RRs               |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | Total Authority RRs |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | Total Additional RRs |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | Questions ... |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | Answer RRs ... |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | Authority RRs.. |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | Additional RRs. |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#
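# As a minimal sketch (not part of this library's API), the fixed
# 12-byte header shown above can be unpacked with the standard
# struct module:
#
#   import struct
#   (qid, flags, qdcount, ancount,
#    nscount, arcount) = struct.unpack('!6H', data[:12])
#   qr     = (flags >> 15) & 0x1    # query/response bit
#   opcode = (flags >> 11) & 0xf
#   rcode  = flags & 0xf
#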
# Question format:
#
# 1 1 1 1 1 1
# 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | |
# / QNAME /
# / /
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | QTYPE |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | QCLASS |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#
#
#
# All RRs have the following format:
# 1 1 1 1 1 1
# 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | |
# / /
# / NAME /
# | |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | TYPE |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | CLASS |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | TTL |
# | |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# | RDLENGTH |
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
# / RDATA /
# / /
# +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
#
#
#======================================================================
# TODO:
# SOA data
#  General cleanup/rewrite (the code has gotten pretty bad)
|
"""
# ggame
The simple cross-platform sprite and game platform for Brython Server (Pygame, Tkinter to follow?).
Ggame stands for a couple of things: "good game" (of course!) and also "git game" or "github game"
because it is designed to operate with [Brython Server](http://runpython.com) in concert with
Github as a backend file store.
Ggame is **not** intended to be a full-featured gaming API, with every bell and whistle. Ggame is
designed primarily as a tool for teaching computer programming, recognizing that the ability
to create engaging and interactive games is a powerful motivator for many programming students.
Accordingly, any functional or performance enhancements that *can* be reasonably implemented
by the user are left as an exercise.
## Functionality Goals
The ggame library is intended to be trivially easy to use. For example:
from ggame import App, ImageAsset, Sprite
# Create a displayed object at 100,100 using an image asset
Sprite(ImageAsset("ggame/bunny.png"), (100,100))
# Create the app, with a 500x500 pixel stage
app = App(500,500)
# Run the app
app.run()
## Overview
There are three major components to the `ggame` system: Assets, Sprites and the App.
### Assets
Asset objects (i.e. `ggame.ImageAsset`, etc.) typically represent separate files that
are provided by the "art department". These might be background images, user interface
images, or images that represent objects in the game. In addition, `ggame.SoundAsset`
is used to represent sound files (`.wav` or `.mp3` format) that can be played in the
game.
Ggame also extends the asset concept to include graphics that are generated dynamically
at run-time, such as geometrical objects, e.g. rectangles, lines, etc.
### Sprites
All of the visual aspects of the game are represented by instances of `ggame.Sprite` or
subclasses of it.
### App
Every ggame application must create a single instance of the `ggame.App` class (or
a sub-class of it). Creating an instance of the `ggame.App` class will initiate
creation of a pop-up window on your browser. Executing the app's `run` method will
begin the process of refreshing the visual assets on the screen.
### Events
No game is complete without a player, and players produce events. Your code handles user
input by registering to receive keyboard and mouse events using `ggame.App.listenKeyEvent` and
`ggame.App.listenMouseEvent` methods.
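For example, a key handler might be registered like this (the event-type
and key-name strings here are assumptions; consult the ggame reference
for the exact values):
from ggame import App
def spacebar(event):
    print("Space bar pressed!")
app = App(500, 500)
app.listenKeyEvent("keydown", "space", spacebar)
app.run()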
## Execution Environment
Ggame is designed to be executed in a web browser using [Brython](http://brython.info/),
[Pixi.js](http://www.pixijs.com/) and [Buzz](http://buzz.jaysalvat.com/). The easiest
way to do this is by executing from [runpython](http://runpython.com), with source
code residing on [github](http://github.com).
When using [runpython](http://runpython.com), you will have to configure your browser
to allow popup windows.
To use Ggame in your own application, you will minimally need to create a folder called
`ggame` in your project. Within `ggame`, copy the `ggame.py`, `sysdeps.py` and
`__init__.py` files from the [ggame project](https://github.com/BrythonServer/ggame).
### Include Ggame as a Git Subtree
From the same directory as your own python sources (note: you must have an existing git
repository with committed files in order for the following to work properly),
execute the following terminal commands:
git remote add -f ggame https://github.com/BrythonServer/ggame.git
git merge -s ours --no-commit ggame/master
mkdir ggame
git read-tree --prefix=ggame/ -u ggame/master
git commit -m "Merge ggame project as our subdirectory"
If you want to pull in updates from ggame in the future:
git pull -s subtree ggame master
You can see an example of how a ggame subtree is used by examining the
[Brython Server Spacewar](https://github.com/BrythonServer/Spacewar) repo on Github.
## Geometry
When referring to screen coordinates, note that the x-axis of the computer screen
is *horizontal* with the zero position on the left hand side of the screen. The
y-axis is *vertical* with the zero position at the **top** of the screen.
Increasing positive y-coordinates correspond to the downward direction on the
computer screen. Note that this is **different** from the way you may have learned
about x and y coordinates in math class!
""" |
# import os
# import re
# import hashlib
# import yaml
# class TestFile(object):
# def test_all_files_are_snake_case(self, path_helper):
# for subdir, dirs, files in os.walk(path_helper.root_path()):
# for file in filter(lambda file: file.endswith(".py") or file.endswith(".sls"), files):
# assert file.islower(), "%s/%s name needs to be lower case" % (subdir, file)
# assert re.compile('^[a-z0-9_\.]+$').match(file) is not None, "%s/%s needs to be snake case" % (subdir, file)
# def test_all_pillars_have_root_key_same_as_file_name(self, path_helper):
# for pillar_env_path in path_helper.pillar().paths_with_env():
# for subdir, dirs, files in os.walk(pillar_env_path):
# for file in filter(lambda file: file.endswith(".sls") and file != "top.sls", files):
# folder = os.path.basename(subdir.split(pillar_env_path)[-1])
# path = "%s/%s" % (subdir, file)
# if folder != '':
# key = folder
# assert SyntaxChecker(file=path, key=key).check(), "Top level pillar is not same as base name for %s" % path
# else:
# key = KEY
# assert SyntaxChecker(file=path, key=key).check(), "Top level pillar is not same as base name for %s" % path
# def test_common_pillars_in_multiple_env(self, path_helper):
# md5_hash = {}
# for pillar_env_path in path_helper.pillar().paths_with_env():
# for subdir, dirs, files in os.walk(pillar_env_path):
# for file in files:
# path = "%s/%s" % (subdir, file)
# md5 = hashlib.md5(open(path, 'rb').read()).hexdigest()
# if md5 in md5_hash.keys():
# assert False, "%s file and %s file is exactly the same, move them to higher env" % (path, md5_hash[md5])
# else:
# md5_hash[md5] = path
# def test_top_files_has_all_pillars(self, __envs__, path_helper):
# yml = yaml.load(open(path_helper.pillar().top_file(), 'r'))
# assert sorted(yml.keys()) == sorted(__envs__), "Environment mismatch!"
# for env, target_dict in yml.iteritems():
# pillars = path_helper.pillar().names(env=env)
# for target, target_pillars in target_dict.iteritems():
# assert set(target_pillars).intersection(set(pillars)) == set(pillars), "%s not listed in top file in env %s" % (set(pillars).difference(set(target_pillars)), env)
# assert set(pillars).intersection(set(target_pillars)) == set(target_pillars), "%s not listed in pillar directory in env %s" % (set(target_pillars).difference(set(pillars)), env)
# def test_common_states_in_multiple_env(self, path_helper):
# md5_hash = {}
# for state_env_path in path_helper.state().paths_with_env():
# for subdir, dirs, files in os.walk(state_env_path):
# for file in files:
# path = "%s/%s" % (subdir, file)
# md5 = hashlib.md5(open(path, 'rb').read()).hexdigest()
# if md5 in md5_hash.keys():
# assert False, "%s file and %s file is exactly the same, move them to higher env" % (path, md5_hash[md5])
# else:
# md5_hash[md5] = path
# # # def test_ensure_state_has_core_files(self, __envs__, path_helper):
# # # core_files = ('init.sls', 'verify.sls', 'map.jinja', 'defaults.yaml', 'metadata.yaml', 'README.md', 'requisite.sls')
# # # for state in path_helper.states():
# # # for state in os.listdir(state_path):
# # # for version in os.listdir(os.path.join(env_path, state)):
# # # if version != "latest.sls":
# # # files = os.listdir(os.path.join(env_path, state, version))
# # # assert set(core_files).intersection(set(files)) == set(core_files)
# # def test_ensure_external_module_has_test_file(self, path_helper):
# # for ext_type_path in path_helper.ext().paths_with_env(env="ext"):
# # for subdir, dirs, files in os.walk(ext_type_path):
# # for file in filter(lambda file: file.endswith(".py") and file != "__init__.py", files):
# # ext_module_file_path = "%s/test_%s" % (subdir, file)
# # test_module_file_path = ext_module_file_path.replace(path_helper.ext_path(), path_helper.test_path())
# # assert os.path.exists(test_module_file_path), "%s file is missing" % test_module_file_path
# class SyntaxChecker(object):
# def __init__(self, file, key):
# self.file = file
# self.key = key
# def check(self):
# with open(self.file, "r") as file:
# lines = file.readlines()
# lines = filter(lambda line: self.is_top_level(line), lines)
# lines = filter(lambda line: not self.is_jinja(line), lines)
# lines = filter(lambda line: not self.is_include(line), lines)
# if len(lines) == 1 and self.top_level_key(lines[0]):
# return True
# elif len(lines) == 0:
# return True
# else:
# return False
# def is_include(self, line):
# return line.startswith('include:')
# def is_jinja(self, line):
# return line.startswith('{{') or line.startswith('{%')
# def is_top_level(self, line):
# return not line.startswith(' ')
# def top_level_key(self, line):
# return line.startswith("%s:" % self.key)
|
"""
=================
Structured Arrays
=================
Introduction
============
Numpy provides powerful capabilities to create arrays of structured datatype.
These arrays permit one to manipulate the data by named fields. A simple
example will show what is meant: ::
>>> x = np.array([(1,2.,'Hello'), (2,3.,"World")],
... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')])
>>> x
array([(1, 2.0, 'Hello'), (2, 3.0, 'World')],
dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')])
Here we have created a one-dimensional array of length 2. Each element of
this array is a structure that contains three items, a 32-bit integer, a 32-bit
float, and a string of length 10 or less. If we index this array at the second
position we get the second structure: ::
>>> x[1]
(2,3.,"World")
Conveniently, one can access any field of the array by indexing using the
string that names that field. ::
>>> y = x['bar']
>>> y
array([ 2., 3.], dtype=float32)
>>> y[:] = 2*y
>>> y
array([ 4., 6.], dtype=float32)
>>> x
array([(1, 4.0, 'Hello'), (2, 6.0, 'World')],
dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')])
In these examples, y is a simple float array consisting of the 2nd field
in the structured type. But, rather than being a copy of the data in the structured
array, it is a view, i.e., it shares exactly the same memory locations.
Thus, when we updated this array by doubling its values, the structured
array shows the corresponding values as doubled as well. Likewise, if one
changes the structured array, the field view also changes: ::
>>> x[1] = (-1,-1.,"Master")
>>> x
array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')],
dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')])
>>> y
array([ 4., -1.], dtype=float32)
Defining Structured Arrays
==========================
One defines a structured array through the dtype object. There are
**several** alternative ways to define the fields of a record. Some of
these variants provide backward compatibility with Numeric, numarray, or
another module, and should not be used except for such purposes. These
will be so noted. One specifies record structure in
one of four alternative ways, using an argument (as supplied to a dtype
function keyword or a dtype object constructor itself). This
argument must be one of the following: 1) string, 2) tuple, 3) list, or
4) dictionary. Each of these is briefly described below.
1) String argument.
In this case, the constructor expects a comma-separated list of type
specifiers, optionally with extra shape information. The fields are
given the default names 'f0', 'f1', 'f2' and so on.
The type specifiers can take 4 different forms: ::
a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f2, f4, f8, c8, c16, a<n>
(representing bytes, ints, unsigned ints, floats, complex and
fixed length strings of specified byte lengths)
b) int8,...,uint8,...,float16, float32, float64, complex64, complex128
(this time with bit sizes)
c) older Numeric/numarray type specifications (e.g. Float32).
Don't use these in new code!
d) Single character type specifiers (e.g. H for unsigned short ints).
Avoid using these unless you must. Details can be found in the
Numpy book
These different styles can be mixed within the same string (but why would you
want to do that?). Furthermore, each type specifier can be prefixed
with a repetition number, or a shape. In these cases an array
element is created, i.e., an array within a record. That array
is still referred to as a single field. An example: ::
>>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64')
>>> x
array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])],
dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))])
Using strings to define the record structure precludes naming the
fields in the original definition. The names can be changed as shown
later, however.
2) Tuple argument: The only relevant tuple case that applies to record
structures is when a structure is mapped to an existing data type. This
is done by pairing in a tuple, the existing data type with a matching
dtype definition (using any of the variants being described here). As
an example (using a definition using a list, so see 3) for further
details): ::
>>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')]))
>>> x
array([0, 0, 0])
>>> x['r']
array([0, 0, 0], dtype=uint8)
In this case, an array is produced that looks and acts like a simple int32 array,
but also has definitions for fields that use only one byte of the int32 (a bit
like Fortran equivalencing).
3) List argument: In this case the record structure is defined with a list of
tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field
('' is permitted), 2) the type of the field, and 3) the shape (optional).
For example::
>>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
>>> x
array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])],
dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))])
4) Dictionary argument: two different forms are permitted. The first consists
of a dictionary with two required keys ('names' and 'formats'), each having an
equal sized list of values. The format list contains any type/shape specifier
allowed in other contexts. The names must be strings. There are two optional
keys: 'offsets' and 'titles'. Each must be a list of the same length as the
required two, where 'offsets' contains an integer offset for each field and
'titles' contains a metadata object for each field (these do not have to be
strings; the value None is also permitted). As an example: ::
>>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[('col1', '>i4'), ('col2', '>f4')])
The other dictionary form permitted is a dictionary of name keys with tuple
values specifying type, offset, and an optional title. ::
>>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')])
Accessing and modifying field names
===================================
The field names are an attribute of the dtype object defining the structure.
For the last example: ::
>>> x.dtype.names
('col1', 'col2')
>>> x.dtype.names = ('x', 'y')
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')])
>>> x.dtype.names = ('x', 'y', 'z') # wrong number of names
<type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2
Accessing field titles
====================================
The field titles provide a standard place to put associated info for fields.
They do not have to be strings. ::
>>> x.dtype.fields['x'][2]
'title 1'
Accessing multiple fields at once
====================================
You can access multiple fields at once using a list of field names: ::
>>> x = np.array([(1.5,2.5,(1.0,2.0)),(3.,4.,(4.,5.)),(1.,3.,(2.,6.))],
dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
Notice that `x` is created with a list of tuples. ::
>>> x[['x','y']]
array([(1.5, 2.5), (3.0, 4.0), (1.0, 3.0)],
dtype=[('x', '<f4'), ('y', '<f4')])
>>> x[['x','value']]
array([(1.5, [[1.0, 2.0], [1.0, 2.0]]), (3.0, [[4.0, 5.0], [4.0, 5.0]]),
(1.0, [[2.0, 6.0], [2.0, 6.0]])],
dtype=[('x', '<f4'), ('value', '<f4', (2, 2))])
The fields are returned in the order they are asked for: ::
>>> x[['y','x']]
array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)],
dtype=[('y', '<f4'), ('x', '<f4')])
Filling structured arrays
=========================
Structured arrays can be filled by field or row by row. ::
>>> arr = np.zeros((5,), dtype=[('var1','f8'),('var2','f8')])
>>> arr['var1'] = np.arange(5)
If you fill it in row by row, it takes a tuple
(but not a list or array!)::
>>> arr[0] = (10,20)
>>> arr
array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)],
dtype=[('var1', '<f8'), ('var2', '<f8')])
Record Arrays
=============
For convenience, numpy provides "record arrays" which allow one to access
fields of structured arrays by attribute rather than by index. Record arrays
are structured arrays wrapped using a subclass of ndarray,
:class:`numpy.recarray`, which allows field access by attribute on the array
object, and record arrays also use a special datatype, :class:`numpy.record`,
which allows field access by attribute on the individual elements of the array.
The simplest way to create a record array is with :func:`numpy.rec.array`: ::
>>> recordarr = np.rec.array([(1,2.,'Hello'),(2,3.,"World")],
... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')])
>>> recordarr.bar
array([ 2., 3.], dtype=float32)
>>> recordarr[1:2]
rec.array([(2, 3.0, 'World')],
dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])
>>> recordarr[1:2].foo
array([2], dtype=int32)
>>> recordarr.foo[1:2]
array([2], dtype=int32)
>>> recordarr[1].baz
'World'
numpy.rec.array can convert a wide variety of arguments into record arrays,
including normal structured arrays: ::
>>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
... dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
>>> recordarr = np.rec.array(arr)
The numpy.rec module provides a number of other convenience functions for
creating record arrays, see :ref:`record array creation routines
<routines.array-creation.rec>`.
A record array representation of a structured array can be obtained using the
appropriate :ref:`view`: ::
>>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')])
>>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)),
... type=np.recarray)
For convenience, viewing an ndarray as type `np.recarray` will automatically
convert to `np.record` datatype, so the dtype can be left out of the view: ::
>>> recordarr = arr.view(np.recarray)
>>> recordarr.dtype
dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]))
To get back to a plain ndarray both the dtype and type must be reset. The
following view does so, taking into account the unusual case that the
recordarr was not a structured type: ::
>>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray)
Record array fields accessed by index or by attribute are returned as a record
array if the field has a structured type but as a plain ndarray otherwise. ::
>>> recordarr = np.rec.array([('Hello', (1,2)),("World", (3,4))],
... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])])
>>> type(recordarr.foo)
<type 'numpy.ndarray'>
>>> type(recordarr.bar)
<class 'numpy.core.records.recarray'>
Note that if a field has the same name as an ndarray attribute, the ndarray
attribute takes precedence. Such fields will be inaccessible by attribute but
may still be accessed by index.
""" |
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
# Resolve KUIT markup in i18n strings into plain or rich text,
# or switch them to xi18n calls.
#
# Usage:
# resolve_kuit.py [OPTIONS] FILE_OR_DIRECTORY...
#
# By default, KUIT markup is resolved into plain or rich text.
# To switch strings containing any KUIT markup to xi18n calls instead,
# use -x option; to switch all strings to xi18n calls, use -X option.
# For non-code files (.ui, .rc, etc.) -x behaves the same as -X,
# since there is no way to specify by string whether it is to be
# passed through i18n or xi18n call at runtime. Instead this is specified
# on the top level (per file, but normally for all such files in a project),
# as described in the "Connecting Calls to Catalogs" section
# of the ki18n Programmer's Guide.
#
# Files are modified in-place. Modified file paths are written to stdout.
# If an argument is a directory, files from it are recursively collected.
# Only files with known extensions are processed (even if a file with an
# unknown extension is given directly on the command line, it will be ignored).
# The list of known extensions by resolution type can be listed with
# -k option. Option -s RESTYPE:EXT1[,EXT2...] can be used to register
# additional extensions (without leading dot, case ignored) for given
# resolution type. One extension may have several resolution types.
# Files in version control bookkeeping directories are skipped.
#
# In C-like function call files (resolution type 'ccall'),
# i18n strings are detected as arguments in calls with
# *i18n, *i18nc, *i18np, and *i18ncp function names.
# By default detection considers string arguments to be either single or
# double quoted, call arguments can be split into several lines, and
# strings are concatenated when separated only by whitespace.
# Default set of quotes can be replaced by repeating the -q QUOTE option.
#
# In XML-like markup files (resolution type 'xml'),
# i18n strings are detected as element texts, for a certain set of tags.
# i18n contexts are detected as attributes to those elements, for a certain
# set of attributes. These sets can be expanded using -T TAG1[,TAG2...]
# and -A ATTR1[,ATTR2...] options. Case is ignored for both.
# Markup inside the element text is expected to be XML-escaped (&lt;, etc.),
# i.e. the element text is first unescaped before resolution.
#
# In PO files (resolution type 'po'), i18n strings are detected
# according to PO format.
# To process PO files, the Pology library must be ready for use.
# In msgstr fields, KUIT markup transformations for given language
# are looked up in its kdelibs4.po. The pattern path to kdelibs4.po files,
# which contains @lang@ placeholder, is given with -t PATTERN option.
# This can be a local path or an HTTP URL (e.g.
# http://websvn.kde.org/*checkout*/trunk/l10n-kde4/@lang@/messages/kdelibs/kdelibs4.po ).
# Language of processed PO file is determined from its Language: header field.
# If only PO files of one language are processed and they do not reliably
# contain this field, the language can be forced with -l LANG option.
# By default both the original and the translation fields are resolved,
# which is appropriate when the PO file is being resolved before
# it has been merged with new template resulting from the resolved code.
# If an unresolved PO file has been merged with new template first,
# then option -m should be issued to resolve only the translation fields.
# In this case, on fuzzy messages, if previous original fields (which are
# also resolved) and current original fields match after resolution,
# the message is unfuzzied.
#
# For a given i18n string, the decision of whether to resolve KUIT markup
# into plain or Qt rich text is made based on the context marker,
# as described in KUIT documentation at
# http://techbase.kde.org/Development/Tutorials/Localization/i18n_Semantics .
# Target formats can also be manually specified for certain context markers
# by repeating the -f option. E.g. -f @info:progress=rich would override
# the default resolution into plain text for @info:progress i18n strings.
#
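# For example, a typical invocation combining the options described
# above might be:
#
#   resolve_kuit.py -x -f @info:progress=rich src/
#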
# NOTE: [INTERNAL]
# If <html> tags are added on rich text (see top_tag_res variable),
# then resolution must not be run over already resolved files.
# Context markers will remain but format modifiers will be removed from them,
# which may cause further modification in the second run.
#
# NOTE: [INTERNAL]
# If <numid> tags are simply removed (see numid_tag_res variable),
# a warning is issued on each removal to do something manually with
# its associated argument, e.g. wrap it in QString::number().
# It is probably best to look for <numid> tags and handle their arguments
# before running the resolution.
|
"""Configuration file parser.
A configuration file consists of sections, led by a "[section]" header,
and followed by "name: value" entries, with continuations and such in
the style of RFC 822.
Intrinsic defaults can be specified by passing them into the
ConfigParser constructor as a dictionary.
class:
ConfigParser -- responsible for parsing a list of
configuration files, and managing the parsed database.
methods:
__init__(defaults=None, dict_type=_default_dict, allow_no_value=False,
delimiters=('=', ':'), comment_prefixes=('#', ';'),
inline_comment_prefixes=None, strict=True,
empty_lines_in_values=True):
Create the parser. When `defaults' is given, it is initialized into the
dictionary of intrinsic defaults. The keys must be strings, the values
must be appropriate for %()s string interpolation.
When `dict_type' is given, it will be used to create the dictionary
objects for the list of sections, for the options within a section, and
for the default values.
When `delimiters' is given, it will be used as the set of substrings
that divide keys from values.
When `comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in empty lines. Comments can be
indented.
When `inline_comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in non-empty lines.
When `strict` is True, the parser won't allow for any section or option
duplicates while reading from a single source (file, string or
dictionary). Default is True.
When `empty_lines_in_values' is False (default: True), each empty line
marks the end of an option. Otherwise, internal empty lines of
a multiline option are kept as part of the value.
When `allow_no_value' is True (default: False), options without
values are accepted; the value presented for these is None.
sections()
Return all the configuration section names, sans DEFAULT.
has_section(section)
Return whether the given section exists.
has_option(section, option)
Return whether the given option exists in the given section.
options(section)
Return list of configuration options for the named section.
read(filenames, encoding=None)
Read and parse the list of named configuration files, given by
name. A single filename is also allowed. Non-existing files
are ignored. Return list of successfully read files.
read_file(f, filename=None)
Read and parse one configuration file, given as a file object.
The filename defaults to f.name; it is only used in error
messages (if f has no `name' attribute, the string `<???>' is used).
read_string(string)
Read configuration from a given string.
read_dict(dictionary)
Read configuration from a dictionary. Keys are section names,
values are dictionaries with keys and values that should be present
in the section. If the used dictionary type preserves order, sections
and their keys will be added in order. Values are automatically
converted to strings.
get(section, option, raw=False, vars=None, fallback=_UNSET)
Return a string value for the named option. All % interpolations are
expanded in the return values, based on the defaults passed into the
constructor and the DEFAULT section. Additional substitutions may be
provided using the `vars' argument, which must be a dictionary whose
contents override any pre-existing defaults. If `option' is a key in
`vars', the value from `vars' is used.
getint(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to an integer.
getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a float.
getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a boolean (currently case
insensitively defined as 0, false, no, off for False, and 1, true,
yes, on for True). Returns False or True.
items(section=_UNSET, raw=False, vars=None)
If section is given, return a list of tuples with (name, value) for
each option in the section. Otherwise, return a list of tuples with
(section_name, section_proxy) for each section, including DEFAULTSECT.
remove_section(section)
Remove the given file section and all its options.
remove_option(section, option)
Remove the given option from the given section.
set(section, option, value)
Set the given option.
write(fp, space_around_delimiters=True)
Write the configuration state in .ini format. If
`space_around_delimiters' is True (the default), delimiters
between keys and values are surrounded by spaces.
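Example (a brief sketch using only the methods described above; the
section and option names are invented):
from configparser import ConfigParser
parser = ConfigParser()
parser.read_string("[server]\nhost = localhost\nport = 8080\n")
parser.sections()                # ['server']
parser.get('server', 'host')     # 'localhost'
parser.getint('server', 'port')  # 8080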
""" |
#################################################################################
## $HeadURL $
#################################################################################
#__RCSID__ = "$Id$"
#
#import sys
#
#from DIRAC.Core.Base import Script
#Script.parseCommandLine()
#
#from DIRAC.ResourceStatusSystem.Utilities.Utils import *
#from DIRAC.ResourceStatusSystem import *
#from DIRAC.ResourceStatusSystem.PolicySystem.PDP import PDP
#from DIRAC import gConfig
#import DIRAC.ResourceStatusSystem.test.fake_rsDB
#
#VO = gConfig.getValue("DIRAC/Extensions")
#if 'LHCb' in VO:
# VO = 'LHCb'
#
#sito = {'name':'LCG.UKI-LT2-QMUL.uk', 'siteType':'T2'} #OK
##sito = {'name':'LCG.CERN.ch', 'siteType':'T0'} #OK
##sito = {'name':'LCG.CNAF.it', 'siteType':'T1'} #OK
#servizio = {'name':'EMAIL', 'siteType':'T2', 'serviceType':'Storage'} #OK
#servizio2 = {'name':'EMAIL', 'siteType':'T2', 'serviceType':'Computing'} #OK
#servizio3 = {'name':'EMAIL', 'siteType':'T1', 'serviceType':'VO-BOX'} #OK
#servizio4 = {'name':'EMAIL', 'siteType':'T0', 'serviceType':'VOMS'} #OK
#risorsa = {'name':'gridgate.cs.tcd.ie', 'siteType':'T2', 'resourceType':'CE'} #OK
##risorsa = {'name':'ce111.cern.ch', 'siteType':'T0', 'resourceType':'CE'} #OK
##risorsa3 = {'name':'tblb01.nipne.ro', 'siteType':'T2', 'resourceType':'CE'} #OK
#risorsa4 = {'name':'fts.cr.cnaf.infn.it', 'siteType':'T1', 'resourceType':'FTS'} #OK
#risorsa5 = {'name':'voms.cern.ch', 'siteType':'T0', 'resourceType':'VOMS'} #OK
##risorsa = {'name':'hepgrid3.ph.liv.ac.uk', 'siteType':'T1', 'resourceType':'CE'} #OK, but with CE used in place of CREAMCE
#risorsa2 = {'name':'ccsrm.in2p3.fr', 'siteType':'T1', 'resourceType':'SE'} #OK
#risorsa3 = {'name':'lhcb-lfc.gridpp.rl.ac.uk', 'siteType':'T1', 'resourceType':'LFC_L'} #OK
##risorsa = {'name':'ce.gina.sara.nl', 'siteType':'T2', 'resourceType':'CE'} #OK
##risorsa3 = {'name':'prod-lfc-lhcb-central.cern.ch', 'siteType':'T1', 'resourceType':'LFC_C'} #OK
#se = {'name':'CERN-RAW', 'siteType':'T0'} #OK
##se = {'name':'PIC_MC_M-DST', 'siteType':'T1'} #OK
#
#useNewRes = False
#
##sito = {'name':'LCG.CERN2.ch', 'siteType':'T0'} #WRONG
##servizio = {'name':'Computing@LCG.#Ferrara.it', 'siteType':'T2', 'serviceType':'Computing'} #WRONG
##risorsa = {'name':'ce106.pic.es', 'siteType':'T1', 'resourceType':'CE'} #WRONG
##risorsa2 = {'name':'srm-lhcb.cern.ch#', 'siteType':'T0', 'resourceType':'SE'} #WRONG
##se = {'name':'CERN_MC_M-DST#', 'siteType':'T0'} #WRONG
#
#print "\n\n ~~~~~~~ SITO ~~~~~~~ %s \n" %(sito)
#
#for status in ValidStatus:
## for oldStatus in ValidStatus:
## if status == oldStatus:
## continue
# print "############################"
# print " "
# print 'dans le test:', status#, oldStatus
# pdp = PDP(VO, granularity = 'Site', name = sito['name'], status = status,
## formerStatus = oldStatus,
# reason = 'XXXXX', siteType = sito['siteType'],
# useNewRes = useNewRes
# )
# res = pdp.takeDecision()
# print res
#
##print "\n\n ~~~~~~~ SERVICE 1 ~~~~~~~ : %s \n " %servizio
##
##for status in ValidStatus:
### for oldStatus in ValidStatus:
### if status == oldStatus:
### continue
## print "############################"
## print " "
## print 'nel test:', status#, oldStatus
## pdp = PDP(VO, granularity = 'Service', name = servizio['name'], status = status,
### formerStatus = oldStatus,
## reason = 'XXXXX', siteType = servizio['siteType'],
## serviceType = servizio['serviceType'],
## useNewRes = useNewRes
## )
## res = pdp.takeDecision()
## print res
##
##print "\n\n ~~~~~~~ SERVICE 2 ~~~~~~~ : %s \n " %servizio2
##
##for status in ValidStatus:
### for oldStatus in ValidStatus:
### if status == oldStatus:
### continue
## print "############################"
## print " "
## print 'nel test:', status#, oldStatus
## pdp = PDP(VO, granularity = 'Service', name = servizio2['name'], status = status,
### formerStatus = oldStatus,
## reason = 'XXXXX', siteType = servizio2['siteType'],
## serviceType = servizio2['serviceType'],
## useNewRes = useNewRes
## )
## res = pdp.takeDecision()
## print res
##
##print "\n\n ~~~~~~~ SERVICE 3 ~~~~~~~ : %s \n " %servizio3
##
##for status in ValidStatus:
### for oldStatus in ValidStatus:
### if status == oldStatus:
### continue
## print "############################"
## print " "
## print 'nel test:', status#, oldStatus
## pdp = PDP(VO, granularity = 'Service', name = servizio3['name'], status = status,
### formerStatus = oldStatus,
## reason = 'XXXXX', siteType = servizio3['siteType'],
## serviceType = servizio3['serviceType'],
## useNewRes = useNewRes
## )
## res = pdp.takeDecision()
## print res
##
##print "\n\n ~~~~~~~ SERVICE 4 ~~~~~~~ : %s \n " %servizio4
##
##for status in ValidStatus:
### for oldStatus in ValidStatus:
### if status == oldStatus:
### continue
## print "############################"
## print " "
## print 'nel test:', status#, oldStatus
## pdp = PDP(VO, granularity = 'Service', name = servizio4['name'], status = status,
### formerStatus = oldStatus,
## reason = 'XXXXX', siteType = servizio4['siteType'],
## serviceType = servizio4['serviceType'],
## useNewRes = useNewRes
## )
## res = pdp.takeDecision()
## print res
##
##
##
#print "\n\n ~~~~~~~ RISORSA 1 ~~~~~~~ : %s \n " %risorsa
#
#for status in ValidStatus:
## for oldStatus in ValidStatus:
## if status == oldStatus:
## continue
# print "############################"
# print " "
# print status#, oldStatus
# pdp = PDP(VO, granularity = 'Resource', name = risorsa['name'], status = status,
## formerStatus = oldStatus,
# reason = 'XXXXX', siteType = risorsa['siteType'],
# resourceType = risorsa['resourceType'],
# useNewRes = useNewRes
# )
# res = pdp.takeDecision()
# print res
#
##print "\n\n ~~~~~~~ RISORSA 2 ~~~~~~~ : %s \n " %risorsa2
##
##for status in ValidStatus:
### for oldStatus in ValidStatus:
### if status == oldStatus:
### continue
## print "############################"
## print " "
## print status#, oldStatus
## pdp = PDP(VO, granularity = 'Resource', name = risorsa2['name'], status = status,
### formerStatus = oldStatus,
## reason = 'XXXXX', siteType = risorsa2['siteType'],
## resourceType = risorsa2['resourceType'],
## useNewRes = useNewRes
## )
## res = pdp.takeDecision()
## print res
##
##print "\n\n ~~~~~~~ RISORSA 3 ~~~~~~~ : %s \n " %risorsa3
##
##for status in ValidStatus:
### for oldStatus in ValidStatus:
### if status == oldStatus:
### continue
## print "############################"
## print " "
## print status#, oldStatus
## pdp = PDP(VO, granularity = 'Resource', name = risorsa3['name'], status = status,
### formerStatus = oldStatus,
## reason = 'XXXXX', siteType = risorsa3['siteType'],
## resourceType = risorsa3['resourceType'],
## useNewRes = useNewRes
## )
## res = pdp.takeDecision()
## print res
##
##print "\n\n ~~~~~~~ RISORSA 4 ~~~~~~~ : %s \n " %risorsa4
##
##for status in ValidStatus:
### for oldStatus in ValidStatus:
### if status == oldStatus:
### continue
## print "############################"
## print " "
## print status#, oldStatus
## pdp = PDP(VO, granularity = 'Resource', name = risorsa4['name'], status = status,
### formerStatus = oldStatus,
## reason = 'XXXXX', siteType = risorsa4['siteType'],
## resourceType = risorsa4['resourceType'],
## useNewRes = useNewRes
## )
## res = pdp.takeDecision()
## print res
##
##print "\n\n ~~~~~~~ RISORSA 5 ~~~~~~~ : %s \n " %risorsa5
##
##for status in ValidStatus:
### for oldStatus in ValidStatus:
### if status == oldStatus:
### continue
## print "############################"
## print " "
## print status#, oldStatus
## pdp = PDP(VO, granularity = 'Resource', name = risorsa5['name'], status = status,
### formerStatus = oldStatus,
## reason = 'XXXXX', siteType = risorsa5['siteType'],
## resourceType = risorsa5['resourceType'],
## useNewRes = useNewRes
## )
## res = pdp.takeDecision()
## print res
##
##
##print "\n\n ~~~~~~~ StorageElement ~~~~~~~ : %s \n " %se
##
##for status in ValidStatus:
### for oldStatus in ValidStatus:
### if status == oldStatus:
### continue
## print "############################"
## print " "
## print status#, oldStatus
## pdp = PDP(VO, granularity = 'StorageElement', name = se['name'], status = status,
### formerStatus = oldStatus,
## reason = 'XXXXX', siteType = se['siteType'],
## useNewRes = useNewRes
## )
## res = pdp.takeDecision()
## print res
#
#################################################################################
## # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#################################################################################
#
#'''
# HOW DOES THIS WORK.
#
# will come soon...
#'''
#
#################################################################################
|
"""
==============
Array indexing
==============
Array indexing refers to any use of the square brackets ([]) to index
array values. There are many options to indexing, which give numpy
indexing great power, but with power comes some complexity and the
potential for confusion. This section is just an overview of the
various options and issues related to indexing. Aside from single
element indexing, the details on most of these options are to be
found in related sections.
Assignment vs referencing
=========================
Most of the following examples show the use of indexing when
referencing data in an array. The examples work just as well
when assigning to an array. See the section at the end for
specific examples and explanations on how assignments work.
Single element indexing
=======================
Single element indexing for a 1-D array is what one expects. It works
exactly like that for other standard Python sequences. It is 0-based,
and accepts negative indices for indexing from the end of the array. ::
>>> x = np.arange(10)
>>> x[2]
2
>>> x[-2]
8
Unlike lists and tuples, numpy arrays support multidimensional indexing
for multidimensional arrays. That means that it is not necessary to
separate each dimension's index into its own set of square brackets. ::
>>> x.shape = (2,5) # now x is 2-dimensional
>>> x[1,3]
8
>>> x[1,-1]
9
Note that if one indexes a multidimensional array with fewer indices
than dimensions, one gets a subdimensional array. For example: ::
>>> x[0]
array([0, 1, 2, 3, 4])
That is, each index specified selects the array corresponding to the
rest of the dimensions selected. In the above example, choosing 0
means that the remaining dimension of length 5 is being left unspecified,
and that what is returned is an array of that dimensionality and size.
It must be noted that the returned array is not a copy of the original,
but points to the same values in memory as does the original array.
In this case, the 1-D array at the first position (0) is returned.
So using a single index on the returned array, results in a single
element being returned. That is: ::
>>> x[0][2]
2
So note that ``x[0,2] == x[0][2]``, though the second case is less
efficient: a new temporary array is created after the first index
that is subsequently indexed by 2.
Note to those used to IDL or Fortran memory order as it relates to
indexing. Numpy uses C-order indexing. That means that the last
index usually represents the most rapidly changing memory location,
unlike Fortran or IDL, where the first index represents the most
rapidly changing location in memory. This difference represents a
great potential for confusion.
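For example, for a freshly created (C-contiguous) array, listing the
elements in memory order shows the last index varying fastest: ::
>>> a = np.arange(6).reshape(2,3)
>>> a.ravel() # elements in memory order
array([0, 1, 2, 3, 4, 5])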
Other indexing options
======================
It is possible to slice and stride arrays to extract arrays of the
same number of dimensions, but of different sizes than the original.
The slicing and striding works exactly the same way it does for lists
and tuples except that they can be applied to multiple dimensions as
well. A few examples illustrates best: ::
>>> x = np.arange(10)
>>> x[2:5]
array([2, 3, 4])
>>> x[:-7]
array([0, 1, 2])
>>> x[1:7:2]
array([1, 3, 5])
>>> y = np.arange(35).reshape(5,7)
>>> y[1:5:2,::3]
array([[ 7, 10, 13],
[21, 24, 27]])
Note that slices of arrays do not copy the internal array data but
only produce new views of the original data.
It is possible to index arrays with other arrays for the purposes of
selecting lists of values out of arrays into new arrays. There are
two different ways of accomplishing this. One uses one or more arrays
of index values. The other involves giving a boolean array of the proper
shape to indicate the values to be selected. Index arrays are a very
powerful tool that allow one to avoid looping over individual elements in
arrays and thus greatly improve performance.
It is possible to use special features to effectively increase the
number of dimensions in an array through indexing so the resulting
array acquires the shape needed for use in an expression or with a
specific function.
Index arrays
============
Numpy arrays may be indexed with other arrays (or any other sequence-
like object that can be converted to an array, such as lists, with the
exception of tuples; see the end of this document for why this is). The
use of index arrays ranges from simple, straightforward cases to
complex, hard-to-understand cases. For all cases of index arrays, what
is returned is a copy of the original data, not a view as one gets for
slices.
Index arrays must be of integer type. Each value in the array indicates
which value in the array to use in place of the index. To illustrate: ::
>>> x = np.arange(10,1,-1)
>>> x
array([10, 9, 8, 7, 6, 5, 4, 3, 2])
>>> x[np.array([3, 3, 1, 8])]
array([7, 7, 9, 2])
The index array consisting of the values 3, 3, 1 and 8 correspondingly
creates an array of length 4 (the same as the index array) where each
index is replaced by the value that the index array selects from the
array being indexed.
Negative values are permitted and work as they do with single indices
or slices: ::
>>> x[np.array([3,3,-3,8])]
array([7, 7, 4, 2])
It is an error to have index values out of bounds: ::
>>> x[np.array([3, 3, 20, 8])]
<type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9
Generally speaking, what is returned when index arrays are used is
an array with the same shape as the index array, but with the type
and values of the array being indexed. As an example, we can use a
multidimensional index array instead: ::
>>> x[np.array([[1,1],[2,3]])]
array([[9, 9],
[8, 7]])
Indexing Multi-dimensional arrays
=================================
Things become more complex when multidimensional arrays are indexed,
particularly with multidimensional index arrays. These tend to be
more unusual uses, but they are permitted, and they are useful for some
problems. We'll start with the simplest multidimensional case (using
the array y from the previous examples): ::
>>> y[np.array([0,2,4]), np.array([0,1,2])]
array([ 0, 15, 30])
In this case, if the index arrays have a matching shape, and there is
an index array for each dimension of the array being indexed, the
resultant array has the same shape as the index arrays, and the values
correspond to the index set for each position in the index arrays. In
this example, the first index value is 0 for both index arrays, and
thus the first value of the resultant array is y[0,0]. The next value
is y[2,1], and the last is y[4,2].
If the index arrays do not have the same shape, there is an attempt to
broadcast them to the same shape. If they cannot be broadcast to the
same shape, an exception is raised: ::
>>> y[np.array([0,2,4]), np.array([0,1])]
<type 'exceptions.ValueError'>: shape mismatch: objects cannot be
broadcast to a single shape
The broadcasting mechanism permits index arrays to be combined with
scalars for other indices. The effect is that the scalar value is used
for all the corresponding values of the index arrays: ::
>>> y[np.array([0,2,4]), 1]
array([ 1, 15, 29])
Jumping to the next level of complexity, it is possible to only
partially index an array with index arrays. It takes a bit of thought
to understand what happens in such cases. For example if we just use
one index array with y: ::
>>> y[np.array([0,2,4])]
array([[ 0, 1, 2, 3, 4, 5, 6],
[14, 15, 16, 17, 18, 19, 20],
[28, 29, 30, 31, 32, 33, 34]])
What results is the construction of a new array where each value of
the index array selects one row from the array being indexed and the
resultant array has the resulting shape (number of index elements,
size of row).
An example of where this may be useful is for a color lookup table
where we want to map the values of an image into RGB triples for
display. The lookup table could have a shape (nlookup, 3). Indexing
such an array with an image with shape (ny, nx) with dtype=np.uint8
(or any integer type so long as the values are within the bounds of the
lookup table) will result in an array of shape (ny, nx, 3) where a
triple of RGB values is associated with each pixel location.
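For example, a small lookup table illustrates the shapes involved: ::
>>> lut = np.arange(12).reshape(4,3) # four RGB triples
>>> image = np.array([[0,1],[2,3]], dtype=np.uint8)
>>> lut[image].shape
(2, 2, 3)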
In general, the shape of the resultant array will be the concatenation
of the shape of the index array (or the shape that all the index arrays
were broadcast to) with the shape of any unused dimensions (those not
indexed) in the array being indexed.
Boolean or "mask" index arrays
==============================
Boolean arrays used as indices are treated in a different manner
entirely than index arrays. Boolean arrays must be of the same shape
as the array being indexed, or broadcastable to the same shape. In the
most straightforward case, the boolean array has the same shape: ::
>>> b = y>20
>>> y[b]
array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])
The result is a 1-D array containing all the elements in the indexed
array corresponding to all the true elements in the boolean array. As
with index arrays, what is returned is a copy of the data, not a view
as one gets with slices.
With broadcasting, multidimensional arrays may be the result. For
example: ::
>>> b[:,5] # use a 1-D boolean that broadcasts with y
array([False, False, False, True, True], dtype=bool)
>>> y[b[:,5]]
array([[21, 22, 23, 24, 25, 26, 27],
[28, 29, 30, 31, 32, 33, 34]])
Here the 4th and 5th rows are selected from the indexed array and
combined to make a 2-D array.
Combining index arrays with slices
==================================
Index arrays may be combined with slices. For example: ::
>>> y[np.array([0,2,4]),1:3]
array([[ 1, 2],
[15, 16],
[29, 30]])
In effect, the slice is converted to an index array
np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array
to produce a resultant array of shape (3,2).
Likewise, slicing can be combined with broadcasted boolean indices: ::
>>> y[b[:,5],1:3]
array([[22, 23],
[29, 30]])
Structural indexing tools
=========================
To facilitate easy matching of array shapes with expressions and in
assignments, the np.newaxis object can be used within array indices
to add new dimensions with a size of 1. For example: ::
>>> y.shape
(5, 7)
>>> y[:,np.newaxis,:].shape
(5, 1, 7)
Note that there are no new elements in the array, just that the
dimensionality is increased. This can be handy to combine two
arrays in a way that otherwise would require explicit reshaping
operations. For example: ::
>>> x = np.arange(5)
>>> x[:,np.newaxis] + x[np.newaxis,:]
array([[0, 1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7],
[4, 5, 6, 7, 8]])
The ellipsis syntax may be used to indicate selecting in full any
remaining unspecified dimensions. For example: ::
>>> z = np.arange(81).reshape(3,3,3,3)
>>> z[1,...,2]
array([[29, 32, 35],
[38, 41, 44],
[47, 50, 53]])
This is equivalent to: ::
>>> z[1,:,:,2]
array([[29, 32, 35],
[38, 41, 44],
[47, 50, 53]])
Assigning values to indexed arrays
==================================
As mentioned, one can select a subset of an array to assign to using
a single index, slices, and index and mask arrays. The value being
assigned to the indexed array must be shape consistent (the same shape
or broadcastable to the shape the index produces). For example, it is
permitted to assign a constant to a slice: ::
>>> x = np.arange(10)
>>> x[2:7] = 1
or an array of the right size: ::
>>> x[2:7] = np.arange(5)
Note that assignments may result in silent truncation when assigning
higher types to lower types (like floats to ints), or even in
exceptions (assigning complex values to floats or ints): ::
>>> x[1] = 1.2
>>> x[1]
1
>>> x[1] = 1.2j
<type 'exceptions.TypeError'>: can't convert complex to long; use
long(abs(z))
Unlike some of the references (such as array and mask indices)
assignments are always made to the original data in the array
(indeed, nothing else would make sense!). Note though, that some
actions may not work as one may naively expect. This particular
example is often surprising to people: ::
>>> x = np.arange(0, 50, 10)
>>> x
array([ 0, 10, 20, 30, 40])
>>> x[np.array([1, 1, 3, 1])] += 1
>>> x
array([ 0, 11, 20, 31, 40])
People often expect that the 1st location will be incremented by 3;
in fact, it will only be incremented by 1. The reason is that
a new array is extracted from the original (as a temporary) containing
the values at 1, 1, 3, 1, then the value 1 is added to the temporary,
and then the temporary is assigned back to the original array. Thus
the value of the array at x[1]+1 is assigned to x[1] three times,
rather than being incremented 3 times.
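If the intent is to accumulate over the repeated indices, one
workaround (assuming NumPy 1.8 or later, which provides the unbuffered
``ufunc.at`` methods) is: ::
>>> x = np.arange(0, 50, 10)
>>> np.add.at(x, np.array([1, 1, 3, 1]), 1)
>>> x
array([ 0, 13, 20, 31, 40])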
Dealing with variable numbers of indices within programs
========================================================
The index syntax is very powerful but limiting when dealing with
a variable number of indices. For example, if you want to write
a function that can handle arguments with various numbers of
dimensions without having to write special case code for each
number of possible dimensions, how can that be done? If one
supplies to the index a tuple, the tuple will be interpreted
as a list of indices. For example (using the previous definition
for the array z): ::
>>> indices = (1,1,1,1)
>>> z[indices]
40
So one can use code to construct tuples of any number of indices
and then use these within an index.
Slices can be specified within programs by using the slice() function
in Python. For example: ::
>>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
>>> z[indices]
array([39, 40])
Likewise, ellipsis can be specified by code by using the Ellipsis
object: ::
>>> indices = (1, Ellipsis, 1) # same as [1,...,1]
>>> z[indices]
array([[28, 31, 34],
[37, 40, 43],
[46, 49, 52]])
For this reason it is possible to use the output from the np.where()
function directly as an index since it always returns a tuple of index
arrays.
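For example (using the earlier definition of z): ::
>>> z[np.where(z > 75)]
array([76, 77, 78, 79, 80])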
Because of the special treatment of tuples, they are not automatically
converted to an array as a list would be. As an example: ::
>>> z[[1,1,1,1]] # produces a large array
array([[[[27, 28, 29],
[30, 31, 32], ...
>>> z[(1,1,1,1)] # returns a single value
40
""" |
"""
#!/usr/bin/env python
#
# Copyright under the latest Apache License 2.0
'''A class that inherits everything from python-twitter and allows OAuth-based access
Requires:
python-twitter
simplejson
oauth
'''
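# Example usage (a sketch only -- CONSUMER_KEY/CONSUMER_SECRET are
# placeholders, and the user-authorization step happens out of band):
#
#   twitter = OAuthApi(CONSUMER_KEY, CONSUMER_SECRET)
#   request_token = twitter.getRequestToken()
#   print twitter.getAuthorizationURL(request_token)
#   # ... user visits the URL and authorizes the application ...
#   access_token = twitter.getAccessToken()
#   twitter = OAuthApi(CONSUMER_KEY, CONSUMER_SECRET, access_token)
#   user = twitter.GetUserInfo()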
__author__ = "Hameedullah NAME <EMAIL>"
__version__ = "0.2"
import time
from twitter import Api, User
try:
import json
except ImportError:
import simplejson as json
from oauth import oauth
# Taken from oauth implementation at: http://github.com/harperreed/twitteroauth-python/tree/master
REQUEST_TOKEN_URL = 'https://twitter.com/oauth/request_token'
ACCESS_TOKEN_URL = 'https://twitter.com/oauth/access_token'
AUTHORIZATION_URL = 'http://twitter.com/oauth/authorize'
SIGNIN_URL = 'http://twitter.com/oauth/authenticate'
class OAuthApi(Api):
def __init__(self, consumer_key, consumer_secret, access_token=None):
if access_token:
Api.__init__(self,access_token.key, access_token.secret)
else:
Api.__init__(self)
self._Consumer = oauth.OAuthConsumer(consumer_key, consumer_secret)
self._signature_method = oauth.OAuthSignatureMethod_HMAC_SHA1()
self._access_token = access_token
def _GetOpener(self):
opener = self._urllib.build_opener()
return opener
def _FetchUrl(self,
url,
post_data=None,
parameters=None,
no_cache=None):
'''Fetch a URL, optionally caching for a specified time.
Args:
url: The URL to retrieve
post_data:
A dict of (str, unicode) key/value pairs. If set, POST will be used.
parameters:
A dict whose key/value pairs should be encoded and added
to the query string. [OPTIONAL]
no_cache: If true, overrides the cache on the current request
Returns:
A string containing the body of the response.
'''
# Build the extra parameters dict
extra_params = {}
if self._default_params:
extra_params.update(self._default_params)
if parameters:
extra_params.update(parameters)
# Add key/value parameters to the query string of the url
#url = self._BuildUrl(url, extra_params=extra_params)
if post_data:
http_method = "POST"
extra_params.update(post_data)
else:
http_method = "GET"
req = self._makeOAuthRequest(url, parameters=extra_params,
http_method=http_method)
self._signRequest(req, self._signature_method)
# Get a url opener that can handle Oauth basic auth
opener = self._GetOpener()
#encoded_post_data = self._EncodePostData(post_data)
if post_data:
encoded_post_data = req.to_postdata()
url = req.get_normalized_http_url()
else:
url = req.to_url()
encoded_post_data = ""
no_cache=True
# Open and return the URL immediately if we're not going to cache
# OR we are posting data
if encoded_post_data or no_cache:
if encoded_post_data:
url_data = opener.open(url, encoded_post_data).read()
else:
url_data = opener.open(url).read()
opener.close()
else:
# Unique keys are a combination of the url and the username
if self._username:
key = self._username + ':' + url
else:
key = url
# See if it has been cached before
last_cached = self._cache.GetCachedTime(key)
# If the cached version is outdated then fetch another and store it
if not last_cached or time.time() >= last_cached + self._cache_timeout:
url_data = opener.open(url).read()
opener.close()
self._cache.Set(key, url_data)
else:
url_data = self._cache.Get(key)
# Always return the latest version
return url_data
def _makeOAuthRequest(self, url, token=None,
parameters=None, http_method="GET"):
'''Make an OAuth request from url and parameters
Args:
url: The URL to use for creating the OAuth request
parameters:
The URL parameters
http_method:
The HTTP method to use
Returns:
An OAuthRequest object
'''
if not token:
token = self._access_token
request = oauth.OAuthRequest.from_consumer_and_token(
self._Consumer, token=token,
http_url=url, parameters=parameters,
http_method=http_method)
return request
def _signRequest(self, req, signature_method=oauth.OAuthSignatureMethod_HMAC_SHA1()):
'''Sign a request
Note: created as a separate function in case anything
needs to be added to the request before signing
Args:
req: The OAuth request created via _makeOAuthRequest
signature_method:
The oauth signature method to use
'''
req.sign_request(signature_method, self._Consumer, self._access_token)
def getAuthorizationURL(self, token, url=AUTHORIZATION_URL):
'''Create a signed authorization URL
Returns:
A signed OAuthRequest authorization URL
'''
req = self._makeOAuthRequest(url, token=token)
self._signRequest(req)
return req.to_url()
def getSigninURL(self, token, url=SIGNIN_URL):
'''Create a signed Sign-in URL
Returns:
A signed OAuthRequest Sign-in URL
'''
signin_url = self.getAuthorizationURL(token, url)
return signin_url
def getAccessToken(self, url=ACCESS_TOKEN_URL):
token = self._FetchUrl(url, no_cache=True)
return oauth.OAuthToken.from_string(token)
def getRequestToken(self, url=REQUEST_TOKEN_URL):
'''Get a Request Token from Twitter
Returns:
An OAuthToken object containing a request token
'''
resp = self._FetchUrl(url, no_cache=True)
token = oauth.OAuthToken.from_string(resp)
return token
def GetUserInfo(self, url='https://twitter.com/account/verify_credentials.json'):
'''Get user information from Twitter
Returns:
Returns the twitter.User object
'''
json_data = self._FetchUrl(url)
data = json.loads(json_data)
self._CheckForTwitterError(data)
return User.NewFromJsonDict(data)
""" |
"""automatically manage newlines in repository files
This extension allows you to manage the type of line endings (CRLF or
LF) that are used in the repository and in the local working
directory. That way you can get CRLF line endings on Windows and LF on
Unix/Mac, thereby letting everybody use their OS native line endings.
The extension reads its configuration from a versioned ``.hgeol``
configuration file found in the root of the working copy. The
``.hgeol`` file uses the same syntax as all other Mercurial
configuration files. It uses two sections, ``[patterns]`` and
``[repository]``.
The ``[patterns]`` section specifies how line endings should be
converted between the working copy and the repository. The format is
specified by a file pattern. The first match is used, so put more
specific patterns first. The available line endings are ``LF``,
``CRLF``, and ``BIN``.
Files with the declared format of ``CRLF`` or ``LF`` are always
checked out and stored in the repository in that format and files
declared to be binary (``BIN``) are left unchanged. Additionally,
``native`` is an alias for checking out in the platform's default line
ending: ``LF`` on Unix (including Mac OS X) and ``CRLF`` on
Windows. Note that ``BIN`` (do nothing to line endings) is Mercurial's
default behaviour; it is only needed if you need to override a later,
more general pattern.
The optional ``[repository]`` section specifies the line endings to
use for files stored in the repository. It has a single setting,
``native``, which determines the storage line endings for files
declared as ``native`` in the ``[patterns]`` section. It can be set to
``LF`` or ``CRLF``. The default is ``LF``. For example, this means
that on Windows, files configured as ``native`` (``CRLF`` by default)
will be converted to ``LF`` when stored in the repository. Files
declared as ``LF``, ``CRLF``, or ``BIN`` in the ``[patterns]`` section
are always stored as-is in the repository.
Example versioned ``.hgeol`` file::
[patterns]
**.py = native
**.vcproj = CRLF
**.txt = native
Makefile = LF
**.jpg = BIN
[repository]
native = LF
.. note::
The rules will first apply when files are touched in the working
copy, e.g. by updating to null and back to tip to touch all files.
The extension uses an optional ``[eol]`` section read from both the
normal Mercurial configuration files and the ``.hgeol`` file, with the
latter overriding the former. You can use that section to control the
overall behavior. There are three settings, illustrated in the example
after this list:
- ``eol.native`` (default ``os.linesep``) can be set to ``LF`` or
``CRLF`` to override the default interpretation of ``native`` for
checkout. This can be used with :hg:`archive` on Unix, say, to
generate an archive where files have line endings for Windows.
- ``eol.only-consistent`` (default True) can be set to False to make
the extension convert files with inconsistent EOLs. Inconsistent
means that there is both ``CRLF`` and ``LF`` present in the file.
Such files are normally not touched under the assumption that they
have mixed EOLs on purpose.
- ``eol.fix-trailing-newline`` (default False) can be set to True to
ensure that converted files end with an EOL character (either ``\\n``
or ``\\r\\n`` as per the configured patterns).
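For example, an ``[eol]`` section exercising all three settings (the
values are only illustrative) could look like::
[eol]
native = LF
only-consistent = False
fix-trailing-newline = True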
The extension provides ``cleverencode:`` and ``cleverdecode:`` filters
like the deprecated win32text extension does. This means that you can
disable win32text and enable eol and your filters will still work. You
only need these filters until you have prepared a ``.hgeol`` file.
The ``win32text.forbid*`` hooks provided by the win32text extension
have been unified into a single hook named ``eol.checkheadshook``. The
hook will look up the expected line endings from the ``.hgeol`` file,
which means you must migrate to a ``.hgeol`` file first before using
the hook. ``eol.checkheadshook`` only checks heads; intermediate
invalid revisions will still be pushed. To forbid them completely, use the
``eol.checkallhook`` hook. These hooks are best used as
``pretxnchangegroup`` hooks.
See :hg:`help patterns` for more information about the glob patterns
used.
""" |
# -*- encoding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2009 Veritos - NAME - www.veritos.nl
#
# WARNING: This program as such is intended to be used by professional
# programmers who take full responsibility for assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company like Veritos.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
##############################################################################
#
# This module works in OpenERP 5.0.0 (and probably higher).
# This module does not work in OpenERP version 4 and lower.
#
# Status 1.0 - tested on OpenERP 5.0.3
#
# Version IP_ADDRESS
# account.account.type
# Laid the basis for all account types.
#
# account.account.template
# Laid the basis with all required general ledger accounts, which are
# linked via a menu structure to sections 1 through 9.
# The general ledger accounts are linked to the account.account.type.
# These links still need to be checked over carefully.
#
# account.chart.template
# Laid the basis for linking accounts to debtors, creditors, bank,
# purchase and sales journals, and the VAT configuration.
#
# Version IP_ADDRESS
# account.tax.code.template
# Laid the basis for the VAT configuration (structure).
# Used the VAT return form as a starting point. Whether this works
# remains to be seen.
#
# account.tax.template
# Created the VAT accounts and linked them to the corresponding
# general ledger accounts.
#
# Version IP_ADDRESS
# Cleaned up the code and removed unused components.
# Version IP_ADDRESS
# Changed a_expense from 3000 -> 7000.
# Set record id='btw_code_5b' to a negative value.
# Version IP_ADDRESS
# VAT accounts were given a type designation for purchase or sale.
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Corrected a small error in l10n_nl_wizard.xml that prevented the
# module from installing completely.
# Version IP_ADDRESS
# Properly defined Account Receivable and Payable.
# Version IP_ADDRESS
# Properly defined all user_type_xxx fields.
# Removed construction- and garage-specific ledgers in order to create a
# standard module. This module can then serve as a basis for creating
# modules for specific target groups.
# Version IP_ADDRESS
# Corrected account 7010 (it duplicated 7014, which broke the
# installation).
# Version IP_ADDRESS
# Corrected various account types from user_type_asset ->
# user_type_liability and user_type_equity.
# Version IP_ADDRESS
# Small correction to "VAT receivable, high rate": the id was the same
# for both, so "high" was overwritten by "other". Clarified the
# descriptions in the tax codes for the VAT return overview.
# Version IP_ADDRESS
# Adjusted the VAT descriptions so that reports look better. Removed 2a,
# 5b and the like, and added some descriptions.
# Version IP_ADDRESS - Switch to English
# Added properties_stock_xxx accounts for correct stock valuation,
# changed 7000-accounts from type cash to type expense.
# Changed naming of 7020 and 7030 to Kostprijs omzet xxxx.
|
"""
=============================
Subclassing ndarray in python
=============================
Credits
-------
This page is based with thanks on the wiki page on subclassing by NAME - http://www.scipy.org/Subclasses.
Introduction
------------
Subclassing ndarray is relatively simple, but it has some complications
compared to other Python objects. On this page we explain the machinery
that allows you to subclass ndarray, and the implications for
implementing a subclass.
ndarrays and object creation
============================
Subclassing ndarray is complicated by the fact that new instances of
ndarray classes can come about in three different ways. These are:
#. Explicit constructor call - as in ``MySubClass(params)``. This is
the usual route to Python instance creation.
#. View casting - casting an existing ndarray as a given subclass
#. New from template - creating a new instance from a template
instance. Examples include returning slices from a subclassed array,
creating return types from ufuncs, and copying arrays. See
:ref:`new-from-template` for more details
The last two are characteristics of ndarrays - in order to support
things like array slicing. The complications of subclassing ndarray are
due to the mechanisms numpy has to support these latter two routes of
instance creation.
.. _view-casting:
View casting
------------
*View casting* is the standard ndarray mechanism by which you take an
ndarray of any subclass, and return a view of the array as another
(specified) subclass:
>>> import numpy as np
>>> # create a completely useless ndarray subclass
>>> class C(np.ndarray): pass
>>> # create a standard ndarray
>>> arr = np.zeros((3,))
>>> # take a view of it, as our useless subclass
>>> c_arr = arr.view(C)
>>> type(c_arr)
<class 'C'>
.. _new-from-template:
Creating new from template
--------------------------
New instances of an ndarray subclass can also come about by a very
similar mechanism to :ref:`view-casting`, when numpy finds it needs to
create a new instance from a template instance. The most obvious place
this has to happen is when you are taking slices of subclassed arrays.
For example:
>>> v = c_arr[1:]
>>> type(v) # the view is of type 'C'
<class 'C'>
>>> v is c_arr # but it's a new instance
False
The slice is a *view* onto the original ``c_arr`` data. So, when we
take a view from the ndarray, we return a new ndarray, of the same
class, that points to the data in the original.
There are other points in the use of ndarrays where we need such views,
such as copying arrays (``c_arr.copy()``), creating ufunc output arrays
(see also :ref:`array-wrap`), and reducing methods (like
``c_arr.mean()``).
Relationship of view casting and new-from-template
--------------------------------------------------
These paths both use the same machinery. We make the distinction here,
because they result in different input to your methods. Specifically,
:ref:`view-casting` means you have created a new instance of your array
type from any potential subclass of ndarray. :ref:`new-from-template`
means you have created a new instance of your class from a pre-existing
instance, allowing you - for example - to copy across attributes that
are particular to your subclass.
Implications for subclassing
----------------------------
If we subclass ndarray, we need to deal not only with explicit
construction of our array type, but also :ref:`view-casting` or
:ref:`new-from-template`. Numpy has the machinery to do this, and it is
this machinery that makes subclassing slightly non-standard.
There are two aspects to the machinery that ndarray uses to support
views and new-from-template in subclasses.
The first is the use of the ``ndarray.__new__`` method for the main work
of object initialization, rather than the more usual ``__init__``
method. The second is the use of the ``__array_finalize__`` method to
allow subclasses to clean up after the creation of views and new
instances from templates.
A brief Python primer on ``__new__`` and ``__init__``
=====================================================
``__new__`` is a standard Python method, and, if present, is called
before ``__init__`` when we create a class instance. See the `python
__new__ documentation
<http://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail.
For example, consider the following Python code:
.. testcode::
class C(object):
def __new__(cls, *args):
print 'Cls in __new__:', cls
print 'Args in __new__:', args
return object.__new__(cls, *args)
def __init__(self, *args):
print 'type(self) in __init__:', type(self)
print 'Args in __init__:', args
meaning that we get:
>>> c = C('hello')
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
type(self) in __init__: <class 'C'>
Args in __init__: ('hello',)
When we call ``C('hello')``, the ``__new__`` method gets its own class
as first argument, and the passed argument, which is the string
``'hello'``. After python calls ``__new__``, it usually (see below)
calls our ``__init__`` method, with the output of ``__new__`` as the
first argument (now a class instance), and the passed arguments
following.
As you can see, the object can be initialized in the ``__new__``
method or the ``__init__`` method, or both, and in fact ndarray does
not have an ``__init__`` method, because all the initialization is
done in the ``__new__`` method.
Why use ``__new__`` rather than just the usual ``__init__``? Because
in some cases, as for ndarray, we want to be able to return an object
of some other class. Consider the following:
.. testcode::
class D(C):
def __new__(cls, *args):
print 'D cls is:', cls
print 'D args in __new__:', args
return C.__new__(C, *args)
def __init__(self, *args):
# we never get here
print 'In D __init__'
meaning that:
>>> obj = D('hello')
D cls is: <class 'D'>
D args in __new__: ('hello',)
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
>>> type(obj)
<class 'C'>
The definition of ``C`` is the same as before, but for ``D``, the
``__new__`` method returns an instance of class ``C`` rather than
``D``. Note that the ``__init__`` method of ``D`` does not get
called. In general, when the ``__new__`` method returns an object of
class other than the class in which it is defined, the ``__init__``
method of that class is not called.
This is how subclasses of the ndarray class are able to return views
that preserve the class type. When taking a view, the standard
ndarray machinery creates the new ndarray object with something
like::
obj = ndarray.__new__(subtype, shape, ...
where ``subtype`` is the subclass. Thus the returned view is of the
same class as the subclass, rather than being of class ``ndarray``.
That solves the problem of returning views of the same type, but now
we have a new problem. The machinery of ndarray can set the class
this way, in its standard methods for taking views, but the ndarray
``__new__`` method knows nothing of what we have done in our own
``__new__`` method in order to set attributes, and so on. (Aside -
why not call ``obj = subtype.__new__(...`` then? Because we may not
have a ``__new__`` method with the same call signature).
The role of ``__array_finalize__``
==================================
``__array_finalize__`` is the mechanism that numpy provides to allow
subclasses to handle the various ways that new instances get created.
Remember that subclass instances can come about in these three ways:
#. explicit constructor call (``obj = MySubClass(params)``). This will
call the usual sequence of ``MySubClass.__new__`` then (if it exists)
``MySubClass.__init__``.
#. :ref:`view-casting`
#. :ref:`new-from-template`
Our ``MySubClass.__new__`` method only gets called in the case of the
explicit constructor call, so we can't rely on ``MySubClass.__new__`` or
``MySubClass.__init__`` to deal with the view casting and
new-from-template. It turns out that ``MySubClass.__array_finalize__``
*does* get called for all three methods of object creation, so this is
where our object creation housekeeping usually goes.
* For the explicit constructor call, our subclass will need to create a
new ndarray instance of its own class. In practice this means that
we, the authors of the code, will need to make a call to
``ndarray.__new__(MySubClass,...)``, or do view casting of an existing
array (see below)
* For view casting and new-from-template, the equivalent of
``ndarray.__new__(MySubClass,...`` is called, at the C level.
The arguments that ``__array_finalize__`` receives differ for the three
methods of instance creation above.
The following code allows us to look at the call sequences and arguments:
.. testcode::
import numpy as np
class C(np.ndarray):
def __new__(cls, *args, **kwargs):
print 'In __new__ with class %s' % cls
return np.ndarray.__new__(cls, *args, **kwargs)
def __init__(self, *args, **kwargs):
# in practice you probably will not need or want an __init__
# method for your subclass
print 'In __init__ with class %s' % self.__class__
def __array_finalize__(self, obj):
print 'In array_finalize:'
print ' self type is %s' % type(self)
print ' obj type is %s' % type(obj)
Now:
>>> # Explicit constructor
>>> c = C((10,))
In __new__ with class <class 'C'>
In array_finalize:
self type is <class 'C'>
obj type is <type 'NoneType'>
In __init__ with class <class 'C'>
>>> # View casting
>>> a = np.arange(10)
>>> cast_a = a.view(C)
In array_finalize:
self type is <class 'C'>
obj type is <type 'numpy.ndarray'>
>>> # Slicing (example of new-from-template)
>>> cv = c[:1]
In array_finalize:
self type is <class 'C'>
obj type is <class 'C'>
The signature of ``__array_finalize__`` is::
def __array_finalize__(self, obj):
``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our
own class (``self``) as well as the object from which the view has been
taken (``obj``). As you can see from the output above, the ``self`` is
always a newly created instance of our subclass, and the type of ``obj``
differs for the three instance creation methods:
* When called from the explicit constructor, ``obj`` is ``None``
* When called from view casting, ``obj`` can be an instance of any
subclass of ndarray, including our own.
* When called in new-from-template, ``obj`` is another instance of our
own subclass, that we might use to update the new ``self`` instance.
Because ``__array_finalize__`` is the only method that always sees new
instances being created, it is the sensible place to fill in instance
defaults for new object attributes, among other tasks.
This may be clearer with an example.
Simple example - adding an extra attribute to ndarray
-----------------------------------------------------
.. testcode::
import numpy as np
class InfoArray(np.ndarray):
def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
strides=None, order=None, info=None):
# Create the ndarray instance of our type, given the usual
# ndarray input arguments. This will call the standard
# ndarray constructor, but return an object of our type.
# It also triggers a call to InfoArray.__array_finalize__
obj = np.ndarray.__new__(subtype, shape, dtype, buffer, offset, strides,
order)
# set the new 'info' attribute to the value passed
obj.info = info
# Finally, we must return the newly created object:
return obj
def __array_finalize__(self, obj):
# ``self`` is a new object resulting from
# ndarray.__new__(InfoArray, ...), therefore it only has
# attributes that the ndarray.__new__ constructor gave it -
# i.e. those of a standard ndarray.
#
# We could have got to the ndarray.__new__ call in 3 ways:
# From an explicit constructor - e.g. InfoArray():
# obj is None
# (we're in the middle of the InfoArray.__new__
# constructor, and self.info will be set when we return to
# InfoArray.__new__)
if obj is None: return
# From view casting - e.g arr.view(InfoArray):
# obj is arr
# (type(obj) can be InfoArray)
# From new-from-template - e.g infoarr[:3]
# type(obj) is InfoArray
#
# Note that it is here, rather than in the __new__ method,
# that we set the default value for 'info', because this
# method sees all creation of default objects - with the
# InfoArray.__new__ constructor, but also with
# arr.view(InfoArray).
self.info = getattr(obj, 'info', None)
# We do not need to return anything
Using the object looks like this:
>>> obj = InfoArray(shape=(3,)) # explicit constructor
>>> type(obj)
<class 'InfoArray'>
>>> obj.info is None
True
>>> obj = InfoArray(shape=(3,), info='information')
>>> obj.info
'information'
>>> v = obj[1:] # new-from-template - here - slicing
>>> type(v)
<class 'InfoArray'>
>>> v.info
'information'
>>> arr = np.arange(10)
>>> cast_arr = arr.view(InfoArray) # view casting
>>> type(cast_arr)
<class 'InfoArray'>
>>> cast_arr.info is None
True
This class isn't very useful, because it has the same constructor as the
bare ndarray object, including passing in buffers and shapes and so on.
We would probably prefer the constructor to be able to take an already
formed ndarray from the usual numpy calls to ``np.array`` and return an
object.
Slightly more realistic example - attribute added to existing array
-------------------------------------------------------------------
Here is a class that takes a standard ndarray that already exists, casts
as our type, and adds an extra attribute.
.. testcode::
import numpy as np
class RealisticInfoArray(np.ndarray):
def __new__(cls, input_array, info=None):
# Input array is an already formed ndarray instance
# We first cast to be our class type
obj = np.asarray(input_array).view(cls)
# add the new attribute to the created instance
obj.info = info
# Finally, we must return the newly created object:
return obj
def __array_finalize__(self, obj):
# see InfoArray.__array_finalize__ for comments
if obj is None: return
self.info = getattr(obj, 'info', None)
So:
>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'
.. _array-wrap:
``__array_wrap__`` for ufuncs
-------------------------------------------------------
``__array_wrap__`` gets called at the end of numpy ufuncs and other numpy
functions, to allow a subclass to set the type of the return value
and update attributes and metadata. Let's show how this works with an example.
First we make the same subclass as above, but with a different name and
some print statements:
.. testcode::
import numpy as np
class MySubClass(np.ndarray):
def __new__(cls, input_array, info=None):
obj = np.asarray(input_array).view(cls)
obj.info = info
return obj
def __array_finalize__(self, obj):
print 'In __array_finalize__:'
print ' self is %s' % repr(self)
print ' obj is %s' % repr(obj)
if obj is None: return
self.info = getattr(obj, 'info', None)
def __array_wrap__(self, out_arr, context=None):
print 'In __array_wrap__:'
print ' self is %s' % repr(self)
print ' arr is %s' % repr(out_arr)
# then just call the parent
return np.ndarray.__array_wrap__(self, out_arr, context)
We run a ufunc on an instance of our new array:
>>> obj = MySubClass(np.arange(5), info='spam')
In __array_finalize__:
self is MySubClass([0, 1, 2, 3, 4])
obj is array([0, 1, 2, 3, 4])
>>> arr2 = np.arange(5)+1
>>> ret = np.add(arr2, obj)
In __array_wrap__:
self is MySubClass([0, 1, 2, 3, 4])
arr is array([1, 3, 5, 7, 9])
In __array_finalize__:
self is MySubClass([1, 3, 5, 7, 9])
obj is MySubClass([0, 1, 2, 3, 4])
>>> ret
MySubClass([1, 3, 5, 7, 9])
>>> ret.info
'spam'
Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method of the
input with the highest ``__array_priority__`` value, in this case
``MySubClass.__array_wrap__``, with arguments ``self`` as ``obj``, and
``out_arr`` as the (ndarray) result of the addition. In turn, the
default ``__array_wrap__`` (``ndarray.__array_wrap__``) has cast the
result to class ``MySubClass``, and called ``__array_finalize__`` -
hence the copying of the ``info`` attribute. This has all happened at the C level.
But, we could do anything we wanted:
.. testcode::
class SillySubClass(np.ndarray):
def __array_wrap__(self, arr, context=None):
return 'I lost your data'
>>> arr1 = np.arange(5)
>>> obj = arr1.view(SillySubClass)
>>> arr2 = np.arange(5)
>>> ret = np.multiply(obj, arr2)
>>> ret
'I lost your data'
So, by defining a specific ``__array_wrap__`` method for our subclass,
we can tweak the output from ufuncs. The ``__array_wrap__`` method
requires ``self``, then an argument - which is the result of the ufunc -
and an optional parameter *context*. This parameter is returned by some
ufuncs as a 3-element tuple: (name of the ufunc, arguments of the ufunc,
domain of the ufunc). ``__array_wrap__`` should return an instance of
its containing class. See the masked array subclass for an
implementation.
In addition to ``__array_wrap__``, which is called on the way out of the
ufunc, there is also an ``__array_prepare__`` method which is called on
the way into the ufunc, after the output arrays are created but before any
computation has been performed. The default implementation does nothing
but pass through the array. ``__array_prepare__`` should not attempt to
access the array data or resize the array, it is intended for setting the
output array type, updating attributes and metadata, and performing any
checks based on the input that may be desired before computation begins.
Like ``__array_wrap__``, ``__array_prepare__`` must return an ndarray or
subclass thereof or raise an error.
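As a sketch (the class name ``LoggedArray`` is our own, not from the
original page), an ``__array_prepare__`` that merely inspects the output
array and passes it through could look like:
.. testcode::

    import numpy as np

    class LoggedArray(np.ndarray):
        def __array_prepare__(self, out_arr, context=None):
            # called on the way into the ufunc, before any computation;
            # the output array exists but holds no results yet
            print 'In __array_prepare__:'
            print '   out_arr is %s' % repr(out_arr)
            # pass the array through unchanged, as the default does
            return np.ndarray.__array_prepare__(self, out_arr, context)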
Extra gotchas - custom ``__del__`` methods and ndarray.base
-----------------------------------------------------------
One of the problems that ndarray solves is keeping track of memory
ownership of ndarrays and their views. Consider the case where we have
created an ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``.
The two objects are looking at the same memory. Numpy keeps track of
where the data came from for a particular array or view, with the
``base`` attribute:
>>> # A normal ndarray, that owns its own data
>>> arr = np.zeros((4,))
>>> # In this case, base is None
>>> arr.base is None
True
>>> # We take a view
>>> v1 = arr[1:]
>>> # base now points to the array that it derived from
>>> v1.base is arr
True
>>> # Take a view of a view
>>> v2 = v1[1:]
>>> # base points to the view it derived from
>>> v2.base is v1
True
In general, if the array owns its own memory, as for ``arr`` in this
case, then ``arr.base`` will be None - there are some exceptions to this
- see the numpy book for more details.
The ``base`` attribute is useful in being able to tell whether we have
a view or the original array. This in turn can be useful if we need
to know whether or not to do some specific cleanup when the subclassed
array is deleted. For example, we may only want to do the cleanup if
the original array is deleted, but not the views. For an example of
how this can work, have a look at the ``memmap`` class in
``numpy.core``.
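As an illustration of the idea only (this is a sketch, not how
``memmap`` is actually implemented), a subclass could consult ``base``
in a custom ``__del__``:
.. testcode::

    import numpy as np

    class CleanupArray(np.ndarray):
        def __del__(self):
            # only the memory-owning array has base None;
            # views derived from it skip the cleanup
            if self.base is None:
                print 'owner deleted - performing cleanup'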
""" |
"""
Simple config
=============
Although CherryPy uses the :mod:`Python logging module <logging>`, it does so
behind the scenes so that simple logging is simple, but complicated logging
is still possible. "Simple" logging means that you can log to the screen
(i.e. console/stdout) or to a file, and that you can easily have separate
error and access log files.
Here are the simplified logging settings. You use these by adding lines to
your config file or dict. You should set these at either the global level or
per application (see next), but generally not both.
* ``log.screen``: Set this to True to have both "error" and "access" messages
printed to stdout.
* ``log.access_file``: Set this to an absolute filename where you want
"access" messages written.
* ``log.error_file``: Set this to an absolute filename where you want "error"
messages written.
Many events are automatically logged; to log your own application events, call
:func:`cherrypy.log`.
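For instance, a global config using these simple settings (the file
paths are only illustrative) might read: ::
[global]
log.screen = False
log.access_file = '/var/log/myapp/access.log'
log.error_file = '/var/log/myapp/error.log'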
Architecture
============
Separate scopes
---------------
CherryPy provides log managers at both the global and application layers.
This means you can have one set of logging rules for your entire site,
and another set of rules specific to each application. The global log
manager is found at :func:`cherrypy.log`, and the log manager for each
application is found at :attr:`app.log<cherrypy._cptree.Application.log>`.
If you're inside a request, the latter is reachable from
``cherrypy.request.app.log``; if you're outside a request, you'll have to obtain
a reference to the ``app``: either the return value of
:func:`tree.mount()<cherrypy._cptree.Tree.mount>` or, if you used
:func:`quickstart()<cherrypy.quickstart>` instead, via ``cherrypy.tree.apps['/']``.
By default, the global logs are named "cherrypy.error" and "cherrypy.access",
and the application logs are named "cherrypy.error.2378745" and
"cherrypy.access.2378745" (the number is the id of the Application object).
This means that the application logs "bubble up" to the site logs, so if your
application has no log handlers, the site-level handlers will still log the
messages.
Errors vs. Access
-----------------
Each log manager handles both "access" messages (one per HTTP request) and
"error" messages (everything else). Note that the "error" log is not just for
errors! The format of access messages is highly formalized, but the error log
isn't--it receives messages from a variety of sources (including full error
tracebacks, if enabled).
Custom Handlers
===============
The simple settings above work by manipulating Python's standard :mod:`logging`
module. So when you need something more complex, the full power of the standard
module is yours to exploit. You can borrow or create custom handlers, formats,
filters, and much more. Here's an example that skips the standard FileHandler
and uses a RotatingFileHandler instead:
::
from logging import DEBUG, handlers
from cherrypy import _cplogging
log = app.log
# Remove the default FileHandlers if present.
log.error_file = ""
log.access_file = ""
maxBytes = getattr(log, "rot_maxBytes", 10000000)
backupCount = getattr(log, "rot_backupCount", 1000)
# Make a new RotatingFileHandler for the error log.
fname = getattr(log, "rot_error_file", "error.log")
h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
h.setLevel(DEBUG)
h.setFormatter(_cplogging.logfmt)
log.error_log.addHandler(h)
# Make a new RotatingFileHandler for the access log.
fname = getattr(log, "rot_access_file", "access.log")
h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
h.setLevel(DEBUG)
h.setFormatter(_cplogging.logfmt)
log.access_log.addHandler(h)
The ``rot_*`` attributes are pulled straight from the application log object.
Since "log.*" config entries simply set attributes on the log object, you can
add custom attributes to your heart's content. Note that these handlers are
used *instead* of the default, simple handlers outlined above (so don't set
the "log.error_file" config entry, for example).
""" |
"""
=================
Structured Arrays
=================
Introduction
============
Numpy provides powerful capabilities to create arrays of structured datatype.
These arrays permit one to manipulate the data by named fields. A simple
example will show what is meant: ::
>>> x = np.array([(1,2.,'Hello'), (2,3.,"World")],
... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')])
>>> x
array([(1, 2.0, 'Hello'), (2, 3.0, 'World')],
dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')])
Here we have created a one-dimensional array of length 2. Each element of
this array is a structure that contains three items, a 32-bit integer, a 32-bit
float, and a string of length 10 or less. If we index this array at the second
position we get the second structure: ::
>>> x[1]
(2,3.,"World")
Conveniently, one can access any field of the array by indexing using the
string that names that field. ::
>>> y = x['bar']
>>> y
array([ 2., 3.], dtype=float32)
>>> y[:] = 2*y
>>> y
array([ 4., 6.], dtype=float32)
>>> x
array([(1, 4.0, 'Hello'), (2, 6.0, 'World')],
dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')])
In these examples, y is a simple float array consisting of the 2nd field
in the structured type. But, rather than being a copy of the data in the structured
array, it is a view, i.e., it shares exactly the same memory locations.
Thus, when we updated this array by doubling its values, the structured
array shows the corresponding values as doubled as well. Likewise, if one
changes the structured array, the field view also changes: ::
>>> x[1] = (-1,-1.,"Master")
>>> x
array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')],
dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')])
>>> y
array([ 4., -1.], dtype=float32)
Defining Structured Arrays
==========================
One defines a structured array through the dtype object. There are
**several** alternative ways to define the fields of a record. Some of
these variants provide backward compatibility with Numeric, numarray, or
another module, and should not be used except for such purposes. These
will be so noted. One specifies record structure in
one of four alternative ways, using an argument (as supplied to a dtype
function keyword or a dtype object constructor itself). This
argument must be one of the following: 1) string, 2) tuple, 3) list, or
4) dictionary. Each of these is briefly described below.
1) String argument.
In this case, the constructor expects a comma-separated list of type
specifiers, optionally with extra shape information. The fields are
given the default names 'f0', 'f1', 'f2' and so on.
The type specifiers can take 4 different forms: ::
a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f2, f4, f8, c8, c16, a<n>
(representing bytes, ints, unsigned ints, floats, complex and
fixed length strings of specified byte lengths)
b) int8,...,uint8,...,float16, float32, float64, complex64, complex128
(this time with bit sizes)
c) older Numeric/numarray type specifications (e.g. Float32).
Don't use these in new code!
d) Single character type specifiers (e.g H for unsigned short ints).
Avoid using these unless you must. Details can be found in the
Numpy book
These different styles can be mixed within the same string (but why would you
want to do that?). Furthermore, each type specifier can be prefixed
with a repetition number, or a shape. In these cases an array
element is created, i.e., an array within a record. That array
is still referred to as a single field. An example: ::
>>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64')
>>> x
array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])],
dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))])
Using strings to define the record structure precludes naming the
fields in the original definition. The names can be changed later,
however, as shown below.
2) Tuple argument: The only relevant tuple case that applies to record
structures is when a structure is mapped to an existing data type. This
is done by pairing in a tuple, the existing data type with a matching
dtype definition (using any of the variants being described here). As
an example (using a definition using a list, so see 3) for further
details): ::
>>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')]))
>>> x
array([0, 0, 0])
>>> x['r']
array([0, 0, 0], dtype=uint8)
In this case, an array is produced that looks and acts like a simple int32 array,
but also has definitions for fields that use only one byte of the int32 (a bit
like Fortran equivalencing).
3) List argument: In this case the record structure is defined with a list of
tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field
('' is permitted), 2) the type of the field, and 3) the shape (optional).
For example::
>>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
>>> x
array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])],
dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))])
4) Dictionary argument: two different forms are permitted. The first consists
of a dictionary with two required keys ('names' and 'formats'), each having an
equal sized list of values. The format list contains any type/shape specifier
allowed in other contexts. The names must be strings. There are two optional
keys: 'offsets' and 'titles'. Each must be a correspondingly matching list to
the required two where offsets contain integer offsets for each field, and
titles are objects containing metadata for each field (these do not have
to be strings), where the value of None is permitted. As an example: ::
>>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[('col1', '>i4'), ('col2', '>f4')])
The other dictionary form permitted is a dictionary of name keys with tuple
values specifying type, offset, and an optional title. ::
>>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')])
Accessing and modifying field names
===================================
The field names are an attribute of the dtype object defining the structure.
For the last example: ::
>>> x.dtype.names
('col1', 'col2')
>>> x.dtype.names = ('x', 'y')
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')])
>>> x.dtype.names = ('x', 'y', 'z') # wrong number of names
<type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2
Accessing field titles
====================================
The field titles provide a standard place to put associated info for fields.
They do not have to be strings. ::
>>> x.dtype.fields['x'][2]
'title 1'
Accessing multiple fields at once
====================================
You can access multiple fields at once using a list of field names: ::
>>> x = np.array([(1.5,2.5,(1.0,2.0)),(3.,4.,(4.,5.)),(1.,3.,(2.,6.))],
... dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
Notice that `x` is created with a list of tuples. ::
>>> x[['x','y']]
array([(1.5, 2.5), (3.0, 4.0), (1.0, 3.0)],
dtype=[('x', '<f4'), ('y', '<f4')])
>>> x[['x','value']]
array([(1.5, [[1.0, 2.0], [1.0, 2.0]]), (3.0, [[4.0, 5.0], [4.0, 5.0]]),
(1.0, [[2.0, 6.0], [2.0, 6.0]])],
dtype=[('x', '<f4'), ('value', '<f4', (2, 2))])
The fields are returned in the order they are asked for: ::
>>> x[['y','x']]
array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)],
dtype=[('y', '<f4'), ('x', '<f4')])
Filling structured arrays
=========================
Structured arrays can be filled by field or row by row. ::
>>> arr = np.zeros((5,), dtype=[('var1','f8'),('var2','f8')])
>>> arr['var1'] = np.arange(5)
If you fill it in row by row, it takes a tuple
(but not a list or array!)::
>>> arr[0] = (10,20)
>>> arr
array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)],
dtype=[('var1', '<f8'), ('var2', '<f8')])
Record Arrays
=============
For convenience, numpy provides "record arrays" which allow one to access
fields of structured arrays by attribute rather than by index. Record arrays
are structured arrays wrapped using a subclass of ndarray,
:class:`numpy.recarray`, which allows field access by attribute on the array
object, and record arrays also use a special datatype, :class:`numpy.record`,
which allows field access by attribute on the individual elements of the array.
The simplest way to create a record array is with :func:`numpy.rec.array`: ::
>>> recordarr = np.rec.array([(1,2.,'Hello'),(2,3.,"World")],
... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')])
>>> recordarr.bar
array([ 2., 3.], dtype=float32)
>>> recordarr[1:2]
rec.array([(2, 3.0, 'World')],
dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])
>>> recordarr[1:2].foo
array([2], dtype=int32)
>>> recordarr.foo[1:2]
array([2], dtype=int32)
>>> recordarr[1].baz
'World'
numpy.rec.array can convert a wide variety of arguments into record arrays,
including normal structured arrays: ::
>>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
... dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
>>> recordarr = np.rec.array(arr)
The numpy.rec module provides a number of other convenience functions for
creating record arrays, see :ref:`record array creation routines
<routines.array-creation.rec>`.
A record array representation of a structured array can be obtained using the
appropriate :ref:`view`: ::
>>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')])
>>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)),
... type=np.recarray)
For convenience, viewing an ndarray as type `np.recarray` will automatically
convert to `np.record` datatype, so the dtype can be left out of the view: ::
>>> recordarr = arr.view(np.recarray)
>>> recordarr.dtype
dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]))
To get back to a plain ndarray both the dtype and type must be reset. The
following view does so, taking into account the unusual case that the
recordarr was not a structured type: ::
>>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray)
Record array fields accessed by index or by attribute are returned as a record
array if the field has a structured type but as a plain ndarray otherwise. ::
>>> recordarr = np.rec.array([('Hello', (1,2)),("World", (3,4))],
... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])])
>>> type(recordarr.foo)
<type 'numpy.ndarray'>
>>> type(recordarr.bar)
<class 'numpy.core.records.recarray'>
Note that if a field has the same name as an ndarray attribute, the ndarray
attribute takes precedence. Such fields will be inaccessible by attribute but
may still be accessed by index.
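For example (the array and field names here are our own): ::
>>> rec = np.rec.array([(1, 2)], dtype=[('shape', 'i4'), ('bar', 'i4')])
>>> rec.shape # the ndarray attribute wins
(1,)
>>> rec['shape'] # the field is still reachable by index
array([1], dtype=int32)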
""" |
# -*- coding: utf-8 -*-
#'''
#QE pw.x Parsers
#=======================================
#Parsers for pw.x inputs and outputs
#'''
#from io import StringIO
#from exa import _pd as pd
#from exa import _np as np
#from exa.config import Config
#if Config.numba:
# from exa.jitted.indexing import idxs_from_starts_and_counts
#else:
# from exa.algorithms.indexing import idxs_from_starts_and_counts
#from atomic import Length
#from atomic.atom import Atom
#from atomic.frame import _min_frame_from_atom, Frame
#from qe.types import to_py_type, get_length
#from qe.classes import classes, Timings
#from qe.pw.classes import PWInput, PWOutput
#
#
#def parse_pw_input(path):
# '''
# '''
# kwargs = {}
# current = None
# with open(path) as f:
# for line in f:
# low = line.lower()
# split = low.split()
# block = split[0].replace('&', '')
# if block in classes.keys() and '=' not in line:
# current = block
# kwargs[current] = classes[block]()
# if len(split) > 1:
# kwargs[current]._info = ' '.join(split[1:])
# if '=' in line:
# k, v = line.split('=')
# k = k.strip()
# v = to_py_type(v)
# kwargs[current][k] = v
# elif '&' not in line and '/' not in line and current not in low:
# kwargs[current] = kwargs[current]._append(line)
# return PWInput(**kwargs)
#
#
#def parse_pw_output(path):
# '''
# Args:
# path (str): Output file path
#
# Returns:
# obj ()
# '''
# header = []
# footer = []
# scf = []
# eigs = []
# frames = []
# positions = []
# block = 'header'
# kwargs = {'header': [], 'footer': [], 'scf': [], 'eigs': [],
# 'frames': [], 'atom': [], 'forces': [], 'meta': []}
# with open(path) as f:
# for line in f:
# if 'Self-consistent Calculation' in line:
# block = 'scf'
# elif 'End of self-consistent calculation' in line:
# block = 'eigs'
# elif 'highest occupied level' in line or 'Fermi energy' in line:
# block = 'frames'
# elif 'Forces acting on atoms' in line:
# block = 'forces'
# elif 'ATOMIC_POSITIONS' in line:
# block = 'atom'
# elif 'Writing output' in line:
# block = 'meta'
# elif 'init_run' in line:
# block = 'footer'
# kwargs[block].append(line)
# timings = _parse_footer(kwargs['footer'])
# atom = _parse_atom(''.join(kwargs['atom']), ''.join(kwargs['forces']))
# frame = _parse_frame(kwargs['frames'], atom)
# out = PWOutput(timings=timings, atom=atom, frame=frame)
# return out, kwargs
#
#
#def _parse_footer(footer):
# '''
# '''
# data = {'catagory': [], 'called_by': [], 'name': [], 'cpu': [], 'wall': [], 'ncalls': []}
# called_by = ''
# catagory = 'summary'
# for line in footer:
# if 'Called by' in line:
# called_by = line.replace('Called by ', '').replace(':\n', '')
# if called_by != 'init_run':
# catagory = 'individual'
# elif 'routines' in line:
# called_by = line.split()[0].lower()
# catagory = 'individual'
# elif 'calls' in line:
# ls = line.split()
# name = ls[0]
# cpu = float(ls[2].replace('s', ''))
# wall = float(ls[4].replace('s', ''))
# ncalls = int(ls[7])
# data['catagory'].append(catagory)
# data['called_by'].append(called_by)
# data['name'].append(name)
# data['cpu'].append(cpu)
# data['wall'].append(wall)
# data['ncalls'].append(ncalls)
# df = Timings.from_dict(data)
# df.set_index(['catagory', 'name'], inplace=True)
# df['cpu'] = df['cpu'].map(lambda x: pd.Timedelta(seconds=x.astype(np.float64)))
# df['wall'] = df['wall'].map(lambda x: pd.Timedelta(seconds=x.astype(np.float64)))
# df.sort_index(inplace=True)
# return df
#
#
#def _parse_atom(atoms, forces, label=True):
# '''
# '''
# forces = StringIO(forces)
# forces = pd.read_csv(forces, delim_whitespace=True, header=None,
# names=['n0', 'n1', 'n2', 'n3', 'n4', 'n5', 'fx', 'fy', 'fz'])
# forces = forces.ix[(forces['n0'] == 'atom'), ['fx', 'fy', 'fz']].reset_index(drop=True)
# atom = StringIO(atoms)
# atom = pd.read_csv(atom, delim_whitespace=True, header=None,
# names=['symbol', 'x', 'y', 'z'])
# frames = atom[atom['symbol'] == 'ATOMIC_POSITIONS']
# unit = get_length(frames.iloc[0, 1])
# starts = frames.index.values + 1
# counts = (frames.index[1:] - frames.index[:-1]).values - 1
# counts = np.append(counts, atom.index.values[-1] - frames.index[-1] - 1)
# atom.dropna(inplace=True)
# atom.reset_index(inplace=True, drop=True)
# frame, lbl, indices = idxs_from_starts_and_counts(starts, counts)
# atom['symbol'] = atom['symbol'].astype('category')
# atom['frame'] = frame
# atom['label'] = lbl
# atom['fx'] = forces['fx']
# atom['fy'] = forces['fy']
# atom['fz'] = forces['fz']
# atom[['x', 'y', 'z', 'fx', 'fy', 'fz']] = atom[['x', 'y', 'z', 'fx', 'fy', 'fz']].astype(np.float64)
# conv = Length[unit, 'au']
# atom['x'] *= conv
# atom['y'] *= conv
# atom['z'] *= conv
# atom['fx'] *= conv
# atom['fy'] *= conv
# atom['fz'] *= conv
# atom.index.names = ['atom']
# return Atom(atom)
#
#
#def _parse_frame(data, atom):
# '''
# '''
# df = _min_frame_from_atom(atom)
# rows = {'energy': [], 'one_electron_energy': [], 'hartree_energy': [], 'xc_energy': [],
# 'ewald': [], 'smearing': []}
# for line in data:
# split = line.split()
# if 'total energy' in line and '!' in line:
# rows['energy'].append(split[4])
# elif 'one-electron' in line:
# rows['one_electron_energy'].append(split[3])
# elif 'hartree contribution' in line:
# rows['hartree_energy'].append(split[3])
# elif 'xc contribution' in line:
# rows['xc_energy'].append(split[3])
# elif 'ewald contribution' in line:
# rows['ewald'].append(split[3])
# elif 'smearing contrib' in line:
# rows['smearing'].append(split[4])
# frame = pd.DataFrame.from_dict(rows)
# frame = frame.astype(np.float64)
# for col in df.columns:
# frame[col] = df[col]
# return Frame(frame)
#
|
"""
==============
Array indexing
==============
Array indexing refers to any use of the square brackets ([]) to index
array values. There are many options to indexing, which give numpy
indexing great power, but with power comes some complexity and the
potential for confusion. This section is just an overview of the
various options and issues related to indexing. Aside from single
element indexing, the details on most of these options are to be
found in related sections.
Assignment vs referencing
=========================
Most of the following examples show the use of indexing when
referencing data in an array. The examples work just as well
when assigning to an array. See the section at the end for
specific examples and explanations on how assignments work.
Single element indexing
=======================
Single element indexing for a 1-D array is what one expects. It works
exactly like indexing for other standard Python sequences. It is 0-based,
and accepts negative indices for indexing from the end of the array. ::
>>> x = np.arange(10)
>>> x[2]
2
>>> x[-2]
8
Unlike lists and tuples, numpy arrays support multidimensional indexing
for multidimensional arrays. That means that it is not necessary to
separate each dimension's index into its own set of square brackets. ::
>>> x.shape = (2,5) # now x is 2-dimensional
>>> x[1,3]
8
>>> x[1,-1]
9
Note that if one indexes a multidimensional array with fewer indices
than dimensions, one gets a subdimensional array. For example: ::
>>> x[0]
array([0, 1, 2, 3, 4])
That is, each index specified selects the array corresponding to the
rest of the dimensions selected. In the above example, choosing 0
means that the remaining dimension of length 5 is left unspecified,
and what is returned is an array of that dimensionality and size.
It must be noted that the returned array is not a copy of the original,
but points to the same values in memory as does the original array.
In this case, the 1-D array at the first position (0) is returned.
So using a single index on the returned array results in a single
element being returned. That is: ::
>>> x[0][2]
2
Note that ``x[0,2] == x[0][2]``, though the second case is less
efficient: a new temporary array is created after the first index
that is subsequently indexed by 2.
Note to those used to IDL or Fortran memory order as it relates to
indexing. Numpy uses C-order indexing. That means that the last
index usually represents the most rapidly changing memory location,
unlike Fortran or IDL, where the first index represents the most
rapidly changing location in memory. This difference represents a
great potential for confusion.
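A small example makes the point (the array here is illustrative): ::
>>> a = np.arange(6).reshape(2,3)
>>> a
array([[0, 1, 2],
       [3, 4, 5]])
>>> a.ravel()  # elements in memory order for a C-ordered array
array([0, 1, 2, 3, 4, 5])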
Other indexing options
======================
It is possible to slice and stride arrays to extract arrays of the
same number of dimensions, but of different sizes than the original.
The slicing and striding works exactly the same way it does for lists
and tuples except that they can be applied to multiple dimensions as
well. A few examples illustrate this best: ::
>>> x = np.arange(10)
>>> x[2:5]
array([2, 3, 4])
>>> x[:-7]
array([0, 1, 2])
>>> x[1:7:2]
array([1, 3, 5])
>>> y = np.arange(35).reshape(5,7)
>>> y[1:5:2,::3]
array([[ 7, 10, 13],
[21, 24, 27]])
Note that slices of arrays do not copy the internal array data but
only produce new views of the original data.
It is possible to index arrays with other arrays for the purposes of
selecting lists of values out of arrays into new arrays. There are
two different ways of accomplishing this. One uses one or more arrays
of index values. The other involves giving a boolean array of the proper
shape to indicate the values to be selected. Index arrays are a very
powerful tool that allow one to avoid looping over individual elements in
arrays and thus greatly improve performance.
It is possible to use special features to effectively increase the
number of dimensions in an array through indexing so the resulting
array acquires the shape needed for use in an expression or with a
specific function.
Index arrays
============
Numpy arrays may be indexed with other arrays (or any other sequence-
like object that can be converted to an array, such as lists, with the
exception of tuples; see the end of this document for why this is). The
use of index arrays ranges from simple, straightforward cases to
complex, hard-to-understand cases. For all cases of index arrays, what
is returned is a copy of the original data, not a view as one gets for
slices.
Index arrays must be of integer type. Each value in the index array
selects the value at that position in the array being indexed. To illustrate: ::
>>> x = np.arange(10,1,-1)
>>> x
array([10, 9, 8, 7, 6, 5, 4, 3, 2])
>>> x[np.array([3, 3, 1, 8])]
array([7, 7, 9, 2])
The index array consisting of the values 3, 3, 1 and 8 correspondingly
creates an array of length 4 (the same as the index array) where each
index is replaced by the value of the indexed array at that position.
Negative values are permitted and work as they do with single indices
or slices: ::
>>> x[np.array([3,3,-3,8])]
array([7, 7, 4, 2])
It is an error to have index values out of bounds: ::
>>> x[np.array([3, 3, 20, 8])]
<type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9
Generally speaking, what is returned when index arrays are used is
an array with the same shape as the index array, but with the type
and values of the array being indexed. As an example, we can use a
multidimensional index array instead: ::
>>> x[np.array([[1,1],[2,3]])]
array([[9, 9],
[8, 7]])
Indexing Multi-dimensional arrays
=================================
Things become more complex when multidimensional arrays are indexed,
particularly with multidimensional index arrays. These tend to be
more unusual uses, but they are permitted, and they are useful for some
problems. We'll start with the simplest multidimensional case (using
the array y from the previous examples): ::
>>> y[np.array([0,2,4]), np.array([0,1,2])]
array([ 0, 15, 30])
In this case, if the index arrays have a matching shape, and there is
an index array for each dimension of the array being indexed, the
resultant array has the same shape as the index arrays, and the values
correspond to the index set for each position in the index arrays. In
this example, the first index value is 0 for both index arrays, and
thus the first value of the resultant array is y[0,0]. The next value
is y[2,1], and the last is y[4,2].
If the index arrays do not have the same shape, there is an attempt to
broadcast them to the same shape. If they cannot be broadcast to the
same shape, an exception is raised: ::
>>> y[np.array([0,2,4]), np.array([0,1])]
<type 'exceptions.ValueError'>: shape mismatch: objects cannot be
broadcast to a single shape
The broadcasting mechanism permits index arrays to be combined with
scalars for other indices. The effect is that the scalar value is used
for all the corresponding values of the index arrays: ::
>>> y[np.array([0,2,4]), 1]
array([ 1, 15, 29])
Jumping to the next level of complexity, it is possible to only
partially index an array with index arrays. It takes a bit of thought
to understand what happens in such cases. For example if we just use
one index array with y: ::
>>> y[np.array([0,2,4])]
array([[ 0, 1, 2, 3, 4, 5, 6],
[14, 15, 16, 17, 18, 19, 20],
[28, 29, 30, 31, 32, 33, 34]])
What results is the construction of a new array where each value of
the index array selects one row from the array being indexed and the
resultant array has the resulting shape (number of index elements,
size of row).
An example of where this may be useful is for a color lookup table
where we want to map the values of an image into RGB triples for
display. The lookup table could have a shape (nlookup, 3). Indexing
such an array with an image with shape (ny, nx) with dtype=np.uint8
(or any integer type so long as values are within the bounds of the
lookup table) will result in an array of shape (ny, nx, 3) where a
triple of RGB values is associated with each pixel location.
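A minimal sketch of such a lookup (the palette and image values are
illustrative): ::
>>> palette = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255]])
>>> image = np.array([[0, 1], [2, 3]], dtype=np.uint8)
>>> palette[image].shape
(2, 2, 3)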
In general, the shape of the resultant array will be the concatenation
of the shape of the index array (or the shape that all the index arrays
were broadcast to) with the shape of any unused dimensions (those not
indexed) in the array being indexed.
Boolean or "mask" index arrays
==============================
Boolean arrays used as indices are treated in a different manner
entirely than index arrays. Boolean arrays must be of the same shape
as the initial dimensions of the array being indexed. In the
most straightforward case, the boolean array has the same shape: ::
>>> b = y>20
>>> y[b]
array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])
The result is a 1-D array containing all the elements in the indexed
array corresponding to all the true elements in the boolean array. As
with index arrays, what is returned is a copy of the data, not a view
as one gets with slices.
The result will be multidimensional if y has more dimensions than b.
For example: ::
>>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
array([False, False, False, True, True], dtype=bool)
>>> y[b[:,5]]
array([[21, 22, 23, 24, 25, 26, 27],
[28, 29, 30, 31, 32, 33, 34]])
Here the 4th and 5th rows are selected from the indexed array and
combined to make a 2-D array.
In general, when the boolean array has fewer dimensions than the array
being indexed, this is equivalent to y[b, ...], which means
y is indexed by b followed by as many : as are needed to fill
out the rank of y.
Thus the shape of the result is one dimension containing the number
of True elements of the boolean array, followed by the remaining
dimensions of the array being indexed.
For example, using a 2-D boolean array of shape (2,3)
with four True elements to select rows from a 3-D array of shape
(2,3,5) results in a 2-D result of shape (4,5): ::
>>> x = np.arange(30).reshape(2,3,5)
>>> x
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]],
[[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]]])
>>> b = np.array([[True, True, False], [False, True, True]])
>>> x[b]
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]])
For further details, consult the numpy reference documentation on array indexing.
Combining index arrays with slices
==================================
Index arrays may be combined with slices. For example: ::
>>> y[np.array([0,2,4]),1:3]
array([[ 1, 2],
[15, 16],
[29, 30]])
In effect, the slice is converted to an index array
np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array
to produce a resultant array of shape (3,2).
Likewise, slicing can be combined with broadcasted boolean indices: ::
>>> y[b[:,5],1:3]
array([[22, 23],
[29, 30]])
Structural indexing tools
=========================
To facilitate easy matching of array shapes with expressions and in
assignments, the np.newaxis object can be used within array indices
to add new dimensions with a size of 1. For example: ::
>>> y.shape
(5, 7)
>>> y[:,np.newaxis,:].shape
(5, 1, 7)
Note that there are no new elements in the array, just that the
dimensionality is increased. This can be handy to combine two
arrays in a way that otherwise would require explicit reshaping
operations. For example: ::
>>> x = np.arange(5)
>>> x[:,np.newaxis] + x[np.newaxis,:]
array([[0, 1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7],
[4, 5, 6, 7, 8]])
The ellipsis syntax may be used to indicate selecting in full any
remaining unspecified dimensions. For example: ::
>>> z = np.arange(81).reshape(3,3,3,3)
>>> z[1,...,2]
array([[29, 32, 35],
[38, 41, 44],
[47, 50, 53]])
This is equivalent to: ::
>>> z[1,:,:,2]
array([[29, 32, 35],
[38, 41, 44],
[47, 50, 53]])
Assigning values to indexed arrays
==================================
As mentioned, one can select a subset of an array to assign to using
a single index, slices, and index and mask arrays. The value being
assigned to the indexed array must be shape consistent (the same shape
or broadcastable to the shape the index produces). For example, it is
permitted to assign a constant to a slice: ::
>>> x = np.arange(10)
>>> x[2:7] = 1
or an array of the right size: ::
>>> x[2:7] = np.arange(5)
Note that assignments may silently truncate when assigning higher
types to lower types (like floats to ints), or even raise
exceptions (assigning complex to floats or ints): ::
>>> x[1] = 1.2
>>> x[1]
1
>>> x[1] = 1.2j
<type 'exceptions.TypeError'>: can't convert complex to long; use
long(abs(z))
Unlike some of the references (such as array and mask indices)
assignments are always made to the original data in the array
(indeed, nothing else would make sense!). Note though, that some
actions may not work as one may naively expect. This particular
example is often surprising to people: ::
>>> x = np.arange(0, 50, 10)
>>> x
array([ 0, 10, 20, 30, 40])
>>> x[np.array([1, 1, 3, 1])] += 1
>>> x
array([ 0, 11, 20, 31, 40])
People might expect that the 1st location would be incremented by 3.
In fact, it is only incremented by 1. The reason is that
a new array is extracted from the original (as a temporary) containing
the values at 1, 1, 3, 1, then the value 1 is added to the temporary,
and then the temporary is assigned back to the original array. Thus
the value of the array at x[1]+1 is assigned to x[1] three times,
rather than x[1] being incremented 3 times.
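When the accumulation over repeated indices is what is wanted, the
unbuffered ``np.add.at`` function (available in newer NumPy releases)
performs it (a brief sketch): ::
>>> x = np.arange(0, 50, 10)
>>> np.add.at(x, np.array([1, 1, 3, 1]), 1)
>>> x
array([ 0, 13, 20, 31, 40])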
Dealing with variable numbers of indices within programs
========================================================
The index syntax is very powerful but limiting when dealing with
a variable number of indices. For example, if you want to write
a function that can handle arguments with various numbers of
dimensions without having to write special case code for each
number of possible dimensions, how can that be done? If one
supplies to the index a tuple, the tuple will be interpreted
as a list of indices. For example (using the previous definition
for the array z): ::
>>> indices = (1,1,1,1)
>>> z[indices]
40
So one can use code to construct tuples of any number of indices
and then use these within an index.
Slices can be specified within programs by using the slice() function
in Python. For example: ::
>>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
>>> z[indices]
array([39, 40])
Likewise, ellipsis can be specified by code by using the Ellipsis
object: ::
>>> indices = (1, Ellipsis, 1) # same as [1,...,1]
>>> z[indices]
array([[28, 31, 34],
[37, 40, 43],
[46, 49, 52]])
For this reason it is possible to use the output from the np.where()
function directly as an index since it always returns a tuple of index
arrays.
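For example (the array here is illustrative): ::
>>> a = np.arange(12).reshape(3,4)
>>> a[np.where(a > 8)]
array([ 9, 10, 11])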
Because of the special treatment of tuples, they are not automatically
converted to an array as a list would be. As an example: ::
>>> z[[1,1,1,1]] # produces a large array
array([[[[27, 28, 29],
[30, 31, 32], ...
>>> z[(1,1,1,1)] # returns a single value
40
""" |
"""
=============================
Byteswapping and byte order
=============================
Introduction to byte ordering and ndarrays
==========================================
The ``ndarray`` is an object that provides a Python array interface to data
in memory.
It often happens that the memory that you want to view with an array is
not of the same byte ordering as the computer on which you are running
Python.
For example, I might be working on a computer with a little-endian CPU -
such as an Intel Pentium, but I have loaded some data from a file
written by a computer that is big-endian. Let's say I have loaded 4
bytes from a file written by a Sun (big-endian) computer. I know that
these 4 bytes represent two 16-bit integers. On a big-endian machine, a
two-byte integer is stored with the Most Significant Byte (MSB) first,
and then the Least Significant Byte (LSB). Thus the bytes are, in memory order:
#. MSB integer 1
#. LSB integer 1
#. MSB integer 2
#. LSB integer 2
Let's say the two integers were in fact 1 and 770. Because 770 = 256 *
3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2.
The bytes I have loaded from the file would have these contents:
>>> big_end_str = chr(0) + chr(1) + chr(3) + chr(2)
>>> big_end_str
'\\x00\\x01\\x03\\x02'
We might want to use an ``ndarray`` to access these integers. In that
case, we can create an array around this memory, and tell numpy that
there are two integers, and that they are 16 bit and big-endian:
>>> import numpy as np
>>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_str)
>>> big_end_arr[0]
1
>>> big_end_arr[1]
770
Note the array ``dtype`` above of ``>i2``. The ``>`` means 'big-endian'
(``<`` is little-endian) and ``i2`` means 'signed 2-byte integer'. For
example, if our data represented a single unsigned 4-byte little-endian
integer, the dtype string would be ``<u4``.
In fact, why don't we try that?
>>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_str)
>>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3
True
Returning to our ``big_end_arr`` - in this case our underlying data is
big-endian (data endianness) and we've set the dtype to match (the dtype
is also big-endian). However, sometimes you need to flip these around.
.. warning::
Scalars currently do not include byte order information, so extracting
a scalar from an array will return an integer in native byte order.
Hence:
>>> big_end_arr[0].dtype.byteorder == little_end_u4[0].dtype.byteorder
True
Changing byte ordering
======================
As you can imagine from the introduction, there are two ways you can
affect the relationship between the byte ordering of the array and the
underlying memory it is looking at:
* Change the byte-ordering information in the array dtype so that it
interprets the underlying data as being in a different byte order.
This is the role of ``arr.newbyteorder()``
* Change the byte-ordering of the underlying data, leaving the dtype
interpretation as it was. This is what ``arr.byteswap()`` does.
The common situations in which you need to change byte ordering are:
#. Your data and dtype endianness don't match, and you want to change
   the dtype so that it matches the data.
#. Your data and dtype endianness don't match, and you want to swap the
   data so that they match the dtype.
#. Your data and dtype endianness match, but you want the data swapped
   and the dtype to reflect this.
Data and dtype endianness don't match, change dtype to match data
-----------------------------------------------------------------
We make something where they don't match:
>>> wrong_end_dtype_arr = np.ndarray(shape=(2,),dtype='<i2', buffer=big_end_str)
>>> wrong_end_dtype_arr[0]
256
The obvious fix for this situation is to change the dtype so it gives
the correct endianness:
>>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder()
>>> fixed_end_dtype_arr[0]
1
Note that the array has not changed in memory:
>>> fixed_end_dtype_arr.tobytes() == big_end_str
True
Data and type endianness don't match, change data to match dtype
----------------------------------------------------------------
You might want to do this if you need the data in memory to be a certain
ordering. For example you might be writing the memory out to a file
that needs a certain byte ordering.
>>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap()
>>> fixed_end_mem_arr[0]
1
Now the array *has* changed in memory:
>>> fixed_end_mem_arr.tobytes() == big_end_str
False
Data and dtype endianness match, swap data and dtype
----------------------------------------------------
You may have a correctly specified array dtype, but you need the array
to have the opposite byte order in memory, and you want the dtype to
match so the array values make sense. In this case you just do both of
the previous operations:
>>> swapped_end_arr = big_end_arr.byteswap().newbyteorder()
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_str
False
An easier way of casting the data to a specific dtype and byte ordering
can be achieved with the ndarray astype method:
>>> swapped_end_arr = big_end_arr.astype('<i2')
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_str
False
""" |
# Copyright 2010, NAME Colorado State University Research
# Foundation
#
# Colorado State University Software Evaluation License Agreement
#
# This license agreement ("License"), effective today, is made by and between
# you (hereinafter referred to as the "Licensee") and the Board of Governors of
# the Colorado State University System acting by and through Colorado State
# University, an institution of higher education of the State of Colorado,
# located at Fort Collins, Colorado, 80523-2002 ("CSU"), and concerns certain
# software described as "Correlation Filters for Detection, Recognition, and
# Registration," a system of software programs for advanced video signal
# processing and analysis.
#
# 1. General. A non-exclusive, nontransferable, perpetual license is granted
# to the Licensee to install and use the Software for academic, non-profit,
# or government-sponsored research purposes. Use of the Software under this
# License is restricted to non-commercial purposes. Commercial use of the
# Software requires a separately executed written license agreement.
#
# 2. Permitted Use and Restrictions. Licensee agrees that it will use the
# Software, and any modifications, improvements, or derivatives to the
# Software that the Licensee may create (collectively, "Improvements")
# solely for internal, non-commercial purposes and shall not distribute,
# transfer, deploy, or externally expose the Software or Improvements to any
# person or third parties without prior written permission from CSU. The
# term "non-commercial," as used in this License, means academic or other
# scholarly research which (a) is not undertaken for profit, or (b) is not
# intended to produce works, services, or data for commercial use, or (c) is
# neither conducted, nor funded, by a person or an entity engaged in the
# commercial use, application or exploitation of works similar to the
# Software.
#
# 3. Ownership and Assignment of Copyright. The Licensee acknowledges that
# CSU has the right to offer this copyright in the Software and associated
# documentation only to the extent described herein, and the offered
# Software and associated documentation are the property of CSU. The
# Licensee agrees that any Improvements made by Licensee shall be subject
# to the same terms and conditions as the Software. Licensee agrees not to
# assert a claim of infringement in Licensee copyrights in Improvements in
# the event CSU prepares substantially similar modifications or derivative
# works. The Licensee agrees to use his/her reasonable best efforts to
# protect the contents of the Software and to prevent unauthorized
# disclosure by its agents, officers, employees, and consultants. If the
# Licensee receives a request to furnish all or any portion of the Software
# to a third party, Licensee will not fulfill such a request but will refer
# the third party to the CSU Computer Science website
# http://www.cs.colostate.edu/~vision/ocof.html so that the third party's
# use of this Software will be subject to the terms and conditions of this
# License. Notwithstanding the above, Licensee may disclose any
# Improvements that do not involve disclosure of the Software.
#
# 4. Copies. The Licensee may make a reasonable number of copies of the
# Software for the purposes of backup, maintenance of the Software or the
# development of derivative works based on the Software. These additional
# copies shall carry the copyright notice and shall be controlled by this
# License, and will be destroyed along with the original by the Licensee
# upon termination of the License.
#
# 5. Acknowledgement. Licensee agrees that any publication of results obtained
# with the Software will acknowledge its use by an appropriate citation as
# specified in the documentation.
#
# 6. Acknowledgment. CSU acknowledges that the Software executes patent pending
# Algorithms for the purpose of non-commercial uses. Licensee acknowledges
# that this Agreement does not constitute a commercial license to the
# Algorithms. Licensee may apply for a commercial license to the Algorithms
# from Colorado State University Research Foundation by visiting
# http://www.csuventures.org.
#
# 7. Disclaimer of Warranties and Limitation of Liability. THE SOFTWARE IS
# PROVIDED "AS IS," WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. CSU MAKES NO
# REPRESENTATION OR WARRANTY THAT THE SOFTWARE WILL NOT INFRINGE ANY PATENT
# OR OTHER PROPRIETARY RIGHT. IN NO EVENT SHALL CSU BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE.
#
# 8. Termination. This License is effective until terminated by either party.
# Your rights under this License will terminate automatically without notice
# from CSU if you fail to comply with any term(s) of this License. Upon
# termination of this License, you shall immediately discontinue all use of
# the Software and destroy the original and all copies, full or partial, of
# the Software, including any modifications or derivative works, and
# associated documentation.
#
# 9. Governing Law and General Provisions. This License shall be governed by
# the laws of the State of Colorado, excluding the application of its
# conflicts of law rules. This License shall not be governed by the United
# Nations Convention on Contracts for the International Sale of Goods, the
# application of which is expressly excluded. If any provisions of this
# License are held invalid or unenforceable for any reason, the remaining
# provisions shall remain in full force and effect. This License is binding
# upon any heirs and assigns of the Licensee. The License granted to
# Licensee hereunder may not be assigned or transferred to any other person
# or entity without the express consent of the Regents. This License
# constitutes the entire agreement between the parties with respect to the
# use of the Software licensed hereunder and supersedes all other previous
# or contemporaneous agreements or understandings between the parties,
# whether verbal or written, concerning the subject matter.
|
"""Generic socket server classes.
This module tries to capture the various aspects of defining a server:
For socket-based servers:
- address family:
- AF_INET{,6}: IP (Internet Protocol) sockets (default)
- AF_UNIX: Unix domain sockets
- others, e.g. AF_DECNET are conceivable (see <socket.h>
- socket type:
- SOCK_STREAM (reliable stream, e.g. TCP)
- SOCK_DGRAM (datagrams, e.g. UDP)
For request-based servers (including socket-based):
- client address verification before further looking at the request
(This is actually a hook for any processing that needs to look
at the request before anything else, e.g. logging)
- how to handle multiple requests:
- synchronous (one request is handled at a time)
- forking (each request is handled by a new process)
- threading (each request is handled by a new thread)
The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server. This is bad class design, but it
saves some typing. (There's also the issue that a deep class hierarchy
slows down method lookups.)
There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:
+------------+
| BaseServer |
+------------+
|
v
+-----------+ +------------------+
| TCPServer |------->| UnixStreamServer |
+-----------+ +------------------+
|
v
+-----------+ +--------------------+
| UDPServer |------->| UnixDatagramServer |
+-----------+ +--------------------+
Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.
Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes. For
instance, a threading UDP server class is created as follows:
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.
To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method. You can then run
various versions of the service by combining one of the server classes
with your request handler class.
The request handler class must be different for datagram or stream
services. This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
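For instance, a minimal synchronous TCP echo service might look like this
(a sketch; the module is named SocketServer in Python 2 and socketserver
in Python 3):
    import SocketServer
    class EchoHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            # rfile and wfile are file-like objects wrapping the socket
            for line in self.rfile:
                self.wfile.write(line)
    server = SocketServer.TCPServer(('localhost', 9999), EchoHandler)
    server.serve_forever()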
Of course, you still have to use your head!
For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child). In this case,
you can use a threading server, but you will probably have to use
locks to prevent two requests that arrive nearly simultaneously from
applying conflicting changes to the server state.
On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested. Here a threading or forking
server is appropriate.
In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data. This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.
Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request). This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
and encryption schemes
- Standard framework for select-based multiplexing
XXX Open problems:
- What to do with out-of-band data?
BaseServer:
- split generic "request" functionality out into BaseServer class.
Copyright (C) 2000 NAME <EMAIL>
Example: read entries from a SQL database (requires overriding
get_request() to return a table entry from the database). Each
entry is processed by a RequestHandlerClass.
""" |
"""
Define a simple format for saving numpy arrays to disk with the full
information about them.
The ``.npy`` format is the standard binary file format in NumPy for
persisting a *single* arbitrary NumPy array on disk. The format stores all
of the shape and dtype information necessary to reconstruct the array
correctly even on another machine with a different architecture.
The format is designed to be as simple as possible while achieving
its limited goals.
The ``.npz`` format is the standard format for persisting *multiple* NumPy
arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy``
files, one for each array.
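For example, a round trip through both formats with ``numpy.save``,
``numpy.savez`` and ``numpy.load``::
    >>> import numpy as np
    >>> a = np.arange(6).reshape(2, 3)
    >>> np.save('a.npy', a)               # one array -> .npy
    >>> np.load('a.npy')
    array([[0, 1, 2],
           [3, 4, 5]])
    >>> np.savez('ab.npz', a=a, b=a * 2)  # several arrays -> .npz
    >>> np.load('ab.npz')['b']
    array([[ 0,  2,  4],
           [ 6,  8, 10]])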
Capabilities
------------
- Can represent all NumPy arrays including nested record arrays and
object arrays.
- Represents the data in its native binary form.
- Supports Fortran-contiguous arrays directly.
- Stores all of the necessary information to reconstruct the array
including shape and dtype on a machine of a different
architecture. Both little-endian and big-endian arrays are
supported, and a file with little-endian numbers will yield
a little-endian array on any machine reading the file. The
types are described in terms of their actual sizes. For example,
if a machine with a 64-bit C "long int" writes out an array with
"long ints", a reading machine with 32-bit C "long ints" will yield
an array with 64-bit integers.
- Is straightforward to reverse engineer. Datasets often live longer than
the programs that created them. A competent developer should be
able to create a solution in their preferred programming language to
  read most ``.npy`` files that they have been given without much
documentation.
- Allows memory-mapping of the data. See `open_memmap`.
- Can be read from a filelike stream object instead of an actual file.
- Stores object arrays, i.e. arrays containing elements that are arbitrary
  Python objects. Files with object arrays are not mmapable, but they
  can be read from and written to disk.
Limitations
-----------
- Arbitrary subclasses of numpy.ndarray are not completely preserved.
Subclasses will be accepted for writing, but only the array data will
be written out. A regular numpy.ndarray object will be created
upon reading the file.
.. warning::
Due to limitations in the interpretation of structured dtypes, dtypes
with fields with empty names will have the names replaced by 'f0', 'f1',
etc. Such arrays will not round-trip through the format entirely
accurately. The data is intact; only the field names will differ. We are
working on a fix for this. This fix will not require a change in the
file format. The arrays with such structures can still be saved and
restored, and the correct dtype may be restored by using the
``loadedarray.view(correct_dtype)`` method.
File extensions
---------------
We recommend using the ``.npy`` and ``.npz`` extensions for files saved
in this format. This is by no means a requirement; applications may wish
to use these file formats but use an extension specific to the
application. In the absence of an obvious alternative, however,
we suggest using ``.npy`` and ``.npz``.
Version numbering
-----------------
The version numbering of these formats is independent of NumPy version
numbering. If the format is upgraded, the code in `numpy.io` will still
be able to read and write Version 1.0 files.
Format Version 1.0
------------------
The first 6 bytes are a magic string: exactly ``\\x93NUMPY``.
The next 1 byte is an unsigned byte: the major version number of the file
format, e.g. ``\\x01``.
The next 1 byte is an unsigned byte: the minor version number of the file
format, e.g. ``\\x00``. Note: the version of the file format is not tied
to the version of the numpy package.
The next 2 bytes form a little-endian unsigned short int: the length of
the header data HEADER_LEN.
The next HEADER_LEN bytes form the header data describing the array's
format. It is an ASCII string which contains a Python literal expression
of a dictionary. It is terminated by a newline (``\\n``) and padded with
spaces (``\\x20``) to make the total length of
``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment
purposes.
The dictionary contains three keys:
"descr" : dtype.descr
An object that can be passed as an argument to the `numpy.dtype`
constructor to create the array's dtype.
"fortran_order" : bool
Whether the array data is Fortran-contiguous or not. Since
Fortran-contiguous arrays are a common form of non-C-contiguity,
we allow them to be written directly to disk for efficiency.
"shape" : tuple of int
The shape of the array.
For repeatability and readability, the dictionary keys are sorted in
alphabetic order. This is for convenience only. A writer SHOULD implement
this if possible. A reader MUST NOT depend on this.
Following the header comes the array data. If the dtype contains Python
objects (i.e. ``dtype.hasobject is True``), then the data is a Python
pickle of the array. Otherwise the data is the contiguous (either C-
or Fortran-, depending on ``fortran_order``) bytes of the array.
Consumers can figure out the number of bytes by multiplying the number
of elements given by the shape (noting that ``shape=()`` means there is
1 element) by ``dtype.itemsize``.
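As an illustration, the version 1.0 header can be parsed by hand with only
the standard library (a minimal sketch, without error checking; the header
contents shown are just one possibility)::
    import ast
    import struct
    with open('a.npy', 'rb') as f:
        assert f.read(6) == b'\\x93NUMPY'   # magic string
        major, minor = f.read(2)            # file format version (Python 3)
        header_len, = struct.unpack('<H', f.read(2))
        header = ast.literal_eval(f.read(header_len).decode('latin1'))
    # e.g. header == {'descr': '<i8', 'fortran_order': False, 'shape': (2, 3)}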
Format Version 2.0
------------------
The version 1.0 format only allowed the array header to have a total size of
65535 bytes. This can be exceeded by structured arrays with a large number of
columns. The version 2.0 format extends the header size to 4 GiB.
`numpy.save` will automatically save in 2.0 format if the data requires it,
else it will always use the more compatible 1.0 format.
The description of the fourth element of the header therefore has become:
"The next 4 bytes form a little-endian unsigned int: the length of the header
data HEADER_LEN."
Notes
-----
The ``.npy`` format, including reasons for creating it and a comparison of
alternatives, is described fully in the "npy-format" NEP.
""" |
#!/usr/bin/env python
# ***** BEGIN LICENSE BLOCK *****
# Version: MPL 1.1/GPL 2.0/LGPL 2.1
#
# The contents of this file are subject to the Mozilla Public License Version
# 1.1 (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
# http://www.mozilla.org/MPL/
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
# for the specific language governing rights and limitations under the
# License.
#
# The Original Code is font utility code.
#
# The Initial Developer of the Original Code is Mozilla Corporation.
# Portions created by the Initial Developer are Copyright (C) 2009
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# NAME <EMAIL>
#
# Alternatively, the contents of this file may be used under the terms of
# either the GNU General Public License Version 2 or later (the "GPL"), or
# the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
# in which case the provisions of the GPL or the LGPL are applicable instead
# of those above. If you wish to allow use of your version of this file only
# under the terms of either the GPL or the LGPL, and not to allow others to
# use your version of this file under the terms of the MPL, indicate your
# decision by deleting the provisions above and replace them with the notice
# and other provisions required by the GPL or the LGPL. If you do not delete
# the provisions above, a recipient may use your version of this file under
# the terms of any one of the MPL, the GPL or the LGPL.
#
# ***** END LICENSE BLOCK *****
# eotlitetool.py - create EOT version of OpenType font for use with IE
#
# Usage: eotlitetool.py [-o output-filename] font1 [font2 ...]
#
# OpenType file structure
# http://www.microsoft.com/typography/otspec/otff.htm
#
# Types:
#
# BYTE 8-bit unsigned integer.
# CHAR 8-bit signed integer.
# USHORT 16-bit unsigned integer.
# SHORT 16-bit signed integer.
# ULONG 32-bit unsigned integer.
# Fixed 32-bit signed fixed-point number (16.16)
# LONGDATETIME Date represented in number of seconds since 12:00 midnight, January 1, 1904. The value is represented as a signed 64-bit integer.
#
# SFNT Header
#
# Fixed sfnt version // 0x00010000 for version 1.0.
# USHORT numTables // Number of tables.
# USHORT searchRange // (Maximum power of 2 <= numTables) x 16.
# USHORT entrySelector // Log2(maximum power of 2 <= numTables).
# USHORT rangeShift // NumTables x 16-searchRange.
#
# Table Directory
#
# ULONG tag // 4-byte identifier.
# ULONG checkSum // CheckSum for this table.
# ULONG offset // Offset from beginning of TrueType font file.
# ULONG length // Length of this table.
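#
# As a minimal sketch (assuming a well-formed font file named 'font.ttf'),
# these two structures can be read with Python's struct module:
#
#   import struct
#   with open('font.ttf', 'rb') as f:
#       sfnt_version, num_tables = struct.unpack('>IH', f.read(6))
#       f.read(6)  # skip searchRange, entrySelector, rangeShift
#       for _ in range(num_tables):
#           tag, checksum, offset, length = struct.unpack('>4sIII', f.read(16))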
#
# OS/2 Table (Version 4)
#
# USHORT version // 0x0004
# SHORT xAvgCharWidth
# USHORT usWeightClass
# USHORT usWidthClass
# USHORT fsType
# SHORT ySubscriptXSize
# SHORT ySubscriptYSize
# SHORT ySubscriptXOffset
# SHORT ySubscriptYOffset
# SHORT ySuperscriptXSize
# SHORT ySuperscriptYSize
# SHORT ySuperscriptXOffset
# SHORT ySuperscriptYOffset
# SHORT yStrikeoutSize
# SHORT yStrikeoutPosition
# SHORT sFamilyClass
# BYTE panose[10]
# ULONG ulUnicodeRange1 // Bits 0-31
# ULONG ulUnicodeRange2 // Bits 32-63
# ULONG ulUnicodeRange3 // Bits 64-95
# ULONG ulUnicodeRange4 // Bits 96-127
# CHAR achVendID[4]
# USHORT fsSelection
# USHORT usFirstCharIndex
# USHORT usLastCharIndex
# SHORT sTypoAscender
# SHORT sTypoDescender
# SHORT sTypoLineGap
# USHORT usWinAscent
# USHORT usWinDescent
# ULONG ulCodePageRange1 // Bits 0-31
# ULONG ulCodePageRange2 // Bits 32-63
# SHORT sxHeight
# SHORT sCapHeight
# USHORT usDefaultChar
# USHORT usBreakChar
# USHORT usMaxContext
#
#
# The Naming Table is organized as follows:
#
# [name table header]
# [name records]
# [string data]
#
# Name Table Header
#
# USHORT format // Format selector (=0).
# USHORT count // Number of name records.
# USHORT stringOffset // Offset to start of string storage (from start of table).
#
# Name Record
#
# USHORT platformID // Platform ID.
# USHORT encodingID // Platform-specific encoding ID.
# USHORT languageID // Language ID.
# USHORT nameID // Name ID.
# USHORT length // String length (in bytes).
# USHORT offset // String offset from start of storage area (in bytes).
#
# head Table
#
# Fixed tableVersion // Table version number 0x00010000 for version 1.0.
# Fixed fontRevision // Set by font manufacturer.
# ULONG checkSumAdjustment // To compute: set it to 0, sum the entire font as ULONG, then store 0xB1B0AFBA - sum.
# ULONG magicNumber // Set to 0x5F0F3CF5.
# USHORT flags
# USHORT unitsPerEm // Valid range is from 16 to 16384. This value should be a power of 2 for fonts that have TrueType outlines.
# LONGDATETIME created // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# LONGDATETIME modified // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# SHORT xMin // For all glyph bounding boxes.
# SHORT yMin
# SHORT xMax
# SHORT yMax
# USHORT macStyle
# USHORT lowestRecPPEM // Smallest readable size in pixels.
# SHORT fontDirectionHint
# SHORT indexToLocFormat // 0 for short offsets, 1 for long.
# SHORT glyphDataFormat // 0 for current format.
#
#
#
# Embedded OpenType (EOT) file format
# http://www.w3.org/Submission/EOT/
#
# EOT version 0x00020001
#
# An EOT font consists of a header with the original OpenType font
# appended at the end. Most of the data in the EOT header is simply a
# copy of data from specific tables within the font data. The exceptions
# are the 'Flags' field and the root string name field. The root string
# is a set of names indicating domains for which the font data can be
# used. A null root string implies the font data can be used anywhere.
# The EOT header is in little-endian byte order but the font data remains
# in big-endian order as specified by the OpenType spec.
#
# Overall structure:
#
# [EOT header]
# [EOT name records]
# [font data]
#
# EOT header
#
# ULONG eotSize // Total structure length in bytes (including string and font data)
# ULONG fontDataSize // Length of the OpenType font (FontData) in bytes
# ULONG version // Version number of this format - 0x00020001
# ULONG flags // Processing Flags (0 == no special processing)
# BYTE fontPANOSE[10] // OS/2 Table panose
# BYTE charset // DEFAULT_CHARSET (0x01)
# BYTE italic // 0x01 if ITALIC in OS/2 Table fsSelection is set, 0 otherwise
# ULONG weight // OS/2 Table usWeightClass
# USHORT fsType // OS/2 Table fsType (specifies embedding permission flags)
# USHORT magicNumber // Magic number for EOT file - 0x504C.
# ULONG unicodeRange1 // OS/2 Table ulUnicodeRange1
# ULONG unicodeRange2 // OS/2 Table ulUnicodeRange2
# ULONG unicodeRange3 // OS/2 Table ulUnicodeRange3
# ULONG unicodeRange4 // OS/2 Table ulUnicodeRange4
# ULONG codePageRange1 // OS/2 Table ulCodePageRange1
# ULONG codePageRange2 // OS/2 Table ulCodePageRange2
# ULONG checkSumAdjustment // head Table CheckSumAdjustment
# ULONG reserved[4] // Reserved - must be 0
# USHORT padding1 // Padding - must be 0
#
# EOT name records
#
# USHORT FamilyNameSize // Font family name size in bytes
# BYTE FamilyName[FamilyNameSize] // Font family name (name ID = 1), little-endian UTF-16
# USHORT Padding2 // Padding - must be 0
#
# USHORT StyleNameSize // Style name size in bytes
# BYTE StyleName[StyleNameSize] // Style name (name ID = 2), little-endian UTF-16
# USHORT Padding3 // Padding - must be 0
#
# USHORT VersionNameSize // Version name size in bytes
# BYTE VersionName[VersionNameSize] // Version name (name ID = 5), little-endian UTF-16
# USHORT Padding4 // Padding - must be 0
#
# USHORT FullNameSize // Full name size in bytes
# BYTE FullName[FullNameSize] // Full name (name ID = 4), little-endian UTF-16
# USHORT Padding5 // Padding - must be 0
#
# USHORT RootStringSize // Root string size in bytes
# BYTE RootString[RootStringSize] // Root string, little-endian UTF-16
|
# Copyright © 2005-2009 NAME Lingfo Pty Ltd
# This module is part of the xlrd3 package, which is released under a
# BSD-style licence.
#
# xlrd3, the Python 3 port of xlrd v0.7.1
#
# A Python module for extracting data from MS Excel spreadsheet files.
#
# General information
#
# Acknowledgements
#
# Development of this module would not have been possible without the document
# "OpenOffice.org's Documentation of the Microsoft Excel File Format"
# ("OOo docs" for short).
# The latest version is available from OpenOffice.org in
# http://sc.openoffice.org/excelfileformat.pdf PDF format
# and
# http://sc.openoffice.org/excelfileformat.odt ODT format.
# Small portions of the OOo docs are reproduced in this
# document. A study of the OOo docs is recommended for those who wish a
# deeper understanding of the Excel file layout than the xlrd docs can provide.
#
# Provision of formatting information in version 0.6.1 was funded by
# http://www.simplistix.co.uk Simplistix Ltd.
#
# Unicode
#
# This module presents all text strings as Python unicode objects.
# From Excel 97 onwards, text in Excel spreadsheets has been stored as Unicode.
# Older files (Excel 95 and earlier) don't keep strings in Unicode;
# a CODEPAGE record provides a codepage number (for example, 1252) which is
# used by xlrd to derive the encoding (for the same example: "cp1252") which is
# used to translate to Unicode.
#
# If the CODEPAGE record is missing (possible if the file was created
# by third-party software), xlrd will assume that the encoding is ascii, and keep going.
# If the actual encoding is not ascii, a UnicodeDecodeError exception will be raised and
# you will need to determine the encoding yourself, and tell xlrd::
#
# book = xlrd.open_workbook(..., encoding_override="cp1252")
#
# If the CODEPAGE record exists but is wrong (for example, the codepage
# number is 1251, but the strings are actually encoded in koi8_r),
# it can be overridden using the same mechanism.
# The supplied runxlrd.py has a corresponding command-line argument, which
# may be used for experimentation::
#
# runxlrd.py -e koi8_r 3rows myfile.xls
#
# The first place to look for an encoding ("codec name") is
# http://docs.python.org/lib/standard-encodings.html
# the Python documentation.
#
# Dates in Excel spreadsheets
#
# In reality, there are no such things. What you have are floating point
# numbers and pious hope.
# There are several problems with Excel dates:
#
# (1) Dates are not stored as a separate data type; they are stored as
# floating point numbers and you have to rely on
# (a) the "number format" applied to them in Excel and/or
# (b) knowing which cells are supposed to have dates in them.
# This module helps with (a) by inspecting the
# format that has been applied to each number cell;
# if it appears to be a date format, the cell
# is classified as a date rather than a number. Feedback on this feature,
# especially from non-English-speaking locales, would be appreciated.
#
# (2) Excel for Windows stores dates by default as the number of
# days (or fraction thereof) since 1899-12-31T00:00:00. Excel for
# Macintosh uses a default start date of 1904-01-01T00:00:00. The date
# system can be changed in Excel on a per-workbook basis (for example:
# Tools -> Options -> Calculation, tick the "1904 date system" box).
# This is of course a bad idea if there are already dates in the
# workbook. There is no good reason to change it even if there are no
# dates in the workbook. Which date system is in use is recorded in the
# workbook. A workbook transported from Windows to Macintosh (or vice
# versa) will work correctly with the host Excel. When using this
# module's xldate_as_tuple function to convert numbers from a workbook,
# you must use the datemode attribute of the Book object. If you guess,
# or make a judgement depending on where you believe the workbook was
# created, you run the risk of being 1462 days out of kilter.
#
# Reference:
# http://support.microsoft.com/default.aspx?scid=KB;EN-US;q180162
#
# (3) The Excel implementation of the Windows-default 1900-based date system works on the
# incorrect premise that 1900 was a leap year. It interprets the number 60 as meaning 1900-02-29,
# which is not a valid date. Consequently any number less than 61 is ambiguous. Example: is 59 the
# result of 1900-02-28 entered directly, or is it 1900-03-01 minus 2 days? The OpenOffice.org Calc
# program "corrects" the Microsoft problem; entering 1900-02-27 causes the number 59 to be stored.
# Save as an XLS file, then open the file with Excel -- you'll see 1900-02-28 displayed.
#
# Reference: http://support.microsoft.com/default.aspx?scid=kb;en-us;214326
#
# (4) The Macintosh-default 1904-based date system counts 1904-01-02 as day 1 and 1904-01-01 as day zero.
# Thus any number such that (0.0 <= number < 1.0) is ambiguous. Is 0.625 a time of day (15:00:00),
# independent of the calendar,
# or should it be interpreted as an instant on a particular day (1904-01-01T15:00:00)?
# The xldate_* functions in this module
# take the view that such a number is a calendar-independent time of day (like Python's datetime.time type) for both
# date systems. This is consistent with more recent Microsoft documentation
# (for example, the help file for Excel 2002 which says that the first day
# in the 1904 date system is 1904-01-02).
#
# (5) Usage of the Excel DATE() function may leave strange dates in a spreadsheet. Quoting the help file,
# in respect of the 1900 date system: "If year is between 0 (zero) and 1899 (inclusive),
# Excel adds that value to 1900 to calculate the year. For example, DATE(108,1,2) returns January 2, 2008 (1900+108)."
# This gimmick, semi-defensible only for arguments up to 99 and only in the pre-Y2K-awareness era,
# means that DATE(1899, 12, 31) is interpreted as 3799-12-31.
#
# For further information, please refer to the documentation for the xldate_* functions.
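#
# A minimal usage sketch (assuming 'myfile.xls' holds a date in cell A1 of
# its first sheet):
#
#   import xlrd
#   book = xlrd.open_workbook('myfile.xls')
#   sheet = book.sheet_by_index(0)
#   # always pass book.datemode; guessing risks the 1462-day error above
#   date_tuple = xlrd.xldate_as_tuple(sheet.cell_value(0, 0), book.datemode)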
#
# Named references, constants, formulas, and macros
#
# A name is used to refer to a cell, a group of cells, a constant
# value, a formula, or a macro. Usually the scope of a name is global
# across the whole workbook. However it can be local to a worksheet.
# For example, if the sales figures are in different cells in
# different sheets, the user may define the name "Sales" in each
# sheet. There are built-in names, like "Print_Area" and
# "Print_Titles"; these two are naturally local to a sheet.
#
# To inspect the names with a user interface like MS Excel, OOo Calc,
# or Gnumeric, click on Insert/Names/Define. This will show the global
# names, plus those local to the currently selected sheet.
#
# A Book object provides two dictionaries (name_map and
# name_and_scope_map) and a list (name_obj_list) which allow various
# ways of accessing the Name objects. There is one Name object for
# each NAME record found in the workbook. Name objects have many
# attributes, several of which are relevant only when obj.macro is 1.
#
# In the examples directory you will find namesdemo.xls which
# showcases the many different ways that names can be used, and
# xlrdnamesAPIdemo.py which offers 3 different queries for inspecting
# the names in your files, and shows how to extract whatever a name is
# referring to. There is currently one "convenience method",
# Name.cell(), which extracts the value in the case where the name
# refers to a single cell. More convenience methods are planned. The
# source code for Name.cell (in __init__.py) is an extra source of
# information on how the Name attributes hang together.
#
# Name information is **not** extracted from files older than
# Excel 5.0 (Book.biff_version < 50)
#
# Formatting
#
# Introduction
#
# This collection of features, new in xlrd version 0.6.1, is intended
# to provide the information needed to (1) display/render spreadsheet contents
# (say) on a screen or in a PDF file, and (2) copy spreadsheet data to another
# file without losing the ability to display/render it.
#
# The Palette; Colour Indexes
#
# A colour is represented in Excel as a (red, green, blue) ("RGB") tuple
# with each component in range(256). However it is not possible to access an
# unlimited number of colours; each spreadsheet is limited to a palette of 64 different
# colours (24 in Excel 3.0 and 4.0, 8 in Excel 2.0). Colours are referenced by an index
# ("colour index") into this palette.
#
# Colour indexes 0 to 7 represent 8 fixed built-in colours: black, white, red, green, blue,
# yellow, magenta, and cyan.
#
# The remaining colours in the palette (8 to 63 in Excel 5.0 and later)
# can be changed by the user. In the Excel 2003 UI, Tools/Options/Color presents a palette
# of 7 rows of 8 colours. The last two rows are reserved for use in charts.
# The correspondence between this grid and the assigned
# colour indexes is NOT left-to-right top-to-bottom.
# Indexes 8 to 15 correspond to changeable
# parallels of the 8 fixed colours -- for example, index 7 is forever cyan;
# index 15 starts off being cyan but can be changed by the user.
#
# The default colour for each index depends on the file version; tables of the defaults
# are available in the source code. If the user changes one or more colours,
# a PALETTE record appears in the XLS file -- it gives the RGB values for *all* changeable
# indexes.
# Note that colours can be used in "number formats": "[CYAN]...." and "[COLOR8]...." refer
# to colour index 7; "[COLOR16]...." will produce cyan
# unless the user changes colour index 15 to something else.
#
# In addition, there are several "magic" colour indexes used by Excel:
# 0x18 (BIFF3-BIFF4), 0x40 (BIFF5-BIFF8): System window text colour for border lines
# (used in XF, CF, and WINDOW2 records)
# 0x19 (BIFF3-BIFF4), 0x41 (BIFF5-BIFF8): System window background colour for pattern background
# (used in XF and CF records )
# 0x43: System face colour (dialogue background colour)
# 0x4D: System window text colour for chart border lines
# 0x4E: System window background colour for chart areas
# 0x4F: Automatic colour for chart border lines (seems to be always Black)
# 0x50: System ToolTip background colour (used in note objects)
# 0x51: System ToolTip text colour (used in note objects)
# 0x7FFF: System window text colour for fonts (used in FONT and CF records)
# Note 0x7FFF appears to be the *default* colour index. It appears quite often in FONT
# records.
#
# Default Formatting
#
# Default formatting is applied to all empty cells (those not described by a cell record).
# Firstly row default information (ROW record, Rowinfo class) is used if available.
# Failing that, column default information (COLINFO record, Colinfo class) is used if available.
# As a last resort the worksheet/workbook default cell format will be used; this
# should always be present in an Excel file,
# described by the XF record with the fixed index 15 (0-based). By default, it uses the
# worksheet/workbook default cell style, described by the very first XF record (index 0).
#
# Formatting features not included in xlrd version 0.6.1
#
# - Rich text i.e. strings containing partial bold, italic
# and underlined text, change of font inside a string, etc.
# See OOo docs s3.4 and s3.2
# - Asian phonetic text (known as "ruby"), used for Japanese furigana. See OOo docs
# s3.4.2 (p15)
# - Conditional formatting. See OOo docs
# s5.12, s6.21 (CONDFMT record), s6.16 (CF record)
# - Miscellaneous sheet-level and book-level items e.g. printing layout, screen panes.
# - Modern Excel file versions don't keep most of the built-in
# "number formats" in the file; Excel loads formats according to the
# user's locale. Currently xlrd's emulation of this is limited to
# a hard-wired table that applies to the US English locale. This may mean
# that currency symbols, date order, thousands separator, decimals separator, etc
# are inappropriate. Note that this does not affect users who are copying XLS
# files, only those who are visually rendering cells.
#
# Loading worksheets on demand
#
# This feature, new in version 0.7.1, is governed by the on_demand argument
# to the open_workbook() function and allows saving memory and time by loading
# only those sheets that the caller is interested in, and releasing sheets
# when no longer required.
#
# on_demand=False (default): No change. open_workbook() loads global data
# and all sheets, releases resources no longer required (principally the
# str or mmap object containing the Workbook stream), and returns.
#
# on_demand=True and BIFF version < 5.0: A warning message is emitted,
# on_demand is recorded as False, and the old process is followed.
#
# on_demand=True and BIFF version >= 5.0: open_workbook() loads global
# data and returns without releasing resources. At this stage, the only
# information available about sheets is Book.nsheets and Book.sheet_names().
#
# Book.sheet_by_name() and Book.sheet_by_index() will load the requested
# sheet if it is not already loaded.
#
# Book.sheets() will load all/any unloaded sheets.
#
# The caller may save memory by calling
# Book.unload_sheet(sheet_name_or_index) when finished with the sheet.
# This applies irrespective of the state of on_demand.
#
# The caller may re-load an unloaded sheet by calling Book.sheet_by_xxxx()
# -- except if those required resources have been released (which will
# have happened automatically when on_demand is false). This is the only
# case where an exception will be raised.
#
# The caller may query the state of a sheet:
# Book.sheet_loaded(sheet_name_or_index) -> a bool
#
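# Example (a hedged sketch using only the calls described above; the file
# name is illustrative):
#
#     import xlrd
#     book = xlrd.open_workbook("big_book.xls", on_demand=True)
#     print(book.nsheets, book.sheet_names())   # global data only, so far
#     sheet = book.sheet_by_index(0)            # loads just this sheet
#     # ... work with sheet ...
#     book.unload_sheet(0)                      # release the sheet's memory
#     print(book.sheet_loaded(0))               # -> False
#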
# 2010-12-03 mozman start xlrd3, for changes see NEWS.txt
#
# 2009-04-27 SJM Integrated on_demand patch by NAME
# 2008-11-23 SJM Support dumping FILEPASS and EXTERNNAME records; extra info from SUPBOOK records
# 2008-11-23 SJM colname utility function now supports more than 256 columns
# 2008-04-24 SJM Recovery code for file with out-of-order/missing/wrong CODEPAGE record needed to be called for EXTERNSHEET/BOUNDSHEET/NAME/SHEETHDR records.
# 2008-02-08 SJM Preparation for Excel 2.0 support
# 2008-02-03 SJM Minor tweaks for IronPython support
# 2008-02-02 SJM Previous change stopped dump() and count_records() ... fixed
# 2007-12-25 SJM Decouple Book initialisation & loading -- to allow for multiple loaders.
# 2007-12-20 SJM Better error message for unsupported file format.
# 2007-12-04 SJM Added support for Excel 2.x (BIFF2) files.
# 2007-11-20 SJM Wasn't handling EXTERNSHEET record that needed CONTINUE record(s)
# 2007-07-07 SJM Version changed to 0.7.0 (alpha 1)
# 2007-07-07 SJM Logfile arg wasn't being passed from open_workbook to compdoc.CompDoc
# 2007-05-21 SJM If no CODEPAGE record in pre-8.0 file, assume ascii and keep going.
# 2007-04-22 SJM Removed antique undocumented Book.get_name_dict method.
"""
# ggame
The simple cross-platform sprite and game platform for Brython Server (Pygame, Tkinter to follow?).
Ggame stands for a couple of things: "good game" (of course!) and also "git game" or "github game"
because it is designed to operate with [Brython Server](http://runpython.com) in concert with
Github as a backend file store.
Ggame is **not** intended to be a full-featured gaming API, with every bell and whistle. Ggame is
designed primarily as a tool for teaching computer programming, recognizing that the ability
to create engaging and interactive games is a powerful motivator for many programming students.
Accordingly, any functional or performance enhancements that *can* be reasonably implemented
by the user are left as an exercise.
## Functionality Goals
The ggame library is intended to be trivially easy to use. For example:
    from ggame import App, ImageAsset, Sprite

    # Create a displayed object at 100,100 using an image asset
    Sprite(ImageAsset("ggame/bunny.png"), (100,100))
    # Create the app, with a 500x500 pixel stage
    app = App(500,500)
    # Run the app
    app.run()
## Overview
There are three major components to the `ggame` system: Assets, Sprites and the App.
### Assets
Asset objects (i.e. `ggame.ImageAsset`, etc.) typically represent separate files that
are provided by the "art department". These might be background images, user interface
images, or images that represent objects in the game. In addition, `ggame.SoundAsset`
is used to represent sound files (`.wav` or `.mp3` format) that can be played in the
game.
Ggame also extends the asset concept to include graphics that are generated dynamically
at run-time, such as geometrical objects, e.g. rectangles, lines, etc.
### Sprites
All of the visual aspects of the game are represented by instances of `ggame.Sprite` or
subclasses of it.
### App
Every ggame application must create a single instance of the `ggame.App` class (or
a sub-class of it). Creating an instance of the `ggame.App` class will initiate
creation of a pop-up window on your browser. Executing the app's `run` method will
begin the process of refreshing the visual assets on the screen.
### Events
No game is complete without a player, and players produce events. Your code handles user
input by registering to receive keyboard and mouse events using `ggame.App.listenKeyEvent` and
`ggame.App.listenMouseEvent` methods.
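For example (a minimal sketch; the event-type and key names shown are
assumptions based on typical usage, not verified API constants):

    from ggame import App

    def spacebar(event):
        print("Space bar pressed!")

    app = App(500,500)
    app.listenKeyEvent("keydown", "space", spacebar)
    app.run()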
## Execution Environment
Ggame is designed to be executed in a web browser using [Brython](http://brython.info/),
[Pixi.js](http://www.pixijs.com/) and [Buzz](http://buzz.jaysalvat.com/). The easiest
way to do this is by executing from [runpython](http://runpython.com), with source
code residing on [github](http://github.com).
When using [runpython](http://runpython.com), you will have to configure your browser
to allow popup windows.
To use Ggame in your own application, you will minimally need to create a folder called
`ggame` in your project. Within `ggame`, copy the `ggame.py`, `sysdeps.py` and
`__init__.py` files from the [ggame project](https://github.com/BrythonServer/ggame).
### Include Ggame as a Git Subtree
From the same directory as your own python sources (note: you must have an existing git
repository with committed files in order for the following to work properly),
execute the following terminal commands:
    git remote add -f ggame https://github.com/BrythonServer/ggame.git
    git merge -s ours --no-commit ggame/master
    mkdir ggame
    git read-tree --prefix=ggame/ -u ggame/master
    git commit -m "Merge ggame project as our subdirectory"
If you want to pull in updates from ggame in the future:
    git pull -s subtree ggame master
You can see an example of how a ggame subtree is used by examining the
[Brython Server Spacewar](https://github.com/BrythonServer/Spacewar) repo on Github.
## Geometry
When referring to screen coordinates, note that the x-axis of the computer screen
is *horizontal* with the zero position on the left hand side of the screen. The
y-axis is *vertical* with the zero position at the **top** of the screen.
Increasing positive y-coordinates correspond to the downward direction on the
computer screen. Note that this is **different** from the way you may have learned
about x and y coordinates in math class!
""" |
"""
===============
Array Internals
===============
Internal organization of numpy arrays
=====================================
It helps to understand a bit about how numpy arrays are handled under the
covers. This section will not go into great detail; those wishing to
understand the full details are referred to Travis Oliphant's book "Guide to
NumPy".
NumPy arrays consist of two major components, the raw array data (from now on,
referred to as the data buffer), and the information about the raw array data.
The data buffer is typically what people think of as arrays in C or Fortran,
a contiguous (and fixed) block of memory containing fixed sized data items.
NumPy also contains a significant set of data that describes how to interpret
the data in the data buffer. This extra information contains (among other things):
1) The basic data element's size in bytes
2) The start of the data within the data buffer (an offset relative to the
beginning of the data buffer).
3) The number of dimensions and the size of each dimension
4) The separation between elements for each dimension (the 'stride'). This
does not have to be a multiple of the element size (see the sketch just
after this list)
5) The byte order of the data (which may not be the native byte order)
6) Whether the buffer is read-only
7) Information (via the dtype object) about the interpretation of the basic
data element. The basic data element may be as simple as an int or a float,
or it may be a compound object (e.g., struct-like), a fixed character field,
or Python object pointers.
8) Whether the array is to be interpreted as C-order or Fortran-order.
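A short doctest-style sketch of two of these pieces of metadata, the element
size (item 1) and the strides (item 4)::
>>> import numpy as np
>>> a = np.arange(12, dtype=np.int32).reshape(3, 4)
>>> a.dtype.itemsize # bytes per element
4
>>> a.strides # bytes to step in each dimension
(16, 4)
>>> a.T.strides # transposing just swaps the strides
(4, 16)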
This arrangement allows for very flexible use of arrays. One thing that it
allows is simple changes of the metadata to change the interpretation of the
array buffer. Changing the byteorder of the array is a simple change
involving no rearrangement of the data. The shape of the array can be
changed very easily, without changing anything in the data buffer and
without any data copying at all.
Among other things made possible by this is the ability to create a new
array metadata object that uses the same data buffer, yielding a new view
of that data buffer with a different interpretation (e.g., different shape,
offset, byte order, strides, etc.) but sharing the same data bytes. Many
operations in numpy do just this, such as slicing. Other operations, such as
transpose, don't move data elements around in the array; rather they change
the information about the shape and strides so that the indexing of the
array changes, but the data in the buffer doesn't move.
Typically these combinations of new array metadata with the same data buffer
are new 'views' into the data buffer: there is a different ndarray object,
but it uses the same data buffer. This is why it is necessary to force
copies through use of the .copy() method if one really wants to make a new
and independent copy of the data buffer.
New views into arrays mean the object reference counts for the data buffer
increase. Simply doing away with the original array object will not remove the
data buffer if other views of it still exist.
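A short doctest-style sketch of the view/copy distinction::
>>> import numpy as np
>>> a = np.arange(6)
>>> v = a.reshape(2, 3) # a view: new metadata, same buffer
>>> v[0, 0] = 99
>>> a[0] # the change is visible through 'a'
99
>>> c = a.copy() # an independent data buffer
>>> c[0] = 0
>>> a[0] # 'a' is unaffected
99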
Multidimensional Array Indexing Order Issues
============================================
What is the right way to index
multi-dimensional arrays? Before you jump to conclusions about the one and
true way to index multi-dimensional arrays, it pays to understand why this is
a confusing issue. This section will try to explain in detail how numpy
indexing works and why we adopt the convention we do for images, and when it
may be appropriate to adopt other conventions.
The first thing to understand is
that there are two conflicting conventions for indexing 2-dimensional arrays.
Matrix notation uses the first index to indicate which row is being selected and
the second index to indicate which column is selected. This is opposite the
geometrically-oriented convention for images where people generally think the
first index represents x position (i.e., column) and the second represents y
position (i.e., row). This alone is the source of much confusion;
matrix-oriented users and image-oriented users expect two different things with
regard to indexing.
The second issue to understand is how indices correspond
to the order the array is stored in memory. In Fortran the first index is the
most rapidly varying index when moving through the elements of a two
dimensional array as it is stored in memory. If you adopt the matrix
convention for indexing, then this means the matrix is stored one column at a
time (since the first index moves to the next row as it changes). Thus Fortran
is considered a Column-major language. C has just the opposite convention. In
C, the last index changes most rapidly as one moves through the array as
stored in memory. Thus C is a Row-major language. The matrix is stored by
rows. Note that in both cases it presumes that the matrix convention for
indexing is being used, i.e., for both Fortran and C, the first index is the
row. Note this convention implies that the indexing convention is invariant
and that the data order changes to keep that so.
But that's not the only way
to look at it. Suppose one has large two-dimensional arrays (images or
matrices) stored in data files. Suppose the data are stored by rows rather than
by columns. If we are to preserve our index convention (whether matrix or
image) that means that depending on the language we use, we may be forced to
reorder the data if it is read into memory to preserve our indexing
convention. For example if we read row-ordered data into memory without
reordering, it will match the matrix indexing convention for C, but not for
Fortran. Conversely, it will match the image indexing convention for Fortran,
but not for C. For C, if one is using data stored in row order, and one wants
to preserve the image index convention, the data must be reordered when
reading into memory.
In the end, which you do for Fortran or C depends on
which is more important, not reordering data or preserving the indexing
convention. For large images, reordering data is potentially expensive, and
often the indexing convention is inverted to avoid that.
The situation with
numpy makes this issue yet more complicated. The internal machinery of numpy
arrays is flexible enough to accept any ordering of indices. One can simply
reorder indices by manipulating the internal stride information for arrays
without reordering the data at all. NumPy will know how to map the new index
order to the data without moving the data.
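A short doctest-style sketch: transposing remaps the index order by swapping
the stride information, while the data buffer stays put::
>>> import numpy as np
>>> a = np.arange(6, dtype=np.int32).reshape(2, 3)
>>> t = a.T # reversed index order
>>> a.strides, t.strides # only the strides differ
((12, 4), (4, 12))
>>> np.shares_memory(a, t) # same underlying data
True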
So if this is true, why not choose
the index order that matches what you most expect? In particular, why not define
row-ordered images to use the image convention? (This is sometimes referred
to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
order options for array ordering in numpy.) The drawback of doing this is
potential performance penalties. It's common to access the data sequentially,
either implicitly in array operations or explicitly by looping over rows of an
image. When that is done, then the data will be accessed in non-optimal order.
As the first index is incremented, what is actually happening is that elements
spaced far apart in memory are being sequentially accessed, with usually poor
memory access speeds. For example, consider a two dimensional image 'im'
defined so that im[0, 10] represents the value at x=0, y=10. To be
consistent with usual Python behavior, im[0] would then represent a column
at x=0. Yet that data
would be spread over the whole array since the data are stored in row order.
Despite the flexibility of numpy's indexing, it can't really paper over the
fact that basic operations are rendered inefficient because of data order,
or that getting contiguous subarrays is still awkward (e.g., im[:,0] for the
first row, vs im[0]). Thus one can't use an idiom such as 'for row in im';
'for col in im' does work, but doesn't yield contiguous column data.
As it turns out, numpy is
smart enough when dealing with ufuncs to determine which index is the most
rapidly varying one in memory and uses that for the innermost loop. Thus for
ufuncs there is no large intrinsic advantage to either approach in most cases.
On the other hand, use of .flat with a FORTRAN-ordered array will lead to
non-optimal memory access as adjacent elements in the flattened array (iterator,
actually) are not contiguous in memory.
Indeed, the fact is that Python
indexing on lists and other sequences naturally leads to an outside-to-inside
ordering (the first index gets the largest grouping, the next the next largest,
and the last gets the smallest element). Since image data are normally stored
by rows, this corresponds to position within rows being the last item indexed.
If you do want to use Fortran ordering, realize that
there are two approaches to consider: 1) accept that the first index is just
not the most rapidly changing in memory and have all your I/O routines
reorder your data when going from memory to disk or vice versa, or 2) use numpy's
mechanism for mapping the first index to the most rapidly varying data. We
recommend the former if possible. The disadvantage of the latter is that many
of numpy's functions will yield arrays without Fortran ordering unless you are
careful to use the 'order' keyword. Doing this would be highly inconvenient.
Otherwise we recommend simply learning to reverse the usual order of indices
when accessing elements of an array. Granted, it goes against the grain, but
it is more in line with Python semantics and the natural order of the data.
""" |
"""
TestCmd.py: a testing framework for commands and scripts.
The TestCmd module provides a framework for portable automated testing
of executable commands and scripts (in any language, not just Python),
especially commands and scripts that require file system interaction.
In addition to running tests and evaluating conditions, the TestCmd
module manages and cleans up one or more temporary workspace
directories, and provides methods for creating files and directories in
those workspace directories from in-line data (here-documents), allowing
tests to be completely self-contained.
A TestCmd environment object is created via the usual invocation:
import TestCmd
test = TestCmd.TestCmd()
There are a bunch of keyword arguments available at instantiation:
test = TestCmd.TestCmd(description = 'string',
                       program = 'program_or_script_to_test',
                       interpreter = 'script_interpreter',
                       workdir = 'prefix',
                       subdir = 'subdir',
                       verbose = Boolean,
                       match = default_match_function,
                       diff = default_diff_function,
                       combine = Boolean)
There are a bunch of methods that let you do different things (a combined
usage sketch follows this list):
test.verbose_set(1)
test.description_set('string')
test.program_set('program_or_script_to_test')
test.interpreter_set('script_interpreter')
test.interpreter_set(['script_interpreter', 'arg'])
test.workdir_set('prefix')
test.workdir_set('')
test.workpath('file')
test.workpath('subdir', 'file')
test.subdir('subdir', ...)
test.rmdir('subdir', ...)
test.write('file', "contents\n")
test.write(['subdir', 'file'], "contents\n")
test.read('file')
test.read(['subdir', 'file'])
test.read('file', mode)
test.read(['subdir', 'file'], mode)
test.writable('dir', 1)
test.writable('dir', None)
test.preserve(condition, ...)
test.cleanup(condition)
test.command_args(program = 'program_or_script_to_run',
                  interpreter = 'script_interpreter',
                  arguments = 'arguments to pass to program')
test.run(program = 'program_or_script_to_run',
         interpreter = 'script_interpreter',
         arguments = 'arguments to pass to program',
         chdir = 'directory_to_chdir_to',
         stdin = 'input to feed to the program\n',
         universal_newlines = True)
p = test.start(program = 'program_or_script_to_run',
               interpreter = 'script_interpreter',
               arguments = 'arguments to pass to program',
               universal_newlines = None)
test.finish(self, p)
test.pass_test()
test.pass_test(condition)
test.pass_test(condition, function)
test.fail_test()
test.fail_test(condition)
test.fail_test(condition, function)
test.fail_test(condition, function, skip)
test.no_result()
test.no_result(condition)
test.no_result(condition, function)
test.no_result(condition, function, skip)
test.stdout()
test.stdout(run)
test.stderr()
test.stderr(run)
test.symlink(target, link)
test.banner(string)
test.banner(string, width)
test.diff(actual, expected)
test.match(actual, expected)
test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n")
test.match_exact(["actual 1\n", "actual 2\n"],
["expected 1\n", "expected 2\n"])
test.match_re("actual 1\nactual 2\n", regex_string)
test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes)
test.match_re_dotall("actual 1\nactual 2\n", regex_string)
test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes)
test.tempdir()
test.tempdir('temporary-directory')
test.sleep()
test.sleep(seconds)
test.where_is('foo')
test.where_is('foo', 'PATH1:PATH2')
test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
test.unlink('file')
test.unlink('subdir', 'file')
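Putting a few of these together (a hedged sketch; the file name, interpreter,
and expected output are illustrative):
import TestCmd
test = TestCmd.TestCmd(workdir = '')          # fresh temporary workspace
test.write('hello.py', "print('hello')\n")    # a file from in-line data
test.run(interpreter = 'python',
         program = test.workpath('hello.py'))
test.fail_test(test.stdout() != 'hello\n')    # FAILED unless output matches
test.pass_test()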
The TestCmd module provides pass_test(), fail_test(), and no_result()
unbound functions that report test results for use with the Aegis change
management system. These functions terminate the test immediately,
reporting PASSED, FAILED, or NO RESULT respectively, and exit with
status 0 (success), 1, or 2 respectively. This allows for a distinction
between an actual failed test and a test that could not be properly
evaluated because of an external condition (such as a full file system
or incorrect permissions).
import TestCmd
TestCmd.pass_test()
TestCmd.pass_test(condition)
TestCmd.pass_test(condition, function)
TestCmd.fail_test()
TestCmd.fail_test(condition)
TestCmd.fail_test(condition, function)
TestCmd.fail_test(condition, function, skip)
TestCmd.no_result()
TestCmd.no_result(condition)
TestCmd.no_result(condition, function)
TestCmd.no_result(condition, function, skip)
The TestCmd module also provides unbound functions that handle matching
in the same way as the match_*() methods described above.
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_exact)
test = TestCmd.TestCmd(match = TestCmd.match_re)
test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)
The TestCmd module provides unbound functions that can be used for the
"diff" argument to TestCmd.TestCmd instantiation:
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_re,
                       diff = TestCmd.diff_re)
test = TestCmd.TestCmd(diff = TestCmd.simple_diff)
The "diff" argument can also be used with standard difflib functions:
import difflib
test = TestCmd.TestCmd(diff = difflib.context_diff)
test = TestCmd.TestCmd(diff = difflib.unified_diff)
Lastly, the where_is() method also exists in an unbound function
version.
import TestCmd
TestCmd.where_is('foo')
TestCmd.where_is('foo', 'PATH1:PATH2')
TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
""" |
# Configuration file for ipcontroller.
#------------------------------------------------------------------------------
# Configurable configuration
#------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# SingletonConfigurable configuration
#------------------------------------------------------------------------------
# A configurable that only allows one instance.
#
# This class is for classes that should only have one instance of itself or
# *any* subclass. To create and retrieve such a class use the
# :meth:`SingletonConfigurable.instance` method.
#------------------------------------------------------------------------------
# Application configuration
#------------------------------------------------------------------------------
# This is an application.
# Set the log level by value or name.
# c.Application.log_level = 30
# The date format used by logging formatters for %(asctime)s
# c.Application.log_datefmt = '%Y-%m-%d %H:%M:%S'
# The Logging format template
# c.Application.log_format = '[%(name)s]%(highlevel)s %(message)s'
#------------------------------------------------------------------------------
# BaseIPythonApplication configuration
#------------------------------------------------------------------------------
# IPython: an enhanced interactive Python shell.
# Path to an extra config file to load.
#
# If specified, load this config file in addition to any other IPython config.
# c.BaseIPythonApplication.extra_config_file = ''
# The IPython profile to use.
# c.BaseIPythonApplication.profile = 'default'
# The name of the IPython directory. This directory is used for logging
# configuration (through profiles), history storage, etc. The default is usually
# $HOME/.ipython. This option can also be specified through the environment
# variable IPYTHONDIR.
# c.BaseIPythonApplication.ipython_dir = ''
# Whether to install the default config files into the profile dir. If a new
# profile is being created, and IPython contains config files for that profile,
# then they will be staged into the new directory. Otherwise, default config
# files will be automatically generated.
# c.BaseIPythonApplication.copy_config_files = False
# Whether to overwrite existing config files when copying
# c.BaseIPythonApplication.overwrite = False
# Whether to create profile dir if it doesn't exist
# c.BaseIPythonApplication.auto_create = False
# Create a massive crash report when IPython encounters what may be an internal
# error. The default is to append a short message to the usual traceback
# c.BaseIPythonApplication.verbose_crash = False
#------------------------------------------------------------------------------
# BaseParallelApplication configuration
#------------------------------------------------------------------------------
# IPython: an enhanced interactive Python shell.
# Set the working dir for the process.
# c.BaseParallelApplication.work_dir = '/tank/vishnu/git/vishnu2kmohan/pyspark-docker'
# The ZMQ URL of the iplogger to aggregate logging.
# c.BaseParallelApplication.log_url = ''
# String id to add to runtime files, to prevent name collisions when using
# multiple clusters with a single profile simultaneously.
#
# When set, files will be named like: 'ipcontroller-<cluster_id>-engine.json'
#
# Since this is text inserted into filenames, typical recommendations apply:
# Simple character strings are ideal, and spaces are not recommended (but should
# generally work).
# c.BaseParallelApplication.cluster_id = ''
# whether to log to a file
# c.BaseParallelApplication.log_to_file = False
# whether to cleanup old logfiles before starting
# c.BaseParallelApplication.clean_logs = False
#------------------------------------------------------------------------------
# IPControllerApp configuration
#------------------------------------------------------------------------------
# import statements to be run at startup. Necessary in some environments
# c.IPControllerApp.import_statements = traitlets.Undefined
# ssh url for clients to use when connecting to the Controller processes. It
# should be of the form: [user@]server[:port]. The Controller's listening
# addresses must be accessible from the ssh server
# c.IPControllerApp.ssh_server = ''
# Reload engine state from JSON file
# c.IPControllerApp.restore_engines = False
# ssh url for engines to use when connecting to the Controller processes. It
# should be of the form: [user@]server[:port]. The Controller's listening
# addresses must be accessible from the ssh server
# c.IPControllerApp.engine_ssh_server = ''
# JSON filename where client connection info will be stored.
# c.IPControllerApp.client_json_file = 'ipcontroller-client.json'
# The external IP or domain name of the Controller, used for disambiguating
# engine and client connections.
# c.IPControllerApp.location = ''
# JSON filename where engine connection info will be stored.
# c.IPControllerApp.engine_json_file = 'ipcontroller-engine.json'
# Use threads instead of processes for the schedulers
# c.IPControllerApp.use_threads = False
# Whether to reuse existing json connection files. If False, connection files
# will be removed on a clean exit.
# c.IPControllerApp.reuse_files = False
# Whether to create profile dir if it doesn't exist.
# c.IPControllerApp.auto_create = True
#------------------------------------------------------------------------------
# LoggingConfigurable configuration
#------------------------------------------------------------------------------
# A parent class for Configurables that log.
#
# Subclasses have a log trait, and the default behavior is to get the logger
# from the currently running Application.
#------------------------------------------------------------------------------
# ProfileDir configuration
#------------------------------------------------------------------------------
# An object to manage the profile directory and its resources.
#
# The profile directory is used by all IPython applications, to manage
# configuration, logging and security.
#
# This object knows how to find, create and manage these directories. This
# should be used by any code that wants to handle profiles.
# Set the profile location directly. This overrides the logic used by the
# `profile` option.
# c.ProfileDir.location = ''
#------------------------------------------------------------------------------
# Session configuration
#------------------------------------------------------------------------------
# Object for handling serialization and sending of messages.
#
# The Session object handles building messages and sending them with ZMQ sockets
# or ZMQStream objects. Objects can communicate with each other over the
# network via Session objects, and only need to work with the dict-based IPython
# message spec. The Session will handle serialization/deserialization, security,
# and metadata.
#
# Sessions support configurable serialization via packer/unpacker traits, and
# signing with HMAC digests via the key/keyfile traits.
#
# Parameters
# ----------
#
# debug : bool
# whether to trigger extra debugging statements
# packer/unpacker : str : 'json', 'pickle' or import_string
# importstrings for methods to serialize message parts. If just
# 'json' or 'pickle', predefined JSON and pickle packers will be used.
# Otherwise, the entire importstring must be used.
#
# The functions must accept at least valid JSON input, and output *bytes*.
#
# For example, to use msgpack (a config sketch follows this parameter list):
# packer = 'msgpack.packb', unpacker='msgpack.unpackb'
# pack/unpack : callables
# You can also set the pack/unpack callables for serialization directly.
# session : bytes
# the ID of this Session object. The default is to generate a new UUID.
# username : USERNAME
# username added to message headers. The default is to ask the OS.
# key : bytes
# The key used to initialize an HMAC signature. If unset, messages
# will not be signed or checked.
# keyfile : filepath
# The file containing a key. If this is set, `key` will be initialized
# to the contents of the file.
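#
# A sketch of the corresponding config lines (assuming the msgpack package
# is importable by the controller and engines):
#
# c.Session.packer = 'msgpack.packb'
# c.Session.unpacker = 'msgpack.unpackb'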
# Debug output in the Session
# c.Session.debug = False
# The maximum number of digests to remember.
#
# The digest history will be culled when it exceeds this value.
# c.Session.digest_history_size = 65536
# execution key, for signing messages.
# c.Session.key = b''
# Username for the Session. Default is your system username.
# c.Session.username = 'vishnu'
# The maximum number of items for a container to be introspected for custom
# serialization. Containers larger than this are pickled outright.
# c.Session.item_threshold = 64
# Threshold (in bytes) beyond which an object's buffer should be extracted to
# avoid pickling.
# c.Session.buffer_threshold = 1024
# The name of the packer for serializing messages. Should be one of 'json',
# 'pickle', or an import name for a custom callable serializer.
# c.Session.packer = 'json'
# path to file containing execution key.
# c.Session.keyfile = ''
# The UUID identifying this session.
# c.Session.session = ''
# Threshold (in bytes) beyond which a buffer should be sent without copying.
# c.Session.copy_threshold = 65536
# The name of the unpacker for unserializing messages. Only used with custom
# functions for `packer`.
# c.Session.unpacker = 'json'
# The digest scheme used to construct the message signatures. Must have the form
# 'hmac-HASH'.
# c.Session.signature_scheme = 'hmac-sha256'
# Metadata dictionary, which serves as the default top-level metadata dict for
# each message.
# c.Session.metadata = traitlets.Undefined
#------------------------------------------------------------------------------
# SessionFactory configuration
#------------------------------------------------------------------------------
# The Base class for configurables that have a Session, Context, logger, and
# IOLoop.
#------------------------------------------------------------------------------
# RegistrationFactory configuration
#------------------------------------------------------------------------------
# The Base Configurable for objects that involve registration.
# The port on which the Hub listens for registration.
# c.RegistrationFactory.regport = 0
# The IP address for registration. This is generally either 'IP_ADDRESS' for
# loopback only or '*' for all interfaces.
# c.RegistrationFactory.ip = ''
# The 0MQ url used for registration. This sets transport, ip, and port in one
# variable. For example: url='tcp://IP_ADDRESS:12345' or url='epgm://*:90210'
# c.RegistrationFactory.url = ''
# The 0MQ transport for communications. This will likely be the default of
# 'tcp', but other values include 'ipc', 'epgm', 'inproc'.
# c.RegistrationFactory.transport = 'tcp'
#------------------------------------------------------------------------------
# HubFactory configuration
#------------------------------------------------------------------------------
# The Configurable for setting up a Hub.
# PUB/ROUTER Port pair for Engine heartbeats
# c.HubFactory.hb = traitlets.Undefined
# IP on which to listen for monitor messages. [default: loopback]
# c.HubFactory.monitor_ip = ''
# 0MQ transport for client connections. [default : tcp]
# c.HubFactory.client_transport = 'tcp'
# Client/Engine Port pair for Control queue
# c.HubFactory.control = traitlets.Undefined
# Engine registration timeout in seconds [default:
# max(30,10*heartmonitor.period)]
# c.HubFactory.registration_timeout = 0
# PUB port for sending engine status notifications
# c.HubFactory.notifier_port = 0
# Client/Engine Port pair for Task queue
# c.HubFactory.task = traitlets.Undefined
# 0MQ transport for engine connections. [default: tcp]
# c.HubFactory.engine_transport = 'tcp'
# 0MQ transport for monitor messages. [default : tcp]
# c.HubFactory.monitor_transport = 'tcp'
# The class to use for the DB backend
#
# Options include:
#
# SQLiteDB: SQLite
# MongoDB : use MongoDB
# DictDB  : in-memory storage (fastest, but be mindful of memory growth of
#           the Hub)
# NoDB    : disable database altogether (default)
# c.HubFactory.db_class = 'NoDB'
# Monitor (SUB) port for queue traffic
# c.HubFactory.mon_port = 0
# IP on which to listen for client connections. [default: loopback]
# c.HubFactory.client_ip = ''
# Client/Engine Port pair for IOPub relay
# c.HubFactory.iopub = traitlets.Undefined
# IP on which to listen for engine connections. [default: loopback]
# c.HubFactory.engine_ip = ''
# Client/Engine Port pair for MUX queue
# c.HubFactory.mux = traitlets.Undefined
#------------------------------------------------------------------------------
# TaskScheduler configuration
#------------------------------------------------------------------------------
# Python TaskScheduler object.
#
# This is the simplest object that supports msg_id based DAG dependencies.
# *Only* task msg_ids are checked, not msg_ids of jobs submitted via the MUX
# queue.
# select the task scheduler scheme [default: Python LRU] Options are: 'pure',
# 'lru', 'plainrandom', 'weighted', 'twobin', 'leastload'
# c.TaskScheduler.scheme_name = 'leastload'
# specify the High Water Mark (HWM) for the downstream socket in the Task
# scheduler. This is the maximum number of allowed outstanding tasks on each
# engine.
#
# The default (1) means that only one task can be outstanding on each engine.
# Setting TaskScheduler.hwm=0 means there is no limit, and the engines continue
# to be assigned tasks while they are working, effectively hiding network
# latency behind computation, but can result in an imbalance of work when
# submitting many heterogeneous tasks all at once. Any positive value greater
# than one is a compromise between the two.
# c.TaskScheduler.hwm = 1
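#
# For example, to remove the limit and let engines queue tasks, hiding
# network latency behind computation (a sketch of the trade-off above):
#
# c.TaskScheduler.hwm = 0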
#------------------------------------------------------------------------------
# HeartMonitor configuration
#------------------------------------------------------------------------------
# A basic HeartMonitor class
#
# pingstream: a PUB stream
# pongstream: a ROUTER stream
# period: the period of the heartbeat in milliseconds
# Whether to include every heartbeat in debugging output.
#
# Has to be set explicitly, because there will be *a lot* of output.
# c.HeartMonitor.debug = False
# Allowed consecutive missed pings from controller Hub to engine before
# unregistering.
# c.HeartMonitor.max_heartmonitor_misses = 10
# The frequency at which the Hub pings the engines for heartbeats (in ms)
# c.HeartMonitor.period = 3000
#------------------------------------------------------------------------------
# BaseDB configuration
#------------------------------------------------------------------------------
# Empty Parent class so traitlets work on DB.
#------------------------------------------------------------------------------
# DictDB configuration
#------------------------------------------------------------------------------
# Basic in-memory dict-based object for saving Task Records.
#
# This is the first object to present the DB interface for logging tasks out of
# memory.
#
# The interface is based on MongoDB, so adding a MongoDB backend should be
# straightforward.
# The fraction by which the db should be culled when one of the limits is exceeded
#
# In general, the db size will spend most of its time with a size in the range:
#
# [limit * (1-cull_fraction), limit]
#
# for each of size_limit and record_limit.
# c.DictDB.cull_fraction = 0.1
# The maximum number of records in the db
#
# When the history exceeds this size, the first record_limit * cull_fraction
# records will be culled.
# c.DictDB.record_limit = 1024
# The maximum total size (in bytes) of the buffers stored in the db
#
# When the db exceeds this size, the oldest records will be culled until the
# total size is under size_limit * (1-cull_fraction). default: 1 GB
# c.DictDB.size_limit = 1073741824
#------------------------------------------------------------------------------
# SQLiteDB configuration
#------------------------------------------------------------------------------
# SQLite3 TaskRecord backend.
# The filename of the sqlite task database. [default: 'tasks.db']
# c.SQLiteDB.filename = 'tasks.db'
# The directory containing the sqlite task database. The default is to use the
# cluster_dir location.
# c.SQLiteDB.location = ''
# The SQLite Table to use for storing tasks for this session. If unspecified, a
# new table will be created with the Hub's IDENT. Specifying the table will
# result in tasks from previous sessions being available via Clients' db_query
# and get_result methods.
# c.SQLiteDB.table = 'ipython-tasks'
"""Exception classes for CherryPy.
CherryPy provides (and uses) exceptions for declaring that the HTTP response
should be a status other than the default "200 OK". You can ``raise`` them like
normal Python exceptions. You can also call them and they will raise
themselves; this means you can set an
:class:`HTTPError<cherrypy._cperror.HTTPError>`
or :class:`HTTPRedirect<cherrypy._cperror.HTTPRedirect>` as the
:attr:`request.handler<cherrypy._cprequest.Request.handler>`.
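For example (a minimal sketch, assuming a conventional exposed-handler
layout; the paths, status, and message are illustrative)::
import cherrypy
class Root:
    @cherrypy.expose
    def old(self):
        # send the client on to the resource's new location
        raise cherrypy.HTTPRedirect('/new')
    @cherrypy.expose
    def teapot(self):
        raise cherrypy.HTTPError(418, "I'm a teapot")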
.. _redirectingpost:
Redirecting POST
================
When you GET a resource and are redirected by the server to another Location,
there's generally no problem since GET is both a "safe method" (there should
be no side-effects) and an "idempotent method" (multiple calls are no different
than a single call).
POST, however, is neither safe nor idempotent--if you
charge a credit card, you don't want to be charged twice by a redirect!
For this reason, *none* of the 3xx responses permit a user-agent (browser) to
resubmit a POST on redirection without first confirming the action with the
user:
=====    =================================    ================================
300      Multiple Choices                     Confirm with the user
301      Moved Permanently                    Confirm with the user
302      Found (Object moved temporarily)     Confirm with the user
303      See Other                            GET the new URI--no confirmation
304      Not modified                         (for conditional GET only--POST
                                              should not raise this error)
305      Use Proxy                            Confirm with the user
307      Temporary Redirect                   Confirm with the user
=====    =================================    ================================
However, browsers have historically implemented these restrictions poorly;
in particular, many browsers do not force the user to confirm 301, 302
or 307 when redirecting POST. For this reason, CherryPy defaults to 303,
which most user-agents appear to have implemented correctly. Therefore, if
you raise HTTPRedirect for a POST request, the user-agent will most likely
attempt to GET the new URI (without asking for confirmation from the user).
We realize this is confusing for developers, but it's the safest thing we
could do. You are of course free to raise ``HTTPRedirect(uri, status=302)``
or any other 3xx status if you know what you're doing, but given the
environment, we couldn't let any of those be the default.
Custom Error Handling
=====================
.. image:: /refman/cperrors.gif
Anticipated HTTP responses
--------------------------
The 'error_page' config namespace can be used to provide custom HTML output for
expected responses (like 404 Not Found). Supply a filename from which the
output will be read. The contents will be interpolated with the values
%(status)s, %(message)s, %(traceback)s, and %(version)s using plain old Python
`string formatting <http://docs.python.org/2/library/stdtypes.html#string-formatting-operations>`_.
::
_cp_config = {
    'error_page.404': os.path.join(localDir, "static/index.html")
}
Beginning in version 3.1, you may also provide a function or other callable as
an error_page entry. It will be passed the same status, message, traceback and
version arguments that are interpolated into templates::
def error_page_402(status, message, traceback, version):
    return "Error %s - Well, I'm very sorry but you haven't paid!" % status
cherrypy.config.update({'error_page.402': error_page_402})
Also in 3.1, in addition to the numbered error codes, you may also supply
"error_page.default" to handle all codes which do not have their own error_page
entry.
Unanticipated errors
--------------------
CherryPy also has a generic error handling mechanism: whenever an unanticipated
error occurs in your code, it will call
:func:`Request.error_response<cherrypy._cprequest.Request.error_response>` to
set the response status, headers, and body. By default, this is the same
output as
:class:`HTTPError(500) <cherrypy._cperror.HTTPError>`. If you want to provide
some other behavior, you generally replace "request.error_response".
Here is some sample code that shows how to display a custom error message and
send an e-mail containing the error::
from cherrypy import _cperror
def handle_error():
    cherrypy.response.status = 500
    cherrypy.response.body = [
        "<html><body>Sorry, an error occurred</body></html>"
    ]
    sendMail('EMAIL',
             'Error in your web app',
             _cperror.format_exc())
class Root:
    _cp_config = {'request.error_response': handle_error}
Note that you have to explicitly set
:attr:`response.body <cherrypy._cprequest.Response.body>`
and not simply return an error message as a result.
""" |
"""This module tests SyntaxErrors.
Here's an example of the sort of thing that is tested.
>>> def f(x):
... global x
Traceback (most recent call last):
SyntaxError: name 'x' is local and global
The tests all raise SyntaxErrors. They were created by checking
each C call that raises SyntaxError. There are several modules that
raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and
symtable.c.
The parser itself outlaws a lot of invalid syntax. None of these
errors are tested here at the moment. We should add some tests; since
there are infinitely many programs with invalid syntax, we would need
to be judicious in selecting some.
The compiler generates a synthetic module name for code executed by
doctest. Since all the code comes from the same module, a suffix like
[1] is appended to the module name. As a consequence, changing the
order of tests in this module means renumbering all the errors after
it. (Maybe we should enable the ellipsis option for these tests.)
In ast.c, syntax errors are raised by calling ast_error().
Errors from set_context():
TODO(jhylton): "assignment to None" is inconsistent with other messages
>>> obj.None = 1
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[1]>, line 1)
>>> None = 1
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[2]>, line 1)
It's a syntax error to assign to the empty tuple. Why isn't it an
error to assign to the empty list? It will always raise some error at
runtime.
>>> () = 1
Traceback (most recent call last):
SyntaxError: can't assign to () (<doctest test.test_syntax[3]>, line 1)
>>> f() = 1
Traceback (most recent call last):
SyntaxError: can't assign to function call (<doctest test.test_syntax[4]>, line 1)
>>> del f()
Traceback (most recent call last):
SyntaxError: can't delete function call (<doctest test.test_syntax[5]>, line 1)
>>> a + 1 = 2
Traceback (most recent call last):
SyntaxError: can't assign to operator (<doctest test.test_syntax[6]>, line 1)
>>> (x for x in x) = 1
Traceback (most recent call last):
SyntaxError: can't assign to generator expression (<doctest test.test_syntax[7]>, line 1)
>>> 1 = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal (<doctest test.test_syntax[8]>, line 1)
>>> "abc" = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal (<doctest test.test_syntax[9]>, line 1)
>>> `1` = 1
Traceback (most recent call last):
SyntaxError: can't assign to repr (<doctest test.test_syntax[10]>, line 1)
If the left-hand side of an assignment is a list or tuple, an illegal
expression inside that container should still cause a syntax error.
This test just checks a couple of cases rather than enumerating all of
them.
>>> (a, "b", c) = (1, 2, 3)
Traceback (most recent call last):
SyntaxError: can't assign to literal (<doctest test.test_syntax[11]>, line 1)
>>> [a, b, c + 1] = [1, 2, 3]
Traceback (most recent call last):
SyntaxError: can't assign to operator (<doctest test.test_syntax[12]>, line 1)
>>> a if 1 else b = 1
Traceback (most recent call last):
SyntaxError: can't assign to conditional expression (<doctest test.test_syntax[13]>, line 1)
From compiler_complex_args():
>>> def f(None=1):
... pass
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[14]>, line 1)
From ast_for_arguments():
>>> def f(x, y=1, z):
... pass
Traceback (most recent call last):
SyntaxError: non-default argument follows default argument (<doctest test.test_syntax[15]>, line 1)
>>> def f(x, None):
... pass
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[16]>, line 1)
>>> def f(*None):
... pass
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[17]>, line 1)
>>> def f(**None):
... pass
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[18]>, line 1)
From ast_for_funcdef():
>>> def None(x):
... pass
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[19]>, line 1)
From ast_for_call():
>>> def f(it, *varargs):
... return list(it)
>>> L = range(10)
>>> f(x for x in L)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(x for x in L, 1)
Traceback (most recent call last):
SyntaxError: Generator expression must be parenthesized if not sole argument (<doctest test.test_syntax[23]>, line 1)
>>> f((x for x in L), 1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... i244, i245, i246, i247, i248, i249, i250, i251, i252,
... i253, i254, i255)
Traceback (most recent call last):
SyntaxError: more than 255 arguments (<doctest test.test_syntax[25]>, line 1)
The actual error case counts positional arguments, keyword arguments,
and generator expression arguments separately. This test combines the
three.
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... (x for x in i244), i245, i246, i247, i248, i249, i250, i251,
... i252=1, i253=1, i254=1, i255=1)
Traceback (most recent call last):
SyntaxError: more than 255 arguments (<doctest test.test_syntax[26]>, line 1)
>>> f(lambda x: x[0] = 3)
Traceback (most recent call last):
SyntaxError: lambda cannot contain assignment (<doctest test.test_syntax[27]>, line 1)
The grammar accepts any test (basically, any expression) in the
keyword slot of a call site. Test a few different options.
>>> f(x()=2)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression (<doctest test.test_syntax[28]>, line 1)
>>> f(a or b=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression (<doctest test.test_syntax[29]>, line 1)
>>> f(x.y=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression (<doctest test.test_syntax[30]>, line 1)
From ast_for_expr_stmt():
>>> (x for x in x) += 1
Traceback (most recent call last):
SyntaxError: augmented assignment to generator expression not possible (<doctest test.test_syntax[31]>, line 1)
>>> None += 1
Traceback (most recent call last):
SyntaxError: assignment to None (<doctest test.test_syntax[32]>, line 1)
>>> f() += 1
Traceback (most recent call last):
SyntaxError: illegal expression for augmented assignment (<doctest test.test_syntax[33]>, line 1)
""" |
"""
===================
Universal Functions
===================
Ufuncs are, generally speaking, mathematical functions or operations that are
applied element-by-element to the contents of an array. That is, the result
in each output array element only depends on the value in the corresponding
input array (or arrays) and on no other array elements. NumPy comes with a
large suite of ufuncs, and scipy extends that suite substantially. The simplest
example is the addition operator: ::
>>> np.array([0,2,3,4]) + np.array([1,1,-1,2])
array([1, 3, 2, 6])
The ufunc module lists all the available ufuncs in numpy. Documentation on
the specific ufuncs may be found in those modules. This documentation is
intended to address the more general aspects of ufuncs common to most of
them. All of the ufuncs that make use of Python operators (e.g., +, -, etc.)
have equivalent functions defined (e.g. add() for +).
Type coercion
=============
What happens when a binary operator (e.g., +,-,\\*,/, etc) deals with arrays of
two different types? What is the type of the result? Typically, the result is
the higher of the two types. For example: ::
float32 + float64 -> float64
int8 + int32 -> int32
int16 + float32 -> float32
float32 + complex64 -> complex64
There are some less obvious cases generally involving mixes of types
(e.g. uints, ints and floats) where equal bit sizes for each are not
capable of saving all the information in a different type of equivalent
bit size. Some examples are int32 vs float32 or uint32 vs int32.
Generally, the result is the higher type of larger size than both
(if available). So: ::
int32 + float32 -> float64
uint32 + int32 -> int64
Finally, the type coercion behavior when expressions involve Python
scalars is different than that seen for arrays. Since Python has a
limited number of types, combining a Python int with a dtype=np.int8
array does not coerce to the higher type but instead, the type of the
array prevails. So the rule for Python scalars combined with arrays is
that the result will be that of the array equivalent of the Python scalar
if the Python scalar is of a higher 'kind' than the array (e.g., float
vs. int); otherwise the resultant type will be that of the array.
For example: ::
Python int + int8 -> int8
Python float + int8 -> float64
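A short doctest-style sketch of both rules: ::
>>> (1 + np.arange(3, dtype=np.int8)).dtype # same kind: the array prevails
dtype('int8')
>>> (1.0 + np.arange(3, dtype=np.int8)).dtype # higher kind: promotes
dtype('float64')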
ufunc methods
=============
Binary ufuncs support 4 methods.
**.reduce(arr)** applies the binary operator to elements of the array in
sequence. For example: ::
>>> np.add.reduce(np.arange(10)) # adds all elements of array
45
For multidimensional arrays, the first dimension is reduced by default: ::
>>> np.add.reduce(np.arange(10).reshape(2,5))
array([ 5, 7, 9, 11, 13])
The axis keyword can be used to specify different axes to reduce: ::
>>> np.add.reduce(np.arange(10).reshape(2,5),axis=1)
array([10, 35])
**.accumulate(arr)** applies the binary operator and generates an
equivalently shaped array that includes the accumulated amount for each
element of the array. A couple examples: ::
>>> np.add.accumulate(np.arange(10))
array([ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45])
>>> np.multiply.accumulate(np.arange(1,9))
array([ 1, 2, 6, 24, 120, 720, 5040, 40320])
The behavior for multidimensional arrays is the same as for .reduce(),
as is the use of the axis keyword.
**.reduceat(arr,indices)** allows one to apply reduce to selected parts
of an array. It is a difficult method to understand. See the reduceat
documentation for details.
**.outer(arr1,arr2)** generates an outer operation on the two arrays arr1 and
arr2. It will work on multidimensional arrays (the shape of the result is
the concatenation of the two input shapes): ::
>>> np.multiply.outer(np.arange(3),np.arange(4))
array([[0, 0, 0, 0],
[0, 1, 2, 3],
[0, 2, 4, 6]])
Output arguments
================
All ufuncs accept an optional output array. The array must be of the expected
output shape. Beware that if the type of the output array is of a different
(and lower) type than the output result, the results may be silently truncated
or otherwise corrupted in the downcast to the lower type. This usage is useful
when one wants to avoid creating large temporary arrays and instead allows one
to reuse the same array memory repeatedly (at the expense of not being able to
use more convenient operator notation in expressions). Note that when the
output argument is used, the ufunc still returns a reference to the result.
>>> x = np.arange(2)
>>> np.add(np.arange(2),np.arange(2.),x)
array([0, 2])
>>> x
array([0, 2])
and & or as ufuncs
==================
Invariably people try to use the python 'and' and 'or' as logical operators
(and quite understandably). But these operators do not behave as normal
operators since Python treats these quite differently. They cannot be
overloaded with array equivalents. Thus using 'and' or 'or' with an array
results in an error. There are two alternatives:
1) use the ufunc functions logical_and() and logical_or().
2) use the bitwise operators & and \\|. The drawback of these is that if
the arguments to these operators are not boolean arrays, the result is
likely incorrect. On the other hand, most usages of logical_and and
logical_or are with boolean arrays. As long as one is careful, this is
a convenient way to apply these operators, as the example below shows.
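A short doctest-style sketch: ::
>>> a = np.array([True, False, True])
>>> b = np.array([True, True, False])
>>> np.logical_and(a, b)
array([ True, False, False])
>>> a & b # equivalent for boolean arrays
array([ True, False, False])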
""" |
"""
============
Array basics
============
Array types and conversions between types
=========================================
Numpy supports a much greater variety of numerical types than Python does.
This section shows which are available, and how to modify an array's data-type.
========== =========================================================
Data type Description
========== =========================================================
bool Boolean (True or False) stored as a byte
int Platform integer (normally either ``int32`` or ``int64``)
int8 Byte (-128 to 127)
int16 Integer (-32768 to 32767)
int32 Integer (-2147483648 to 2147483647)
int64 Integer (-9223372036854775808 to 9223372036854775807)
uint8 Unsigned integer (0 to 255)
uint16 Unsigned integer (0 to 65535)
uint32 Unsigned integer (0 to 4294967295)
uint64 Unsigned integer (0 to 18446744073709551615)
float Shorthand for ``float64``.
float16 Half precision float: sign bit, 5 bits exponent,
10 bits mantissa
float32 Single precision float: sign bit, 8 bits exponent,
23 bits mantissa
float64 Double precision float: sign bit, 11 bits exponent,
52 bits mantissa
complex Shorthand for ``complex128``.
complex64 Complex number, represented by two 32-bit floats (real
and imaginary components)
complex128 Complex number, represented by two 64-bit floats (real
and imaginary components)
========== =========================================================
Numpy numerical types are instances of ``dtype`` (data-type) objects, each
having unique characteristics. Once you have imported NumPy using
::
>>> import numpy as np
the dtypes are available as ``np.bool``, ``np.float32``, etc.
Advanced types, not listed in the table above, are explored in
section :ref:`structured_arrays`.
There are 5 basic numerical types representing booleans (bool), integers (int),
unsigned integers (uint), floating point (float) and complex. Those with numbers
in their name indicate the bitsize of the type (i.e. how many bits are needed
to represent a single value in memory). Some types, such as ``int`` and
``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit
vs. 64-bit machines). This should be taken into account when interfacing
with low-level code (such as C or Fortran) where the raw memory is addressed.
Data-types can be used as functions to convert python numbers to array scalars
(see the array scalar section for an explanation), python sequences of numbers
to arrays of that type, or as arguments to the dtype keyword that many numpy
functions or methods accept. Some examples::
>>> import numpy as np
>>> x = np.float32(1.0)
>>> x
1.0
>>> y = np.int_([1,2,4])
>>> y
array([1, 2, 4])
>>> z = np.arange(3, dtype=np.uint8)
>>> z
array([0, 1, 2], dtype=uint8)
Array types can also be referred to by character codes, mostly to retain
backward compatibility with older packages such as Numeric. Some
documentation may still refer to these, for example::
>>> np.array([1, 2, 3], dtype='f')
array([ 1., 2., 3.], dtype=float32)
We recommend using dtype objects instead.
To convert the type of an array, use the .astype() method (preferred) or
the type itself as a function. For example: ::
>>> z.astype(float) #doctest: +NORMALIZE_WHITESPACE
array([ 0., 1., 2.])
>>> np.int8(z)
array([0, 1, 2], dtype=int8)
Note that, above, we use the *Python* float object as a dtype. NumPy knows
that ``int`` refers to ``np.int``, ``bool`` means ``np.bool`` and
that ``float`` is ``np.float``. The other data-types do not have Python
equivalents.
To determine the type of an array, look at the dtype attribute::
>>> z.dtype
dtype('uint8')
dtype objects also contain information about the type, such as its bit-width
and its byte-order. The data type can also be used indirectly to query
properties of the type, such as whether it is an integer::
>>> d = np.dtype(int)
>>> d
dtype('int32')
>>> np.issubdtype(d, int)
True
>>> np.issubdtype(d, float)
False
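For instance (an illustrative aside, not in the original text), the
bit-width and byte-order mentioned above are exposed as attributes::
>>> d = np.dtype(np.int32)
>>> d.itemsize   # width in bytes
4
>>> d.byteorder  # '=' means native byte order
'='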
Array Scalars
=============
Numpy generally returns elements of arrays as array scalars (a scalar
with an associated dtype). Array scalars differ from Python scalars, but
for the most part they can be used interchangeably (the primary
exception is for versions of Python older than v2.x, where integer array
scalars cannot act as indices for lists and tuples). There are some
exceptions, such as when code requires very specific attributes of a scalar
or when it checks specifically whether a value is a Python scalar. Generally,
problems are easily fixed by explicitly converting array scalars
to Python scalars, using the corresponding Python type function
(e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``).
The primary advantage of using array scalars is that
they preserve the array type (Python may not have a matching scalar type
available, e.g. ``int16``). Therefore, the use of array scalars ensures
identical behaviour between arrays and scalars, irrespective of whether the
value is inside an array or not. NumPy scalars also have many of the same
methods arrays do.
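For example (an illustrative snippet, not in the original text), an array
scalar can be converted explicitly with the corresponding Python type
function::
>>> x = np.float32(1.5)
>>> type(x)
<class 'numpy.float32'>
>>> float(x)
1.5
>>> type(float(x))
<class 'float'>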
""" |
#!/usr/local/sci/bin/python
# Python 3
# Author: NAME Created: 26 July 2018
# Last update: 2 Aug 2018
# Location: /data/local/hadkw/HADCRUH2/UPDATE2017/PROGS/PYTHON/ # this will probably change
# GitHub: https://github.com/Kate-Willett/Climate_Explorer/tree/master/PYTHON/
# -----------------------
# CODE PURPOSE AND OUTPUT
# -----------------------
# Reads in monthly grids from HadISDH or ERA-Interim and creates a time average
# e.g., across a number of months or years. It can create a month or seasonal
# field or average a month or seasonal field over a specified number of years.
# It can plot a map of that time average
# It can save the field of the time average to netCDF
# It can work on a subset region only or a global field.
# If it's a region it can produce a plot of all gridbox time series for that region
# It can produce a global
# or regional average time series of the monthly or time averaged values.
# It can plot a time series of the spatial average and save it to netCDF
#
# <references to related published material, e.g. that describes data set>
#
# -----------------------
# LIST OF MODULES
# -----------------------
# Inbuilt:
#import matplotlib.pyplot as plt
#import numpy as np
#import numpy.ma as ma
#import sys, os, getopt
#import scipy.stats
#import struct
#import cartopy.crs as ccrs
#import cartopy.feature as cpf
#import datetime as dt
#import netCDF4 as nc4 # To solve error "is not a valid NetCDF 3 file"
#from matplotlib.dates import date2num,num2date
#from matplotlib import ticker
#import matplotlib.cm as mpl_cm
#import pdb # for stopping and restarting with editability (stop is pdb.set_trace(),restart is c)
#
# Other:
#
# -----------------------
# DATA
# -----------------------
# directory for trendmaps:
# /data/local/hadkw/HADCRUH2/UPDATE2014/STATISTICS/TRENDS/
# directory for land cover map:
# /data/local/hadkw/HADCRUH2/UPDATE2014/OTHERDATA/
# land cover file used to calculate % land represented:
# HadCRUT.IP_ADDRESS.land_fraction.nc
# files currently worked on:
# HadISDH.landq.IP_ADDRESS7f_FLATgridIDPHA5by5_anoms8110_MAR2018_cf.nc
# HadISDH.landRH.IP_ADDRESS7f_FLATgridIDPHA5by5_anoms8110_MAR2018_cf.nc
# HadISDH.landT.IP_ADDRESS7f_FLATgridIDPHA5by5_anoms8110_MAR2018_cf.nc
# q2m_monthly_5by5_ERA-Interim_data_19792017_anoms1981-2010.nc
# rh2m_monthly_5by5_ERA-Interim_data_19792017_anoms1981-2010.nc
# t2m_monthly_5by5_ERA-Interim_data_19792017_anoms1981-2010.nc
#
# -----------------------
# HOW TO RUN THE CODE
# -----------------------
#
# # set up all editables as you wish within the script
# > module load scitools/experimental-current
# > python MakeTimeSpaceAveragePlotSave_JUL2018.py
#
# or with some/all command line arguments
# > module load scitools/experimental-current
# >python MakeTimeSpaceAveragePlotSave_Jul2018.py --sy 1973 --ey 1973 --sm 1 --em 1 --ss 1 --sc JJA --gs 0 --slt -10.0 --sln -10.0 etc
#
# -----------------------
# OUTPUT
# -----------------------
# directory for output images:
# /data/local/hadkw/HADCRUH2/UPDATE2017/IMAGES/ANALYSIS/
# Output image files:
# TimeAverage_HadISDH.landq.IP_ADDRESS7f_'...'.png/.eps
# RegionAverage_HadISDH.landq.IP_ADDRESS7f_'...'.png/.eps
# RegionAverage_HadISDH.landq.IP_ADDRESS7f_'...'multi.png/.eps
# Output netCDF files:
# TimeAverage_HadISDH.landq.IP_ADDRESS7f_'...'.nc
# RegionAverage_HadISDH.landq.IP_ADDRESS7f_'...'.nc
#
# -----------------------
# VERSION/RELEASE NOTES
# -----------------------
#
# Version 1 (26 July 2018)
# ---------
#
# Enhancements
#
# Changes
#
# Bug fixes
#
# -----------------------
# OTHER INFORMATION
# -----------------------
#
#************************************************************************
# MAIN PROGRAM
#************************************************************************
# Set up python imports
|
"""
Linear mixed effects models for Statsmodels
The data are partitioned into disjoint groups. The probability model
for group i is:
Y = X*beta + Z*gamma + epsilon
where
* n_i is the number of observations in group i
* Y is a n_i dimensional response vector
* X is a n_i x k_fe design matrix for the fixed effects
* beta is a k_fe-dimensional vector of fixed effects slopes
* Z is a n_i x k_re design matrix for the random effects
* gamma is a k_re-dimensional random vector with mean 0
and covariance matrix Psi; note that each group
gets its own independent realization of gamma.
* epsilon is a n_i dimensional vector of iid normal
errors with mean 0 and variance sigma^2; the epsilon
values are independent both within and between groups
Y, X and Z must be entirely observed. beta, Psi, and sigma^2 are
estimated using ML or REML estimation, while gamma and epsilon are
random and so define the probability model.
The mean structure is E[Y|X,Z] = X*beta. If only the mean structure
is of interest, GEE is a good alternative to mixed models.
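As a hedged usage sketch (not part of the original text; the example
dataset and column names are illustrative only), a random-intercept model
can be fit through the formula interface:
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    # weight of pigs measured over time; each pig forms one group
    data = sm.datasets.get_rdataset("dietox", "geepack").data
    model = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"])
    result = model.fit()  # ML or REML estimation, as described above
    print(result.summary())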
The primary reference for the implementation details is:
MJ NAME NAME (1988). "Newton NAME and EM algorithms for
linear mixed effects models for repeated measures data". Journal of
the American Statistical Association. Volume 83, Issue 404, pages
1014-1022.
See also this more recent document:
http://econ.ucsb.edu/~doug/245a/Papers/Mixed%20Effects%20Implement.pdf
All the likelihood, gradient, and Hessian calculations closely follow
Lindstrom and NAME.
The following two documents are written more from the perspective of
users:
http://lme4.r-forge.r-project.org/lMMwR/lrgprt.pdf
http://lme4.r-forge.r-project.org/slides/2009-07-07-Rennes/3Longitudinal-4.pdf
Notation:
* `cov_re` is the random effects covariance matrix (referred to above
as Psi) and `scale` is the (scalar) error variance. For a single
group, the marginal covariance matrix of endog given exog is scale*I
+ Z * cov_re * Z', where Z is the design matrix for the random
effects in one group.
Notes:
1. Three different parameterizations are used here in different
places. The regression slopes (usually called `fe_params`) are
identical in all three parameterizations, but the variance parameters
differ. The parameterizations are:
* The "natural parameterization" in which cov(endog) = scale*I + Z *
cov_re * Z', as described above. This is the main parameterization
visible to the user.
* The "profile parameterization" in which cov(endog) = I +
Z * cov_re1 * Z'. This is the parameterization of the profile
likelihood that is maximized to produce parameter estimates.
(see Lindstrom and NAME for details). The "natural" cov_re is
equal to the "profile" cov_re1 times scale.
* The "square root parameterization" in which we work with the
Cholesky factor of cov_re1 instead of cov_re1 directly.
All three parameterizations can be "packed" by concatenating fe_params
together with the lower triangle of the dependence structure. Note
that when unpacking, it is important to either square or reflect the
dependence structure depending on which parameterization is being
used.
2. The situation where the random effects covariance matrix is
singular is numerically challenging. Small changes in the covariance
parameters may lead to large changes in the likelihood and
derivatives.
3. The optimization strategy is to first use OLS to get starting
values for the mean structure. Then we optionally perform a few EM
steps, followed by optionally performing a few steepest ascent steps.
This is followed by conjugate gradient optimization using one of the
scipy gradient optimizers. The EM and steepest ascent steps are used
to get adequate starting values for the conjugate gradient
optimization, which is much faster.
""" |
"""
=============================
Byteswapping and byte order
=============================
Introduction to byte ordering and ndarrays
==========================================
The ``ndarray`` is an object that provides a Python array interface to data
in memory.
It often happens that the memory that you want to view with an array is
not of the same byte ordering as the computer on which you are running
Python.
For example, I might be working on a computer with a little-endian CPU -
such as an Intel Pentium, but I have loaded some data from a file
written by a computer that is big-endian. Let's say I have loaded 4
bytes from a file written by a Sun (big-endian) computer. I know that
these 4 bytes represent two 16-bit integers. On a big-endian machine, a
two-byte integer is stored with the Most Significant Byte (MSB) first,
and then the Least Significant Byte (LSB). Thus the bytes are, in memory order:
#. MSB integer 1
#. LSB integer 1
#. MSB integer 2
#. LSB integer 2
Let's say the two integers were in fact 1 and 770. Because 770 = 256 *
3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2.
The bytes I have loaded from the file would have these contents:
>>> big_end_str = bytes([0, 1, 3, 2])
>>> big_end_str
b'\\x00\\x01\\x03\\x02'
We might want to use an ``ndarray`` to access these integers. In that
case, we can create an array around this memory, and tell numpy that
there are two integers, and that they are 16 bit and big-endian:
>>> import numpy as np
>>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_str)
>>> big_end_arr[0]
1
>>> big_end_arr[1]
770
Note the array ``dtype`` above of ``>i2``. The ``>`` means 'big-endian'
(``<`` is little-endian) and ``i2`` means 'signed 2-byte integer'. For
example, if our data represented a single unsigned 4-byte little-endian
integer, the dtype string would be ``<u4``.
In fact, why don't we try that?
>>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_str)
>>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3
True
Returning to our ``big_end_arr`` - in this case our underlying data is
big-endian (data endianness) and we've set the dtype to match (the dtype
is also big-endian). However, sometimes you need to flip these around.
Changing byte ordering
======================
As you can imagine from the introduction, there are two ways you can
affect the relationship between the byte ordering of the array and the
underlying memory it is looking at:
* Change the byte-ordering information in the array dtype so that it
interprets the underlying data as being in a different byte order.
This is the role of ``arr.newbyteorder()``
* Change the byte-ordering of the underlying data, leaving the dtype
interpretation as it was. This is what ``arr.byteswap()`` does.
The common situations in which you need to change byte ordering are:
#. Your data and dtype endianess don't match, and you want to change
the dtype so that it matches the data.
#. Your data and dtype endianess don't match, and you want to swap the
data so that they match the dtype
#. Your data and dtype endianess match, but you want the data swapped
and the dtype to reflect this
Data and dtype endianness don't match, change dtype to match data
-----------------------------------------------------------------
We make something where they don't match:
>>> wrong_end_dtype_arr = np.ndarray(shape=(2,),dtype='<i2', buffer=big_end_str)
>>> wrong_end_dtype_arr[0]
256
The obvious fix for this situation is to change the dtype so it gives
the correct endianness:
>>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder()
>>> fixed_end_dtype_arr[0]
1
Note that the array has not changed in memory:
>>> fixed_end_dtype_arr.tobytes() == big_end_str
True
Data and type endianness don't match, change data to match dtype
----------------------------------------------------------------
You might want to do this if you need the data in memory to be a certain
ordering. For example you might be writing the memory out to a file
that needs a certain byte ordering.
>>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap()
>>> fixed_end_mem_arr[0]
1
Now the array *has* changed in memory:
>>> fixed_end_mem_arr.tobytes() == big_end_str
False
Data and dtype endianness match, swap data and dtype
----------------------------------------------------
You may have a correctly specified array dtype, but you need the array
to have the opposite byte order in memory, and you want the dtype to
match so the array values make sense. In this case you just do both of
the previous operations:
>>> swapped_end_arr = big_end_arr.byteswap().newbyteorder()
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_str
False
An easier way of casting the data to a specific dtype and byte ordering
can be achieved with the ndarray astype method:
>>> swapped_end_arr = big_end_arr.astype('<i2')
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_str
False
""" |
"""
Discrete Fourier Transform (:mod:`numpy.fft`)
=============================================
.. currentmodule:: numpy.fft
Standard FFTs
-------------
.. autosummary::
:toctree: generated/
fft Discrete Fourier transform.
ifft Inverse discrete Fourier transform.
fft2 Discrete Fourier transform in two dimensions.
ifft2 Inverse discrete Fourier transform in two dimensions.
fftn Discrete Fourier transform in N-dimensions.
ifftn Inverse discrete Fourier transform in N dimensions.
Real FFTs
---------
.. autosummary::
:toctree: generated/
rfft Real discrete Fourier transform.
irfft Inverse real discrete Fourier transform.
rfft2 Real discrete Fourier transform in two dimensions.
irfft2 Inverse real discrete Fourier transform in two dimensions.
rfftn Real discrete Fourier transform in N dimensions.
irfftn Inverse real discrete Fourier transform in N dimensions.
Hermitian FFTs
--------------
.. autosummary::
:toctree: generated/
hfft Hermitian discrete Fourier transform.
ihfft Inverse Hermitian discrete Fourier transform.
Helper routines
---------------
.. autosummary::
:toctree: generated/
fftfreq Discrete Fourier Transform sample frequencies.
rfftfreq DFT sample frequencies (for usage with rfft, irfft).
fftshift Shift zero-frequency component to center of spectrum.
ifftshift Inverse of fftshift.
Background information
----------------------
Fourier analysis is fundamentally a method for expressing a function as a
sum of periodic components, and for recovering the function from those
components. When both the function and its Fourier transform are
replaced with discretized counterparts, it is called the discrete Fourier
transform (DFT). The DFT has become a mainstay of numerical computing in
part because of a very fast algorithm for computing it, called the Fast
Fourier Transform (FFT), which was known to Gauss (1805) and was brought
to light in its current form by NAME and NAME [CT]_. Press et al. [NR]_
provide an accessible introduction to Fourier analysis and its
applications.
Because the discrete Fourier transform separates its input into
components that contribute at discrete frequencies, it has a great number
of applications in digital signal processing, e.g., for filtering, and in
this context the discretized input to the transform is customarily
referred to as a *signal*, which exists in the *time domain*. The output
is called a *spectrum* or *transform* and exists in the *frequency
domain*.
Implementation details
----------------------
There are many ways to define the DFT, varying in the sign of the
exponent, normalization, etc. In this implementation, the DFT is defined
as
.. math::
A_k = \\sum_{m=0}^{n-1} a_m \\exp\\left\\{-2\\pi i{mk \\over n}\\right\\}
\\qquad k = 0,\\ldots,n-1.
The DFT is in general defined for complex inputs and outputs, and a
single-frequency component at linear frequency :math:`f` is
represented by a complex exponential
:math:`a_m = \\exp\\{2\\pi i\\,f m\\Delta t\\}`, where :math:`\\Delta t`
is the sampling interval.
The values in the result follow so-called "standard" order: If ``A =
fft(a, n)``, then ``A[0]`` contains the zero-frequency term (the mean of
the signal), which is always purely real for real inputs. Then ``A[1:n/2]``
contains the positive-frequency terms, and ``A[n/2+1:]`` contains the
negative-frequency terms, in order of decreasingly negative frequency.
For an even number of input points, ``A[n/2]`` represents both positive and
negative Nyquist frequency, and is also purely real for real input. For
an odd number of input points, ``A[(n-1)/2]`` contains the largest positive
frequency, while ``A[(n+1)/2]`` contains the largest negative frequency.
The routine ``np.fft.fftfreq(n)`` returns an array giving the frequencies
of corresponding elements in the output. The routine
``np.fft.fftshift(A)`` shifts transforms and their frequencies to put the
zero-frequency components in the middle, and ``np.fft.ifftshift(A)`` undoes
that shift.
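For instance (an illustrative check, not part of the original text): ::
>>> freqs = np.fft.fftfreq(8)
>>> freqs
array([ 0.   ,  0.125,  0.25 ,  0.375, -0.5  , -0.375, -0.25 , -0.125])
>>> np.fft.fftshift(freqs)
array([-0.5  , -0.375, -0.25 , -0.125,  0.   ,  0.125,  0.25 ,  0.375])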
When the input `a` is a time-domain signal and ``A = fft(a)``, ``np.abs(A)``
is its amplitude spectrum and ``np.abs(A)**2`` is its power spectrum.
The phase spectrum is obtained by ``np.angle(A)``.
The inverse DFT is defined as
.. math::
a_m = \\frac{1}{n}\\sum_{k=0}^{n-1}A_k\\exp\\left\\{2\\pi i{mk\\over n}\\right\\}
\\qquad m = 0,\\ldots,n-1.
It differs from the forward transform by the sign of the exponential
argument and the default normalization by :math:`1/n`.
Normalization
-------------
The default normalization has the direct transforms unscaled and the inverse
transforms are scaled by :math:`1/n`. It is possible to obtain unitary
transforms by setting the keyword argument ``norm`` to ``"ortho"`` (default is
`None`) so that both direct and inverse transforms will be scaled by
:math:`1/\\sqrt{n}`.
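A small check of the scaling (illustrative, not part of the original text):
the transform of a unit impulse has all components equal to 1 by default,
and to :math:`1/\\sqrt{n}` with ``norm="ortho"``: ::
>>> a = np.array([1.0, 0.0, 0.0, 0.0])
>>> np.fft.fft(a)
array([1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j])
>>> np.fft.fft(a, norm="ortho")
array([0.5+0.j, 0.5+0.j, 0.5+0.j, 0.5+0.j])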
Real and Hermitian transforms
-----------------------------
When the input is purely real, its transform is Hermitian, i.e., the
component at frequency :math:`f_k` is the complex conjugate of the
component at frequency :math:`-f_k`, which means that for real
inputs there is no information in the negative frequency components that
is not already available from the positive frequency components.
The family of `rfft` functions is
designed to operate on real inputs, and exploits this symmetry by
computing only the positive frequency components, up to and including the
Nyquist frequency. Thus, ``n`` input points produce ``n/2+1`` complex
output points. The inverses of this family assume the same symmetry of
their input, and for an output of ``n`` points use ``n/2+1`` input points.
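For example (illustrative, not from the original text): ::
>>> len(np.fft.rfft(np.ones(8)))   # n/2 + 1 output points for n = 8
5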
Correspondingly, when the spectrum is purely real, the signal is
Hermitian. The `hfft` family of functions exploits this symmetry by
using ``n/2+1`` complex points in the input (time) domain for ``n`` real
points in the frequency domain.
In higher dimensions, FFTs are used, e.g., for image analysis and
filtering. The computational efficiency of the FFT means that it can
also be a faster way to compute large convolutions, using the property
that a convolution in the time domain is equivalent to a point-by-point
multiplication in the frequency domain.
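This equivalence is easy to verify numerically (an illustrative check, not
part of the original text); multiplying the transforms and inverting gives
the circular convolution: ::
>>> x = np.array([1.0, 2.0, 3.0, 4.0])
>>> y = np.array([1.0, 0.0, 0.0, 1.0])
>>> circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real
>>> np.allclose(circ, [3.0, 5.0, 7.0, 5.0])
True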
Higher dimensions
-----------------
In two dimensions, the DFT is defined as
.. math::
A_{kl} = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1}
a_{mn}\\exp\\left\\{-2\\pi i \\left({mk\\over M}+{nl\\over N}\\right)\\right\\}
\\qquad k = 0, \\ldots, M-1;\\quad l = 0, \\ldots, N-1,
which extends in the obvious way to higher dimensions, and the inverses
in higher dimensions also extend in the same way.
References
----------
.. [CT] NAME, NAME and John W. NAME, 1965, "An algorithm for the
machine calculation of complex Fourier series," *Math. Comput.*
19: 297-301.
.. [NR] NAME NAME NAME and NAME
2007, *Numerical Recipes: The Art of Scientific Computing*, ch.
12-13. Cambridge Univ. Press, Cambridge, UK.
Examples
--------
For examples, see the various functions.
""" |
{'scaleV0': 0.10000000149011612, 'autoTaper': True, 'branchDist': 1.0, 'leaves': 25, 'ratio': 0.014999999664723873, 'shape': '7', 'useArm': False, 'bevelRes': 0, 'rootFlare': 1.0, 'frameRate': 1.0, 'segSplits': (0.0, 0.0, 0.0, 0.0), 'handleType': '0', 'leafScaleT': 0.0, 'radiusTweak': (1.0, 1.0, 1.0, 1.0), 'armLevels': 2, 'attractOut': (0.0, 0.0, 0.0, 0.0), 'prunePowerHigh': 0.5, 'showLeaves': False, 'nrings': 0, 'leafScaleV': 0.0, 'baseSize': 0.4000000059604645, 'af2': 1.0, 'curve': (0.0, -40.0, -40.0, 0.0), 'resU': 4, 'prune': False, 'taperCrown': 0.0, 'minRadius': 0.001500000013038516, 'pruneRatio': 1.0, 'scale': 13.0, 'prunePowerLow': 0.0010000000474974513, 'leafDownAngle': 45.0, 'pruneWidthPeak': 0.6000000238418579, 'splitHeight': 0.20000000298023224, 'scale0': 1.0, 'splitAngle': (0.0, 0.0, 0.0, 0.0), 'leafShape': 'hex', 'splitBias': 0.0, 'pruneWidth': 0.4000000059604645, 'bevel': False, 'armAnim': False, 'attractUp': (0.0, 0.0, 0.5, 0.5), 'leafDownAngleV': 10.0, 'curveRes': (8, 5, 3, 1), 'curveV': (20.0, 50.0, 75.0, 0.0), 'horzLeaves': True, 'customShape': (0.5, 1.0, 0.30000001192092896, 0.5), 'af3': 4.0, 'taper': (1.0, 1.0, 1.0, 1.0), 'af1': 1.0, 'pruneBase': 0.30000001192092896, 'branches': (0, 50, 30, 10), 'loopFrames': 0, 'levels': 3, 'rotateV': (15.0, 0.0, 0.0, 0.0), 'leafScale': 0.17000000178813934, 'curveBack': (0.0, 0.0, 0.0, 0.0), 'splitAngleV': (0.0, 0.0, 0.0, 0.0), 'gust': 1.0, 'rotate': (99.5, 137.5, 137.5, 137.5), 'useOldDownAngle': False, 'useParentAngle': True, 'downAngle': (90.0, 110.0, 45.0, 45.0), 'downAngleV': (0.0, 80.0, 10.0, 10.0), 'baseSize_s': 0.25, 'leafType': '0', 'length': (1.0, 0.30000001192092896, 0.6000000238418579, 0.44999998807907104), 'boneStep': (1, 1, 1, 1), 'lengthV': (0.0, 0.0, 0.0, 0.0), 'wind': 1.0, 'previewArm': False, 'makeMesh': False, 'shapeS': '4', 'ratioPower': 1.100000023841858, 'leafAnim': False, 'scaleV': 3.0, 'baseSplits': 0, 'rMode': 'rotate', 'leafDist': '6', 'closeTip': False, 'leafRotateV': 10.0, 'leafScaleX': 1.0, 'splitByLen': True, 'gustF': 0.07500000298023224, 'leafRotate': 90.0, 'seed': 0, 'leafangle': 0.0} |
"""Module for interactive demos using IPython.
This module implements a few classes for running Python scripts interactively
in IPython for demonstrations. With very simple markup (a few tags in
comments), you can control points where the script stops executing and returns
control to IPython.
Provided classes
----------------
The classes are (see their docstrings for further details):
- Demo: pure python demos
- IPythonDemo: demos with input to be processed by IPython as if it had been
typed interactively (so magics work, as well as any other special syntax you
may have added via input prefilters).
- LineDemo: single-line version of the Demo class. These demos are executed
one line at a time, and require no markup.
- IPythonLineDemo: IPython version of the LineDemo class (the demo is
executed a line at a time, but processed via IPython).
- ClearMixin: mixin to make Demo classes with less visual clutter. It
declares an empty marquee and a pre_cmd that clears the screen before each
block (see Subclassing below).
- ClearDemo, ClearIPDemo: mixin-enabled versions of the Demo and IPythonDemo
classes.
Inheritance diagram:
.. inheritance-diagram:: IPython.lib.demo
:parts: 3
Subclassing
-----------
The classes here all include a few methods meant to make customization by
subclassing more convenient. Their docstrings below have some more details:
- marquee(): generates a marquee to provide visible on-screen markers at each
block start and end.
- pre_cmd(): run right before the execution of each block.
- post_cmd(): run right after the execution of each block. If the block
raises an exception, this is NOT called.
Operation
---------
The file is run in its own empty namespace (though you can pass it a string of
arguments as if in a command line environment, and it will see those as
sys.argv). But at each stop, the global IPython namespace is updated with the
current internal demo namespace, so you can work interactively with the data
accumulated so far.
By default, each block of code is printed (with syntax highlighting) before
executing it and you have to confirm execution. This is intended to show the
code to an audience first so you can discuss it, and only proceed with
execution once you agree. There are a few tags which allow you to modify this
behavior.
The supported tags are:
# <demo> stop
Defines block boundaries, the points where IPython stops execution of the
file and returns to the interactive prompt.
You can optionally mark the stop tag with extra dashes before and after the
word 'stop', to help visually distinguish the blocks in a text editor:
# <demo> --- stop ---
# <demo> silent
Make a block execute silently (and hence automatically). Typically used in
cases where you have some boilerplate or initialization code which you need
executed but do not want to be seen in the demo.
# <demo> auto
Make a block execute automatically, but still being printed. Useful for
simple code which does not warrant discussion, since it avoids the extra
manual confirmation.
# <demo> auto_all
This tag can _only_ be in the first block, and if given it overrides the
individual auto tags to make the whole demo fully automatic (no block asks
for confirmation). It can also be given at creation time (or the attribute
set later) to override what's in the file.
While _any_ python file can be run as a Demo instance, if there are no stop
tags the whole file will run in a single block (no different than calling
first %pycat and then %run). The minimal markup to make this useful is to
place a set of stop tags; the other tags are only there to let you fine-tune
the execution.
This is probably best explained with the simple example file below. You can
copy this into a file named ex_demo.py, and try running it via::
from IPython.demo import Demo
d = Demo('ex_demo.py')
d()
Each time you call the demo object, it runs the next block. The demo object
has a few useful methods for navigation, like again(), edit(), jump(), seek()
and back(). It can be reset for a new run via reset() or reloaded from disk
(in case you've edited the source) via reload(). See their docstrings below.
Note: To make this simpler to explore, a file called "demo-exercizer.py" has
been added to the "docs/examples/core" directory. Just cd to this directory in
an IPython session, and type::
%run demo-exercizer.py
and then follow the directions.
Example
-------
The following is a very simple example of a valid demo file.
::
#################### EXAMPLE DEMO <ex_demo.py> ###############################
'''A simple interactive demo to illustrate the use of IPython's Demo class.'''
print 'Hello, welcome to an interactive IPython demo.'
# The mark below defines a block boundary, which is a point where IPython will
# stop execution and return to the interactive prompt. The dashes are actually
# optional and used only as a visual aid to clearly separate blocks while
# editing the demo code.
# <demo> stop
x = 1
y = 2
# <demo> stop
# the mark below marks this block as silent
# <demo> silent
print 'This is a silent block, which gets executed but not printed.'
# <demo> stop
# <demo> auto
print 'This is an automatic block.'
print 'It is executed without asking for confirmation, but printed.'
z = x+y
print 'z=',z
# <demo> stop
# This is just another normal block.
print 'z is now:', z
print 'bye!'
################### END EXAMPLE DEMO <ex_demo.py> ############################
""" |
"""Matplotlib license for the deprecation module.
License agreement for matplotlib versions 1.3.0 and later
=========================================================
1. This LICENSE AGREEMENT is between the Matplotlib Development Team
("MDT"), and the Individual or Organization ("Licensee") accessing and
otherwise using matplotlib software in source or binary form and its
associated documentation.
2. Subject to the terms and conditions of this License Agreement, MDT
hereby grants Licensee a nonexclusive, royalty-free, world-wide license
to reproduce, analyze, test, perform and/or display publicly, prepare
derivative works, distribute, and otherwise use matplotlib
alone or in any derivative version, provided, however, that MDT's
License Agreement and MDT's notice of copyright, i.e., "Copyright (c)
2012- Matplotlib Development Team; All Rights Reserved" are retained in
matplotlib alone or in any derivative version prepared by
Licensee.
3. In the event Licensee prepares a derivative work that is based on or
incorporates matplotlib or any part thereof, and wants to
make the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to matplotlib .
4. MDT is making matplotlib available to Licensee on an "AS
IS" basis. MDT MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, MDT MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF MATPLOTLIB
WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
5. MDT SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF MATPLOTLIB
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR
LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING
MATPLOTLIB , OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF
THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between MDT and
Licensee. This License Agreement does not grant permission to use MDT
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using matplotlib ,
Licensee agrees to be bound by the terms and conditions of this License
Agreement.
License agreement for matplotlib versions prior to 1.3.0
========================================================
1. This LICENSE AGREEMENT is between NAME ("JDH"), and the
Individual or Organization ("Licensee") accessing and otherwise using
matplotlib software in source or binary form and its associated
documentation.
2. Subject to the terms and conditions of this License Agreement, JDH
hereby grants Licensee a nonexclusive, royalty-free, world-wide license
to reproduce, analyze, test, perform and/or display publicly, prepare
derivative works, distribute, and otherwise use matplotlib
alone or in any derivative version, provided, however, that JDH's
License Agreement and JDH's notice of copyright, i.e., "Copyright (c)
2002-2011 NAME; All Rights Reserved" are retained in
matplotlib alone or in any derivative version prepared by
Licensee.
3. In the event Licensee prepares a derivative work that is based on or
incorporates matplotlib or any part thereof, and wants to
make the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to matplotlib.
4. JDH is making matplotlib available to Licensee on an "AS
IS" basis. JDH MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, JDH MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF MATPLOTLIB
WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
5. JDH SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF MATPLOTLIB
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR
LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING
MATPLOTLIB , OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF
THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between JDH and
Licensee. This License Agreement does not grant permission to use JDH
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using matplotlib,
Licensee agrees to be bound by the terms and conditions of this License
Agreement.
""" |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# ***********************IMPORTANT NMAP LICENSE TERMS************************
# * *
# * The Nmap Security Scanner is (C) 1996-2013 Insecure.Com LLC. Nmap is *
# * also a registered trademark of Insecure.Com LLC. This program is free *
# * software; you may redistribute and/or modify it under the terms of the *
# * GNU General Public License as published by the Free Software *
# * Foundation; Version 2 ("GPL"), BUT ONLY WITH ALL OF THE CLARIFICATIONS *
# * AND EXCEPTIONS DESCRIBED HEREIN. This guarantees your right to use, *
# * modify, and redistribute this software under certain conditions. If *
# * you wish to embed Nmap technology into proprietary software, we sell *
# * alternative licenses (contact EMAIL Dozens of software *
# * vendors already license Nmap technology such as host discovery, port *
# * scanning, OS detection, version detection, and the Nmap Scripting *
# * Engine. *
# * *
# * Note that the GPL places important restrictions on "derivative works", *
# * yet it does not provide a detailed definition of that term. To avoid *
# * misunderstandings, we interpret that term as broadly as copyright law *
# * allows. For example, we consider an application to constitute a *
# * derivative work for the purpose of this license if it does any of the *
# * following with any software or content covered by this license *
# * ("Covered Software"): *
# * *
# * o Integrates source code from Covered Software. *
# * *
# * o Reads or includes copyrighted data files, such as Nmap's nmap-os-db *
# * or nmap-service-probes. *
# * *
# * o Is designed specifically to execute Covered Software and parse the *
# * results (as opposed to typical shell or execution-menu apps, which will *
# * execute anything you tell them to). *
# * *
# * o Includes Covered Software in a proprietary executable installer. The *
# * installers produced by InstallShield are an example of this. Including *
# * Nmap with other software in compressed or archival form does not *
# * trigger this provision, provided appropriate open source decompression *
# * or de-archiving software is widely available for no charge. For the *
# * purposes of this license, an installer is considered to include Covered *
# * Software even if it actually retrieves a copy of Covered Software from *
# * another source during runtime (such as by downloading it from the *
# * Internet). *
# * *
# * o Links (statically or dynamically) to a library which does any of the *
# * above. *
# * *
# * o Executes a helper program, module, or script to do any of the above. *
# * *
# * This list is not exclusive, but is meant to clarify our interpretation *
# * of derived works with some common examples. Other people may interpret *
# * the plain GPL differently, so we consider this a special exception to *
# * the GPL that we apply to Covered Software. Works which meet any of *
# * these conditions must conform to all of the terms of this license, *
# * particularly including the GPL Section 3 requirements of providing *
# * source code and allowing free redistribution of the work as a whole. *
# * *
# * As another special exception to the GPL terms, Insecure.Com LLC grants *
# * permission to link the code of this program with any version of the *
# * OpenSSL library which is distributed under a license identical to that *
# * listed in the included docs/licenses/OpenSSL.txt file, and distribute *
# * linked combinations including the two. *
# * *
# * Any redistribution of Covered Software, including any derived works, *
# * must obey and carry forward all of the terms of this license, including *
# * obeying all GPL rules and restrictions. For example, source code of *
# * the whole work must be provided and free redistribution must be *
# * allowed. All GPL references to "this License", are to be treated as *
# * including the terms and conditions of this license text as well. *
# * *
# * Because this license imposes special exceptions to the GPL, Covered *
# * Work may not be combined (even as part of a larger work) with plain GPL *
# * software. The terms, conditions, and exceptions of this license must *
# * be included as well. This license is incompatible with some other open *
# * source licenses as well. In some cases we can relicense portions of *
# * Nmap or grant special permissions to use it in other open source *
# * software. Please contact EMAIL with any such requests. *
# * Similarly, we don't incorporate incompatible open source software into *
# * Covered Software without special permission from the copyright holders. *
# * *
# * If you have any questions about the licensing restrictions on using *
# * Nmap in other works, are happy to help. As mentioned above, we also *
# * offer alternative license to integrate Nmap into proprietary *
# * applications and appliances. These contracts have been sold to dozens *
# * of software vendors, and generally include a perpetual license as well *
# * as providing for priority support and updates. They also fund the *
# * continued development of Nmap. Please email EMAIL for further *
# * information. *
# * *
# * If you have received a written license agreement or contract for *
# * Covered Software stating terms other than these, you may choose to use *
# * and redistribute Covered Software under those terms instead of these. *
# * *
# * Source is provided to this software because we believe users have a *
# * right to know exactly what a program is going to do before they run it. *
# * This also allows you to audit the software for security holes (none *
# * have been found so far). *
# * *
# * Source code also allows you to port Nmap to new platforms, fix bugs, *
# * and add new features. You are highly encouraged to send your changes *
# * to the EMAIL mailing list for possible incorporation into the *
# * main distribution. By sending these changes to Fyodor or one of the *
# * Insecure.Org development mailing lists, or checking them into the Nmap *
# * source code repository, it is understood (unless you specify otherwise) *
# * that you are offering the Nmap Project (Insecure.Com LLC) the *
# * unlimited, non-exclusive right to reuse, modify, and relicense the *
# * code. Nmap will always be available Open Source, but this is important *
# * because the inability to relicense code has caused devastating problems *
# * for other Free Software projects (such as KDE and NASM). We also *
# * occasionally relicense the code to third parties as discussed above. *
# * If you wish to specify special license conditions of your *
# * contributions, just say so when you send them. *
# * *
# * This program is distributed in the hope that it will be useful, but *
# * WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the Nmap *
# * license file for more details (it's in a COPYING file included with *
# * Nmap, and also available from https://svn.nmap.org/nmap/COPYING *
# * *
# ***************************************************************************/
|
"""
TestCmd.py: a testing framework for commands and scripts.
The TestCmd module provides a framework for portable automated testing
of executable commands and scripts (in any language, not just Python),
especially commands and scripts that require file system interaction.
In addition to running tests and evaluating conditions, the TestCmd
module manages and cleans up one or more temporary workspace
directories, and provides methods for creating files and directories in
those workspace directories from in-line data (here-documents), allowing
tests to be completely self-contained.
A TestCmd environment object is created via the usual invocation:
import TestCmd
test = TestCmd.TestCmd()
There are a bunch of keyword arguments available at instantiation:
test = TestCmd.TestCmd(description = 'string',
program = 'program_or_script_to_test',
interpreter = 'script_interpreter',
workdir = 'prefix',
subdir = 'subdir',
verbose = Boolean,
match = default_match_function,
diff = default_diff_function,
combine = Boolean)
There are a bunch of methods that let you do different things:
test.verbose_set(1)
test.description_set('string')
test.program_set('program_or_script_to_test')
test.interpreter_set('script_interpreter')
test.interpreter_set(['script_interpreter', 'arg'])
test.workdir_set('prefix')
test.workdir_set('')
test.workpath('file')
test.workpath('subdir', 'file')
test.subdir('subdir', ...)
test.rmdir('subdir', ...)
test.write('file', "contents\n")
test.write(['subdir', 'file'], "contents\n")
test.read('file')
test.read(['subdir', 'file'])
test.read('file', mode)
test.read(['subdir', 'file'], mode)
test.writable('dir', 1)
test.writable('dir', None)
test.preserve(condition, ...)
test.cleanup(condition)
test.command_args(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program')
test.run(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program',
chdir = 'directory_to_chdir_to',
         stdin = 'input to feed to the program\n',
         universal_newlines = True)
p = test.start(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program',
universal_newlines = None)
test.finish(self, p)
test.pass_test()
test.pass_test(condition)
test.pass_test(condition, function)
test.fail_test()
test.fail_test(condition)
test.fail_test(condition, function)
test.fail_test(condition, function, skip)
test.no_result()
test.no_result(condition)
test.no_result(condition, function)
test.no_result(condition, function, skip)
test.stdout()
test.stdout(run)
test.stderr()
test.stderr(run)
test.symlink(target, link)
test.banner(string)
test.banner(string, width)
test.diff(actual, expected)
test.match(actual, expected)
test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n")
test.match_exact(["actual 1\n", "actual 2\n"],
["expected 1\n", "expected 2\n"])
test.match_re("actual 1\nactual 2\n", regex_string)
test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes)
test.match_re_dotall("actual 1\nactual 2\n", regex_string)
test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes)
test.tempdir()
test.tempdir('temporary-directory')
test.sleep()
test.sleep(seconds)
test.where_is('foo')
test.where_is('foo', 'PATH1:PATH2')
test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
test.unlink('file')
test.unlink('subdir', 'file')
The TestCmd module provides pass_test(), fail_test(), and no_result()
unbound functions that report test results for use with the Aegis change
management system. These methods terminate the test immediately,
reporting PASSED, FAILED, or NO RESULT respectively, and exiting with
status 0 (success), 1 or 2 respectively. This allows for a distinction
between an actual failed test and a test that could not be properly
evaluated because of an external condition (such as a full file system
or incorrect permissions).
import TestCmd
TestCmd.pass_test()
TestCmd.pass_test(condition)
TestCmd.pass_test(condition, function)
TestCmd.fail_test()
TestCmd.fail_test(condition)
TestCmd.fail_test(condition, function)
TestCmd.fail_test(condition, function, skip)
TestCmd.no_result()
TestCmd.no_result(condition)
TestCmd.no_result(condition, function)
TestCmd.no_result(condition, function, skip)
The TestCmd module also provides unbound functions that handle matching
in the same way as the match_*() methods described above.
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_exact)
test = TestCmd.TestCmd(match = TestCmd.match_re)
test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)
The TestCmd module provides unbound functions that can be used for the
"diff" argument to TestCmd.TestCmd instantiation:
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_re,
diff = TestCmd.diff_re)
test = TestCmd.TestCmd(diff = TestCmd.simple_diff)
The "diff" argument can also be used with standard difflib functions:
import difflib
test = TestCmd.TestCmd(diff = difflib.context_diff)
test = TestCmd.TestCmd(diff = difflib.unified_diff)
Lastly, the where_is() method also exists in an unbound function
version.
import TestCmd
TestCmd.where_is('foo')
TestCmd.where_is('foo', 'PATH1:PATH2')
TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
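Putting several of the pieces above together, a minimal self-contained
sketch might look like the following (illustrative only; it assumes an
'echo' executable is available on the PATH):
    import TestCmd
    test = TestCmd.TestCmd(program = 'echo', workdir = '')
    test.run(arguments = 'hello')
    test.fail_test(test.stdout() != 'hello\n')
    test.pass_test()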
""" |