"""Generic socket server classes.
This module tries to capture the various aspects of defining a server:
For socket-based servers:
- address family:
- AF_INET{,6}: IP (Internet Protocol) sockets (default)
- AF_UNIX: Unix domain sockets
- others, e.g. AF_DECNET are conceivable (see <socket.h>)
- socket type:
- SOCK_STREAM (reliable stream, e.g. TCP)
- SOCK_DGRAM (datagrams, e.g. UDP)
For request-based servers (including socket-based):
- client address verification before further looking at the request
(This is actually a hook for any processing that needs to look
at the request before anything else, e.g. logging)
- how to handle multiple requests:
- synchronous (one request is handled at a time)
- forking (each request is handled by a new process)
- threading (each request is handled by a new thread)
The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server. This is bad class design, but
saves some typing. (There's also the issue that a deep class hierarchy
slows down method lookups.)
There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:
+------------+
| BaseServer |
+------------+
|
v
+-----------+ +------------------+
| TCPServer |------->| UnixStreamServer |
+-----------+ +------------------+
|
v
+-----------+ +--------------------+
| UDPServer |------->| UnixDatagramServer |
+-----------+ +--------------------+
Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.
Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes. For
instance, a threading UDP server class is created as follows:
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.
To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method. You can then run
various versions of the service by combining one of the server classes
with your request handler class.
The request handler class must be different for datagram or stream
services. This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
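For example, a minimal sketch (assuming the Python 3 module name
socketserver; the handler name and port are illustrative only):

    import socketserver

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # rfile/wfile wrap the connection as file-like objects
            data = self.rfile.readline()
            self.wfile.write(data)

    server = socketserver.ThreadingTCPServer(('', 8000), EchoHandler)
    server.serve_forever()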
Of course, you still have to use your head!
For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child). In this case,
you can use a threading server, but you will probably have to use
locks to keep two requests that arrive nearly simultaneously from
applying conflicting changes to the server state.
On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested. Here a threading or forking
server is appropriate.
In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data. This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.
Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request). This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
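A minimal sketch of that select()-based approach, using plain sockets
rather than the classes in this module (the port is arbitrary):

    import select, socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('', 8000))
    server.listen(5)
    sockets = [server]                  # explicit table of connections
    while True:
        readable, _, _ = select.select(sockets, [], [])
        for s in readable:
            if s is server:
                conn, addr = s.accept()
                sockets.append(conn)
            else:
                data = s.recv(1024)
                if data:
                    s.sendall(data)     # placeholder action: echo back
                else:
                    sockets.remove(s)   # client closed the connection
                    s.close()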
Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
and encryption schemes
- Standard framework for select-based multiplexing
XXX Open problems:
- What to do with out-of-band data?
BaseServer:
- split generic "request" functionality out into BaseServer class.
Copyright (C) 2000 NAME <EMAIL>
example: read entries from a SQL database (requires overriding
get_request() to return a table entry from the database).
entry is processed by a RequestHandlerClass.
""" |
"""
>>> from django.core.paginator import Paginator
>>> from pagination.templatetags.pagination_tags import paginate
>>> from django.template import Template, Context
>>> p = Paginator(range(15), 2)
>>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages']
[1, 2, 3, 4, 5, 6, 7, 8]
>>> p = Paginator(range(17), 2)
>>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages']
[1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> p = Paginator(range(19), 2)
>>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages']
[1, 2, 3, 4, None, 7, 8, 9, 10]
>>> p = Paginator(range(21), 2)
>>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages']
[1, 2, 3, 4, None, 8, 9, 10, 11]
# Testing orphans
>>> p = Paginator(range(5), 2, 1)
>>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages']
[1, 2]
>>> p = Paginator(range(21), 2, 1)
>>> paginate({'paginator': p, 'page_obj': p.page(1)})['pages']
[1, 2, 3, 4, None, 7, 8, 9, 10]
>>> t = Template("{% load pagination_tags %}{% autopaginate var 2 %}{% paginate %}")
>>> from django.http import HttpRequest as DjangoHttpRequest
>>> class HttpRequest(DjangoHttpRequest):
... page = 1
>>> t.render(Context({'var': range(21), 'request': HttpRequest()}))
u'\\n\\n<div class="pagination">...
>>>
>>> t = Template("{% load pagination_tags %}{% autopaginate var %}{% paginate %}")
>>> t.render(Context({'var': range(21), 'request': HttpRequest()}))
u'\\n\\n<div class="pagination">...
>>> t = Template("{% load pagination_tags %}{% autopaginate var 20 %}{% paginate %}")
>>> t.render(Context({'var': range(21), 'request': HttpRequest()}))
u'\\n\\n<div class="pagination">...
>>> t = Template("{% load pagination_tags %}{% autopaginate var by %}{% paginate %}")
>>> t.render(Context({'var': range(21), 'by': 20, 'request': HttpRequest()}))
u'\\n\\n<div class="pagination">...
>>> t = Template("{% load pagination_tags %}{% autopaginate var by as foo %}{{ foo }}")
>>> t.render(Context({'var': range(21), 'by': 20, 'request': HttpRequest()}))
u'[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]'
>>>
# Testing InfinitePaginator
>>> from paginator import InfinitePaginator
>>> InfinitePaginator
<class 'pagination.paginator.InfinitePaginator'>
>>> p = InfinitePaginator(range(20), 2, link_template='/bacon/page/%d')
>>> p.validate_number(2)
2
>>> p.orphans
0
>>> p3 = p.page(3)
>>> p3
<Page 3>
>>> p3.end_index()
6
>>> p3.has_next()
True
>>> p3.has_previous()
True
>>> p.page(10).has_next()
False
>>> p.page(1).has_previous()
False
>>> p3.next_link()
'/bacon/page/4'
>>> p3.previous_link()
'/bacon/page/2'
# Testing FinitePaginator
>>> from paginator import FinitePaginator
>>> FinitePaginator
<class 'pagination.paginator.FinitePaginator'>
>>> p = FinitePaginator(range(20), 2, offset=10, link_template='/bacon/page/%d')
>>> p.validate_number(2)
2
>>> p.orphans
0
>>> p3 = p.page(3)
>>> p3
<Page 3>
>>> p3.start_index()
10
>>> p3.end_index()
6
>>> p3.has_next()
True
>>> p3.has_previous()
True
>>> p3.next_link()
'/bacon/page/4'
>>> p3.previous_link()
'/bacon/page/2'
>>> p = FinitePaginator(range(20), 20, offset=10, link_template='/bacon/page/%d')
>>> p2 = p.page(2)
>>> p2
<Page 2>
>>> p2.has_next()
False
>>> p3.has_previous()
True
>>> p2.next_link()
>>> p2.previous_link()
'/bacon/page/1'
>>> from pagination.middleware import PaginationMiddleware
>>> from django.core.handlers.wsgi import WSGIRequest
>>> from StringIO import StringIO
>>> middleware = PaginationMiddleware()
>>> request = WSGIRequest({'REQUEST_METHOD': 'POST', 'CONTENT_TYPE': 'multipart', 'wsgi.input': StringIO()})
>>> middleware.process_request(request)
>>> request.upload_handlers.append('asdf')
""" |
"""ctypes-based OpenGL wrapper for Python
This is the PyOpenGL 3.x tree; it attempts to provide
a largely compatible API for code written with the
PyOpenGL 2.x series using the ctypes foreign function
interface system.
Configuration Variables:
There are a few configuration variables in this top-level
module. Applications should be the only code that tweaks
these variables, mid-level libraries should not take it
upon themselves to disable/enable features at this level.
The implication there is that your library code should be
able to work with any of the valid configurations available
with these sets of flags.
Further, once any entry point has been loaded, the variables
can no longer be updated. The OpenGL._configflags module
imports the variables from this location, and once that
import occurs the flags should no longer be changed.
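For example, a minimal sketch of tweaking the flags at application
level (the particular values chosen here are illustrative):

    import OpenGL
    # must run before any OpenGL.* submodule is imported, since the
    # flags are frozen once an entry point has been loaded
    OpenGL.ERROR_CHECKING = False
    OpenGL.ERROR_LOGGING = False
    from OpenGL.GL import *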
ERROR_CHECKING -- if set to a False value before
importing any OpenGL.* libraries will completely
disable error-checking. This can dramatically
improve performance, but makes debugging far
harder.
This is intended to be turned off *only* in a
production environment where you *know* that
your code is entirely free of situations where you
use exception-handling to handle error conditions,
i.e. where you are explicitly checking for errors
everywhere they can occur in your code.
Default: True
ERROR_LOGGING -- If True, then wrap array-handler
functions with error-logging operations so that all exceptions
will be reported to log objects in OpenGL.logs. Note that
this means you will get lots of error logging whenever you
have code that tests by trying something and catching an
error; this is intended to be turned on only during
development so that you can see why something is failing.
Errors are normally logged to the OpenGL.errors logger.
Only triggers if ERROR_CHECKING is True.
Default: False
ERROR_ON_COPY -- if set to a True value before
importing the numpy/lists support modules, will
cause array operations to raise
OpenGL.error.CopyError if the operation
would cause a data-copy in order to make the
passed data-type match the target data-type.
This effectively disables all list/tuple array
support, as they are inherently copy-based.
This feature allows for optimisation of your
application. It should only be enabled during
testing stages to prevent raising errors on
recoverable conditions at run-time.
Default: False
CONTEXT_CHECKING -- if set to True, PyOpenGL will wrap
*every* GL and GLU call with a check to see if there
is a valid context. If there is no valid context,
PyOpenGL will raise OpenGL.error.NoContext. This is an
*extremely* slow check and is not enabled by default,
intended to be enabled in order to track down (wrong)
code that uses GL/GLU entry points before the context
has been initialized (something later Linux GLs are
very picky about).
Default: False
STORE_POINTERS -- if set to True, PyOpenGL array operations
will attempt to store references to pointers which are
being passed in order to prevent memory-access failures
if the pointed-to-object goes out of scope. This
behaviour is primarily intended to allow temporary arrays
to be created without causing memory errors, thus it is
trading off performance for safety.
To use this flag effectively, you will want to first set
ERROR_ON_COPY to True and eliminate all cases where you
are copying arrays. Copied arrays *will* segfault your
application deep within the GL if you disable this feature!
Once you have eliminated all copying of arrays in your
application, you will further need to be sure that all
arrays which are passed to the GL are stored for at least
the time period for which they are active in the GL. That
is, you must be sure that your array objects live at least
until they are no longer bound in the GL. This is something
you need to confirm by thinking about your application's
structure.
When you are sure your arrays won't cause seg-faults, you
can set STORE_POINTERS=False in your application and enjoy
a (slight) speed up.
Note: this flag is *only* observed when ERROR_ON_COPY == True,
as a safety measure to prevent pointless segfaults
Default: True
WARN_ON_FORMAT_UNAVAILABLE -- If True, generates
logging-module warn-level events when a FormatHandler
plugin is not loadable (with traceback).
Default: False
FULL_LOGGING -- If True, then wrap functions with
logging operations which reports each call along with its
arguments to the OpenGL.calltrace logger at the INFO
level. This is *extremely* slow. You should *not* enable
this in production code!
You will need to have a logging configuration (e.g.
logging.basicConfig()
) call in your top-level script to see the results of the
logging.
Default: False
ALLOW_NUMPY_SCALARS -- if True, we will wrap
all GLint/GLfloat calls conversions with wrappers
that allow for passing numpy scalar values.
Note that this is experimental, *not* reliable,
and very slow!
Note that byte/char types are not wrapped.
Default: False
UNSIGNED_BYTE_IMAGES_AS_STRING -- if True, we will return
GL_UNSIGNED_BYTE image-data as strings, instead of arrays
for glReadPixels and glGetTexImage
Default: True
FORWARD_COMPATIBLE_ONLY -- only include OpenGL 3.1 compatible
entry points. Note that this will generally break most
PyOpenGL code that hasn't been explicitly made "legacy free"
via a significant rewrite.
Default: False
SIZE_1_ARRAY_UNPACK -- if True, unpack size-1 arrays to be
scalar values, as done in PyOpenGL 1.5 -> 3.0.0, that is,
if a glGenLists( 1 ) is done, return a uint rather than
an array of uints.
Default: True
USE_ACCELERATE -- if True, attempt to use the OpenGL_accelerate
package to provide Cython-coded accelerators for core wrapping
operations.
Default: True
MODULE_ANNOTATIONS -- if True, attempt to annotate alternates() and
constants to track in which module they are defined (only useful
for the documentation-generation passes, really).
Default: False
""" |
"""
A multi-dimensional ``Vector`` class, take 9: operator ``@``
WARNING: This example requires Python 3.5 or later.
A ``Vector`` is built from an iterable of numbers::
>>> Vector([3.1, 4.2])
Vector([3.1, 4.2])
>>> Vector((3, 4, 5))
Vector([3.0, 4.0, 5.0])
>>> Vector(range(10))
Vector([0.0, 1.0, 2.0, 3.0, 4.0, ...])
Tests with 2-dimensions (same results as ``vector2d_v1.py``)::
>>> v1 = Vector([3, 4])
>>> x, y = v1
>>> x, y
(3.0, 4.0)
>>> v1
Vector([3.0, 4.0])
>>> v1_clone = eval(repr(v1))
>>> v1 == v1_clone
True
>>> print(v1)
(3.0, 4.0)
>>> octets = bytes(v1)
>>> octets
b'd\\x00\\x00\\x00\\x00\\x00\\x00\\x08@\\x00\\x00\\x00\\x00\\x00\\x00\\x10@'
>>> abs(v1)
5.0
>>> bool(v1), bool(Vector([0, 0]))
(True, False)
Test of ``.frombytes()`` class method::
>>> v1_clone = Vector.frombytes(bytes(v1))
>>> v1_clone
Vector([3.0, 4.0])
>>> v1 == v1_clone
True
Tests with 3-dimensions::
>>> v1 = Vector([3, 4, 5])
>>> x, y, z = v1
>>> x, y, z
(3.0, 4.0, 5.0)
>>> v1
Vector([3.0, 4.0, 5.0])
>>> v1_clone = eval(repr(v1))
>>> v1 == v1_clone
True
>>> print(v1)
(3.0, 4.0, 5.0)
>>> abs(v1) # doctest:+ELLIPSIS
7.071067811...
>>> bool(v1), bool(Vector([0, 0, 0]))
(True, False)
Tests with many dimensions::
>>> v7 = Vector(range(7))
>>> v7
Vector([0.0, 1.0, 2.0, 3.0, 4.0, ...])
>>> abs(v7) # doctest:+ELLIPSIS
9.53939201...
Test of ``.__bytes__`` and ``.frombytes()`` methods::
>>> v1 = Vector([3, 4, 5])
>>> v1_clone = Vector.frombytes(bytes(v1))
>>> v1_clone
Vector([3.0, 4.0, 5.0])
>>> v1 == v1_clone
True
Tests of sequence behavior::
>>> v1 = Vector([3, 4, 5])
>>> len(v1)
3
>>> v1[0], v1[len(v1)-1], v1[-1]
(3.0, 5.0, 5.0)
Test of slicing::
>>> v7 = Vector(range(7))
>>> v7[-1]
6.0
>>> v7[1:4]
Vector([1.0, 2.0, 3.0])
>>> v7[-1:]
Vector([6.0])
>>> v7[1,2]
Traceback (most recent call last):
...
TypeError: Vector indices must be integers
Tests of dynamic attribute access::
>>> v7 = Vector(range(10))
>>> v7.x
0.0
>>> v7.y, v7.z, v7.t
(1.0, 2.0, 3.0)
Dynamic attribute lookup failures::
>>> v7.k
Traceback (most recent call last):
...
AttributeError: 'Vector' object has no attribute 'k'
>>> v3 = Vector(range(3))
>>> v3.t
Traceback (most recent call last):
...
AttributeError: 'Vector' object has no attribute 't'
>>> v3.spam
Traceback (most recent call last):
...
AttributeError: 'Vector' object has no attribute 'spam'
Tests of hashing::
>>> v1 = Vector([3, 4])
>>> v2 = Vector([3.1, 4.2])
>>> v3 = Vector([3, 4, 5])
>>> v6 = Vector(range(6))
>>> hash(v1), hash(v3), hash(v6)
(7, 2, 1)
Most hash values of non-integers vary from a 32-bit to 64-bit Python build::
>>> import sys
>>> hash(v2) == (384307168202284039 if sys.maxsize > 2**32 else 357915986)
True
Tests of ``format()`` with Cartesian coordinates in 2D::
>>> v1 = Vector([3, 4])
>>> format(v1)
'(3.0, 4.0)'
>>> format(v1, '.2f')
'(3.00, 4.00)'
>>> format(v1, '.3e')
'(3.000e+00, 4.000e+00)'
Tests of ``format()`` with Cartesian coordinates in 3D and 7D::
>>> v3 = Vector([3, 4, 5])
>>> format(v3)
'(3.0, 4.0, 5.0)'
>>> format(Vector(range(7)))
'(0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0)'
Tests of ``format()`` with spherical coordinates in 2D, 3D and 4D::
>>> format(Vector([1, 1]), 'h') # doctest:+ELLIPSIS
'<1.414213..., 0.785398...>'
>>> format(Vector([1, 1]), '.3eh')
'<1.414e+00, 7.854e-01>'
>>> format(Vector([1, 1]), '0.5fh')
'<1.41421, 0.78540>'
>>> format(Vector([1, 1, 1]), 'h') # doctest:+ELLIPSIS
'<1.73205..., 0.95531..., 0.78539...>'
>>> format(Vector([2, 2, 2]), '.3eh')
'<3.464e+00, 9.553e-01, 7.854e-01>'
>>> format(Vector([0, 0, 0]), '0.5fh')
'<0.00000, 0.00000, 0.00000>'
>>> format(Vector([-1, -1, -1, -1]), 'h') # doctest:+ELLIPSIS
'<2.0, 2.09439..., 2.18627..., 3.92699...>'
>>> format(Vector([2, 2, 2, 2]), '.3eh')
'<4.000e+00, 1.047e+00, 9.553e-01, 7.854e-01>'
>>> format(Vector([0, 1, 0, 0]), '0.5fh')
'<1.00000, 1.57080, 0.00000, 0.00000>'
Basic tests of operator ``+``::
>>> v1 = Vector([3, 4, 5])
>>> v2 = Vector([6, 7, 8])
>>> v1 + v2
Vector([9.0, 11.0, 13.0])
>>> v1 + v2 == Vector([3+6, 4+7, 5+8])
True
>>> v3 = Vector([1, 2])
>>> v1 + v3 # short vectors are filled with 0.0 on addition
Vector([4.0, 6.0, 5.0])
Tests of ``+`` with mixed types::
>>> v1 + (10, 20, 30)
Vector([13.0, 24.0, 35.0])
>>> from vector2d_v3 import Vector2d
>>> v2d = Vector2d(1, 2)
>>> v1 + v2d
Vector([4.0, 6.0, 5.0])
Tests of ``+`` with mixed types, swapped operands::
>>> (10, 20, 30) + v1
Vector([13.0, 24.0, 35.0])
>>> from vector2d_v3 import Vector2d
>>> v2d = Vector2d(1, 2)
>>> v2d + v1
Vector([4.0, 6.0, 5.0])
Tests of ``+`` with an unsuitable operand::
>>> v1 + 1
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for +: 'Vector' and 'int'
>>> v1 + 'ABC'
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for +: 'Vector' and 'str'
Basic tests of operator ``*``::
>>> v1 = Vector([1, 2, 3])
>>> v1 * 10
Vector([10.0, 20.0, 30.0])
>>> 10 * v1
Vector([10.0, 20.0, 30.0])
Tests of ``*`` with unusual but valid operands::
>>> v1 * True
Vector([1.0, 2.0, 3.0])
>>> from fractions import Fraction
>>> v1 * Fraction(1, 3) # doctest:+ELLIPSIS
Vector([0.3333..., 0.6666..., 1.0])
Tests of ``*`` with unsuitable operands::
>>> v1 * (1, 2)
Traceback (most recent call last):
...
TypeError: can't multiply sequence by non-int of type 'Vector'
Tests of operator `==`::
>>> va = Vector(range(1, 4))
>>> vb = Vector([1.0, 2.0, 3.0])
>>> va == vb
True
>>> vc = Vector([1, 2])
>>> from vector2d_v3 import Vector2d
>>> v2d = Vector2d(1, 2)
>>> vc == v2d
True
>>> va == (1, 2, 3)
False
Tests of operator `!=`::
>>> va != vb
False
>>> vc != v2d
False
>>> va != (1, 2, 3)
True
Tests for operator `@` (Python >= 3.5), computing the dot product::
>>> va = Vector([1, 2, 3])
>>> vz = Vector([5, 6, 7])
>>> va @ vz == 38.0 # 1*5 + 2*6 + 3*7
True
>>> [10, 20, 30] @ vz
380.0
>>> va @ 3
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for @: 'Vector' and 'int'
""" |
"""
=====================================
Structured Arrays (aka Record Arrays)
=====================================
Introduction
============
Numpy provides powerful capabilities to create arrays of structs or records.
These arrays permit one to manipulate the data by the structs or by fields of
the struct. A simple example will show what is meant. ::
>>> x = np.zeros((2,),dtype=('i4,f4,a10'))
>>> x[:] = [(1,2.,'Hello'),(2,3.,"World")]
>>> x
array([(1, 2.0, 'Hello'), (2, 3.0, 'World')],
dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
Here we have created a one-dimensional array of length 2. Each element of
this array is a record that contains three items, a 32-bit integer, a 32-bit
float, and a string of length 10 or less. If we index this array at the second
position we get the second record: ::
>>> x[1]
(2, 3.0, 'World')
Conveniently, one can access any field of the array by indexing using the
string that names that field. In this case the fields have received the
default names 'f0', 'f1' and 'f2'. ::
>>> y = x['f1']
>>> y
array([ 2., 3.], dtype=float32)
>>> y[:] = 2*y
>>> y
array([ 4., 6.], dtype=float32)
>>> x
array([(1, 4.0, 'Hello'), (2, 6.0, 'World')],
dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
In these examples, y is a simple float array consisting of the 2nd field
in the record. But, rather than being a copy of the data in the structured
array, it is a view, i.e., it shares exactly the same memory locations.
Thus, when we updated this array by doubling its values, the structured
array shows the corresponding values as doubled as well. Likewise, if one
changes the record, the field view also changes: ::
>>> x[1] = (-1,-1.,"Master")
>>> x
array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')],
dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
>>> y
array([ 4., -1.], dtype=float32)
Defining Structured Arrays
==========================
One defines a structured array through the dtype object. There are
**several** alternative ways to define the fields of a record. Some of
these variants provide backward compatibility with Numeric, numarray, or
another module, and should not be used except for such purposes. These
will be so noted. One specifies record structure in
one of four alternative ways, using an argument (as supplied to a dtype
function keyword or a dtype object constructor itself). This
argument must be one of the following: 1) string, 2) tuple, 3) list, or
4) dictionary. Each of these is briefly described below.
1) String argument (as used in the above examples).
In this case, the constructor expects a comma-separated list of type
specifiers, optionally with extra shape information.
The type specifiers can take 4 different forms: ::
a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f4, f8, c8, c16, a<n>
(representing bytes, ints, unsigned ints, floats, complex and
fixed length strings of specified byte lengths)
b) int8,...,uint8,...,float32, float64, complex64, complex128
(this time with bit sizes)
c) older Numeric/numarray type specifications (e.g. Float32).
Don't use these in new code!
d) Single character type specifiers (e.g H for unsigned short ints).
Avoid using these unless you must. Details can be found in the
Numpy book
These different styles can be mixed within the same string (but why would you
want to do that?). Furthermore, each type specifier can be prefixed
with a repetition number, or a shape. In these cases an array
element is created, i.e., an array within a record. That array
is still referred to as a single field. An example: ::
>>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64')
>>> x
array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])],
dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))])
Using strings to define the record structure precludes naming the
fields in the original definition. The names can be changed later,
however, as shown further below.
2) Tuple argument: The only relevant tuple case that applies to record
structures is when a structure is mapped to an existing data type. This
is done by pairing in a tuple, the existing data type with a matching
dtype definition (using any of the variants being described here). As
an example (using a definition using a list, so see 3) for further
details): ::
>>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')]))
>>> x
array([0, 0, 0])
>>> x['r']
array([0, 0, 0], dtype=uint8)
In this case, an array is produced that looks and acts like a simple int32 array,
but also has definitions for fields that use only one byte of the int32 (a bit
like Fortran equivalencing).
3) List argument: In this case the record structure is defined with a list of
tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field
('' is permitted), 2) the type of the field, and 3) the shape (optional).
For example: ::
>>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
>>> x
array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])],
dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))])
4) Dictionary argument: two different forms are permitted. The first consists
of a dictionary with two required keys ('names' and 'formats'), each having an
equal sized list of values. The format list contains any type/shape specifier
allowed in other contexts. The names must be strings. There are two optional
keys: 'offsets' and 'titles'. Each must be a correspondingly matching list to
the required two where offsets contain integer offsets for each field, and
titles are objects containing metadata for each field (these do not have
to be strings), where the value of None is permitted. As an example: ::
>>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[('col1', '>i4'), ('col2', '>f4')])
The other dictionary form permitted is a dictionary of name keys with tuple
values specifying type, offset, and an optional title. ::
>>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')])
Accessing and modifying field names
===================================
The field names are an attribute of the dtype object defining the record structure.
For the last example: ::
>>> x.dtype.names
('col1', 'col2')
>>> x.dtype.names = ('x', 'y')
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')])
>>> x.dtype.names = ('x', 'y', 'z') # wrong number of names
<type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2
Accessing field titles
====================================
The field titles provide a standard place to put associated info for fields.
They do not have to be strings. ::
>>> x.dtype.fields['x'][2]
'title 1'
""" |
"""
====================================
Linear algebra (:mod:`scipy.linalg`)
====================================
.. currentmodule:: scipy.linalg
Linear algebra functions.
.. seealso::
`numpy.linalg` for more linear algebra functions. Note that
although `scipy.linalg` imports most of them, identically named
functions from `scipy.linalg` may offer more or slightly differing
functionality.
Basics
======
.. autosummary::
:toctree: generated/
inv - Find the inverse of a square matrix
solve - Solve a linear system of equations
solve_banded - Solve a banded linear system
solveh_banded - Solve a Hermitian or symmetric banded system
solve_circulant - Solve a circulant system
solve_triangular - Solve a triangular linear system
solve_toeplitz - Solve a Toeplitz linear system
det - Find the determinant of a square matrix
norm - Matrix and vector norm
lstsq - Solve a linear least-squares problem
pinv - Pseudo-inverse (Moore-Penrose) using lstsq
pinv2 - Pseudo-inverse using svd
pinvh - Pseudo-inverse of hermitian matrix
kron - Kronecker product of two arrays
tril - Construct a lower-triangular matrix from a given matrix
triu - Construct an upper-triangular matrix from a given matrix
orthogonal_procrustes - Solve an orthogonal Procrustes problem
LinAlgError
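A small usage sketch of the basic routines above (values illustrative)::

    import numpy as np
    from scipy.linalg import solve, det, inv

    A = np.array([[3., 1.], [1., 2.]])
    b = np.array([9., 8.])
    x = solve(A, b)                     # solve A x = b
    assert np.allclose(np.dot(A, x), b)
    print(det(A), inv(A))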
Eigenvalue Problems
===================
.. autosummary::
:toctree: generated/
eig - Find the eigenvalues and eigenvectors of a square matrix
eigvals - Find just the eigenvalues of a square matrix
eigh - Find the eigenvalues and eigenvectors of a Hermitian or symmetric matrix
eigvalsh - Find just the eigenvalues of a Hermitian or symmetric matrix
eig_banded - Find the eigenvalues and eigenvectors of a banded matrix
eigvals_banded - Find just the eigenvalues of a banded matrix
Decompositions
==============
.. autosummary::
:toctree: generated/
lu - LU decomposition of a matrix
lu_factor - LU decomposition returning unordered matrix and pivots
lu_solve - Solve Ax=b using back substitution with output of lu_factor
svd - Singular value decomposition of a matrix
svdvals - Singular values of a matrix
diagsvd - Construct matrix of singular values from output of svd
orth - Construct orthonormal basis for the range of A using svd
cholesky - Cholesky decomposition of a matrix
cholesky_banded - Cholesky decomp. of a sym. or Hermitian banded matrix
cho_factor - Cholesky decomposition for use in solving a linear system
cho_solve - Solve previously factored linear system
cho_solve_banded - Solve previously factored banded linear system
polar - Compute the polar decomposition.
qr - QR decomposition of a matrix
qr_multiply - QR decomposition and multiplication by Q
qr_update - Rank k QR update
qr_delete - QR downdate on row or column deletion
qr_insert - QR update on row or column insertion
rq - RQ decomposition of a matrix
qz - QZ decomposition of a pair of matrices
ordqz - QZ decomposition of a pair of matrices with reordering
schur - Schur decomposition of a matrix
rsf2csf - Real to complex Schur form
hessenberg - Hessenberg form of a matrix
.. seealso::
`scipy.linalg.interpolative` -- Interpolative matrix decompositions
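For instance, the lu_factor/lu_solve pairing listed above lets a single
factorization serve several right-hand sides (a sketch)::

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[3., 1.], [1., 2.]])
    lu, piv = lu_factor(A)              # factor A once
    x1 = lu_solve((lu, piv), np.array([9., 8.]))
    x2 = lu_solve((lu, piv), np.array([1., 0.]))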
Matrix Functions
================
.. autosummary::
:toctree: generated/
expm - Matrix exponential
logm - Matrix logarithm
cosm - Matrix cosine
sinm - Matrix sine
tanm - Matrix tangent
coshm - Matrix hyperbolic cosine
sinhm - Matrix hyperbolic sine
tanhm - Matrix hyperbolic tangent
signm - Matrix sign
sqrtm - Matrix square root
funm - Evaluating an arbitrary matrix function
expm_frechet - Frechet derivative of the matrix exponential
expm_cond - Relative condition number of expm in the Frobenius norm
fractional_matrix_power - Fractional matrix power
Matrix Equation Solvers
=======================
.. autosummary::
:toctree: generated/
solve_sylvester - Solve the Sylvester matrix equation
solve_continuous_are - Solve the continuous-time algebraic Riccati equation
solve_discrete_are - Solve the discrete-time algebraic Riccati equation
solve_discrete_lyapunov - Solve the discrete-time Lyapunov equation
solve_lyapunov - Solve the (continuous-time) Lyapunov equation
Special Matrices
================
.. autosummary::
:toctree: generated/
block_diag - Construct a block diagonal matrix from submatrices
circulant - Circulant matrix
companion - Companion matrix
dft - Discrete Fourier transform matrix
hadamard - Hadamard matrix of order 2**n
hankel - Hankel matrix
helmert - Helmert matrix
hilbert - Hilbert matrix
invhilbert - Inverse Hilbert matrix
leslie - Leslie matrix
pascal - Pascal matrix
invpascal - Inverse Pascal matrix
toeplitz - Toeplitz matrix
tri - Construct a matrix filled with ones at and below a given diagonal
Low-level routines
==================
.. autosummary::
:toctree: generated/
get_blas_funcs
get_lapack_funcs
find_best_blas_type
.. seealso::
`scipy.linalg.blas` -- Low-level BLAS functions
`scipy.linalg.lapack` -- Low-level LAPACK functions
`scipy.linalg.cython_blas` -- Low-level BLAS functions for Cython
`scipy.linalg.cython_lapack` -- Low-level LAPACK functions for Cython
""" |
# FIXME: to be fixed... does not work as of today
# import unittest
# from datetime import datetime, timedelta
# from DIRAC.ResourceStatusSystem.Utilities.mock import Mock
# from DIRAC.Core.LCG.GOCDBClient import GOCDBClient
# from DIRAC.Core.LCG.SLSClient import *
# from DIRAC.Core.LCG.SAMResultsClient import *
# from DIRAC.Core.LCG.GGUSTicketsClient import GGUSTicketsClient
# #from DIRAC.ResourceStatusSystem.Utilities.Exceptions import *
# #from DIRAC.ResourceStatusSystem.Utilities.Utils import *
#
# #############################################################################
#
# class ClientsTestCase(unittest.TestCase):
# """ Base class for the clients test cases
# """
# def setUp(self):
#
# from DIRAC.Core.Base.Script import parseCommandLine
# parseCommandLine()
#
# self.mockRSS = Mock()
#
# self.GOCCli = GOCDBClient()
# self.SLSCli = SLSClient()
# self.SAMCli = SAMResultsClient()
# self.GGUSCli = GGUSTicketsClient()
#
# #############################################################################
#
# class GOCDBClientSuccess(ClientsTestCase):
#
# def test__downTimeXMLParsing(self):
# now = datetime.utcnow().replace(microsecond = 0, second = 0)
# tomorrow = datetime.utcnow().replace(microsecond = 0, second = 0) + timedelta(hours = 24)
# inAWeek = datetime.utcnow().replace(microsecond = 0, second = 0) + timedelta(days = 7)
#
# nowLess12h = str( now - timedelta(hours = 12) )[:-3]
# nowPlus8h = str( now + timedelta(hours = 8) )[:-3]
# nowPlus24h = str( now + timedelta(hours = 24) )[:-3]
# nowPlus40h = str( now + timedelta(hours = 40) )[:-3]
# nowPlus50h = str( now + timedelta(hours = 50) )[:-3]
# nowPlus60h = str( now + timedelta(hours = 60) )[:-3]
#
# XML_site_ongoing = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus24h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
# XML_node_ongoing = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505455" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><HOSTNAME>egse-cresco.portici.enea.it</HOSTNAME><HOSTED_BY>GRISU-ENEA-GRID</HOSTED_BY><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus24h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
# XML_nodesite_ongoing = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505455" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><HOSTNAME>egse-cresco.portici.enea.it</HOSTNAME><HOSTED_BY>GRISU-ENEA-GRID</HOSTED_BY><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus8h+'</FORMATED_END_DATE></DOWNTIME><DOWNTIME ID="78505456" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus24h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
#
# XML_site_startingIn8h = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus8h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus24h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
# XML_node_startingIn8h = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505455" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><HOSTNAME>egse-cresco.portici.enea.it</HOSTNAME><HOSTED_BY>GRISU-ENEA-GRID</HOSTED_BY><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus8h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus24h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
#
# XML_site_ongoing_and_site_starting_in_24_hours = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G1" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus8h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME><DOWNTIME ID="78505457" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE 2</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus24h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus40h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
#
# XML_site_startingIn24h_and_site_startingIn50h = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G1" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus24h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus40h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME><DOWNTIME ID="78505457" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus50h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus60h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
#
# XML_site_ongoing_and_other_site_starting_in_24_hours = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G1" CLASSIFICATION="SCHEDULED"><SITENAME>GRISU-ENEA-GRID</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus8h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME><DOWNTIME ID="78505457" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><SITENAME>CERN-PROD</SITENAME><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems SITE 2</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus24h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus40h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
# XML_node_ongoing_and_other_node_starting_in_24_hours = '<?xml version="1.0"?>\n<ROOT><DOWNTIME ID="78505456" PRIMARY_KEY="28490G1" CLASSIFICATION="SCHEDULED"><HOSTNAME>egse-cresco.portici.enea.it</HOSTNAME><HOSTED_BY>GRISU-ENEA-GRID</HOSTED_BY><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems RESOURCE</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowLess12h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus8h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME><DOWNTIME ID="78505457" PRIMARY_KEY="28490G0" CLASSIFICATION="SCHEDULED"><HOSTNAME>ce112.cern.ch</HOSTNAME><HOSTED_BY>CERN-PROD</HOSTED_BY><SEVERITY>OUTAGE</SEVERITY><DESCRIPTION>Software problems RESOURCE 2</DESCRIPTION><INSERT_DATE>1276273965</INSERT_DATE><START_DATE>1276360500</START_DATE><END_DATE>1276878660</END_DATE><FORMATED_START_DATE>'+nowPlus24h+'</FORMATED_START_DATE><FORMATED_END_DATE>'+nowPlus40h+'</FORMATED_END_DATE><GOCDB_PORTAL_URL>https://next.gocdb.eu/portal/index.php?Page_Type=View_Object&object_id=18509&grid_id=0</GOCDB_PORTAL_URL></DOWNTIME></ROOT>\n'
#
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing, 'Site')
# self.assertEqual(res.keys()[0], '28490G0 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G0 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
#
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing, 'Resource')
# self.assertEqual(res.keys()[0], '28490G0 egse-cresco.portici.enea.it')
# self.assertEqual(res['28490G0 egse-cresco.portici.enea.it']['HOSTNAME'], 'egse-cresco.portici.enea.it')
# self.assertEqual(res['28490G0 egse-cresco.portici.enea.it']['HOSTED_BY'], 'GRISU-ENEA-GRID')
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing, 'Resource')
# self.assertEqual(res, {})
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing, 'Site')
# self.assertEqual(res, {})
#
# res = self.GOCCli._downTimeXMLParsing(XML_nodesite_ongoing, 'Site')
# self.assertEqual(len(res), 1)
# self.assertEqual(res.keys()[0], '28490G0 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G0 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
#
# res = self.GOCCli._downTimeXMLParsing(XML_nodesite_ongoing, 'Resource')
# self.assertEqual(len(res), 1)
# self.assertEqual(res.keys()[0], '28490G0 egse-cresco.portici.enea.it')
# self.assertEqual(res['28490G0 egse-cresco.portici.enea.it']['HOSTNAME'], 'egse-cresco.portici.enea.it')
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_startingIn8h, 'Site', None, now)
# self.assertEqual(res, {})
# res = self.GOCCli._downTimeXMLParsing(XML_node_startingIn8h, 'Resource', None, now)
# self.assertEqual(res, {})
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_site_starting_in_24_hours, 'Site', None, now)
# self.assertEqual(res.keys()[0], '28490G1 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G1 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_site_starting_in_24_hours, 'Resource', None, now)
# self.assertEqual(res, {})
# res = self.GOCCli._downTimeXMLParsing(XML_site_startingIn24h_and_site_startingIn50h, 'Site', None, now)
# self.assertEqual(res, {})
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_startingIn24h_and_site_startingIn50h, 'Site', None, tomorrow)
# self.assertEqual(res.keys()[0], '28490G1 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G1 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', ['GRISU-ENEA-GRID'])
# self.assertEqual(res.keys()[0], '28490G1 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G1 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', ['GRISU-ENEA-GRID', 'CERN-PROD'])
# self.assertTrue('28490G1 GRISU-ENEA-GRID' in res.keys())
# self.assertTrue('28490G0 CERN-PROD' in res.keys())
# self.assertEqual(res['28490G1 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
# self.assertEqual(res['28490G0 CERN-PROD']['SITENAME'], 'CERN-PROD')
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', 'CERN-PROD')
# self.assertEqual(res.keys()[0], '28490G0 CERN-PROD')
# self.assertEqual(res['28490G0 CERN-PROD']['SITENAME'], 'CERN-PROD')
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', 'CNAF-T1')
# self.assertEqual(res, {})
#
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', ['GRISU-ENEA-GRID', 'CERN-PROD'], now)
# self.assertEqual(res.keys()[0], '28490G1 GRISU-ENEA-GRID')
# self.assertEqual(res['28490G1 GRISU-ENEA-GRID']['SITENAME'], 'GRISU-ENEA-GRID')
# res = self.GOCCli._downTimeXMLParsing(XML_site_ongoing_and_other_site_starting_in_24_hours, 'Site', ['GRISU-ENEA-GRID', 'CERN-PROD'], inAWeek)
# self.assertEqual(res.keys()[0], '28490G0 CERN-PROD')
# self.assertEqual(res['28490G0 CERN-PROD']['SITENAME'], 'CERN-PROD')
#
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', ['egse-cresco.portici.enea.it'])
# self.assertEqual(res.keys()[0], '28490G1 egse-cresco.portici.enea.it')
# self.assertEqual(res['28490G1 egse-cresco.portici.enea.it']['HOSTNAME'], 'egse-cresco.portici.enea.it')
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', ['egse-cresco.portici.enea.it', 'ce112.cern.ch'])
# self.assertTrue('28490G1 egse-cresco.portici.enea.it' in res.keys())
# self.assertTrue('28490G0 ce112.cern.ch' in res.keys())
# self.assertEqual(res['28490G1 egse-cresco.portici.enea.it']['HOSTNAME'], 'egse-cresco.portici.enea.it')
# self.assertEqual(res['28490G0 ce112.cern.ch']['HOSTNAME'], 'ce112.cern.ch')
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', 'ce112.cern.ch')
# self.assertEqual(res.keys()[0], '28490G0 ce112.cern.ch')
# self.assertEqual(res['28490G0 ce112.cern.ch']['HOSTNAME'], 'ce112.cern.ch')
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', 'grid0.fe.infn.it')
# self.assertEqual(res, {})
#
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', ['egse-cresco.portici.enea.it', 'ce112.cern.ch'], now)
# self.assertTrue('28490G1 egse-cresco.portici.enea.it' in res.keys())
# self.assertEqual(res['28490G1 egse-cresco.portici.enea.it']['HOSTNAME'], 'egse-cresco.portici.enea.it')
# res = self.GOCCli._downTimeXMLParsing(XML_node_ongoing_and_other_node_starting_in_24_hours, 'Resource', ['egse-cresco.portici.enea.it', 'ce112.cern.ch'], inAWeek)
# self.assertEqual(res.keys()[0], '28490G0 ce112.cern.ch')
# self.assertEqual(res['28490G0 ce112.cern.ch']['HOSTNAME'], 'ce112.cern.ch')
#
#
# def test_getStatus(self):
# for granularity in ('Site', 'Resource'):
# res = self.GOCCli.getStatus(granularity, 'XX')['Value']
# self.assertEqual(res, None)
# res = self.GOCCli.getStatus(granularity, 'XX', datetime.utcnow())['Value']
# self.assertEqual(res, None)
# res = self.GOCCli.getStatus(granularity, 'XX', datetime.utcnow(), 12)['Value']
# self.assertEqual(res, None)
#
# res = self.GOCCli.getStatus('Site', 'pic')['Value']
# self.assertEqual(res, None)
#
# def test_getServiceEndpointInfo(self):
# for granularity in ('hostname', 'sitename', 'roc',
# 'country', 'service_type', 'monitored'):
# res = self.GOCCli.getServiceEndpointInfo(granularity, 'XX')['Value']
# self.assertEqual(res, [])
#
# #############################################################################
#
# class SAMResultsClientSuccess(ClientsTestCase):
#
# def test_getStatus(self):
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA')['Value']
# self.assertEqual(res, {'SS':'ok'})
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA', ['ver'])['Value']
# self.assertEqual(res, {'ver':'ok'})
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA', ['LHCb CE-lhcb-os', 'PilotRole'])['Value']
# self.assertEqual(res, {'PilotRole':'ok', 'LHCb CE-lhcb-os':'ok'})
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA', ['wrong'])['Value']
# self.assertEqual(res, None)
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA', ['ver', 'wrong'])['Value']
# self.assertEqual(res, {'ver':'ok'})
# res = self.SAMCli.getStatus('Resource', 'grid0.fe.infn.it', 'INFN-FERRARA')['Value']
# self.assertEqual(res, {'SS':'ok'})
#
# res = self.SAMCli.getStatus('Site', 'INFN-FERRARA')['Value']
# self.assertEqual(res, {'SiteStatus':'ok'})
#
# #############################################################################
#
# #class SAMResultsClientFailure(ClientsTestCase):
# #
# # def test_getStatus(self):
# # self.failUnlessRaises(NoSAMTests, self.SAMCli.getStatus, 'Resource', 'XX', 'INFN-FERRARA')
#
# #############################################################################
#
# class SLSClientSuccess(ClientsTestCase):
#
# def test_getAvailabilityStatus(self):
# res = self.SLSCli.getAvailabilityStatus('RAL-LHCb_FAILOVER')['Value']
# self.assertEqual(res, 100)
#
# def test_getServiceInfo(self):
# res = self.SLSCli.getServiceInfo('CASTORLHCB_LHCBMDST', ["Volume to be recallled GB"])['Value']
# self.assertEqual(res["Volume to be recallled GB"], 0.0)
#
# #############################################################################
#
# #class SLSClientFailure(ClientsTestCase):
# #
# # def test_getStatus(self):
# # self.failUnlessRaises(NoServiceException, self.SLSCli.getAvailabilityStatus, 'XX')
#
# #############################################################################
#
# class GGUSTicketsClientSuccess(ClientsTestCase):
#
# def test_getTicketsList(self):
# res = self.GGUSCli.getTicketsList('INFN-CAGLIARI')['Value']
# self.assertEqual(res[0]['open'], 0)
#
#
# #############################################################################
#
# if __name__ == '__main__':
# suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClientsTestCase)
# suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(GOCDBClientSuccess))
# # suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(GOCDBClient_Failure))
# suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(SAMResultsClientSuccess))
# # suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(SAMResultsClientFailure))
# suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(SLSClientSuccess))
# # suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(SLSClientFailure))
# suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(GGUSTicketsClientSuccess))
# testResult = unittest.TextTestRunner(verbosity=2).run(suite)
# tagmerge.py - merge .hgtags files
#
# Copyright 2014 NAME <EMAIL>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
# This module implements an automatic merge algorithm for mercurial's tag files
#
# The tagmerge algorithm implemented in this module is able to resolve most
# merge conflicts that currently would trigger a .hgtags merge conflict. The
# only case that it does not (and cannot) handle is that in which two tags point
# to different revisions on each merge parent _and_ their corresponding tag
# histories have the same rank (i.e. the same length). In all other cases the
# merge algorithm will choose the revision belonging to the parent with the
# highest ranked tag history. The merged tag history is the combination of both
# tag histories (special care is taken to try to combine common tag histories
# where possible).
#
# In addition to actually merging the tags from two parents, taking into
# account the base, the algorithm also tries to minimize the difference
# between the merged tag file and the first parent's tag file (i.e. it tries to
# make the merged tag order as similar as possible to the first parent's tag
# file order).
#
# The algorithm works as follows:
# 1. read the tags from p1, p2 and the base
# - when reading the p1 tags, also get the line numbers associated to each
# tag node (these will be used to sort the merged tags in a way that
# minimizes the diff to p1). Ignore the line numbers when reading p2 and
# the base
# 2. recover the "lost tags" (i.e. those that are found in the base but not on
# p1 or p2) and add them back to p1 and/or p2
# - at this point the only tags that are on p1 but not on p2 are those new
# tags that were introduced in p1. Same thing for the tags that are on p2
# but not on p1
# 3. take all tags that are only on p1 or only on p2 (but not on the base)
# - Note that these are the tags that were introduced between base and p1
# and between base and p2, possibly on separate clones
# 4. for each tag found both on p1 and p2 perform the following merge algorithm:
# - the tags conflict if their tag "histories" have the same "rank" (i.e.
# length) AND the last (current) tag is NOT the same
# - for non conflicting tags:
# - choose which are the high and the low ranking nodes
# - the high ranking list of nodes is the one that is longer.
# In case of a draw, favor p1
# - the merged node list is made of 3 parts:
# - first the nodes that are common to the beginning of both
# the low and the high ranking nodes
# - second the non common low ranking nodes
# - finally the non common high ranking nodes (with the last
# one being the merged tag node)
# - note that this is equivalent to putting the whole low ranking
# node list first, followed by the non common high ranking nodes
# - note that during the merge we keep the "node line numbers", which will
# be used when writing the merged tags to the tag file
# 5. write the merged tags taking into account their positions in the first
# parent (i.e. try to keep the relative ordering of the nodes that come
# from p1). This minimizes the diff between the merged and the p1 tag files.
# This is done by using the following algorithm:
# - group the nodes for a given tag that must be written next to each other
# - A: nodes that come from consecutive lines on p1
# - B: nodes that come from p2 (i.e. whose associated line number is
# None) and are next to one of the nodes in A
# - each group is associated with a line number coming from p1
# - generate a "tag block" for each of the groups
# - a tag block is a set of consecutive "node tag" lines belonging to
# the same tag and which will be written next to each other on the
# merged tags file
# - sort the "tag blocks" according to their associated line number
# - put blocks whose nodes come all from p2 first
# - write the tag blocks in the sorted order
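#
# A minimal sketch (not this module's actual implementation) of the
# per-tag node-list merge described in step 4:
#
#     def merge_tag_nodes(p1_nodes, p2_nodes):
#         # the longer history ranks higher; ties favor p1
#         if len(p2_nodes) > len(p1_nodes):
#             low, high = p1_nodes, p2_nodes
#         else:
#             low, high = p2_nodes, p1_nodes
#         # nodes common to the beginning of both histories
#         common = []
#         for a, b in zip(low, high):
#             if a != b:
#                 break
#             common.append(a)
#         n = len(common)
#         # common prefix, then non-common low ranking nodes, then
#         # non-common high ranking nodes (the last one becomes the
#         # merged tag node)
#         return common + low[n:] + high[n:]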
"""
Discrete Fourier Transform (:mod:`numpy.fft`)
=============================================
.. currentmodule:: numpy.fft
Standard FFTs
-------------
.. autosummary::
:toctree: generated/
fft Discrete Fourier transform.
ifft Inverse discrete Fourier transform.
fft2 Discrete Fourier transform in two dimensions.
ifft2 Inverse discrete Fourier transform in two dimensions.
fftn Discrete Fourier transform in N-dimensions.
ifftn Inverse discrete Fourier transform in N dimensions.
Real FFTs
---------
.. autosummary::
:toctree: generated/
rfft Real discrete Fourier transform.
irfft Inverse real discrete Fourier transform.
rfft2 Real discrete Fourier transform in two dimensions.
irfft2 Inverse real discrete Fourier transform in two dimensions.
rfftn Real discrete Fourier transform in N dimensions.
irfftn Inverse real discrete Fourier transform in N dimensions.
Hermitian FFTs
--------------
.. autosummary::
:toctree: generated/
hfft Hermitian discrete Fourier transform.
ihfft Inverse Hermitian discrete Fourier transform.
Helper routines
---------------
.. autosummary::
:toctree: generated/
fftfreq Discrete Fourier Transform sample frequencies.
rfftfreq DFT sample frequencies (for usage with rfft, irfft).
fftshift Shift zero-frequency component to center of spectrum.
ifftshift Inverse of fftshift.
Background information
----------------------
Fourier analysis is fundamentally a method for expressing a function as a
sum of periodic components, and for recovering the function from those
components. When both the function and its Fourier transform are
replaced with discretized counterparts, it is called the discrete Fourier
transform (DFT). The DFT has become a mainstay of numerical computing in
part because of a very fast algorithm for computing it, called the Fast
Fourier Transform (FFT), which was known to Gauss (1805) and was brought
to light in its current form by Cooley and Tukey [CT]_. Press et al. [NR]_
provide an accessible introduction to Fourier analysis and its
applications.
Because the discrete Fourier transform separates its input into
components that contribute at discrete frequencies, it has a great number
of applications in digital signal processing, e.g., for filtering, and in
this context the discretized input to the transform is customarily
referred to as a *signal*, which exists in the *time domain*. The output
is called a *spectrum* or *transform* and exists in the *frequency
domain*.
Implementation details
----------------------
There are many ways to define the DFT, varying in the sign of the
exponent, normalization, etc. In this implementation, the DFT is defined
as
.. math::
A_k = \\sum_{m=0}^{n-1} a_m \\exp\\left\\{-2\\pi i{mk \\over n}\\right\\}
\\qquad k = 0,\\ldots,n-1.
The DFT is in general defined for complex inputs and outputs, and a
single-frequency component at linear frequency :math:`f` is
represented by a complex exponential
:math:`a_m = \\exp\\{2\\pi i\\,f m\\Delta t\\}`, where :math:`\\Delta t`
is the sampling interval.
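For example, this definition can be checked directly against ``fft``
(an illustrative snippet with an arbitrary input, not part of the formal
definition):

>>> import numpy as np
>>> a = np.array([1.0, 2.0, 1.0, -1.0])
>>> n = len(a)
>>> k, m = np.arange(n).reshape(-1, 1), np.arange(n)
>>> direct = (a * np.exp(-2j * np.pi * k * m / n)).sum(axis=1)
>>> np.allclose(direct, np.fft.fft(a))
True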
The values in the result follow so-called "standard" order: If ``A =
fft(a, n)``, then ``A[0]`` contains the zero-frequency term (the sum of
the signal), which is always purely real for real inputs. Then ``A[1:n/2]``
contains the positive-frequency terms, and ``A[n/2+1:]`` contains the
negative-frequency terms, in order of decreasingly negative frequency.
For an even number of input points, ``A[n/2]`` represents both positive and
negative Nyquist frequency, and is also purely real for real input. For
an odd number of input points, ``A[(n-1)/2]`` contains the largest positive
frequency, while ``A[(n+1)/2]`` contains the largest negative frequency.
The routine ``np.fft.fftfreq(n)`` returns an array giving the frequencies
of corresponding elements in the output. The routine
``np.fft.fftshift(A)`` shifts transforms and their frequencies to put the
zero-frequency components in the middle, and ``np.fft.ifftshift(A)`` undoes
that shift.
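For instance, with ``n = 8`` (an illustrative check of the ordering
described above):

>>> np.fft.fftfreq(8)
array([ 0.   ,  0.125,  0.25 ,  0.375, -0.5  , -0.375, -0.25 , -0.125])
>>> np.fft.fftshift(np.fft.fftfreq(8))
array([-0.5  , -0.375, -0.25 , -0.125,  0.   ,  0.125,  0.25 ,  0.375])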
When the input `a` is a time-domain signal and ``A = fft(a)``, ``np.abs(A)``
is its amplitude spectrum and ``np.abs(A)**2`` is its power spectrum.
The phase spectrum is obtained by ``np.angle(A)``.
The inverse DFT is defined as
.. math::
a_m = \\frac{1}{n}\\sum_{k=0}^{n-1}A_k\\exp\\left\\{2\\pi i{mk\\over n}\\right\\}
\\qquad m = 0,\\ldots,n-1.
It differs from the forward transform by the sign of the exponential
argument and the default normalization by :math:`1/n`.
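Because the two transforms are inverses of each other, a round trip
recovers the input to within floating point error (a quick illustrative
check):

>>> a = np.linspace(0.0, 1.0, 16)
>>> np.allclose(np.fft.ifft(np.fft.fft(a)), a)
True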
Normalization
-------------
The default normalization has the direct transforms unscaled and the inverse
transforms are scaled by :math:`1/n`. It is possible to obtain unitary
transforms by setting the keyword argument ``norm`` to ``"ortho"`` (default is
`None`) so that both direct and inverse transforms will be scaled by
:math:`1/\\sqrt{n}`.
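The unitary scaling can be verified directly (illustrative only):

>>> a = np.ones(4)
>>> np.allclose(np.fft.fft(a, norm="ortho"), np.fft.fft(a) / np.sqrt(4))
True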
Real and Hermitian transforms
-----------------------------
When the input is purely real, its transform is Hermitian, i.e., the
component at frequency :math:`f_k` is the complex conjugate of the
component at frequency :math:`-f_k`, which means that for real
inputs there is no information in the negative frequency components that
is not already available from the positive frequency components.
The family of `rfft` functions is
designed to operate on real inputs, and exploits this symmetry by
computing only the positive frequency components, up to and including the
Nyquist frequency. Thus, ``n`` input points produce ``n/2+1`` complex
output points. The inverses of this family assume the same symmetry of
their input, and for an output of ``n`` points use ``n/2+1`` input points.
Correspondingly, when the spectrum is purely real, the signal is
Hermitian. The `hfft` family of functions exploits this symmetry by
using ``n/2+1`` complex points in the input (time) domain for ``n`` real
points in the frequency domain.
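The point counts described above can be checked directly (an
illustrative example):

>>> a = np.linspace(0.0, 1.0, 8)
>>> np.fft.rfft(a).shape  # n/2 + 1 = 5 complex output points
(5,)
>>> np.allclose(np.fft.irfft(np.fft.rfft(a), 8), a)
True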
In higher dimensions, FFTs are used, e.g., for image analysis and
filtering. The computational efficiency of the FFT means that it can
also be a faster way to compute large convolutions, using the property
that a convolution in the time domain is equivalent to a point-by-point
multiplication in the frequency domain.
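As a small illustration of the convolution property, a circular
convolution computed via the FFT matches the direct sum (arbitrary
example data):

>>> x = np.cos(np.arange(8))
>>> y = np.sin(np.arange(8))
>>> via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))
>>> direct = np.array([sum(x[m] * y[(k - m) % 8] for m in range(8))
...                    for k in range(8)])
>>> np.allclose(via_fft, direct)
True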
Higher dimensions
-----------------
In two dimensions, the DFT is defined as
.. math::
A_{kl} = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1}
a_{mn}\\exp\\left\\{-2\\pi i \\left({mk\\over M}+{nl\\over N}\\right)\\right\\}
\\qquad k = 0, \\ldots, M-1;\\quad l = 0, \\ldots, N-1,
which extends in the obvious way to higher dimensions, and the inverses
in higher dimensions also extend in the same way.
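A two-dimensional round trip works the same way as in one dimension
(illustrative):

>>> img = np.outer(np.arange(4.0), np.arange(4.0))
>>> np.allclose(np.fft.ifft2(np.fft.fft2(img)), img)
True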
References
----------
.. [CT] Cooley, James W., and John W. Tukey, 1965, "An algorithm for the
machine calculation of complex Fourier series," *Math. Comput.*
19: 297-301.
.. [NR] Press, W., Teukolsky, S., Vetterling, W., and Flannery, B.,
2007, *Numerical Recipes: The Art of Scientific Computing*, ch.
12-13. Cambridge Univ. Press, Cambridge, UK.
Examples
--------
For examples, see the various functions.
""" |
"""
=============================
Subclassing ndarray in python
=============================
Credits
-------
This page is based with thanks on the wiki page on subclassing by NAME - http://www.scipy.org/Subclasses.
Introduction
------------
Subclassing ndarray is relatively simple, but it has some complications
compared to other Python objects. On this page we explain the machinery
that allows you to subclass ndarray, and the implications for
implementing a subclass.
ndarrays and object creation
============================
Subclassing ndarray is complicated by the fact that new instances of
ndarray classes can come about in three different ways. These are:
#. Explicit constructor call - as in ``MySubClass(params)``. This is
the usual route to Python instance creation.
#. View casting - casting an existing ndarray as a given subclass
#. New from template - creating a new instance from a template
instance. Examples include returning slices from a subclassed array,
creating return types from ufuncs, and copying arrays. See
:ref:`new-from-template` for more details
The last two are characteristics of ndarrays - in order to support
things like array slicing. The complications of subclassing ndarray are
due to the mechanisms numpy has to support these latter two routes of
instance creation.
.. _view-casting:
View casting
------------
*View casting* is the standard ndarray mechanism by which you take an
ndarray of any subclass, and return a view of the array as another
(specified) subclass:
>>> import numpy as np
>>> # create a completely useless ndarray subclass
>>> class C(np.ndarray): pass
>>> # create a standard ndarray
>>> arr = np.zeros((3,))
>>> # take a view of it, as our useless subclass
>>> c_arr = arr.view(C)
>>> type(c_arr)
<class 'C'>
.. _new-from-template:
Creating new from template
--------------------------
New instances of an ndarray subclass can also come about by a very
similar mechanism to :ref:`view-casting`, when numpy finds it needs to
create a new instance from a template instance. The most obvious place
this has to happen is when you are taking slices of subclassed arrays.
For example:
>>> v = c_arr[1:]
>>> type(v) # the view is of type 'C'
<class 'C'>
>>> v is c_arr # but it's a new instance
False
The slice is a *view* onto the original ``c_arr`` data. So, when we
take a view from the ndarray, we return a new ndarray, of the same
class, that points to the data in the original.
There are other points in the use of ndarrays where we need such views,
such as copying arrays (``c_arr.copy()``), creating ufunc output arrays
(see also :ref:`array-wrap`), and reducing methods (like
``c_arr.mean()``).
Relationship of view casting and new-from-template
--------------------------------------------------
These paths both use the same machinery. We make the distinction here,
because they result in different input to your methods. Specifically,
:ref:`view-casting` means you have created a new instance of your array
type from any potential subclass of ndarray. :ref:`new-from-template`
means you have created a new instance of your class from a pre-existing
instance, allowing you - for example - to copy across attributes that
are particular to your subclass.
Implications for subclassing
----------------------------
If we subclass ndarray, we need to deal not only with explicit
construction of our array type, but also :ref:`view-casting` or
:ref:`new-from-template`. Numpy has the machinery to do this, and it is
this machinery that makes subclassing slightly non-standard.
There are two aspects to the machinery that ndarray uses to support
views and new-from-template in subclasses.
The first is the use of the ``ndarray.__new__`` method for the main work
of object initialization, rather than the more usual ``__init__``
method. The second is the use of the ``__array_finalize__`` method to
allow subclasses to clean up after the creation of views and new
instances from templates.
A brief Python primer on ``__new__`` and ``__init__``
=====================================================
``__new__`` is a standard Python method, and, if present, is called
before ``__init__`` when we create a class instance. See the `python
__new__ documentation
<http://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail.
For example, consider the following Python code:
.. testcode::
class C(object):
def __new__(cls, *args):
print('Cls in __new__:', cls)
print('Args in __new__:', args)
return object.__new__(cls, *args)
def __init__(self, *args):
print('type(self) in __init__:', type(self))
print('Args in __init__:', args)
meaning that we get:
>>> c = C('hello')
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
type(self) in __init__: <class 'C'>
Args in __init__: ('hello',)
When we call ``C('hello')``, the ``__new__`` method gets its own class
as first argument, and the passed argument, which is the string
``'hello'``. After python calls ``__new__``, it usually (see below)
calls our ``__init__`` method, with the output of ``__new__`` as the
first argument (now a class instance), and the passed arguments
following.
As you can see, the object can be initialized in the ``__new__``
method or the ``__init__`` method, or both, and in fact ndarray does
not have an ``__init__`` method, because all the initialization is
done in the ``__new__`` method.
Why use ``__new__`` rather than just the usual ``__init__``? Because
in some cases, as for ndarray, we want to be able to return an object
of some other class. Consider the following:
.. testcode::
class D(C):
def __new__(cls, *args):
print('D cls is:', cls)
print('D args in __new__:', args)
return C.__new__(C, *args)
def __init__(self, *args):
# we never get here
print('In D __init__')
meaning that:
>>> obj = D('hello')
D cls is: <class 'D'>
D args in __new__: ('hello',)
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
>>> type(obj)
<class 'C'>
The definition of ``C`` is the same as before, but for ``D``, the
``__new__`` method returns an instance of class ``C`` rather than
``D``. Note that the ``__init__`` method of ``D`` does not get
called. In general, when the ``__new__`` method returns an object of
class other than the class in which it is defined, the ``__init__``
method of that class is not called.
This is how subclasses of the ndarray class are able to return views
that preserve the class type. When taking a view, the standard
ndarray machinery creates the new ndarray object with something
like::
obj = ndarray.__new__(subtype, shape, ...
where ``subtype`` is the subclass. Thus the returned view is of the
same class as the subclass, rather than being of class ``ndarray``.
That solves the problem of returning views of the same type, but now
we have a new problem. The machinery of ndarray can set the class
this way, in its standard methods for taking views, but the ndarray
``__new__`` method knows nothing of what we have done in our own
``__new__`` method in order to set attributes, and so on. (Aside -
why not call ``obj = subtype.__new__(...`` then? Because we may not
have a ``__new__`` method with the same call signature).
The role of ``__array_finalize__``
==================================
``__array_finalize__`` is the mechanism that numpy provides to allow
subclasses to handle the various ways that new instances get created.
Remember that subclass instances can come about in these three ways:
#. explicit constructor call (``obj = MySubClass(params)``). This will
call the usual sequence of ``MySubClass.__new__`` then (if it exists)
``MySubClass.__init__``.
#. :ref:`view-casting`
#. :ref:`new-from-template`
Our ``MySubClass.__new__`` method only gets called in the case of the
explicit constructor call, so we can't rely on ``MySubClass.__new__`` or
``MySubClass.__init__`` to deal with the view casting and
new-from-template. It turns out that ``MySubClass.__array_finalize__``
*does* get called for all three methods of object creation, so this is
where our object creation housekeeping usually goes.
* For the explicit constructor call, our subclass will need to create a
new ndarray instance of its own class. In practice this means that
we, the authors of the code, will need to make a call to
``ndarray.__new__(MySubClass,...)``, or do view casting of an existing
array (see below)
* For view casting and new-from-template, the equivalent of
``ndarray.__new__(MySubClass,...`` is called, at the C level.
The arguments that ``__array_finalize__`` recieves differ for the three
methods of instance creation above.
The following code allows us to look at the call sequences and arguments:
.. testcode::
import numpy as np
class C(np.ndarray):
def __new__(cls, *args, **kwargs):
print('In __new__ with class %s' % cls)
return np.ndarray.__new__(cls, *args, **kwargs)
def __init__(self, *args, **kwargs):
# in practice you probably will not need or want an __init__
# method for your subclass
print('In __init__ with class %s' % self.__class__)
def __array_finalize__(self, obj):
print('In array_finalize:')
print(' self type is %s' % type(self))
print(' obj type is %s' % type(obj))
Now:
>>> # Explicit constructor
>>> c = C((10,))
In __new__ with class <class 'C'>
In array_finalize:
self type is <class 'C'>
obj type is <class 'NoneType'>
In __init__ with class <class 'C'>
>>> # View casting
>>> a = np.arange(10)
>>> cast_a = a.view(C)
In array_finalize:
self type is <class 'C'>
obj type is <class 'numpy.ndarray'>
>>> # Slicing (example of new-from-template)
>>> cv = c[:1]
In array_finalize:
self type is <class 'C'>
obj type is <class 'C'>
The signature of ``__array_finalize__`` is::
def __array_finalize__(self, obj):
``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our
own class (``self``) as well as the object from which the view has been
taken (``obj``). As you can see from the output above, the ``self`` is
always a newly created instance of our subclass, and the type of ``obj``
differs for the three instance creation methods:
* When called from the explicit constructor, ``obj`` is ``None``
* When called from view casting, ``obj`` can be an instance of any
subclass of ndarray, including our own.
* When called in new-from-template, ``obj`` is another instance of our
own subclass, that we might use to update the new ``self`` instance.
Because ``__array_finalize__`` is the only method that always sees new
instances being created, it is the sensible place to fill in instance
defaults for new object attributes, among other tasks.
This may be clearer with an example.
Simple example - adding an extra attribute to ndarray
-----------------------------------------------------
.. testcode::
import numpy as np
class InfoArray(np.ndarray):
def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
strides=None, order=None, info=None):
# Create the ndarray instance of our type, given the usual
# ndarray input arguments. This will call the standard
# ndarray constructor, but return an object of our type.
# It also triggers a call to InfoArray.__array_finalize__
obj = np.ndarray.__new__(subtype, shape, dtype, buffer, offset, strides,
order)
# set the new 'info' attribute to the value passed
obj.info = info
# Finally, we must return the newly created object:
return obj
def __array_finalize__(self, obj):
# ``self`` is a new object resulting from
# ndarray.__new__(InfoArray, ...), therefore it only has
# attributes that the ndarray.__new__ constructor gave it -
# i.e. those of a standard ndarray.
#
# We could have got to the ndarray.__new__ call in 3 ways:
# From an explicit constructor - e.g. InfoArray():
# obj is None
# (we're in the middle of the InfoArray.__new__
# constructor, and self.info will be set when we return to
# InfoArray.__new__)
if obj is None: return
# From view casting - e.g. arr.view(InfoArray):
# obj is arr
# (type(obj) can be InfoArray)
# From new-from-template - e.g. infoarr[:3]
# type(obj) is InfoArray
#
# Note that it is here, rather than in the __new__ method,
# that we set the default value for 'info', because this
# method sees all creation of default objects - with the
# InfoArray.__new__ constructor, but also with
# arr.view(InfoArray).
self.info = getattr(obj, 'info', None)
# We do not need to return anything
Using the object looks like this:
>>> obj = InfoArray(shape=(3,)) # explicit constructor
>>> type(obj)
<class 'InfoArray'>
>>> obj.info is None
True
>>> obj = InfoArray(shape=(3,), info='information')
>>> obj.info
'information'
>>> v = obj[1:] # new-from-template - here - slicing
>>> type(v)
<class 'InfoArray'>
>>> v.info
'information'
>>> arr = np.arange(10)
>>> cast_arr = arr.view(InfoArray) # view casting
>>> type(cast_arr)
<class 'InfoArray'>
>>> cast_arr.info is None
True
This class isn't very useful, because it has the same constructor as the
bare ndarray object, including passing in buffers and shapes and so on.
We would probably prefer the constructor to be able to take an already
formed ndarray from the usual numpy calls to ``np.array`` and return an
object.
Slightly more realistic example - attribute added to existing array
-------------------------------------------------------------------
Here is a class that takes a standard ndarray that already exists, casts
as our type, and adds an extra attribute.
.. testcode::
import numpy as np
class RealisticInfoArray(np.ndarray):
def __new__(cls, input_array, info=None):
# Input array is an already formed ndarray instance
# We first cast to be our class type
obj = np.asarray(input_array).view(cls)
# add the new attribute to the created instance
obj.info = info
# Finally, we must return the newly created object:
return obj
def __array_finalize__(self, obj):
# see InfoArray.__array_finalize__ for comments
if obj is None: return
self.info = getattr(obj, 'info', None)
So:
>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'
.. _array-wrap:
``__array_wrap__`` for ufuncs
------------------------------
``__array_wrap__`` gets called at the end of numpy ufuncs and other numpy
functions, to allow a subclass to set the type of the return value
and update attributes and metadata. Let's show how this works with an example.
First we make the same subclass as above, but with a different name and
some print statements:
.. testcode::
import numpy as np
class MySubClass(np.ndarray):
def __new__(cls, input_array, info=None):
obj = np.asarray(input_array).view(cls)
obj.info = info
return obj
def __array_finalize__(self, obj):
print('In __array_finalize__:')
print(' self is %s' % repr(self))
print(' obj is %s' % repr(obj))
if obj is None: return
self.info = getattr(obj, 'info', None)
def __array_wrap__(self, out_arr, context=None):
print('In __array_wrap__:')
print(' self is %s' % repr(self))
print(' arr is %s' % repr(out_arr))
# then just call the parent
return np.ndarray.__array_wrap__(self, out_arr, context)
We run a ufunc on an instance of our new array:
>>> obj = MySubClass(np.arange(5), info='spam')
In __array_finalize__:
self is MySubClass([0, 1, 2, 3, 4])
obj is array([0, 1, 2, 3, 4])
>>> arr2 = np.arange(5)+1
>>> ret = np.add(arr2, obj)
In __array_wrap__:
self is MySubClass([0, 1, 2, 3, 4])
arr is array([1, 3, 5, 7, 9])
In __array_finalize__:
self is MySubClass([1, 3, 5, 7, 9])
obj is MySubClass([0, 1, 2, 3, 4])
>>> ret
MySubClass([1, 3, 5, 7, 9])
>>> ret.info
'spam'
Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method of the
input with the highest ``__array_priority__`` value, in this case
``MySubClass.__array_wrap__``, with arguments ``self`` as ``obj``, and
``out_arr`` as the (ndarray) result of the addition. In turn, the
default ``__array_wrap__`` (``ndarray.__array_wrap__``) has cast the
result to class ``MySubClass``, and called ``__array_finalize__`` -
hence the copying of the ``info`` attribute. This has all happened at the C level.
But, we could do anything we wanted:
.. testcode::
class SillySubClass(np.ndarray):
def __array_wrap__(self, arr, context=None):
return 'I lost your data'
>>> arr1 = np.arange(5)
>>> obj = arr1.view(SillySubClass)
>>> arr2 = np.arange(5)
>>> ret = np.multiply(obj, arr2)
>>> ret
'I lost your data'
So, by defining a specific ``__array_wrap__`` method for our subclass,
we can tweak the output from ufuncs. The ``__array_wrap__`` method
requires ``self``, then an argument - which is the result of the ufunc -
and an optional parameter *context*. This parameter is returned by some
ufuncs as a 3-element tuple: (name of the ufunc, arguments of the ufunc,
domain of the ufunc). ``__array_wrap__`` should return an instance of
its containing class. See the masked array subclass for an
implementation.
In addition to ``__array_wrap__``, which is called on the way out of the
ufunc, there is also an ``__array_prepare__`` method which is called on
the way into the ufunc, after the output arrays are created but before any
computation has been performed. The default implementation does nothing
but pass through the array. ``__array_prepare__`` should not attempt to
access the array data or resize the array; it is intended for setting the
output array type, updating attributes and metadata, and performing any
checks based on the input that may be desired before computation begins.
Like ``__array_wrap__``, ``__array_prepare__`` must return an ndarray or
subclass thereof or raise an error.
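As a minimal sketch (``LoggingArray`` is a hypothetical name, not a numpy
class), a pass-through ``__array_prepare__`` that only inspects the
not-yet-filled output array might look like this:

.. testcode::

    import numpy as np

    class LoggingArray(np.ndarray):
        def __array_prepare__(self, out_arr, context=None):
            # called on the way into the ufunc; out_arr has been
            # allocated but not yet filled, so inspect only metadata
            print('preparing output of shape %s' % (out_arr.shape,))
            # must return an ndarray or subclass; delegate to the
            # default pass-through implementation
            return np.ndarray.__array_prepare__(self, out_arr, context)

Running a ufunc on such an array then prints before the computation:

>>> obj = np.arange(3).view(LoggingArray)
>>> ret = np.add(obj, 1)
preparing output of shape (3,)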
Extra gotchas - custom ``__del__`` methods and ndarray.base
-----------------------------------------------------------
One of the problems that ndarray solves is keeping track of memory
ownership of ndarrays and their views. Consider the case where we have
created an ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``.
The two objects are looking at the same memory. Numpy keeps track of
where the data came from for a particular array or view, with the
``base`` attribute:
>>> # A normal ndarray, that owns its own data
>>> arr = np.zeros((4,))
>>> # In this case, base is None
>>> arr.base is None
True
>>> # We take a view
>>> v1 = arr[1:]
>>> # base now points to the array that it derived from
>>> v1.base is arr
True
>>> # Take a view of a view
>>> v2 = v1[1:]
>>> # base points to the view it derived from
>>> v2.base is v1
True
In general, if the array owns its own memory, as for ``arr`` in this
case, then ``arr.base`` will be None - there are some exceptions to this
- see the numpy book for more details.
The ``base`` attribute is useful in being able to tell whether we have
a view or the original array. This in turn can be useful if we need
to know whether or not to do some specific cleanup when the subclassed
array is deleted. For example, we may only want to do the cleanup if
the original array is deleted, but not the views. For an example of
how this can work, have a look at the ``memmap`` class in
``numpy.core``.
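A minimal sketch of that pattern (illustrative only; ``free_resource`` is
a hypothetical helper, and ``__del__`` carries the usual caveats about
interpreter shutdown):

.. testcode::

    import numpy as np

    class CleanupArray(np.ndarray):
        def __del__(self):
            # base is None only for the array that owns its memory, so
            # deleting a view does not trigger the cleanup
            if self.base is None:
                pass  # e.g. free_resource(self) - hypothetical cleanup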
""" |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# GNU GENERAL PUBLIC LICENSE
# Version 3, 29 June 2007
#
# Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
# Everyone is permitted to copy and distribute verbatim copies
# of this license document, but changing it is not allowed.
#
# Preamble
#
# The GNU General Public License is a free, copyleft license for
# software and other kinds of works.
#
# The licenses for most software and other practical works are designed
# to take away your freedom to share and change the works. By contrast,
# the GNU General Public License is intended to guarantee your freedom to
# share and change all versions of a program--to make sure it remains free
# software for all its users. We, the Free Software Foundation, use the
# GNU General Public License for most of our software; it applies also to
# any other work released this way by its authors. You can apply it to
# your programs, too.
#
# When we speak of free software, we are referring to freedom, not
# price. Our General Public Licenses are designed to make sure that you
# have the freedom to distribute copies of free software (and charge for
# them if you wish), that you receive source code or can get it if you
# want it, that you can change the software or use pieces of it in new
# free programs, and that you know you can do these things.
#
# To protect your rights, we need to prevent others from denying you
# these rights or asking you to surrender the rights. Therefore, you have
# certain responsibilities if you distribute copies of the software, or if
# you modify it: responsibilities to respect the freedom of others.
#
# For example, if you distribute copies of such a program, whether
# gratis or for a fee, you must pass on to the recipients the same
# freedoms that you received. You must make sure that they, too, receive
# or can get the source code. And you must show them these terms so they
# know their rights.
#
# Developers that use the GNU GPL protect your rights with two steps:
# (1) assert copyright on the software, and (2) offer you this License
# giving you legal permission to copy, distribute and/or modify it.
#
# For the developers' and authors' protection, the GPL clearly explains
# that there is no warranty for this free software. For both users' and
# authors' sake, the GPL requires that modified versions be marked as
# changed, so that their problems will not be attributed erroneously to
# authors of previous versions.
#
# Some devices are designed to deny users access to install or run
# modified versions of the software inside them, although the manufacturer
# can do so. This is fundamentally incompatible with the aim of
# protecting users' freedom to change the software. The systematic
# pattern of such abuse occurs in the area of products for individuals to
# use, which is precisely where it is most unacceptable. Therefore, we
# have designed this version of the GPL to prohibit the practice for those
# products. If such problems arise substantially in other domains, we
# stand ready to extend this provision to those domains in future versions
# of the GPL, as needed to protect the freedom of users.
#
# Finally, every program is threatened constantly by software patents.
# States should not allow patents to restrict development and use of
# software on general-purpose computers, but in those that do, we wish to
# avoid the special danger that patents applied to a free program could
# make it effectively proprietary. To prevent this, the GPL assures that
# patents cannot be used to render the program non-free.
#
# The precise terms and conditions for copying, distribution and
# modification follow.
#
# TERMS AND CONDITIONS
#
# 0. Definitions.
#
# "This License" refers to version 3 of the GNU General Public License.
#
# "Copyright" also means copyright-like laws that apply to other kinds of
# works, such as semiconductor masks.
#
# "The Program" refers to any copyrightable work licensed under this
# License. Each licensee is addressed as "you". "Licensees" and
# "recipients" may be individuals or organizations.
#
# To "modify" a work means to copy from or adapt all or part of the work
# in a fashion requiring copyright permission, other than the making of an
# exact copy. The resulting work is called a "modified version" of the
# earlier work or a work "based on" the earlier work.
#
# A "covered work" means either the unmodified Program or a work based
# on the Program.
#
# To "propagate" a work means to do anything with it that, without
# permission, would make you directly or secondarily liable for
# infringement under applicable copyright law, except executing it on a
# computer or modifying a private copy. Propagation includes copying,
# distribution (with or without modification), making available to the
# public, and in some countries other activities as well.
#
# To "convey" a work means any kind of propagation that enables other
# parties to make or receive copies. Mere interaction with a user through
# a computer network, with no transfer of a copy, is not conveying.
#
# An interactive user interface displays "Appropriate Legal Notices"
# to the extent that it includes a convenient and prominently visible
# feature that (1) displays an appropriate copyright notice, and (2)
# tells the user that there is no warranty for the work (except to the
# extent that warranties are provided), that licensees may convey the
# work under this License, and how to view a copy of this License. If
# the interface presents a list of user commands or options, such as a
# menu, a prominent item in the list meets this criterion.
#
# 1. Source Code.
#
# The "source code" for a work means the preferred form of the work
# for making modifications to it. "Object code" means any non-source
# form of a work.
#
# A "Standard Interface" means an interface that either is an official
# standard defined by a recognized standards body, or, in the case of
# interfaces specified for a particular programming language, one that
# is widely used among developers working in that language.
#
# The "System Libraries" of an executable work include anything, other
# than the work as a whole, that (a) is included in the normal form of
# packaging a Major Component, but which is not part of that Major
# Component, and (b) serves only to enable use of the work with that
# Major Component, or to implement a Standard Interface for which an
# implementation is available to the public in source code form. A
# "Major Component", in this context, means a major essential component
# (kernel, window system, and so on) of the specific operating system
# (if any) on which the executable work runs, or a compiler used to
# produce the work, or an object code interpreter used to run it.
#
# The "Corresponding Source" for a work in object code form means all
# the source code needed to generate, install, and (for an executable
# work) run the object code and to modify the work, including scripts to
# control those activities. However, it does not include the work's
# System Libraries, or general-purpose tools or generally available free
# programs which are used unmodified in performing those activities but
# which are not part of the work. For example, Corresponding Source
# includes interface definition files associated with source files for
# the work, and the source code for shared libraries and dynamically
# linked subprograms that the work is specifically designed to require,
# such as by intimate data communication or control flow between those
# subprograms and other parts of the work.
#
# The Corresponding Source need not include anything that users
# can regenerate automatically from other parts of the Corresponding
# Source.
#
# The Corresponding Source for a work in source code form is that
# same work.
#
# 2. Basic Permissions.
#
# All rights granted under this License are granted for the term of
# copyright on the Program, and are irrevocable provided the stated
# conditions are met. This License explicitly affirms your unlimited
# permission to run the unmodified Program. The output from running a
# covered work is covered by this License only if the output, given its
# content, constitutes a covered work. This License acknowledges your
# rights of fair use or other equivalent, as provided by copyright law.
#
# You may make, run and propagate covered works that you do not
# convey, without conditions so long as your license otherwise remains
# in force. You may convey covered works to others for the sole purpose
# of having them make modifications exclusively for you, or provide you
# with facilities for running those works, provided that you comply with
# the terms of this License in conveying all material for which you do
# not control copyright. Those thus making or running the covered works
# for you must do so exclusively on your behalf, under your direction
# and control, on terms that prohibit them from making any copies of
# your copyrighted material outside their relationship with you.
#
# Conveying under any other circumstances is permitted solely under
# the conditions stated below. Sublicensing is not allowed; section 10
# makes it unnecessary.
#
# 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
#
# No covered work shall be deemed part of an effective technological
# measure under any applicable law fulfilling obligations under article
# 11 of the WIPO copyright treaty adopted on 20 December 1996, or
# similar laws prohibiting or restricting circumvention of such
# measures.
#
# When you convey a covered work, you waive any legal power to forbid
# circumvention of technological measures to the extent such circumvention
# is effected by exercising rights under this License with respect to
# the covered work, and you disclaim any intention to limit operation or
# modification of the work as a means of enforcing, against the work's
# users, your or third parties' legal rights to forbid circumvention of
# technological measures.
#
# 4. Conveying Verbatim Copies.
#
# You may convey verbatim copies of the Program's source code as you
# receive it, in any medium, provided that you conspicuously and
# appropriately publish on each copy an appropriate copyright notice;
# keep intact all notices stating that this License and any
# non-permissive terms added in accord with section 7 apply to the code;
# keep intact all notices of the absence of any warranty; and give all
# recipients a copy of this License along with the Program.
#
# You may charge any price or no price for each copy that you convey,
# and you may offer support or warranty protection for a fee.
#
# 5. Conveying Modified Source Versions.
#
# You may convey a work based on the Program, or the modifications to
# produce it from the Program, in the form of source code under the
# terms of section 4, provided that you also meet all of these conditions:
#
# a) The work must carry prominent notices stating that you modified
# it, and giving a relevant date.
#
# b) The work must carry prominent notices stating that it is
# released under this License and any conditions added under section
# 7. This requirement modifies the requirement in section 4 to
# "keep intact all notices".
#
# c) You must license the entire work, as a whole, under this
# License to anyone who comes into possession of a copy. This
# License will therefore apply, along with any applicable section 7
# additional terms, to the whole of the work, and all its parts,
# regardless of how they are packaged. This License gives no
# permission to license the work in any other way, but it does not
# invalidate such permission if you have separately received it.
#
# d) If the work has interactive user interfaces, each must display
# Appropriate Legal Notices; however, if the Program has interactive
# interfaces that do not display Appropriate Legal Notices, your
# work need not make them do so.
#
# A compilation of a covered work with other separate and independent
# works, which are not by their nature extensions of the covered work,
# and which are not combined with it such as to form a larger program,
# in or on a volume of a storage or distribution medium, is called an
# "aggregate" if the compilation and its resulting copyright are not
# used to limit the access or legal rights of the compilation's users
# beyond what the individual works permit. Inclusion of a covered work
# in an aggregate does not cause this License to apply to the other
# parts of the aggregate.
#
# 6. Conveying Non-Source Forms.
#
# You may convey a covered work in object code form under the terms
# of sections 4 and 5, provided that you also convey the
# machine-readable Corresponding Source under the terms of this License,
# in one of these ways:
#
# a) Convey the object code in, or embodied in, a physical product
# (including a physical distribution medium), accompanied by the
# Corresponding Source fixed on a durable physical medium
# customarily used for software interchange.
#
# b) Convey the object code in, or embodied in, a physical product
# (including a physical distribution medium), accompanied by a
# written offer, valid for at least three years and valid for as
# long as you offer spare parts or customer support for that product
# model, to give anyone who possesses the object code either (1) a
# copy of the Corresponding Source for all the software in the
# product that is covered by this License, on a durable physical
# medium customarily used for software interchange, for a price no
# more than your reasonable cost of physically performing this
# conveying of source, or (2) access to copy the
# Corresponding Source from a network server at no charge.
#
# c) Convey individual copies of the object code with a copy of the
# written offer to provide the Corresponding Source. This
# alternative is allowed only occasionally and noncommercially, and
# only if you received the object code with such an offer, in accord
# with subsection 6b.
#
# d) Convey the object code by offering access from a designated
# place (gratis or for a charge), and offer equivalent access to the
# Corresponding Source in the same way through the same place at no
# further charge. You need not require recipients to copy the
# Corresponding Source along with the object code. If the place to
# copy the object code is a network server, the Corresponding Source
# may be on a different server (operated by you or a third party)
# that supports equivalent copying facilities, provided you maintain
# clear directions next to the object code saying where to find the
# Corresponding Source. Regardless of what server hosts the
# Corresponding Source, you remain obligated to ensure that it is
# available for as long as needed to satisfy these requirements.
#
# e) Convey the object code using peer-to-peer transmission, provided
# you inform other peers where the object code and Corresponding
# Source of the work are being offered to the general public at no
# charge under subsection 6d.
#
# A separable portion of the object code, whose source code is excluded
# from the Corresponding Source as a System Library, need not be
# included in conveying the object code work.
#
# A "User Product" is either (1) a "consumer product", which means any
# tangible personal property which is normally used for personal, family,
# or household purposes, or (2) anything designed or sold for incorporation
# into a dwelling. In determining whether a product is a consumer product,
# doubtful cases shall be resolved in favor of coverage. For a particular
# product received by a particular user, "normally used" refers to a
# typical or common use of that class of product, regardless of the status
# of the particular user or of the way in which the particular user
# actually uses, or expects or is expected to use, the product. A product
# is a consumer product regardless of whether the product has substantial
# commercial, industrial or non-consumer uses, unless such uses represent
# the only significant mode of use of the product.
#
# "Installation Information" for a User Product means any methods,
# procedures, authorization keys, or other information required to install
# and execute modified versions of a covered work in that User Product from
# a modified version of its Corresponding Source. The information must
# suffice to ensure that the continued functioning of the modified object
# code is in no case prevented or interfered with solely because
# modification has been made.
#
# If you convey an object code work under this section in, or with, or
# specifically for use in, a User Product, and the conveying occurs as
# part of a transaction in which the right of possession and use of the
# User Product is transferred to the recipient in perpetuity or for a
# fixed term (regardless of how the transaction is characterized), the
# Corresponding Source conveyed under this section must be accompanied
# by the Installation Information. But this requirement does not apply
# if neither you nor any third party retains the ability to install
# modified object code on the User Product (for example, the work has
# been installed in ROM).
#
# The requirement to provide Installation Information does not include a
# requirement to continue to provide support service, warranty, or updates
# for a work that has been modified or installed by the recipient, or for
# the User Product in which it has been modified or installed. Access to a
# network may be denied when the modification itself materially and
# adversely affects the operation of the network or violates the rules and
# protocols for communication across the network.
#
# Corresponding Source conveyed, and Installation Information provided,
# in accord with this section must be in a format that is publicly
# documented (and with an implementation available to the public in
# source code form), and must require no special password or key for
# unpacking, reading or copying.
#
# 7. Additional Terms.
#
# "Additional permissions" are terms that supplement the terms of this
# License by making exceptions from one or more of its conditions.
# Additional permissions that are applicable to the entire Program shall
# be treated as though they were included in this License, to the extent
# that they are valid under applicable law. If additional permissions
# apply only to part of the Program, that part may be used separately
# under those permissions, but the entire Program remains governed by
# this License without regard to the additional permissions.
#
# When you convey a copy of a covered work, you may at your option
# remove any additional permissions from that copy, or from any part of
# it. (Additional permissions may be written to require their own
# removal in certain cases when you modify the work.) You may place
# additional permissions on material, added by you to a covered work,
# for which you have or can give appropriate copyright permission.
#
# Notwithstanding any other provision of this License, for material you
# add to a covered work, you may (if authorized by the copyright holders of
# that material) supplement the terms of this License with terms:
#
# a) Disclaiming warranty or limiting liability differently from the
# terms of sections 15 and 16 of this License; or
#
# b) Requiring preservation of specified reasonable legal notices or
# author attributions in that material or in the Appropriate Legal
# Notices displayed by works containing it; or
#
# c) Prohibiting misrepresentation of the origin of that material, or
# requiring that modified versions of such material be marked in
# reasonable ways as different from the original version; or
#
# d) Limiting the use for publicity purposes of names of licensors or
# authors of the material; or
#
# e) Declining to grant rights under trademark law for use of some
# trade names, trademarks, or service marks; or
#
# f) Requiring indemnification of licensors and authors of that
# material by anyone who conveys the material (or modified versions of
# it) with contractual assumptions of liability to the recipient, for
# any liability that these contractual assumptions directly impose on
# those licensors and authors.
#
# All other non-permissive additional terms are considered "further
# restrictions" within the meaning of section 10. If the Program as you
# received it, or any part of it, contains a notice stating that it is
# governed by this License along with a term that is a further
# restriction, you may remove that term. If a license document contains
# a further restriction but permits relicensing or conveying under this
# License, you may add to a covered work material governed by the terms
# of that license document, provided that the further restriction does
# not survive such relicensing or conveying.
#
# If you add terms to a covered work in accord with this section, you
# must place, in the relevant source files, a statement of the
# additional terms that apply to those files, or a notice indicating
# where to find the applicable terms.
#
# Additional terms, permissive or non-permissive, may be stated in the
# form of a separately written license, or stated as exceptions;
# the above requirements apply either way.
#
# 8. Termination.
#
# You may not propagate or modify a covered work except as expressly
# provided under this License. Any attempt otherwise to propagate or
# modify it is void, and will automatically terminate your rights under
# this License (including any patent licenses granted under the third
# paragraph of section 11).
#
# However, if you cease all violation of this License, then your
# license from a particular copyright holder is reinstated (a)
# provisionally, unless and until the copyright holder explicitly and
# finally terminates your license, and (b) permanently, if the copyright
# holder fails to notify you of the violation by some reasonable means
# prior to 60 days after the cessation.
#
# Moreover, your license from a particular copyright holder is
# reinstated permanently if the copyright holder notifies you of the
# violation by some reasonable means, this is the first time you have
# received notice of violation of this License (for any work) from that
# copyright holder, and you cure the violation prior to 30 days after
# your receipt of the notice.
#
# Termination of your rights under this section does not terminate the
# licenses of parties who have received copies or rights from you under
# this License. If your rights have been terminated and not permanently
# reinstated, you do not qualify to receive new licenses for the same
# material under section 10.
#
# 9. Acceptance Not Required for Having Copies.
#
# You are not required to accept this License in order to receive or
# run a copy of the Program. Ancillary propagation of a covered work
# occurring solely as a consequence of using peer-to-peer transmission
# to receive a copy likewise does not require acceptance. However,
# nothing other than this License grants you permission to propagate or
# modify any covered work. These actions infringe copyright if you do
# not accept this License. Therefore, by modifying or propagating a
# covered work, you indicate your acceptance of this License to do so.
#
# 10. Automatic Licensing of Downstream Recipients.
#
# Each time you convey a covered work, the recipient automatically
# receives a license from the original licensors, to run, modify and
# propagate that work, subject to this License. You are not responsible
# for enforcing compliance by third parties with this License.
#
# An "entity transaction" is a transaction transferring control of an
# organization, or substantially all assets of one, or subdividing an
# organization, or merging organizations. If propagation of a covered
# work results from an entity transaction, each party to that
# transaction who receives a copy of the work also receives whatever
# licenses to the work the party's predecessor in interest had or could
# give under the previous paragraph, plus a right to possession of the
# Corresponding Source of the work from the predecessor in interest, if
# the predecessor has it or can get it with reasonable efforts.
#
# You may not impose any further restrictions on the exercise of the
# rights granted or affirmed under this License. For example, you may
# not impose a license fee, royalty, or other charge for exercise of
# rights granted under this License, and you may not initiate litigation
# (including a cross-claim or counterclaim in a lawsuit) alleging that
# any patent claim is infringed by making, using, selling, offering for
# sale, or importing the Program or any portion of it.
#
# 11. Patents.
#
# A "contributor" is a copyright holder who authorizes use under this
# License of the Program or a work on which the Program is based. The
# work thus licensed is called the contributor's "contributor version".
#
# A contributor's "essential patent claims" are all patent claims
# owned or controlled by the contributor, whether already acquired or
# hereafter acquired, that would be infringed by some manner, permitted
# by this License, of making, using, or selling its contributor version,
# but do not include claims that would be infringed only as a
# consequence of further modification of the contributor version. For
# purposes of this definition, "control" includes the right to grant
# patent sublicenses in a manner consistent with the requirements of
# this License.
#
# Each contributor grants you a non-exclusive, worldwide, royalty-free
# patent license under the contributor's essential patent claims, to
# make, use, sell, offer for sale, import and otherwise run, modify and
# propagate the contents of its contributor version.
#
# In the following three paragraphs, a "patent license" is any express
# agreement or commitment, however denominated, not to enforce a patent
# (such as an express permission to practice a patent or covenant not to
# sue for patent infringement). To "grant" such a patent license to a
# party means to make such an agreement or commitment not to enforce a
# patent against the party.
#
# If you convey a covered work, knowingly relying on a patent license,
# and the Corresponding Source of the work is not available for anyone
# to copy, free of charge and under the terms of this License, through a
# publicly available network server or other readily accessible means,
# then you must either (1) cause the Corresponding Source to be so
# available, or (2) arrange to deprive yourself of the benefit of the
# patent license for this particular work, or (3) arrange, in a manner
# consistent with the requirements of this License, to extend the patent
# license to downstream recipients. "Knowingly relying" means you have
# actual knowledge that, but for the patent license, your conveying the
# covered work in a country, or your recipient's use of the covered work
# in a country, would infringe one or more identifiable patents in that
# country that you have reason to believe are valid.
#
# If, pursuant to or in connection with a single transaction or
# arrangement, you convey, or propagate by procuring conveyance of, a
# covered work, and grant a patent license to some of the parties
# receiving the covered work authorizing them to use, propagate, modify
# or convey a specific copy of the covered work, then the patent license
# you grant is automatically extended to all recipients of the covered
# work and works based on it.
#
# A patent license is "discriminatory" if it does not include within
# the scope of its coverage, prohibits the exercise of, or is
# conditioned on the non-exercise of one or more of the rights that are
# specifically granted under this License. You may not convey a covered
# work if you are a party to an arrangement with a third party that is
# in the business of distributing software, under which you make payment
# to the third party based on the extent of your activity of conveying
# the work, and under which the third party grants, to any of the
# parties who would receive the covered work from you, a discriminatory
# patent license (a) in connection with copies of the covered work
# conveyed by you (or copies made from those copies), or (b) primarily
# for and in connection with specific products or compilations that
# contain the covered work, unless you entered into that arrangement,
# or that patent license was granted, prior to 28 March 2007.
#
# Nothing in this License shall be construed as excluding or limiting
# any implied license or other defenses to infringement that may
# otherwise be available to you under applicable patent law.
#
# 12. No Surrender of Others' Freedom.
#
# If conditions are imposed on you (whether by court order, agreement or
# otherwise) that contradict the conditions of this License, they do not
# excuse you from the conditions of this License. If you cannot convey a
# covered work so as to satisfy simultaneously your obligations under this
# License and any other pertinent obligations, then as a consequence you may
# not convey it at all. For example, if you agree to terms that obligate you
# to collect a royalty for further conveying from those to whom you convey
# the Program, the only way you could satisfy both those terms and this
# License would be to refrain entirely from conveying the Program.
#
# 13. Use with the GNU Affero General Public License.
#
# Notwithstanding any other provision of this License, you have
# permission to link or combine any covered work with a work licensed
# under version 3 of the GNU Affero General Public License into a single
# combined work, and to convey the resulting work. The terms of this
# License will continue to apply to the part which is the covered work,
# but the special requirements of the GNU Affero General Public License,
# section 13, concerning interaction through a network will apply to the
# combination as such.
#
# 14. Revised Versions of this License.
#
# The Free Software Foundation may publish revised and/or new versions of
# the GNU General Public License from time to time. Such new versions will
# be similar in spirit to the present version, but may differ in detail to
# address new problems or concerns.
#
# Each version is given a distinguishing version number. If the
# Program specifies that a certain numbered version of the GNU General
# Public License "or any later version" applies to it, you have the
# option of following the terms and conditions either of that numbered
# version or of any later version published by the Free Software
# Foundation. If the Program does not specify a version number of the
# GNU General Public License, you may choose any version ever published
# by the Free Software Foundation.
#
# If the Program specifies that a proxy can decide which future
# versions of the GNU General Public License can be used, that proxy's
# public statement of acceptance of a version permanently authorizes you
# to choose that version for the Program.
#
# Later license versions may give you additional or different
# permissions. However, no additional obligations are imposed on any
# author or copyright holder as a result of your choosing to follow a
# later version.
#
# 15. Disclaimer of Warranty.
#
# THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
# APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
# HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
# OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
# IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
# ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
#
# 16. Limitation of Liability.
#
# IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
# WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
# THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
# GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
# USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
# DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
# PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
# EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGES.
#
# 17. Interpretation of Sections 15 and 16.
#
# If the disclaimer of warranty and limitation of liability provided
# above cannot be given local legal effect according to their terms,
# reviewing courts shall apply local law that most closely approximates
# an absolute waiver of all civil liability in connection with the
# Program, unless a warranty or assumption of liability accompanies a
# copy of the Program in return for a fee.
#
# END OF TERMS AND CONDITIONS
#
# How to Apply These Terms to Your New Programs
#
# If you develop a new program, and you want it to be of the greatest
# possible use to the public, the best way to achieve this is to make it
# free software which everyone can redistribute and change under these terms.
#
# To do so, attach the following notices to the program. It is safest
# to attach them to the start of each source file to most effectively
# state the exclusion of warranty; and each file should have at least
# the "copyright" line and a pointer to where the full notice is found.
#
# <one line to give the program's name and a brief idea of what it does.>
# Copyright (C) <year> <name of author>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Also add information on how to contact you by electronic and paper mail.
#
# If the program does terminal interaction, make it output a short
# notice like this when it starts in an interactive mode:
#
# <program> Copyright (C) <year> <name of author>
# This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
# This is free software, and you are welcome to redistribute it
# under certain conditions; type `show c' for details.
#
# The hypothetical commands `show w' and `show c' should show the appropriate
# parts of the General Public License. Of course, your program's commands
# might be different; for a GUI interface, you would use an "about box".
#
# You should also get your employer (if you work as a programmer) or school,
# if any, to sign a "copyright disclaimer" for the program, if necessary.
# For more information on this, and how to apply and follow the GNU GPL, see
# <http://www.gnu.org/licenses/>.
#
# The GNU General Public License does not permit incorporating your program
# into proprietary programs. If your program is a subroutine library, you
# may consider it more useful to permit linking proprietary applications with
# the library. If this is what you want to do, use the GNU Lesser General
# Public License instead of this License. But first, please read
# <http://www.gnu.org/philosophy/why-not-lgpl.html>.
#
|
"""Simulate detachment limited sediment transport.
This Landlab component simulates detachment limited sediment transport and is
more general than the stream power component: it does not require the
upstream node order, links to flow receiver, or flow receiver fields.
Instead, it takes the discharge values on NODES calculated by the
OverlandFlow class and erodes the landscape in response to the output
discharge.
As of right now, this component relies on the OverlandFlow component
for stability. There are no stability criteria implemented in this class.
To ensure model stability, use StreamPowerEroder or FastscapeEroder
components instead.
.. codeauthor:: NAME
>>> import numpy as np
>>> from landlab import RasterModelGrid
>>> from landlab.components import DetachmentLtdErosion
Create a grid on which to calculate detachment ltd sediment transport.
>>> grid = RasterModelGrid((4, 5))
The grid will need some data to provide the detachment limited sediment
transport component. To check the names of the fields that provide input to
the detachment ltd transport component, use the *input_var_names* class
property.
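As an illustrative sketch (the field names below are assumptions inferred
from the fields created later in this example):
>>> sorted(DetachmentLtdErosion.input_var_names)  # doctest: +SKIP
['surface_water__discharge', 'topographic__elevation', 'topographic__slope']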
Create fields of data for each of these input variables.
>>> grid.at_node['topographic__elevation'] = np.array([
... 0., 0., 0., 0., 0.,
... 1., 1., 1., 1., 1.,
... 2., 2., 2., 2., 2.,
... 3., 3., 3., 3., 3.])
Using the set topography, now we will calculate slopes on all nodes.
>>> grid.at_node['topographic__slope'] = np.array([
... -0. , -0. , -0. , -0. , -0.,
... 0.70710678, 1. , 1. , 1. , 0.70710678,
... 0.70710678, 1. , 1. , 1. , 0.70710678,
... 0.70710678, 1. , 1. , 1. , 0.70710678])
Now we will arbitrarily add water discharge to each node for simplicity.
>>> grid.at_node['surface_water__discharge'] = np.array([
... 30., 30., 30., 30., 30.,
... 20., 20., 20., 20., 20.,
... 10., 10., 10., 10., 10.,
... 5., 5., 5., 5., 5.])
Instantiate the `DetachmentLtdErosion` component to work on this grid, and
run it. In this simple case, we need to pass it a time step ('dt'):
>>> dt = 10.0
>>> dle = DetachmentLtdErosion(grid)
>>> dle.erode(dt=dt)
After calculating the erosion rate, the elevation field is updated in the
grid. Use the *output_var_names* property to see the names of the fields that
have been changed.
>>> dle.output_var_names
('topographic__elevation',)
The `topographic__elevation` field is defined at nodes.
>>> dle.var_loc('topographic__elevation')
'node'
Now we test to see how the topography changed as a function of the erosion
rate.
>>> grid.at_node['topographic__elevation'] # doctest: +NORMALIZE_WHITESPACE
array([ 0. , 0. , 0. , 0. , 0. ,
0.99993675, 0.99991056, 0.99991056, 0.99991056, 0.99993675,
1.99995528, 1.99993675, 1.99993675, 1.99993675, 1.99995528,
2.99996838, 2.99995528, 2.99995528, 2.99995528, 2.99996838])
""" |
"""
Basic functions used by several sub-packages and
useful to have in the main name-space.
Type Handling
-------------
================ ===================
iscomplexobj Test for complex object, scalar result
isrealobj Test for real object, scalar result
iscomplex Test for complex elements, array result
isreal Test for real elements, array result
imag Imaginary part
real Real part
real_if_close Turns complex number with tiny imaginary part to real
isneginf Tests for negative infinity, array result
isposinf Tests for positive infinity, array result
isnan Tests for nans, array result
isinf Tests for infinity, array result
isfinite Tests for finite numbers, array result
isscalar True if argument is a scalar
nan_to_num Replaces NaN's with 0 and infinities with large numbers
cast Dictionary of functions to force cast to each type
common_type Determine the minimum common type code for a group
of arrays
mintypecode Return minimal allowed common typecode.
================ ===================
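For instance, the scalar-result tests can be used as follows (a brief,
illustrative sketch):
>>> import numpy as np
>>> np.iscomplexobj(1 + 2j)
True
>>> np.isrealobj([1.0, 2.0])
True
>>> np.isscalar(3.1)
True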
Index Tricks
------------
================ ===================
mgrid Method which allows easy construction of N-d
'mesh-grids'
``r_`` Append and construct arrays: turns slice objects into
ranges and concatenates them, for 2d arrays appends rows.
index_exp Konrad Hinsen's index_expression class instance which
can be useful for building complicated slicing syntax.
================ ===================
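For example, ``r_`` and ``index_exp`` behave as follows (illustrative):
>>> import numpy as np
>>> np.r_[1:4, 0, 4]
array([1, 2, 3, 0, 4])
>>> np.index_exp[2:5]
(slice(2, 5, None),)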
Useful Functions
----------------
================ ===================
select Extension of where to multiple conditions and choices
extract Extract 1d array from flattened array according to mask
insert Insert 1d array of values into Nd array according to mask
linspace Evenly spaced samples in linear space
logspace Evenly spaced samples in logarithmic space
fix Round x to nearest integer towards zero
mod Modulo mod(x,y) = x % y except keeps sign of y
amax Array maximum along axis
amin Array minimum along axis
ptp Array max-min along axis
cumsum Cumulative sum along axis
prod Product of elements along axis
cumprod Cumulative product along axis
diff Discrete differences along axis
angle Returns angle of complex argument
unwrap Unwrap phase along given axis (1-d algorithm)
sort_complex Sort a complex-array (based on real, then imaginary)
trim_zeros Trim the leading and trailing zeros from 1D array.
vectorize A class that wraps a Python function taking scalar
arguments into a generalized function which can handle
arrays of arguments using the broadcast rules of
numpy.
================ ===================
Shape Manipulation
------------------
================ ===================
squeeze Return an array with length-one dimensions removed.
atleast_1d Force arrays to be >= 1D
atleast_2d Force arrays to be >= 2D
atleast_3d Force arrays to be >= 3D
vstack Stack arrays vertically (row on row)
hstack Stack arrays horizontally (column on column)
column_stack Stack 1D arrays as columns into 2D array
dstack Stack arrays depthwise (along third dimension)
stack Stack arrays along a new axis
split Divide array into a list of sub-arrays
hsplit Split into columns
vsplit Split into rows
dsplit Split along third dimension
================ ===================
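A short illustration of the stacking helpers:
>>> import numpy as np
>>> a = np.array([1, 2])
>>> b = np.array([3, 4])
>>> np.vstack((a, b))
array([[1, 2],
       [3, 4]])
>>> np.hstack((a, b))
array([1, 2, 3, 4])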
Matrix (2D Array) Manipulations
-------------------------------
================ ===================
fliplr 2D array with columns flipped
flipud 2D array with rows flipped
rot90 Rotate a 2D array a multiple of 90 degrees
eye Return a 2D array with ones down a given diagonal
diag Construct a 2D array from a vector, or return a given
diagonal from a 2D array.
mat Construct a Matrix
bmat Build a Matrix from blocks
================ ===================
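For example (illustrative):
>>> import numpy as np
>>> m = np.diag([1, 2])
>>> m
array([[1, 0],
       [0, 2]])
>>> np.fliplr(m)
array([[0, 1],
       [2, 0]])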
Polynomials
-----------
================ ===================
poly1d A one-dimensional polynomial class
poly Return polynomial coefficients from roots
roots Find roots of polynomial given coefficients
polyint Integrate polynomial
polyder Differentiate polynomial
polyadd Add polynomials
polysub Subtract polynomials
polymul Multiply polynomials
polydiv Divide polynomials
polyval Evaluate polynomial at given argument
================ ===================
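A minimal sketch of the polynomial helpers:
>>> import numpy as np
>>> p = np.poly1d([1, 0, -4])  # represents x**2 - 4
>>> p(2)
0
>>> np.polyder(p)
poly1d([2, 0])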
Iterators
---------
================ ===================
Arrayterator A buffered iterator for big arrays.
================ ===================
Import Tricks
-------------
================ ===================
ppimport Postpone module import until trying to use it
ppimport_attr Postpone module import until trying to use its attribute
ppresolve Import postponed module and return it.
================ ===================
Machine Arithmetics
-------------------
================ ===================
machar_single Single precision floating point arithmetic parameters
machar_double Double precision floating point arithmetic parameters
================ ===================
Threading Tricks
----------------
================ ===================
ParallelExec Execute commands in parallel thread.
================ ===================
1D Array Set Operations
-----------------------
Set operations for 1D numeric arrays based on sort() function.
================ ===================
ediff1d Array difference (auxiliary function).
unique Unique elements of an array.
intersect1d Intersection of 1D arrays with unique elements.
setxor1d Set exclusive-or of 1D arrays with unique elements.
in1d Test whether elements in a 1D array are also present in
another array.
union1d Union of 1D arrays with unique elements.
setdiff1d Set difference of 1D arrays with unique elements.
================ ===================
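For example (illustrative):
>>> import numpy as np
>>> np.intersect1d([1, 3, 4, 3], [3, 1, 2, 1])
array([1, 3])
>>> np.union1d([-1, 0, 1], [-2, 0, 2])
array([-2, -1,  0,  1,  2])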
""" |
"""
==========================================
Statistical functions (:mod:`scipy.stats`)
==========================================
.. module:: scipy.stats
This module contains a large number of probability distributions as
well as a growing library of statistical functions.
Each univariate distribution is an instance of a subclass of `rv_continuous`
(`rv_discrete` for discrete distributions):
.. autosummary::
:toctree: generated/
rv_continuous
rv_discrete
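All distributions share a common call interface; as a brief, illustrative
sketch (not part of the original listing):
>>> from scipy import stats
>>> stats.norm.cdf(0.0)
0.5
>>> stats.binom.pmf(2, n=4, p=0.5)
0.375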
Continuous distributions
========================
.. autosummary::
:toctree: generated/
alpha -- Alpha
anglit -- Anglit
arcsine -- Arcsine
beta -- Beta
betaprime -- Beta Prime
bradford -- Bradford
burr -- Burr (Type III)
burr12 -- Burr (Type XII)
cauchy -- Cauchy
chi -- Chi
chi2 -- Chi-squared
cosine -- Cosine
dgamma -- Double Gamma
dweibull -- Double Weibull
erlang -- Erlang
expon -- Exponential
exponnorm -- Exponentially Modified Normal
exponweib -- Exponentiated Weibull
exponpow -- Exponential Power
f -- F (Snedecor F)
fatiguelife -- Fatigue Life (Birnbaum-Saunders)
fisk -- Fisk
foldcauchy -- Folded Cauchy
foldnorm -- Folded Normal
frechet_r -- Frechet Right Sided, Extreme Value Type II (Extreme LB) or weibull_min
frechet_l -- Frechet Left Sided, Weibull_max
genlogistic -- Generalized Logistic
gennorm -- Generalized normal
genpareto -- Generalized Pareto
genexpon -- Generalized Exponential
genextreme -- Generalized Extreme Value
gausshyper -- Gauss Hypergeometric
gamma -- Gamma
gengamma -- Generalized gamma
genhalflogistic -- Generalized Half Logistic
gilbrat -- Gilbrat
gompertz -- Gompertz (Truncated Gumbel)
gumbel_r -- Right Sided Gumbel, Log-Weibull, Fisher-Tippett, Extreme Value Type I
gumbel_l -- Left Sided Gumbel, etc.
halfcauchy -- Half Cauchy
halflogistic -- Half Logistic
halfnorm -- Half Normal
halfgennorm -- Generalized Half Normal
hypsecant -- Hyperbolic Secant
invgamma -- Inverse Gamma
invgauss -- Inverse Gaussian
invweibull -- Inverse Weibull
johnsonsb -- NAME
johnsonsu -- NAME
kappa4 -- Kappa 4 parameter
kappa3 -- Kappa 3 parameter
ksone -- Kolmogorov-Smirnov one-sided (no stats)
kstwobign -- Kolmogorov-Smirnov two-sided test for Large N (no stats)
laplace -- Laplace
levy -- Levy
levy_l
levy_stable
logistic -- Logistic
loggamma -- Log-Gamma
loglaplace -- Log-Laplace (Log Double Exponential)
lognorm -- Log-Normal
lomax -- Lomax (Pareto of the second kind)
maxwell -- Maxwell
mielke -- Mielke's Beta-Kappa
nakagami -- Nakagami
ncx2 -- Non-central chi-squared
ncf -- Non-central F
nct -- Non-central Student's T
norm -- Normal (Gaussian)
pareto -- Pareto
pearson3 -- Pearson type III
powerlaw -- Power-function
powerlognorm -- Power log normal
powernorm -- Power normal
rdist -- R-distribution
reciprocal -- Reciprocal
rayleigh -- Rayleigh
rice -- Rice
recipinvgauss -- Reciprocal Inverse Gaussian
semicircular -- Semicircular
skewnorm -- Skew normal
t -- Student's T
trapz -- Trapezoidal
triang -- Triangular
truncexpon -- Truncated Exponential
truncnorm -- Truncated Normal
tukeylambda -- Tukey-Lambda
uniform -- Uniform
vonmises -- Von-Mises (Circular)
vonmises_line -- Von-Mises (Line)
wald -- Wald
weibull_min -- Minimum Weibull (see Frechet)
weibull_max -- Maximum Weibull (see Frechet)
wrapcauchy -- Wrapped Cauchy
Multivariate distributions
==========================
.. autosummary::
:toctree: generated/
multivariate_normal -- Multivariate normal distribution
matrix_normal -- Matrix normal distribution
dirichlet -- Dirichlet
wishart -- Wishart
invwishart -- Inverse Wishart
special_ortho_group -- SO(N) group
ortho_group -- O(N) group
random_correlation -- random correlation matrices
Discrete distributions
======================
.. autosummary::
:toctree: generated/
bernoulli -- Bernoulli
binom -- Binomial
boltzmann -- Boltzmann (Truncated Discrete Exponential)
dlaplace -- Discrete Laplacian
geom -- Geometric
hypergeom -- Hypergeometric
logser -- Logarithmic (Log-Series, Series)
nbinom -- Negative Binomial
planck -- Planck (Discrete Exponential)
poisson -- Poisson
randint -- Discrete Uniform
skellam -- Skellam
zipf -- Zipf
Statistical functions
=====================
Several of these functions have a similar version in scipy.stats.mstats
that works on masked arrays.
.. autosummary::
:toctree: generated/
describe -- Descriptive statistics
gmean -- Geometric mean
hmean -- Harmonic mean
kurtosis -- Fisher or Pearson kurtosis
kurtosistest --
mode -- Modal value
moment -- Central moment
normaltest --
skew -- Skewness
skewtest --
kstat --
kstatvar --
tmean -- Truncated arithmetic mean
tvar -- Truncated variance
tmin --
tmax --
tstd --
tsem --
variation -- Coefficient of variation
find_repeats
trim_mean
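For instance, the mean helpers follow a simple call pattern (a brief,
illustrative sketch; ``tmean`` with no limits reduces to the plain mean):
>>> from scipy import stats
>>> stats.hmean([1, 4, 4])
2.0
>>> stats.tmean([1, 2, 3, 4])
2.5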
.. autosummary::
:toctree: generated/
cumfreq
histogram2
histogram
itemfreq
percentileofscore
scoreatpercentile
relfreq
.. autosummary::
:toctree: generated/
binned_statistic -- Compute a binned statistic for a set of data.
binned_statistic_2d -- Compute a 2-D binned statistic for a set of data.
binned_statistic_dd -- Compute a d-D binned statistic for a set of data.
.. autosummary::
:toctree: generated/
obrientransform
signaltonoise
bayes_mvs
mvsdist
sem
zmap
zscore
iqr
.. autosummary::
:toctree: generated/
sigmaclip
threshold
trimboth
trim1
.. autosummary::
:toctree: generated/
f_oneway
pearsonr
spearmanr
pointbiserialr
kendalltau
linregress
theilslopes
f_value
.. autosummary::
:toctree: generated/
ttest_1samp
ttest_ind
ttest_ind_from_stats
ttest_rel
kstest
chisquare
power_divergence
ks_2samp
mannwhitneyu
tiecorrect
rankdata
ranksums
wilcoxon
kruskal
friedmanchisquare
combine_pvalues
ss
square_of_sums
jarque_bera
.. autosummary::
:toctree: generated/
ansari
bartlett
levene
shapiro
anderson
anderson_ksamp
binom_test
fligner
median_test
mood
.. autosummary::
:toctree: generated/
boxcox
boxcox_normmax
boxcox_llf
entropy
.. autosummary::
:toctree: generated/
chisqprob
betai
Circular statistical functions
==============================
.. autosummary::
:toctree: generated/
circmean
circvar
circstd
Contingency table functions
===========================
.. autosummary::
:toctree: generated/
chi2_contingency
contingency.expected_freq
contingency.margins
fisher_exact
Plot-tests
==========
.. autosummary::
:toctree: generated/
ppcc_max
ppcc_plot
probplot
boxcox_normplot
Masked statistics functions
===========================
.. toctree::
stats.mstats
Univariate and multivariate kernel density estimation (:mod:`scipy.stats.kde`)
==============================================================================
.. autosummary::
:toctree: generated/
gaussian_kde
For many more statistics-related functions, install the software R and the
interface package rpy.
""" |
"""
Introduction
============
SqlSoup provides a convenient way to access database tables without
having to declare table or mapper classes ahead of time.
Suppose we have a database with users, books, and loans tables
(corresponding to the PyWebOff dataset, if you're curious). For
testing purposes, we'll create this db as follows::
>>> from sqlalchemy import create_engine
>>> e = create_engine('sqlite:///:memory:')
>>> for sql in _testsql: e.execute(sql) #doctest: +ELLIPSIS
<...
Creating a SqlSoup gateway is just like creating an SQLAlchemy
engine::
>>> from sqlalchemy.ext.sqlsoup import SqlSoup
>>> db = SqlSoup('sqlite:///:memory:')
or, you can re-use an existing metadata or engine::
>>> db = SqlSoup(MetaData(e))
You can optionally specify a schema within the database for your
SqlSoup::
# >>> db.schema = myschemaname
Loading objects
===============
Loading objects is as easy as this::
>>> users = db.users.all()
>>> users.sort()
>>> users
[MappedUsers(name='NAME NAME'), MappedUsers(name='Bhargan NAME')]
Of course, letting the database do the sort is better::
>>> db.users.order_by(db.users.name).all()
[MappedUsers(name='Bhargan NAME'), MappedUsers(name='NAME NAME')]
Field access is intuitive::
>>> users[0].email
u'EMAIL'
Of course, you don't want to load all users very often. Let's add a
WHERE clause. Let's also switch the order_by to DESC while we're at
it::
>>> from sqlalchemy import or_, and_, desc
>>> where = or_(db.users.name=='Bhargan NAME', db.users.email=='EMAIL')
>>> db.users.filter(where).order_by(desc(db.users.name)).all()
[MappedUsers(name='NAME NAME'), MappedUsers(name='Bhargan NAME',email='EMAIL',password='basepair',classname=None,admin=1)]
You can also use .first() (to retrieve only the first object from a query) or
.one() (like .first when you expect exactly one user -- it will raise an
exception if more were returned)::
>>> db.users.filter(db.users.name=='Bhargan NAME').one()
MappedUsers(name='Bhargan NAME',email='EMAIL',password='basepair',classname=None,admin=1)
Since name is the primary key, this is equivalent to
>>> db.users.get('Bhargan NAME')
MappedUsers(name='Bhargan NAME',email='EMAIL',password='basepair',classname=None,admin=1)
This is also equivalent to
>>> db.users.filter_by(name='Bhargan NAME').one()
MappedUsers(name='Bhargan NAME',email='EMAIL',password='basepair',classname=None,admin=1)
filter_by is like filter, but takes kwargs instead of full clause expressions.
This makes it more concise for simple queries like this, but you can't do
complex queries like the or\_ above or non-equality based comparisons this way.
Full query documentation
------------------------
Get, filter, filter_by, order_by, limit, and the rest of the
query methods are explained in detail in the `SQLAlchemy documentation`__.
__ http://www.sqlalchemy.org/docs/04/ormtutorial.html#datamapping_querying
Modifying objects
=================
Modifying objects is intuitive::
>>> user = _
>>> user.email = 'EMAIL'
>>> db.flush()
(SqlSoup leverages the sophisticated SQLAlchemy unit-of-work code, so
multiple updates to a single object will be turned into a single
``UPDATE`` statement when you flush.)
To finish covering the basics, let's insert a new loan, then delete
it::
>>> book_id = db.books.filter_by(title='Regional Variation in Moss').first().id
>>> db.loans.insert(book_id=book_id, user_name=user.name)
MappedLoans(book_id=2,user_name='Bhargan NAME',loan_date=None)
>>> db.flush()
>>> loan = db.loans.filter_by(book_id=2, user_name='Bhargan NAME').one()
>>> db.delete(loan)
>>> db.flush()
You can also delete rows that have not been loaded as objects. Let's
do our insert/delete cycle once more, this time using the loans
table's delete method. (For SQLAlchemy experts: note that no flush()
call is required since this delete acts at the SQL level, not at the
Mapper level.) The same where-clause construction rules apply here as
to the select methods.
::
>>> db.loans.insert(book_id=book_id, user_name=user.name)
MappedLoans(book_id=2,user_name='Bhargan NAME',loan_date=None)
>>> db.flush()
>>> db.loans.delete(db.loans.book_id==2)
You can similarly update multiple rows at once. This will change the
book_id to 1 in all loans whose book_id is 2::
>>> db.loans.update(db.loans.book_id==2, book_id=1)
>>> db.loans.filter_by(book_id=1).all()
[MappedLoans(book_id=1,user_name='NAME',loan_date=datetime.datetime(2006, 7, 12, 0, 0))]
Joins
=====
Occasionally, you will want to pull out a lot of data from related
tables all at once. In this situation, it is far more efficient to
have the database perform the necessary join. (Here we do not have *a
lot of data* but hopefully the concept is still clear.) SQLAlchemy is
smart enough to recognize that loans has a foreign key to users, and
uses that as the join condition automatically.
::
>>> join1 = db.join(db.users, db.loans, isouter=True)
>>> join1.filter_by(name='NAME NAME').all()
[MappedJoin(name='NAME Student',email='EMAIL',password='student',classname=None,admin=0,book_id=1,user_name='NAME',loan_date=datetime.datetime(2006, 7, 12, 0, 0))]
If you're unfortunate enough to be using MySQL with the default MyISAM
storage engine, you'll have to specify the join condition manually,
since MyISAM does not store foreign keys. Here's the same join again,
with the join condition explicitly specified::
>>> db.join(db.users, db.loans, db.users.name==db.loans.user_name, isouter=True)
<class 'sqlalchemy.ext.sqlsoup.MappedJoin'>
You can compose arbitrarily complex joins by combining Join objects
with tables or other joins. Here we combine our first join with the
books table::
>>> join2 = db.join(join1, db.books)
>>> join2.all()
[MappedJoin(name='NAME Student',email='EMAIL',password='student',classname=None,admin=0,book_id=1,user_name='NAME',loan_date=datetime.datetime(2006, 7, 12, 0, 0),id=1,title='Mustards I Have Known',published_year='1989',authors='Jones')]
If you join tables that have an identical column name, wrap your join
with `with_labels`, to disambiguate columns with their table name
(.c is short for .columns)::
>>> db.with_labels(join1).c.keys()
[u'users_name', u'users_email', u'users_password', u'users_classname', u'users_admin', u'loans_book_id', u'loans_user_name', u'loans_loan_date']
You can also join directly to a labeled object::
>>> labeled_loans = db.with_labels(db.loans)
>>> db.join(db.users, labeled_loans, isouter=True).c.keys()
[u'name', u'email', u'password', u'classname', u'admin', u'loans_book_id', u'loans_user_name', u'loans_loan_date']
Advanced Use
============
Accessing the Session
---------------------
SqlSoup uses a SessionContext to provide thread-local sessions. You
can get a reference to the current one like this::
>>> from sqlalchemy.ext.sqlsoup import objectstore
>>> session = objectstore.current
Now you have access to all the standard session-based SA features,
such as transactions. (SqlSoup's ``flush()`` is normally
transactionalized, but you can perform manual transaction management
if you need a transaction to span multiple flushes.)
Mapping arbitrary Selectables
-----------------------------
SqlSoup can map any SQLAlchemy ``Selectable`` with the map
method. Let's map a ``Select`` object that uses an aggregate function;
we'll use the SQLAlchemy ``Table`` that SqlSoup introspected as the
basis. (Since we're not mapping to a simple table or join, we need to
tell SQLAlchemy how to find the *primary key* which just needs to be
unique within the select, and not necessarily correspond to a *real*
PK in the database.)
::
>>> from sqlalchemy import select, func
>>> b = db.books._table
>>> s = select([b.c.published_year, func.count('*').label('n')], from_obj=[b], group_by=[b.c.published_year])
>>> s = s.alias('years_with_count')
>>> years_with_count = db.map(s, primary_key=[s.c.published_year])
>>> years_with_count.filter_by(published_year='1989').all()
[MappedBooks(published_year='1989',n=1)]
Obviously if we just wanted to get a list of counts associated with
book years once, raw SQL is going to be less work. The advantage of
mapping a Select is reusability, both standalone and in Joins. (And if
you go to full SQLAlchemy, you can perform mappings like this directly
to your object models.)
An easy way to save mapped selectables like this is to just hang them on
your db object::
>>> db.years_with_count = years_with_count
Python is flexible like that!
Raw SQL
-------
SqlSoup works fine with SQLAlchemy's `text block support`__.
__ http://www.sqlalchemy.org/docs/04/sqlexpression.html#sql_text
You can also access the SqlSoup's `engine` attribute to compose SQL
directly. The engine's ``execute`` method corresponds to the one of a
DBAPI cursor, and returns a ``ResultProxy`` that has ``fetch`` methods
you would also see on a cursor::
>>> rp = db.bind.execute('select name, email from users order by name')
>>> for name, email in rp.fetchall(): print name, email
Bhargan Basepair EMAIL
NAME Student EMAIL
You can also pass this engine object to other SQLAlchemy constructs.
Extra tests
===========
Boring tests here. Nothing of real expository value.
::
>>> db.users.filter_by(classname=None).order_by(db.users.name).all()
[MappedUsers(name='Bhargan NAME',email='EMAIL',password='basepair',classname=None,admin=1), MappedUsers(name='NAME NAME')]
>>> db.nopk
Traceback (most recent call last):
...
PKNotFoundError: table 'nopk' does not have a primary key defined [columns: i]
>>> db.nosuchtable
Traceback (most recent call last):
...
NoSuchTableError: nosuchtable
>>> years_with_count.insert(published_year='2007', n=1)
Traceback (most recent call last):
...
InvalidRequestError: SQLSoup can only modify mapped Tables (found: Alias)
[tests clear()]
>>> db.loans.count()
1
>>> _ = db.loans.insert(book_id=1, user_name='Bhargan NAME')
>>> db.clear()
>>> db.flush()
>>> db.loans.count()
1
""" |
"""
============
Array basics
============
Array types and conversions between types
=========================================
NumPy supports a much greater variety of numerical types than Python does.
This section shows which are available, and how to modify an array's data-type.
============ ==========================================================
Data type Description
============ ==========================================================
``bool_`` Boolean (True or False) stored as a byte
``int_`` Default integer type (same as C ``long``; normally either
``int64`` or ``int32``)
intc Identical to C ``int`` (normally ``int32`` or ``int64``)
intp Integer used for indexing (same as C ``ssize_t``; normally
either ``int32`` or ``int64``)
int8 Byte (-128 to 127)
int16 Integer (-32768 to 32767)
int32 Integer (-2147483648 to 2147483647)
int64 Integer (-9223372036854775808 to 9223372036854775807)
uint8 Unsigned integer (0 to 255)
uint16 Unsigned integer (0 to 65535)
uint32 Unsigned integer (0 to 4294967295)
uint64 Unsigned integer (0 to 18446744073709551615)
``float_`` Shorthand for ``float64``.
float16 Half precision float: sign bit, 5 bits exponent,
10 bits mantissa
float32 Single precision float: sign bit, 8 bits exponent,
23 bits mantissa
float64 Double precision float: sign bit, 11 bits exponent,
52 bits mantissa
``complex_`` Shorthand for ``complex128``.
complex64 Complex number, represented by two 32-bit floats (real
and imaginary components)
complex128 Complex number, represented by two 64-bit floats (real
and imaginary components)
============ ==========================================================
In addition to ``intc``, the platform-dependent C integer types ``short``,
``long``, ``longlong`` and their unsigned versions are defined.
NumPy numerical types are instances of ``dtype`` (data-type) objects, each
having unique characteristics. Once you have imported NumPy using
::
>>> import numpy as np
the dtypes are available as ``np.bool_``, ``np.float32``, etc.
Advanced types, not listed in the table above, are explored in
section :ref:`structured_arrays`.
There are 5 basic numerical types representing booleans (bool), integers (int),
unsigned integers (uint), floating point (float) and complex. Those with numbers
in their name indicate the bitsize of the type (i.e. how many bits are needed
to represent a single value in memory). Some types, such as ``int`` and
``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit
vs. 64-bit machines). This should be taken into account when interfacing
with low-level code (such as C or Fortran) where the raw memory is addressed.
Data-types can be used as functions to convert python numbers to array scalars
(see the array scalar section for an explanation), python sequences of numbers
to arrays of that type, or as arguments to the dtype keyword that many numpy
functions or methods accept. Some examples::
>>> import numpy as np
>>> x = np.float32(1.0)
>>> x
1.0
>>> y = np.int_([1,2,4])
>>> y
array([1, 2, 4])
>>> z = np.arange(3, dtype=np.uint8)
>>> z
array([0, 1, 2], dtype=uint8)
Array types can also be referred to by character codes, mostly to retain
backward compatibility with older packages such as Numeric. Some
documentation may still refer to these, for example::
>>> np.array([1, 2, 3], dtype='f')
array([ 1., 2., 3.], dtype=float32)
We recommend using dtype objects instead.
To convert the type of an array, use the .astype() method (preferred) or
the type itself as a function. For example: ::
>>> z.astype(float) #doctest: +NORMALIZE_WHITESPACE
array([ 0., 1., 2.])
>>> np.int8(z)
array([0, 1, 2], dtype=int8)
Note that, above, we use the *Python* float object as a dtype. NumPy knows
that ``int`` refers to ``np.int_``, ``bool`` means ``np.bool_``,
that ``float`` is ``np.float_`` and ``complex`` is ``np.complex_``.
The other data-types do not have Python equivalents.
To determine the type of an array, look at the dtype attribute::
>>> z.dtype
dtype('uint8')
dtype objects also contain information about the type, such as its bit-width
and its byte-order. The data type can also be used indirectly to query
properties of the type, such as whether it is an integer::
>>> d = np.dtype(int)
>>> d
dtype('int32')
>>> np.issubdtype(d, np.integer)
True
>>> np.issubdtype(d, np.floating)
False
Array Scalars
=============
NumPy generally returns elements of arrays as array scalars (a scalar
with an associated dtype). Array scalars differ from Python scalars, but
for the most part they can be used interchangeably (the primary
exception is for versions of Python older than v2.x, where integer array
scalars cannot act as indices for lists and tuples). There are some
exceptions, such as when code requires very specific attributes of a scalar
or when it checks specifically whether a value is a Python scalar. Generally,
problems are easily fixed by explicitly converting array scalars
to Python scalars, using the corresponding Python type function
(e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``).
The primary advantage of using array scalars is that
they preserve the array type (Python may not have a matching scalar type
available, e.g. ``int16``). Therefore, the use of array scalars ensures
identical behaviour between arrays and scalars, irrespective of whether the
value is inside an array or not. NumPy scalars also have many of the same
methods arrays do.
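For example, converting an array scalar back to a Python scalar is explicit
(a brief illustration)::
>>> import numpy as np
>>> s = np.int16(7)  # an array scalar; note that it carries a dtype
>>> s.dtype
dtype('int16')
>>> int(s)  # explicit conversion to a Python scalar
7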
Extended Precision
==================
Python's floating-point numbers are usually 64-bit floating-point numbers,
nearly equivalent to ``np.float64``. In some unusual situations it may be
useful to use floating-point numbers with more precision. Whether this
is possible in numpy depends on the hardware and on the development
environment: specifically, x86 machines provide hardware floating-point
with 80-bit precision, and while most C compilers provide this as their
``long double`` type, MSVC (standard for Windows builds) makes
``long double`` identical to ``double`` (64 bits). NumPy makes the
compiler's ``long double`` available as ``np.longdouble`` (and
``np.clongdouble`` for the complex numbers). You can find out what your
numpy provides with ``np.finfo(np.longdouble)``.
NumPy does not provide a dtype with more precision than C
``long double``\\s; in particular, the 128-bit IEEE quad precision
data type (FORTRAN's ``REAL*16``\\) is not available.
For efficient memory alignment, ``np.longdouble`` is usually stored
padded with zero bits, either to 96 or 128 bits. Which is more efficient
depends on hardware and development environment; typically on 32-bit
systems they are padded to 96 bits, while on 64-bit systems they are
typically padded to 128 bits. ``np.longdouble`` is padded to the system
default; ``np.float96`` and ``np.float128`` are provided for users who
want specific padding. In spite of the names, ``np.float96`` and
``np.float128`` provide only as much precision as ``np.longdouble``,
that is, 80 bits on most x86 machines and 64 bits in standard
Windows builds.
Be warned that even if ``np.longdouble`` offers more precision than
python ``float``, it is easy to lose that extra precision, since
python often forces values to pass through ``float``. For example,
the ``%`` formatting operator requires its arguments to be converted
to standard python types, and it is therefore impossible to preserve
extended precision even if many decimal places are requested. It can
be useful to test your code with the value
``1 + np.finfo(np.longdouble).eps``.
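For example (a minimal sketch, assuming an 80-bit ``np.longdouble``; where
``long double`` is just ``double``, the last comparison is False instead)::
>>> import numpy as np
>>> x = np.longdouble(1) + np.finfo(np.longdouble).eps
>>> x == 1
False
>>> float(x) == 1.0  # the extra precision is lost in the Python float
True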
""" |
"""Configuration file parser.
A configuration file consists of sections, led by a "[section]" header,
and followed by "name: value" entries, with continuations and such in
the style of RFC 822.
Intrinsic defaults can be specified by passing them into the
ConfigParser constructor as a dictionary.
class:
ConfigParser -- responsible for parsing a list of
configuration files, and managing the parsed database.
methods:
__init__(defaults=None, dict_type=_default_dict, allow_no_value=False,
delimiters=('=', ':'), comment_prefixes=('#', ';'),
inline_comment_prefixes=None, strict=True,
empty_lines_in_values=True):
Create the parser. When `defaults' is given, it is initialized into the
dictionary of intrinsic defaults. The keys must be strings, the values
must be appropriate for %()s string interpolation.
When `dict_type' is given, it will be used to create the dictionary
objects for the list of sections, for the options within a section, and
for the default values.
When `delimiters' is given, it will be used as the set of substrings
that divide keys from values.
When `comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in empty lines. Comments can be
indented.
When `inline_comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in non-empty lines.
When `strict` is True, the parser won't allow for any section or option
duplicates while reading from a single source (file, string or
dictionary). Default is True.
When `empty_lines_in_values' is False (default: True), each empty line
marks the end of an option. Otherwise, internal empty lines of
a multiline option are kept as part of the value.
When `allow_no_value' is True (default: False), options without
values are accepted; the value presented for these is None.
sections()
Return all the configuration section names, sans DEFAULT.
has_section(section)
Return whether the given section exists.
has_option(section, option)
Return whether the given option exists in the given section.
options(section)
Return list of configuration options for the named section.
read(filenames, encoding=None)
Read and parse the list of named configuration files, given by
name. A single filename is also allowed. Non-existing files
are ignored. Return list of successfully read files.
read_file(f, filename=None)
Read and parse one configuration file, given as a file object.
The filename defaults to f.name; it is only used in error
messages (if f has no `name' attribute, the string `<???>' is used).
read_string(string)
Read configuration from a given string.
read_dict(dictionary)
Read configuration from a dictionary. Keys are section names,
values are dictionaries with keys and values that should be present
in the section. If the used dictionary type preserves order, sections
and their keys will be added in order. Values are automatically
converted to strings.
get(section, option, raw=False, vars=None, fallback=_UNSET)
Return a string value for the named option. All % interpolations are
expanded in the return values, based on the defaults passed into the
constructor and the DEFAULT section. Additional substitutions may be
provided using the `vars' argument, which must be a dictionary whose
contents override any pre-existing defaults. If `option' is a key in
`vars', the value from `vars' is used.
getint(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to an integer.
getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a float.
getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a boolean (currently case
insensitively defined as 0, false, no, off for False, and 1, true,
yes, on for True). Returns False or True.
items(section=_UNSET, raw=False, vars=None)
If section is given, return a list of tuples with (name, value) for
each option in the section. Otherwise, return a list of tuples with
(section_name, section_proxy) for each section, including DEFAULTSECT.
remove_section(section)
Remove the given section and all its options.
remove_option(section, option)
Remove the given option from the given section.
set(section, option, value)
Set the given option.
write(fp, space_around_delimiters=True)
Write the configuration state in .ini format. If
`space_around_delimiters' is True (the default), delimiters
between keys and values are surrounded by spaces.
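A minimal usage sketch (illustrative; the section and option names are
made up):
>>> from configparser import ConfigParser
>>> cp = ConfigParser()
>>> cp.read_string("[server]\nhost = localhost\nport = 8080\n")
>>> cp.get('server', 'host')
'localhost'
>>> cp.getint('server', 'port')
8080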
""" |
"""Doctest for method/function calls.
We're going to use these types for extra testing
>>> from UserList import UserList
>>> from UserDict import UserDict
We're defining four helper functions
>>> def e(a,b):
... print a, b
>>> def f(*a, **k):
... print a, test_support.sortdict(k)
>>> def g(x, *y, **z):
... print x, y, test_support.sortdict(z)
>>> def h(j=1, a=2, h=3):
... print j, a, h
Argument list examples
>>> f()
() {}
>>> f(1)
(1,) {}
>>> f(1, 2)
(1, 2) {}
>>> f(1, 2, 3)
(1, 2, 3) {}
>>> f(1, 2, 3, *(4, 5))
(1, 2, 3, 4, 5) {}
>>> f(1, 2, 3, *[4, 5])
(1, 2, 3, 4, 5) {}
>>> f(1, 2, 3, *UserList([4, 5]))
(1, 2, 3, 4, 5) {}
Here we add keyword arguments
>>> f(1, 2, 3, **{'a':4, 'b':5})
(1, 2, 3) {'a': 4, 'b': 5}
>>> f(1, 2, 3, *[4, 5], **{'a':6, 'b':7})
(1, 2, 3, 4, 5) {'a': 6, 'b': 7}
>>> f(1, 2, 3, x=4, y=5, *(6, 7), **{'a':8, 'b': 9})
(1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5}
>>> f(1, 2, 3, **UserDict(a=4, b=5))
(1, 2, 3) {'a': 4, 'b': 5}
>>> f(1, 2, 3, *(4, 5), **UserDict(a=6, b=7))
(1, 2, 3, 4, 5) {'a': 6, 'b': 7}
>>> f(1, 2, 3, x=4, y=5, *(6, 7), **UserDict(a=8, b=9))
(1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5}
Examples with invalid arguments (TypeErrors). We're also testing the function
names in the exception messages.
Verify clearing of SF bug #733667
>>> e(c=4)
Traceback (most recent call last):
...
TypeError: e() got an unexpected keyword argument 'c'
>>> g()
Traceback (most recent call last):
...
TypeError: g() takes at least 1 argument (0 given)
>>> g(*())
Traceback (most recent call last):
...
TypeError: g() takes at least 1 argument (0 given)
>>> g(*(), **{})
Traceback (most recent call last):
...
TypeError: g() takes at least 1 argument (0 given)
>>> g(1)
1 () {}
>>> g(1, 2)
1 (2,) {}
>>> g(1, 2, 3)
1 (2, 3) {}
>>> g(1, 2, 3, *(4, 5))
1 (2, 3, 4, 5) {}
>>> class Nothing: pass
...
>>> g(*Nothing())
Traceback (most recent call last):
...
TypeError: g() argument after * must be a sequence
>>> class Nothing:
... def __len__(self): return 5
...
>>> g(*Nothing())
Traceback (most recent call last):
...
TypeError: g() argument after * must be a sequence
>>> class Nothing:
... def __len__(self): return 5
... def __getitem__(self, i):
... if i<3: return i
... else: raise IndexError(i)
...
>>> g(*Nothing())
0 (1, 2) {}
>>> class Nothing:
... def __init__(self): self.c = 0
... def __iter__(self): return self
... def next(self):
... if self.c == 4:
... raise StopIteration
... c = self.c
... self.c += 1
... return c
...
>>> g(*Nothing())
0 (1, 2, 3) {}
Make sure that the function doesn't stomp the dictionary
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> d2 = d.copy()
>>> g(1, d=4, **d)
1 () {'a': 1, 'b': 2, 'c': 3, 'd': 4}
>>> d == d2
True
What about willful misconduct?
>>> def saboteur(**kw):
... kw['x'] = 'm'
... return kw
>>> d = {}
>>> kw = saboteur(a=1, **d)
>>> d
{}
>>> g(1, 2, 3, **{'x': 4, 'y': 5})
Traceback (most recent call last):
...
TypeError: g() got multiple values for keyword argument 'x'
>>> f(**{1:2})
Traceback (most recent call last):
...
TypeError: f() keywords must be strings
>>> h(**{'e': 2})
Traceback (most recent call last):
...
TypeError: h() got an unexpected keyword argument 'e'
>>> h(*h)
Traceback (most recent call last):
...
TypeError: h() argument after * must be a sequence
>>> dir(*h)
Traceback (most recent call last):
...
TypeError: dir() argument after * must be a sequence
>>> None(*h)
Traceback (most recent call last):
...
TypeError: NoneType argument after * must be a sequence
>>> h(**h)
Traceback (most recent call last):
...
TypeError: h() argument after ** must be a mapping
>>> dir(**h)
Traceback (most recent call last):
...
TypeError: dir() argument after ** must be a mapping
>>> None(**h)
Traceback (most recent call last):
...
TypeError: NoneType argument after ** must be a mapping
>>> dir(b=1, **{'b': 1})
Traceback (most recent call last):
...
TypeError: dir() got multiple values for keyword argument 'b'
Another helper function
>>> def f2(*a, **b):
... return a, b
>>> d = {}
>>> for i in xrange(512):
... key = 'k%d' % i
... d[key] = i
>>> a, b = f2(1, *(2,3), **d)
>>> len(a), len(b), b == d
(3, 512, True)
>>> class Foo:
... def method(self, arg1, arg2):
... return arg1+arg2
>>> x = Foo()
>>> Foo.method(*(x, 1, 2))
3
>>> Foo.method(x, *(1, 2))
3
>>> Foo.method(*(1, 2, 3))
Traceback (most recent call last):
...
TypeError: unbound method method() must be called with Foo instance as \
first argument (got int instance instead)
>>> Foo.method(1, *[2, 3])
Traceback (most recent call last):
...
TypeError: unbound method method() must be called with Foo instance as \
first argument (got int instance instead)
A PyCFunction that takes only positional parameters should allow an
empty keyword dictionary to pass without a complaint, but raise a
TypeError if the dictionary is not empty
>>> try:
... silence = id(1, *{})
... True
... except:
... False
True
>>> id(1, **{'foo': 1})
Traceback (most recent call last):
...
TypeError: id() takes no keyword arguments
""" |
"""
=============================
Subclassing ndarray in python
=============================
Credits
-------
This page is based, with thanks, on the wiki page on subclassing by NAME - http://www.scipy.org/Subclasses.
Introduction
------------
Subclassing ndarray is relatively simple, but it has some complications
compared to other Python objects. On this page we explain the machinery
that allows you to subclass ndarray, and the implications for
implementing a subclass.
ndarrays and object creation
============================
Subclassing ndarray is complicated by the fact that new instances of
ndarray classes can come about in three different ways. These are:
#. Explicit constructor call - as in ``MySubClass(params)``. This is
the usual route to Python instance creation.
#. View casting - casting an existing ndarray as a given subclass
#. New from template - creating a new instance from a template
instance. Examples include returning slices from a subclassed array,
creating return types from ufuncs, and copying arrays. See
:ref:`new-from-template` for more details
The last two are characteristics of ndarrays - in order to support
things like array slicing. The complications of subclassing ndarray are
due to the mechanisms numpy has to support these latter two routes of
instance creation.
.. _view-casting:
View casting
------------
*View casting* is the standard ndarray mechanism by which you take an
ndarray of any subclass, and return a view of the array as another
(specified) subclass:
>>> import numpy as np
>>> # create a completely useless ndarray subclass
>>> class C(np.ndarray): pass
>>> # create a standard ndarray
>>> arr = np.zeros((3,))
>>> # take a view of it, as our useless subclass
>>> c_arr = arr.view(C)
>>> type(c_arr)
<class 'C'>
.. _new-from-template:
Creating new from template
--------------------------
New instances of an ndarray subclass can also come about by a very
similar mechanism to :ref:`view-casting`, when numpy finds it needs to
create a new instance from a template instance. The most obvious place
this has to happen is when you are taking slices of subclassed arrays.
For example:
>>> v = c_arr[1:]
>>> type(v) # the view is of type 'C'
<class 'C'>
>>> v is c_arr # but it's a new instance
False
The slice is a *view* onto the original ``c_arr`` data. So, when we
take a view from the ndarray, we return a new ndarray, of the same
class, that points to the data in the original.
There are other points in the use of ndarrays where we need such views,
such as copying arrays (``c_arr.copy()``), creating ufunc output arrays
(see also :ref:`array-wrap`), and reducing methods (like
``c_arr.mean()``).
Relationship of view casting and new-from-template
--------------------------------------------------
These paths both use the same machinery. We make the distinction here,
because they result in different input to your methods. Specifically,
:ref:`view-casting` means you have created a new instance of your array
type from any potential subclass of ndarray. :ref:`new-from-template`
means you have created a new instance of your class from a pre-existing
instance, allowing you - for example - to copy across attributes that
are particular to your subclass.
Implications for subclassing
----------------------------
If we subclass ndarray, we need to deal not only with explicit
construction of our array type, but also :ref:`view-casting` or
:ref:`new-from-template`. Numpy has the machinery to do this, and it is
this machinery that makes subclassing slightly non-standard.
There are two aspects to the machinery that ndarray uses to support
views and new-from-template in subclasses.
The first is the use of the ``ndarray.__new__`` method for the main work
of object initialization, rather than the more usual ``__init__``
method. The second is the use of the ``__array_finalize__`` method to
allow subclasses to clean up after the creation of views and new
instances from templates.
A brief Python primer on ``__new__`` and ``__init__``
=====================================================
``__new__`` is a standard Python method, and, if present, is called
before ``__init__`` when we create a class instance. See the `python
__new__ documentation
<http://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail.
For example, consider the following Python code:
.. testcode::
class C(object):
def __new__(cls, *args):
print 'Cls in __new__:', cls
print 'Args in __new__:', args
return object.__new__(cls, *args)
def __init__(self, *args):
print 'type(self) in __init__:', type(self)
print 'Args in __init__:', args
meaning that we get:
>>> c = C('hello')
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
type(self) in __init__: <class 'C'>
Args in __init__: ('hello',)
When we call ``C('hello')``, the ``__new__`` method gets its own class
as first argument, and the passed argument, which is the string
``'hello'``. After python calls ``__new__``, it usually (see below)
calls our ``__init__`` method, with the output of ``__new__`` as the
first argument (now a class instance), and the passed arguments
following.
As you can see, the object can be initialized in the ``__new__``
method or the ``__init__`` method, or both, and in fact ndarray does
not have an ``__init__`` method, because all the initialization is
done in the ``__new__`` method.
Why use ``__new__`` rather than just the usual ``__init__``? Because
in some cases, as for ndarray, we want to be able to return an object
of some other class. Consider the following:
.. testcode::
class D(C):
def __new__(cls, *args):
print 'D cls is:', cls
print 'D args in __new__:', args
return C.__new__(C, *args)
def __init__(self, *args):
# we never get here
print 'In D __init__'
meaning that:
>>> obj = D('hello')
D cls is: <class 'D'>
D args in __new__: ('hello',)
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
>>> type(obj)
<class 'C'>
The definition of ``C`` is the same as before, but for ``D``, the
``__new__`` method returns an instance of class ``C`` rather than
``D``. Note that the ``__init__`` method of ``D`` does not get
called. In general, when the ``__new__`` method returns an object of
class other than the class in which it is defined, the ``__init__``
method of that class is not called.
This is how subclasses of the ndarray class are able to return views
that preserve the class type. When taking a view, the standard
ndarray machinery creates the new ndarray object with something
like::
obj = ndarray.__new__(subtype, shape, ...
where ``subtype`` is the subclass. Thus the returned view is of the
same class as the subclass, rather than being of class ``ndarray``.
That solves the problem of returning views of the same type, but now
we have a new problem. The machinery of ndarray can set the class
this way, in its standard methods for taking views, but the ndarray
``__new__`` method knows nothing of what we have done in our own
``__new__`` method in order to set attributes, and so on. (Aside -
why not call ``obj = subtype.__new__(...`` then? Because we may not
have a ``__new__`` method with the same call signature).
The role of ``__array_finalize__``
==================================
``__array_finalize__`` is the mechanism that numpy provides to allow
subclasses to handle the various ways that new instances get created.
Remember that subclass instances can come about in these three ways:
#. explicit constructor call (``obj = MySubClass(params)``). This will
call the usual sequence of ``MySubClass.__new__`` then (if it exists)
``MySubClass.__init__``.
#. :ref:`view-casting`
#. :ref:`new-from-template`
Our ``MySubClass.__new__`` method only gets called in the case of the
explicit constructor call, so we can't rely on ``MySubClass.__new__`` or
``MySubClass.__init__`` to deal with the view casting and
new-from-template. It turns out that ``MySubClass.__array_finalize__``
*does* get called for all three methods of object creation, so this is
where our object creation housekeeping usually goes.
* For the explicit constructor call, our subclass will need to create a
new ndarray instance of its own class. In practice this means that
we, the authors of the code, will need to make a call to
``ndarray.__new__(MySubClass,...)``, or do view casting of an existing
array (see below)
* For view casting and new-from-template, the equivalent of
``ndarray.__new__(MySubClass,...`` is called, at the C level.
The arguments that ``__array_finalize__`` receives differ for the three
methods of instance creation above.
The following code allows us to look at the call sequences and arguments:
.. testcode::
import numpy as np
class C(np.ndarray):
def __new__(cls, *args, **kwargs):
print 'In __new__ with class %s' % cls
return np.ndarray.__new__(cls, *args, **kwargs)
def __init__(self, *args, **kwargs):
# in practice you probably will not need or want an __init__
# method for your subclass
print 'In __init__ with class %s' % self.__class__
def __array_finalize__(self, obj):
print 'In array_finalize:'
print ' self type is %s' % type(self)
print ' obj type is %s' % type(obj)
Now:
>>> # Explicit constructor
>>> c = C((10,))
In __new__ with class <class 'C'>
In array_finalize:
self type is <class 'C'>
obj type is <type 'NoneType'>
In __init__ with class <class 'C'>
>>> # View casting
>>> a = np.arange(10)
>>> cast_a = a.view(C)
In array_finalize:
self type is <class 'C'>
obj type is <type 'numpy.ndarray'>
>>> # Slicing (example of new-from-template)
>>> cv = c[:1]
In array_finalize:
self type is <class 'C'>
obj type is <class 'C'>
The signature of ``__array_finalize__`` is::
def __array_finalize__(self, obj):
``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our
own class (``self``) as well as the object from which the view has been
taken (``obj``). As you can see from the output above, the ``self`` is
always a newly created instance of our subclass, and the type of ``obj``
differs for the three instance creation methods:
* When called from the explicit constructor, ``obj`` is ``None``
* When called from view casting, ``obj`` can be an instance of any
subclass of ndarray, including our own.
* When called in new-from-template, ``obj`` is another instance of our
own subclass, that we might use to update the new ``self`` instance.
Because ``__array_finalize__`` is the only method that always sees new
instances being created, it is the sensible place to fill in instance
defaults for new object attributes, among other tasks.
This may be clearer with an example.
Simple example - adding an extra attribute to ndarray
-----------------------------------------------------
.. testcode::
import numpy as np
class InfoArray(np.ndarray):
def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
strides=None, order=None, info=None):
# Create the ndarray instance of our type, given the usual
# ndarray input arguments. This will call the standard
# ndarray constructor, but return an object of our type.
# It also triggers a call to InfoArray.__array_finalize__
obj = np.ndarray.__new__(subtype, shape, dtype, buffer, offset, strides,
order)
# set the new 'info' attribute to the value passed
obj.info = info
# Finally, we must return the newly created object:
return obj
def __array_finalize__(self, obj):
# ``self`` is a new object resulting from
# ndarray.__new__(InfoArray, ...), therefore it only has
# attributes that the ndarray.__new__ constructor gave it -
# i.e. those of a standard ndarray.
#
# We could have got to the ndarray.__new__ call in 3 ways:
# From an explicit constructor - e.g. InfoArray():
# obj is None
# (we're in the middle of the InfoArray.__new__
# constructor, and self.info will be set when we return to
# InfoArray.__new__)
if obj is None: return
# From view casting - e.g arr.view(InfoArray):
# obj is arr
# (type(obj) can be InfoArray)
# From new-from-template - e.g infoarr[:3]
# type(obj) is InfoArray
#
# Note that it is here, rather than in the __new__ method,
# that we set the default value for 'info', because this
# method sees all creation of default objects - with the
# InfoArray.__new__ constructor, but also with
# arr.view(InfoArray).
self.info = getattr(obj, 'info', None)
# We do not need to return anything
Using the object looks like this:
>>> obj = InfoArray(shape=(3,)) # explicit constructor
>>> type(obj)
<class 'InfoArray'>
>>> obj.info is None
True
>>> obj = InfoArray(shape=(3,), info='information')
>>> obj.info
'information'
>>> v = obj[1:] # new-from-template - here - slicing
>>> type(v)
<class 'InfoArray'>
>>> v.info
'information'
>>> arr = np.arange(10)
>>> cast_arr = arr.view(InfoArray) # view casting
>>> type(cast_arr)
<class 'InfoArray'>
>>> cast_arr.info is None
True
This class isn't very useful, because it has the same constructor as the
bare ndarray object, including passing in buffers and shapes and so on.
We would probably prefer the constructor to be able to take an already
formed ndarray from the usual numpy calls to ``np.array`` and return an
object.
Slightly more realistic example - attribute added to existing array
-------------------------------------------------------------------
Here is a class that takes a standard ndarray that already exists, casts
it as our type, and adds an extra attribute.
.. testcode::
import numpy as np
class RealisticInfoArray(np.ndarray):
def __new__(cls, input_array, info=None):
# Input array is an already formed ndarray instance
# We first cast to be our class type
obj = np.asarray(input_array).view(cls)
# add the new attribute to the created instance
obj.info = info
# Finally, we must return the newly created object:
return obj
def __array_finalize__(self, obj):
# see InfoArray.__array_finalize__ for comments
if obj is None: return
self.info = getattr(obj, 'info', None)
So:
>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'
.. _array-wrap:
``__array_wrap__`` for ufuncs
-------------------------------------------------------
``__array_wrap__`` gets called at the end of numpy ufuncs and other numpy
functions, to allow a subclass to set the type of the return value
and update attributes and metadata. Let's show how this works with an example.
First we make the same subclass as above, but with a different name and
some print statements:
.. testcode::
import numpy as np
class MySubClass(np.ndarray):
def __new__(cls, input_array, info=None):
obj = np.asarray(input_array).view(cls)
obj.info = info
return obj
def __array_finalize__(self, obj):
print('In __array_finalize__:')
print('   self is %s' % repr(self))
print('   obj is %s' % repr(obj))
if obj is None: return
self.info = getattr(obj, 'info', None)
def __array_wrap__(self, out_arr, context=None):
print('In __array_wrap__:')
print('   self is %s' % repr(self))
print('   arr is %s' % repr(out_arr))
# then just call the parent
return np.ndarray.__array_wrap__(self, out_arr, context)
We run a ufunc on an instance of our new array:
>>> obj = MySubClass(np.arange(5), info='spam')
In __array_finalize__:
self is MySubClass([0, 1, 2, 3, 4])
obj is array([0, 1, 2, 3, 4])
>>> arr2 = np.arange(5)+1
>>> ret = np.add(arr2, obj)
In __array_wrap__:
self is MySubClass([0, 1, 2, 3, 4])
arr is array([1, 3, 5, 7, 9])
In __array_finalize__:
self is MySubClass([1, 3, 5, 7, 9])
obj is MySubClass([0, 1, 2, 3, 4])
>>> ret
MySubClass([1, 3, 5, 7, 9])
>>> ret.info
'spam'
Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method of the
input with the highest ``__array_priority__`` value, in this case
``MySubClass.__array_wrap__``, with arguments ``self`` as ``obj``, and
``out_arr`` as the (ndarray) result of the addition. In turn, the
default ``__array_wrap__`` (``ndarray.__array_wrap__``) has cast the
result to class ``MySubClass``, and called ``__array_finalize__`` -
hence the copying of the ``info`` attribute. This has all happened at the C level.
But, we could do anything we wanted:
.. testcode::
class SillySubClass(np.ndarray):
def __array_wrap__(self, arr, context=None):
return 'I lost your data'
>>> arr1 = np.arange(5)
>>> obj = arr1.view(SillySubClass)
>>> arr2 = np.arange(5)
>>> ret = np.multiply(obj, arr2)
>>> ret
'I lost your data'
So, by defining a specific ``__array_wrap__`` method for our subclass,
we can tweak the output from ufuncs. The ``__array_wrap__`` method
requires ``self``, then an argument - which is the result of the ufunc -
and an optional parameter *context*. This parameter is returned by some
ufuncs as a 3-element tuple: (name of the ufunc, arguments of the ufunc,
domain of the ufunc). ``__array_wrap__`` should return an instance of
its containing class. See the masked array subclass for an
implementation.
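For instance, building on ``MySubClass`` above, a subclass could fill in
missing metadata from the other ufunc operands before returning. The
following is only a sketch - the borrow-from-operands rule is an assumption
for illustration, not something numpy prescribes:
.. testcode::
    class MergingSubClass(MySubClass):
        def __array_wrap__(self, out_arr, context=None):
            # let the parent cast the result to our subclass first; this
            # also runs __array_finalize__, copying self's info if any
            ret = np.ndarray.__array_wrap__(self, out_arr, context)
            # if we had no info of our own, borrow it from another operand
            # (hypothetical merge rule)
            if getattr(ret, 'info', None) is None and context is not None:
                ufunc, inputs, domain = context
                for inp in inputs:
                    info = getattr(inp, 'info', None)
                    if info is not None:
                        ret.info = info
                        break
            return ret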
In addition to ``__array_wrap__``, which is called on the way out of the
ufunc, there is also an ``__array_prepare__`` method which is called on
the way into the ufunc, after the output arrays are created but before any
computation has been performed. The default implementation does nothing
but pass through the array. ``__array_prepare__`` should not attempt to
access the array data or resize the array; it is intended for setting the
output array type, updating attributes and metadata, and performing any
checks based on the input that may be desired before computation begins.
Like ``__array_wrap__``, ``__array_prepare__`` must return an ndarray or
subclass thereof or raise an error.
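A minimal sketch of such a hook, for numpy versions that provide it (the
``info`` attribute is a hypothetical piece of metadata, and the cast via
``view`` is one way of setting the output type):
.. testcode::
    class PreparedArray(np.ndarray):
        def __array_prepare__(self, out_arr, context=None):
            # called before the ufunc loop runs; out_arr holds no results
            # yet, so only set the output type and metadata here - never
            # touch the data itself
            res = out_arr.view(type(self))
            res.info = getattr(self, 'info', None)  # hypothetical attribute
            return res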
Extra gotchas - custom ``__del__`` methods and ndarray.base
-----------------------------------------------------------
One of the problems that ndarray solves is keeping track of memory
ownership of ndarrays and their views. Consider the case where we have
created an ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``.
The two objects are looking at the same memory. Numpy keeps track of
where the data came from for a particular array or view, with the
``base`` attribute:
>>> # A normal ndarray, that owns its own data
>>> arr = np.zeros((4,))
>>> # In this case, base is None
>>> arr.base is None
True
>>> # We take a view
>>> v1 = arr[1:]
>>> # base now points to the array that it derived from
>>> v1.base is arr
True
>>> # Take a view of a view
>>> v2 = v1[1:]
>>> # base points to the view it derived from
>>> v2.base is v1
True
In general, if the array owns its own memory, as for ``arr`` in this
case, then ``arr.base`` will be None - there are some exceptions to this
- see the numpy book for more details.
The ``base`` attribute is useful in being able to tell whether we have
a view or the original array. This in turn can be useful if we need
to know whether or not to do some specific cleanup when the subclassed
array is deleted. For example, we may only want to do the cleanup if
the original array is deleted, but not the views. For an example of
how this can work, have a look at the ``memmap`` class in
``numpy.core``.
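As a sketch, a subclass might release an external resource only when the
array that owns the memory is deleted, letting views come and go freely
(the ``_resource`` attribute and its ``release()`` method are hypothetical):
.. testcode::
    class ResourceArray(np.ndarray):
        def __del__(self):
            # views have a non-None base; only the owning array cleans up
            if self.base is None:
                resource = getattr(self, '_resource', None)
                if resource is not None:
                    resource.release()  # hypothetical cleanup call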
""" |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# ***********************IMPORTANT NMAP LICENSE TERMS************************
# * *
# * The Nmap Security Scanner is (C) 1996-2013 Insecure.Com LLC. Nmap is *
# * also a registered trademark of Insecure.Com LLC. This program is free *
# * software; you may redistribute and/or modify it under the terms of the *
# * GNU General Public License as published by the Free Software *
# * Foundation; Version 2 ("GPL"), BUT ONLY WITH ALL OF THE CLARIFICATIONS *
# * AND EXCEPTIONS DESCRIBED HEREIN. This guarantees your right to use, *
# * modify, and redistribute this software under certain conditions. If *
# * you wish to embed Nmap technology into proprietary software, we sell *
# * alternative licenses (contact EMAIL).  Dozens of software               *
# * vendors already license Nmap technology such as host discovery, port *
# * scanning, OS detection, version detection, and the Nmap Scripting *
# * Engine. *
# * *
# * Note that the GPL places important restrictions on "derivative works", *
# * yet it does not provide a detailed definition of that term. To avoid *
# * misunderstandings, we interpret that term as broadly as copyright law *
# * allows. For example, we consider an application to constitute a *
# * derivative work for the purpose of this license if it does any of the *
# * following with any software or content covered by this license *
# * ("Covered Software"): *
# * *
# * o Integrates source code from Covered Software. *
# * *
# * o Reads or includes copyrighted data files, such as Nmap's nmap-os-db *
# * or nmap-service-probes. *
# * *
# * o Is designed specifically to execute Covered Software and parse the *
# * results (as opposed to typical shell or execution-menu apps, which will *
# * execute anything you tell them to). *
# * *
# * o Includes Covered Software in a proprietary executable installer. The *
# * installers produced by InstallShield are an example of this. Including *
# * Nmap with other software in compressed or archival form does not *
# * trigger this provision, provided appropriate open source decompression *
# * or de-archiving software is widely available for no charge. For the *
# * purposes of this license, an installer is considered to include Covered *
# * Software even if it actually retrieves a copy of Covered Software from *
# * another source during runtime (such as by downloading it from the *
# * Internet). *
# * *
# * o Links (statically or dynamically) to a library which does any of the *
# * above. *
# * *
# * o Executes a helper program, module, or script to do any of the above. *
# * *
# * This list is not exclusive, but is meant to clarify our interpretation *
# * of derived works with some common examples. Other people may interpret *
# * the plain GPL differently, so we consider this a special exception to *
# * the GPL that we apply to Covered Software. Works which meet any of *
# * these conditions must conform to all of the terms of this license, *
# * particularly including the GPL Section 3 requirements of providing *
# * source code and allowing free redistribution of the work as a whole. *
# * *
# * As another special exception to the GPL terms, Insecure.Com LLC grants *
# * permission to link the code of this program with any version of the *
# * OpenSSL library which is distributed under a license identical to that *
# * listed in the included docs/licenses/OpenSSL.txt file, and distribute *
# * linked combinations including the two. *
# * *
# * Any redistribution of Covered Software, including any derived works, *
# * must obey and carry forward all of the terms of this license, including *
# * obeying all GPL rules and restrictions. For example, source code of *
# * the whole work must be provided and free redistribution must be *
# * allowed. All GPL references to "this License", are to be treated as *
# * including the terms and conditions of this license text as well. *
# * *
# * Because this license imposes special exceptions to the GPL, Covered *
# * Work may not be combined (even as part of a larger work) with plain GPL *
# * software. The terms, conditions, and exceptions of this license must *
# * be included as well. This license is incompatible with some other open *
# * source licenses as well. In some cases we can relicense portions of *
# * Nmap or grant special permissions to use it in other open source *
# * software. Please contact EMAIL with any such requests. *
# * Similarly, we don't incorporate incompatible open source software into *
# * Covered Software without special permission from the copyright holders. *
# * *
# * If you have any questions about the licensing restrictions on using *
# * Nmap in other works, we are happy to help.  As mentioned above, we also *
# * offer an alternative license to integrate Nmap into proprietary         *
# * applications and appliances. These contracts have been sold to dozens *
# * of software vendors, and generally include a perpetual license as well *
# * as providing for priority support and updates. They also fund the *
# * continued development of Nmap. Please email EMAIL for further *
# * information. *
# * *
# * If you have received a written license agreement or contract for *
# * Covered Software stating terms other than these, you may choose to use *
# * and redistribute Covered Software under those terms instead of these. *
# * *
# * Source is provided to this software because we believe users have a *
# * right to know exactly what a program is going to do before they run it. *
# * This also allows you to audit the software for security holes (none *
# * have been found so far). *
# * *
# * Source code also allows you to port Nmap to new platforms, fix bugs, *
# * and add new features. You are highly encouraged to send your changes *
# * to the EMAIL mailing list for possible incorporation into the *
# * main distribution. By sending these changes to Fyodor or one of the *
# * Insecure.Org development mailing lists, or checking them into the Nmap *
# * source code repository, it is understood (unless you specify otherwise) *
# * that you are offering the Nmap Project (Insecure.Com LLC) the *
# * unlimited, non-exclusive right to reuse, modify, and relicense the *
# * code. Nmap will always be available Open Source, but this is important *
# * because the inability to relicense code has caused devastating problems *
# * for other Free Software projects (such as KDE and NASM). We also *
# * occasionally relicense the code to third parties as discussed above. *
# * If you wish to specify special license conditions of your *
# * contributions, just say so when you send them. *
# * *
# * This program is distributed in the hope that it will be useful, but *
# * WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the Nmap *
# * license file for more details (it's in a COPYING file included with *
# * Nmap, and also available from https://svn.nmap.org/nmap/COPYING *
# * *
# ***************************************************************************/
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2009 Veritos - NAME - www.veritos.nl
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company like Veritos.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
##############################################################################
#
# This module works in OpenERP 5.0.0 (and probably higher).
# This module does not work in OpenERP version 4 and lower.
#
# Status 1.0 - tested on OpenERP 5.0.3
#
# Version IP_ADDRESS
# account.account.type
# Laid the basis for all account types.
#
# account.account.template
# Laid the basis with all required general ledger accounts, which are linked
# through a menu structure to sections 1 through 9.
# The general ledger accounts are linked to the account.account.type.
# These links still need to be checked over properly.
#
# account.chart.template
# Laid the basis for linking accounts to debtors, creditors,
# bank, purchase and sales journals, and the VAT configuration.
#
# Version IP_ADDRESS
# account.tax.code.template
# Laid the basis for the VAT configuration (structure).
# Used the VAT return form as the basis. Whether this works remains to be seen.
#
# account.tax.template
# Created the VAT accounts and linked them to the corresponding
# general ledger accounts.
#
# Version IP_ADDRESS
# Cleaned up the code and removed unused components.
# Version IP_ADDRESS
# Changed a_expense from 3000 -> 7000
# Set record id='btw_code_5b' to a negative value
# Version IP_ADDRESS
# VAT accounts were given a type designation for purchase or sale
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Corrected a small error in l10n_nl_wizard.xml that kept the module from installing completely.
# Version IP_ADDRESS
# Properly defined Account Receivable and Payable.
# Version IP_ADDRESS
# Properly defined all user_type_xxx fields.
# Removed construction- and garage-specific ledgers to create a standard module.
# This module can then be used as the basis for modules aimed at specific target groups.
# Version IP_ADDRESS
# Corrected account 7010 (it duplicated 7014, which broke the installation)
# Version IP_ADDRESS
# Corrected several account types from user_type_asset -> user_type_liability and user_type_equity
# Version IP_ADDRESS
# Small correction to 'VAT receivable high': the id was the same for both entries, so 'high' was
# overwritten by 'other'. Clarified the descriptions in the tax codes for the VAT return overview.
# Version IP_ADDRESS
# Adjusted the VAT descriptions so the reports look better. Removed 2a, 5b and the like, and added
# a few descriptions.
# Version IP_ADDRESS - Switch to English
# Added properties_stock_xxx accounts for correct stock valuation, changed 7000-accounts from type cash to type expense
# Changed naming of 7020 and 7030 to Kostprijs omzet xxxx
|
"""Script to generate reports on translator classes from Doxygen sources.
The main purpose of the script is to extract the information from sources
related to internationalization (the translator classes). It uses the
information to generate documentation (language.doc,
translator_report.txt) from templates (language.tpl, maintainers.txt).
Simply run the script without parameters to get the reports and
documentation for all supported languages. If you want to generate the
translator report only for some languages, pass their codes as arguments
to the script. In that case, the language.doc will not be generated.
Example:
python translator.py en nl cz
Originally, the script was written in Perl and was known as translator.pl.
The last Perl version was dated 2002/05/21 (plus some later corrections).
NAME (prikryl at atlas dot cz)
History:
--------
2002/05/21 - This was the last Perl version.
2003/05/16 - List of language marks can be passed as arguments.
2004/01/24 - Total reimplementation started: classes TrManager, and Transl.
2004/02/05 - First version that produces translator report. No language.doc yet.
2004/02/10 - First fully functional version that generates both the translator
report and the documentation. It is a bit slower than the
Perl version, but is much less tricky and much more flexible.
It also solves some problems that were not solved by the Perl
version. The translator report content should be more useful
for developers.
2004/02/11 - Some tuning-up to provide more useful information.
2004/04/16 - Added new tokens to the tokenizer (to remove some warnings).
2004/05/25 - Added 'from __future__ import generators' so as not to require Python 2.3.
2004/06/03 - Removed dependency on textwrap module.
2004/07/07 - Fixed the bug in the fill() function.
2004/07/21 - Better e-mail mangling for HTML part of language.doc.
- Plural not used for reporting a single missing method.
- Removal of not used translator adapters is suggested only
when the report is not restricted to selected languages
explicitly via script arguments.
2004/07/26 - Better reporting of not-needed adapters.
2004/10/04 - Reporting of not called translator methods added.
2004/10/05 - Modified to check only doxygen/src sources for the previous report.
2005/02/28 - Slight modification to generate "mailto.txt" auxiliary file.
2005/08/15 - Doxygen's root directory determined primarily from DOXYGEN
environment variable. When not found, then relatively to the script.
2007/03/20 - The string "translate me!" is searched for in comments and reported if found.
2008/06/09 - Warning when the MAX_DOT_GRAPH_HEIGHT is still part of trLegendDocs().
2009/05/09 - Changed HTML output to fit it with XHTML DTD
2009/09/02 - Added percentage info to the report (implemented / to be implemented).
2010/02/09 - Added checking/suggestion 'Reimplementation using UTF-8 suggested.'
2010/03/03 - Added [unreachable] prefix used in maintainers.txt.
2010/05/28 - BOM skipped; minor code cleaning.
2010/05/31 - e-mail mangled already in maintainers.txt
2010/08/20 - maintainers.txt to UTF-8, related processing of unicode strings
- [any mark] introduced instead of [unreachable] only
- marks highlighted in HTML
2010/08/30 - Highlighting in what will be the table in langhowto.html modified.
2010/09/27 - The underscore in the \latexonly part of the generated language.doc
was prefixed by a backslash (it caused a LaTeX-related error).
2013/02/19 - Better diagnostics when translator_xx.h is too crippled.
2013/06/25 - TranslatorDecoder checks removed after removing the class.
2013/09/04 - Coloured status in langhowto. *ALMOST up-to-date* category
of translators introduced.
2014/06/16 - unified for Python 2.6+ and 3.0+
""" |
#
# ElementTree
# $Id: ElementTree.py 2326 2005-03-17 07:45:21Z USERNAME $
#
# light-weight XML support for Python 1.5.2 and later.
#
# history:
# 2001-10-20 fl created (from various sources)
# 2001-11-01 fl return root from parse method
# 2002-02-16 fl sort attributes in lexical order
# 2002-04-06 fl TreeBuilder refactoring, added PythonDoc markup
# 2002-05-01 fl finished TreeBuilder refactoring
# 2002-07-14 fl added basic namespace support to ElementTree.write
# 2002-07-25 fl added QName attribute support
# 2002-10-20 fl fixed encoding in write
# 2002-11-24 fl changed default encoding to ascii; fixed attribute encoding
# 2002-11-27 fl accept file objects or file names for parse/write
# 2002-12-04 fl moved XMLTreeBuilder back to this module
# 2003-01-11 fl fixed entity encoding glitch for us-ascii
# 2003-02-13 fl added XML literal factory
# 2003-02-21 fl added ProcessingInstruction/PI factory
# 2003-05-11 fl added tostring/fromstring helpers
# 2003-05-26 fl added ElementPath support
# 2003-07-05 fl added makeelement factory method
# 2003-07-28 fl added more well-known namespace prefixes
# 2003-08-15 fl   fixed typo in ElementTree.findtext (Thomas NAME)
# 2003-09-04 fl   fall back on emulator if ElementPath is not installed
# 2003-10-31 fl markup updates
# 2003-11-15 fl fixed nested namespace bug
# 2004-03-28 fl added XMLID helper
# 2004-06-02 fl added default support to findtext
# 2004-06-08 fl fixed encoding of non-ascii element/attribute names
# 2004-08-23 fl take advantage of post-2.1 expat features
# 2005-02-01 fl added iterparse implementation
# 2005-03-02 fl fixed iterparse support for pre-2.2 versions
# 2012-06-29 EMAIL Made all classes new-style
# 2012-07-02 EMAIL Include dist. ElementPath
# 2013-02-27 EMAIL renamed module files, kept namespace.
#
# Copyright (c) 1999-2005 by NAME All rights reserved.
#
# EMAIL
# http://www.pythonware.com
#
# --------------------------------------------------------------------
# The ElementTree toolkit is
#
# Copyright (c) 1999-2005 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
# Licensed to PSF under a Contributor Agreement.
# See http://www.python.org/2.4/license for licensing details.
|
"""Generic socket server classes.
This module tries to capture the various aspects of defining a server:
For socket-based servers:
- address family:
- AF_INET{,6}: IP (Internet Protocol) sockets (default)
- AF_UNIX: Unix domain sockets
- others, e.g. AF_DECNET are conceivable (see <socket.h>
- socket type:
- SOCK_STREAM (reliable stream, e.g. TCP)
- SOCK_DGRAM (datagrams, e.g. UDP)
For request-based servers (including socket-based):
- client address verification before further looking at the request
(This is actually a hook for any processing that needs to look
at the request before anything else, e.g. logging)
- how to handle multiple requests:
- synchronous (one request is handled at a time)
- forking (each request is handled by a new process)
- threading (each request is handled by a new thread)
The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server. This is bad class design, but
save some typing. (There's also the issue that a deep class hierarchy
slows down method lookups.)
There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:
+------------+
| BaseServer |
+------------+
|
v
+-----------+ +------------------+
| TCPServer |------->| UnixStreamServer |
+-----------+ +------------------+
|
v
+-----------+ +--------------------+
| UDPServer |------->| UnixDatagramServer |
+-----------+ +--------------------+
Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.
Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes. For
instance, a threading UDP server class is created as follows:
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.
To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method. You can then run
various versions of the service by combining one of the server classes
with your request handler class.
The request handler class must be different for datagram or stream
services. This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
Of course, you still have to use your head!
For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child). In this case,
you can use a threading server, but you will probably have to use
locks to avoid two requests that come in nearly simultaneous to apply
conflicting changes to the server state.
On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested. Here a threading or forking
server is appropriate.
In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data. This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.
Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request). This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
and encryption schemes
- Standard framework for select-based multiplexing
XXX Open problems:
- What to do with out-of-band data?
BaseServer:
- split generic "request" functionality out into BaseServer class.
Copyright (C) 2000 NAME <EMAIL>
example: read entries from a SQL database (requires overriding
get_request() to return a table entry from the database).
entry is processed by a RequestHandlerClass.
""" |
"""
=============================
Byteswapping and byte order
=============================
Introduction to byte ordering and ndarrays
==========================================
The ``ndarray`` is an object that provides a python array interface to data
in memory.
It often happens that the memory that you want to view with an array is
not of the same byte ordering as the computer on which you are running
Python.
For example, I might be working on a computer with a little-endian CPU -
such as an Intel Pentium, but I have loaded some data from a file
written by a computer that is big-endian. Let's say I have loaded 4
bytes from a file written by a Sun (big-endian) computer. I know that
these 4 bytes represent two 16-bit integers. On a big-endian machine, a
two-byte integer is stored with the Most Significant Byte (MSB) first,
and then the Least Significant Byte (LSB). Thus the bytes are, in memory order:
#. MSB integer 1
#. LSB integer 1
#. MSB integer 2
#. LSB integer 2
Let's say the two integers were in fact 1 and 770. Because 770 = 256 *
3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2.
The bytes I have loaded from the file would have these contents:
>>> big_end_str = chr(0) + chr(1) + chr(3) + chr(2)
>>> big_end_str
'\\x00\\x01\\x03\\x02'
We might want to use an ``ndarray`` to access these integers. In that
case, we can create an array around this memory, and tell numpy that
there are two integers, and that they are 16 bit and big-endian:
>>> import numpy as np
>>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_str)
>>> big_end_arr[0]
1
>>> big_end_arr[1]
770
Note the array ``dtype`` above of ``>i2``. The ``>`` means 'big-endian'
(``<`` is little-endian) and ``i2`` means 'signed 2-byte integer'. For
example, if our data represented a single unsigned 4-byte little-endian
integer, the dtype string would be ``<u4``.
In fact, why don't we try that?
>>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_str)
>>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3
True
Returning to our ``big_end_arr`` - in this case our underlying data is
big-endian (data endianness) and we've set the dtype to match (the dtype
is also big-endian). However, sometimes you need to flip these around.
.. warning::
Scalars currently do not include byte order information, so extracting
a scalar from an array will return an integer in native byte order.
Hence:
>>> big_end_arr[0].dtype.byteorder == little_end_u4[0].dtype.byteorder
True
Changing byte ordering
======================
As you can imagine from the introduction, there are two ways you can
affect the relationship between the byte ordering of the array and the
underlying memory it is looking at:
* Change the byte-ordering information in the array dtype so that it
interprets the underlying data as being in a different byte order.
This is the role of ``arr.newbyteorder()``
* Change the byte-ordering of the underlying data, leaving the dtype
interpretation as it was. This is what ``arr.byteswap()`` does.
The common situations in which you need to change byte ordering are:
#. Your data and dtype endianness don't match, and you want to change
the dtype so that it matches the data.
#. Your data and dtype endianness don't match, and you want to swap the
data so that they match the dtype.
#. Your data and dtype endianness match, but you want the data swapped
and the dtype to reflect this.
Data and dtype endianness don't match, change dtype to match data
-----------------------------------------------------------------
We make something where they don't match:
>>> wrong_end_dtype_arr = np.ndarray(shape=(2,),dtype='<i2', buffer=big_end_str)
>>> wrong_end_dtype_arr[0]
256
The obvious fix for this situation is to change the dtype so it gives
the correct endianness:
>>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder()
>>> fixed_end_dtype_arr[0]
1
Note that the array has not changed in memory:
>>> fixed_end_dtype_arr.tobytes() == big_end_str
True
Data and dtype endianness don't match, change data to match dtype
-----------------------------------------------------------------
You might want to do this if you need the data in memory to be a certain
ordering. For example you might be writing the memory out to a file
that needs a certain byte ordering.
>>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap()
>>> fixed_end_mem_arr[0]
1
Now the array *has* changed in memory:
>>> fixed_end_mem_arr.tobytes() == big_end_str
False
Data and dtype endianness match, swap data and dtype
----------------------------------------------------
You may have a correctly specified array dtype, but you need the array
to have the opposite byte order in memory, and you want the dtype to
match so the array values make sense. In this case you just do both of
the previous operations:
>>> swapped_end_arr = big_end_arr.byteswap().newbyteorder()
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_str
False
An easier way of casting the data to a specific dtype and byte ordering
can be achieved with the ndarray astype method:
>>> swapped_end_arr = big_end_arr.astype('<i2')
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_str
False
""" |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# ***********************IMPORTANT NMAP LICENSE TERMS************************
# * *
# * The Nmap Security Scanner is (C) 1996-2013 Insecure.Com LLC. Nmap is *
# * also a registered trademark of Insecure.Com LLC. This program is free *
# * software; you may redistribute and/or modify it under the terms of the *
# * GNU General Public License as published by the Free Software *
# * Foundation; Version 2 ("GPL"), BUT ONLY WITH ALL OF THE CLARIFICATIONS *
# * AND EXCEPTIONS DESCRIBED HEREIN. This guarantees your right to use, *
# * modify, and redistribute this software under certain conditions. If *
# * you wish to embed Nmap technology into proprietary software, we sell *
# * alternative licenses (contact EMAIL).  Dozens of software               *
# * vendors already license Nmap technology such as host discovery, port *
# * scanning, OS detection, version detection, and the Nmap Scripting *
# * Engine. *
# * *
# * Note that the GPL places important restrictions on "derivative works", *
# * yet it does not provide a detailed definition of that term. To avoid *
# * misunderstandings, we interpret that term as broadly as copyright law *
# * allows. For example, we consider an application to constitute a *
# * derivative work for the purpose of this license if it does any of the *
# * following with any software or content covered by this license *
# * ("Covered Software"): *
# * *
# * o Integrates source code from Covered Software. *
# * *
# * o Reads or includes copyrighted data files, such as Nmap's nmap-os-db *
# * or nmap-service-probes. *
# * *
# * o Is designed specifically to execute Covered Software and parse the *
# * results (as opposed to typical shell or execution-menu apps, which will *
# * execute anything you tell them to). *
# * *
# * o Includes Covered Software in a proprietary executable installer. The *
# * installers produced by InstallShield are an example of this. Including *
# * Nmap with other software in compressed or archival form does not *
# * trigger this provision, provided appropriate open source decompression *
# * or de-archiving software is widely available for no charge. For the *
# * purposes of this license, an installer is considered to include Covered *
# * Software even if it actually retrieves a copy of Covered Software from *
# * another source during runtime (such as by downloading it from the *
# * Internet). *
# * *
# * o Links (statically or dynamically) to a library which does any of the *
# * above. *
# * *
# * o Executes a helper program, module, or script to do any of the above. *
# * *
# * This list is not exclusive, but is meant to clarify our interpretation *
# * of derived works with some common examples. Other people may interpret *
# * the plain GPL differently, so we consider this a special exception to *
# * the GPL that we apply to Covered Software. Works which meet any of *
# * these conditions must conform to all of the terms of this license, *
# * particularly including the GPL Section 3 requirements of providing *
# * source code and allowing free redistribution of the work as a whole. *
# * *
# * As another special exception to the GPL terms, Insecure.Com LLC grants *
# * permission to link the code of this program with any version of the *
# * OpenSSL library which is distributed under a license identical to that *
# * listed in the included docs/licenses/OpenSSL.txt file, and distribute *
# * linked combinations including the two. *
# * *
# * Any redistribution of Covered Software, including any derived works, *
# * must obey and carry forward all of the terms of this license, including *
# * obeying all GPL rules and restrictions. For example, source code of *
# * the whole work must be provided and free redistribution must be *
# * allowed. All GPL references to "this License", are to be treated as *
# * including the terms and conditions of this license text as well.        *
# * *
# * Because this license imposes special exceptions to the GPL, Covered *
# * Work may not be combined (even as part of a larger work) with plain GPL *
# * software. The terms, conditions, and exceptions of this license must *
# * be included as well. This license is incompatible with some other open *
# * source licenses as well. In some cases we can relicense portions of *
# * Nmap or grant special permissions to use it in other open source *
# * software. Please contact EMAIL with any such requests. *
# * Similarly, we don't incorporate incompatible open source software into *
# * Covered Software without special permission from the copyright holders. *
# * *
# * If you have any questions about the licensing restrictions on using *
# * Nmap in other works, we are happy to help.  As mentioned above, we also *
# * offer an alternative license to integrate Nmap into proprietary         *
# * applications and appliances. These contracts have been sold to dozens *
# * of software vendors, and generally include a perpetual license as well *
# * as providing for priority support and updates. They also fund the *
# * continued development of Nmap. Please email EMAIL for *
# * further information. *
# * *
# * If you received these files with a written license agreement or *
# * contract stating terms other than the terms above, then that *
# * alternative license agreement takes precedence over these comments. *
# * *
# * Source is provided to this software because we believe users have a *
# * right to know exactly what a program is going to do before they run it. *
# * This also allows you to audit the software for security holes (none *
# * have been found so far). *
# * *
# * Source code also allows you to port Nmap to new platforms, fix bugs, *
# * and add new features. You are highly encouraged to send your changes *
# * to the EMAIL mailing list for possible incorporation into the *
# * main distribution. By sending these changes to Fyodor or one of the *
# * Insecure.Org development mailing lists, or checking them into the Nmap *
# * source code repository, it is understood (unless you specify otherwise) *
# * that you are offering the Nmap Project (Insecure.Com LLC) the *
# * unlimited, non-exclusive right to reuse, modify, and relicense the *
# * code. Nmap will always be available Open Source, but this is important *
# * because the inability to relicense code has caused devastating problems *
# * for other Free Software projects (such as KDE and NASM). We also *
# * occasionally relicense the code to third parties as discussed above. *
# * If you wish to specify special license conditions of your *
# * contributions, just say so when you send them. *
# * *
# * This program is distributed in the hope that it will be useful, but *
# * WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the Nmap *
# * license file for more details (it's in a COPYING file included with *
# * Nmap, and also available from https://svn.nmap.org/nmap/COPYING *
# * *
# ***************************************************************************/
# This prints the normal (text) output of a single scan. Ideas for further
# development:
#
# Print the topology graphic. The graphic is already made with Cairo so the same
# code can be used to draw on the print context.
#
# Print in color with highlighting, like NmapOutputViewer.
#
# Add a header to each page with the Nmap command and page number.
#
# Add options to the print dialog to control the font, coloring, and anything
# else. This might go in a separate Print Setup dialog.
|
"""
This page is in the table of contents.
Carve is a script to carve a shape into svg slice layers.
The carve manual page is at:
http://www.bitsfrombytes.com/wiki/index.php?title=Skeinforge_Carve
On the Arcol Blog a method of deriving the layer thickness is posted. That article "Machine Calibrating" is at:
http://blog.arcol.hu/?p=157
==Settings==
===Add Layer Template to SVG===
Default is on.
When selected, the layer template will be added to the svg output, which adds javascript control boxes. So 'Add Layer Template to SVG' should be selected when the svg will be viewed in a browser.
When off, no controls will be added, the svg output will only include the fabrication paths. So 'Add Layer Template to SVG' should be deselected when the svg will be used by other software, like Inkscape.
===Bridge Thickness Multiplier===
Default is one.
Defines the ratio of the thickness of the bridge layers to the thickness of the typical non-bridge layers.
===Extra Decimal Places===
Default is one.
Defines the number of extra decimal places export will output compared to the number of decimal places in the layer thickness. The higher the 'Extra Decimal Places', the more significant figures the output numbers will have.
===Import Coarseness===
Default is one.
When a triangle mesh has holes in it, the triangle mesh slicer switches over to a slow algorithm that spans gaps in the mesh. The higher the 'Import Coarseness' setting, the wider the gaps in the mesh it will span. An import coarseness of one means it will span gaps of the perimeter width.
===Infill in Direction of Bridges===
Default is on.
When selected, the infill will be in the direction of bridges across gaps, so that the fill will be able to span a bridge easier.
===Layer Thickness===
Default is 0.4 mm.
Defines the thickness of the extrusion layer at default extruder speed, this is the most important carve setting.
===Layers===
Carve slices from bottom to top. To get a single layer, set the "Layers From" to zero and the "Layers To" to one. The 'Layers From' until 'Layers To' range is a python slice.
====Layers From====
Default is zero.
Defines the index of the bottom layer that will be carved. If the 'Layers From' is the default zero, the carving will start from the lowest layer. If the 'Layers From' index is negative, then the carving will start from the 'Layers From' index below the top layer.
====Layers To====
Default is a huge number, which will be limited to the highest index layer.
Defines the index of the top layer that will be carved. If the 'Layers To' index is a huge number like the default, the carving will go to the top of the model. If the 'Layers To' index is negative, then the carving will go to the 'Layers To' index below the top layer.
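Because the 'Layers From' until 'Layers To' range behaves like a plain Python slice, its behaviour can be illustrated with an ordinary list standing in for ten carved layers, bottom to top:
>>> layers = list(range(10))
>>> layers[0:1] # 'Layers From' zero, 'Layers To' one: the bottom layer only
[0]
>>> layers[-3:] # a negative 'Layers From' counts down from the top layer
[7, 8, 9]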
===Mesh Type===
Default is 'Correct Mesh'.
====Correct Mesh====
When selected, the mesh will be accurately carved, and if a hole is found, carve will switch over to the algorithm that spans gaps.
====Unproven Mesh====
When selected, carve will use the gap spanning algorithm from the start. The problem with the gap spanning algorithm is that it will span gaps, even if there is not actually a gap in the model.
===Perimeter Width over Thickness===
Default is 1.8.
Defines the ratio of the extrusion perimeter width to the layer thickness; the higher the value, the more the perimeter will be inset. A ratio of one means the extrusion is a circle, while a typical ratio of 1.8 means the extrusion is a wide oval. These values should be measured from a test extrusion line.
===SVG Viewer===
Default is webbrowser.
If the 'SVG Viewer' is set to the default 'webbrowser', the scalable vector graphics file will be sent to the default browser to be opened. If the 'SVG Viewer' is set to a program name, the scalable vector graphics file will be sent to that program to be opened.
==Examples==
The following examples carve the file Screw Holder Bottom.stl. The examples are run in a terminal in the folder which contains Screw Holder Bottom.stl and carve.py.
> python carve.py
This brings up the carve dialog.
> python carve.py Screw Holder Bottom.stl
The carve tool is parsing the file:
Screw Holder Bottom.stl
..
The carve tool has created the file:
.. Screw Holder Bottom_carve.svg
> python
Python 2.5.1 (r251:54863, Sep 22 2007, 01:43:31)
[GCC 4.2.1 (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import carve
>>> carve.main()
This brings up the carve dialog.
>>> carve.writeOutput('Screw Holder Bottom.stl')
The carve tool is parsing the file:
Screw Holder Bottom.stl
..
The carve tool has created the file:
.. Screw Holder Bottom_carve.svg
""" |
#!/usr/bin/env python
# (c) 2013, NAME <EMAIL>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
#
# Author: NAME <EMAIL>
#
# Description:
# This module queries local or remote Docker daemons and generates
# inventory information.
#
# This plugin does not support targeting of specific hosts using the --host
# flag. Instead, it queries the Docker API for each container, running
or not, and returns this data all at once.
#
# The plugin returns the following custom attributes on Docker containers:
# docker_args
# docker_config
# docker_created
# docker_driver
# docker_exec_driver
# docker_host_config
# docker_hostname_path
# docker_hosts_path
# docker_id
# docker_image
# docker_name
# docker_network_settings
# docker_path
# docker_resolv_conf_path
# docker_state
# docker_volumes
# docker_volumes_rw
#
# Requirements:
# The docker-py module: https://github.com/dotcloud/docker-py
#
# Notes:
# A config file can be used to configure this inventory module, and there
# are several environment variables that can be set to modify the behavior
# of the plugin at runtime:
# DOCKER_CONFIG_FILE
# DOCKER_HOST
# DOCKER_VERSION
# DOCKER_TIMEOUT
# DOCKER_PRIVATE_SSH_PORT
# DOCKER_DEFAULT_IP
#
# Environment Variables:
# environment variable: DOCKER_CONFIG_FILE
# description:
# - A path to a Docker inventory hosts/defaults file in YAML format
# - A sample file has been provided, colocated with the inventory
# file called 'docker.yml'
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_HOST
# description:
# - The socket on which to connect to a Docker daemon API
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_VERSION
# description:
# - Version of the Docker API to use
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_TIMEOUT
# description:
# - Timeout in seconds for connections to Docker daemon API
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_PRIVATE_SSH_PORT
# description:
# - The private port (container port) on which SSH is listening
# for connections
# default: 22
# required: false
# environment variable: DOCKER_DEFAULT_IP
# description:
# - This environment variable overrides the container SSH connection
# IP address (aka, 'ansible_ssh_host')
#
# This option allows one to override the ansible_ssh_host whenever
# Docker has exercised its default behavior of binding private ports
# to all interfaces of the Docker host. This behavior, when dealing
# with remote Docker hosts, does not allow Ansible to determine
# a proper host IP address on which to connect via SSH to containers.
# By default, this inventory module assumes all IP_ADDRESS-exposed
# ports to be bound to localhost:<port>. To override this
# behavior, for example, to bind a container's SSH port to the public
# interface of its host, one must manually set this IP.
#
# It is preferable to launch Docker containers with
# ports exposed on publicly accessible IP addresses, particularly
# if the containers are to be targeted by Ansible for remote
# configuration, rather than relying on localhost SSH connections.
#
# Docker containers can be explicitly exposed on IP addresses by
# a) starting the daemon with the --ip argument
# b) running containers with the -P/--publish ip::containerPort
# argument
# default: IP_ADDRESS if port exposed on IP_ADDRESS by Docker
# required: false
#
# Examples:
# Use the config file:
# DOCKER_CONFIG_FILE=./docker.yml docker.py --list
#
# Connect to docker instance on localhost port 4243
# DOCKER_HOST=tcp://localhost:4243 docker.py --list
#
# Any container's ssh port exposed on IP_ADDRESS will mapped to
# another IP address (where Ansible will attempt to connect via SSH)
# DOCKER_DEFAULT_IP=IP_ADDRESS docker.py --list
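#
# The script can also be handed directly to ansible as a dynamic inventory
# (a sketch; assumes docker.py is executable and the containers are
# reachable over SSH):
# ansible -i docker.py all -m ping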
|
"""
This is a procedural interface to the matplotlib object-oriented
plotting library.
The following plotting commands are provided; the majority have
Matlab(TM) analogs and similar arguments.
_Plotting commands
acorr - plot the autocorrelation function
annotate - annotate something in the figure
arrow - add an arrow to the axes
axes - Create a new axes
axhline - draw a horizontal line across axes
axvline - draw a vertical line across axes
axhspan - draw a horizontal bar across axes
axvspan - draw a vertical bar across axes
axis - Set or return the current axis limits
bar - make a bar chart
barh - a horizontal bar chart
broken_barh - a set of horizontal bars with gaps
box - set the axes frame on/off state
boxplot - make a box and whisker plot
cla - clear current axes
clabel - label a contour plot
clf - clear a figure window
clim - adjust the color limits of the current image
close - close a figure window
colorbar - add a colorbar to the current figure
cohere - make a plot of coherence
contour - make a contour plot
contourf - make a filled contour plot
csd - make a plot of cross spectral density
delaxes - delete an axes from the current figure
draw - Force a redraw of the current figure
errorbar - make an errorbar graph
figlegend - make legend on the figure rather than the axes
figimage - make a figure image
figtext - add text in figure coords
figure - create or change active figure
fill - make filled polygons
findobj - recursively find all objects matching some criteria
gca - return the current axes
gcf - return the current figure
gci - get the current image, or None
getp - get a handle graphics property
grid - set whether gridding is on
hist - make a histogram
hold - set the axes hold state
ioff - turn interactive mode off
ion - turn interactive mode on
isinteractive - return True if interaction mode is on
imread - load image file into array
imshow - plot image data
ishold - return the hold state of the current axes
legend - make an axes legend
loglog - a log log plot
matshow - display a matrix in a new figure preserving aspect
pcolor - make a pseudocolor plot
pcolormesh - make a pseudocolor plot using a quadrilateral mesh
pie - make a pie chart
plot - make a line plot
plot_date - plot dates
plotfile - plot column data from an ASCII tab/space/comma delimited file
polar - make a polar plot on a PolarAxes
psd - make a plot of power spectral density
quiver - make a direction field (arrows) plot
rc - control the default params
rgrids - customize the radial grids and labels for polar
savefig - save the current figure
scatter - make a scatter plot
setp - set a handle graphics property
semilogx - log x axis
semilogy - log y axis
show - show the figures
specgram - a spectrogram plot
spy - plot sparsity pattern using markers or image
stem - make a stem plot
subplot - make a subplot (numrows, numcols, axesnum)
subplots_adjust - change the params controlling the subplot positions of current figure
subplot_tool - launch the subplot configuration tool
suptitle - add a figure title
table - add a table to the plot
text - add some text at location x,y to the current axes
thetagrids - customize the radial theta grids and labels for polar
title - add a title to the current axes
xcorr - plot the autocorrelation function of x and y
xlim - set/get the xlimits
ylim - set/get the ylimits
xticks - set/get the xticks
yticks - set/get the yticks
xlabel - add an xlabel to the current axes
ylabel - add a ylabel to the current axes
autumn - set the default colormap to autumn
bone - set the default colormap to bone
cool - set the default colormap to cool
copper - set the default colormap to copper
flag - set the default colormap to flag
gray - set the default colormap to gray
hot - set the default colormap to hot
hsv - set the default colormap to hsv
jet - set the default colormap to jet
pink - set the default colormap to pink
prism - set the default colormap to prism
spring - set the default colormap to spring
summer - set the default colormap to summer
winter - set the default colormap to winter
spectral - set the default colormap to spectral
_Event handling
connect - register an event handler
disconnect - remove a connected event handler
_Matrix commands
cumprod - the cumulative product along a dimension
cumsum - the cumulative sum along a dimension
detrend - remove the mean or best fit line from an array
diag - the k-th diagonal of matrix
diff - the n-th difference of an array
eig - the eigenvalues and eigenvectors of v
eye - a matrix where the k-th diagonal is ones, else zero
find - return the indices where a condition is nonzero
fliplr - flip the columns of a matrix left/right
flipud - flip the rows of a matrix up/down
linspace - a linear spaced vector of N values from min to max inclusive
logspace - a log spaced vector of N values from min to max inclusive
meshgrid - repeat x and y to make regular matrices
ones - an array of ones
rand - an array from the uniform distribution [0,1]
randn - an array from the normal distribution
rot90 - rotate matrix k*90 degrees counterclockwise
squeeze - squeeze an array removing any dimensions of length 1
tri - a triangular matrix
tril - a lower triangular matrix
triu - an upper triangular matrix
vander - the Vandermonde matrix of vector x
svd - singular value decomposition
zeros - a matrix of zeros
_Probability
levypdf - The levy probability density function from the char. func.
normpdf - The Gaussian probability density function
rand - random numbers from the uniform distribution
randn - random numbers from the normal distribution
_Statistics
corrcoef - correlation coefficient
cov - covariance matrix
amax - the maximum along dimension m
mean - the mean along dimension m
median - the median along dimension m
amin - the minimum along dimension m
norm - the norm of vector x
prod - the product along dimension m
ptp - the max-min along dimension m
std - the standard deviation along dimension m
asum - the sum along dimension m
_Time series analysis
bartlett - M-point Bartlett window
blackman - M-point Blackman window
cohere - the coherence using average periodogram
csd - the cross spectral density using average periodogram
fft - the fast Fourier transform of vector x
hamming - M-point Hamming window
hanning - M-point Hanning window
hist - compute the histogram of x
kaiser - M length Kaiser window
psd - the power spectral density using average periodogram
sinc - the sinc function of array x
_Dates
date2num - convert python datetimes to numeric representation
drange - create an array of numbers for date plots
num2date - convert numeric type (float days since 0001) to datetime
_Other
angle - the angle of a complex array
griddata - interpolate irregularly distributed data to a regular grid
load - load ASCII data into array
polyfit - fit x, y to an n-th order polynomial
polyval - evaluate an n-th order polynomial
roots - the roots of the polynomial coefficients in p
save - save an array to an ASCII file
trapz - trapezoidal integration
__end
""" |
"""
TestCmd.py: a testing framework for commands and scripts.
The TestCmd module provides a framework for portable automated testing
of executable commands and scripts (in any language, not just Python),
especially commands and scripts that require file system interaction.
In addition to running tests and evaluating conditions, the TestCmd
module manages and cleans up one or more temporary workspace
directories, and provides methods for creating files and directories in
those workspace directories from in-line data (here-documents), allowing
tests to be completely self-contained.
A TestCmd environment object is created via the usual invocation:
import TestCmd
test = TestCmd.TestCmd()
There are a bunch of keyword arguments available at instantiation:
test = TestCmd.TestCmd(description = 'string',
program = 'program_or_script_to_test',
interpreter = 'script_interpreter',
workdir = 'prefix',
subdir = 'subdir',
verbose = Boolean,
match = default_match_function,
diff = default_diff_function,
combine = Boolean)
There are a bunch of methods that let you do different things:
test.verbose_set(1)
test.description_set('string')
test.program_set('program_or_script_to_test')
test.interpreter_set('script_interpreter')
test.interpreter_set(['script_interpreter', 'arg'])
test.workdir_set('prefix')
test.workdir_set('')
test.workpath('file')
test.workpath('subdir', 'file')
test.subdir('subdir', ...)
test.rmdir('subdir', ...)
test.write('file', "contents\n")
test.write(['subdir', 'file'], "contents\n")
test.read('file')
test.read(['subdir', 'file'])
test.read('file', mode)
test.read(['subdir', 'file'], mode)
test.writable('dir', 1)
test.writable('dir', None)
test.preserve(condition, ...)
test.cleanup(condition)
test.command_args(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program')
test.run(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program',
chdir = 'directory_to_chdir_to',
stdin = 'input to feed to the program\n',
universal_newlines = True)
p = test.start(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program',
universal_newlines = None)
test.finish(self, p)
test.pass_test()
test.pass_test(condition)
test.pass_test(condition, function)
test.fail_test()
test.fail_test(condition)
test.fail_test(condition, function)
test.fail_test(condition, function, skip)
test.no_result()
test.no_result(condition)
test.no_result(condition, function)
test.no_result(condition, function, skip)
test.stdout()
test.stdout(run)
test.stderr()
test.stderr(run)
test.symlink(target, link)
test.banner(string)
test.banner(string, width)
test.diff(actual, expected)
test.match(actual, expected)
test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n")
test.match_exact(["actual 1\n", "actual 2\n"],
["expected 1\n", "expected 2\n"])
test.match_re("actual 1\nactual 2\n", regex_string)
test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes)
test.match_re_dotall("actual 1\nactual 2\n", regex_string)
test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes)
test.tempdir()
test.tempdir('temporary-directory')
test.sleep()
test.sleep(seconds)
test.where_is('foo')
test.where_is('foo', 'PATH1:PATH2')
test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
test.unlink('file')
test.unlink('subdir', 'file')
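A minimal end-to-end sketch (the program under test, `echo', and its
expected output are illustrative assumptions, not part of the API):
    import TestCmd
    test = TestCmd.TestCmd(program = 'echo', workdir = '')
    test.run(arguments = 'hello')
    if test.stdout() != 'hello\n':
        test.fail_test()
    test.pass_test()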
The TestCmd module provides pass_test(), fail_test(), and no_result()
unbound functions that report test results for use with the Aegis change
management system. These methods terminate the test immediately,
reporting PASSED, FAILED, or NO RESULT respectively, and exiting with
status 0 (success), 1 or 2 respectively. This allows for a distinction
between an actual failed test and a test that could not be properly
evaluated because of an external condition (such as a full file system
or incorrect permissions).
import TestCmd
TestCmd.pass_test()
TestCmd.pass_test(condition)
TestCmd.pass_test(condition, function)
TestCmd.fail_test()
TestCmd.fail_test(condition)
TestCmd.fail_test(condition, function)
TestCmd.fail_test(condition, function, skip)
TestCmd.no_result()
TestCmd.no_result(condition)
TestCmd.no_result(condition, function)
TestCmd.no_result(condition, function, skip)
The TestCmd module also provides unbound functions that handle matching
in the same way as the match_*() methods described above.
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_exact)
test = TestCmd.TestCmd(match = TestCmd.match_re)
test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)
The TestCmd module provides unbound functions that can be used for the
"diff" argument to TestCmd.TestCmd instantiation:
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_re,
diff = TestCmd.diff_re)
test = TestCmd.TestCmd(diff = TestCmd.simple_diff)
The "diff" argument can also be used with standard difflib functions:
import difflib
test = TestCmd.TestCmd(diff = difflib.context_diff)
test = TestCmd.TestCmd(diff = difflib.unified_diff)
Lastly, the where_is() method also exists in an unbound function
version.
import TestCmd
TestCmd.where_is('foo')
TestCmd.where_is('foo', 'PATH1:PATH2')
TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
""" |
# -*- encoding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2009 Veritos - NAME - www.veritos.nl
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company like Veritos.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
##############################################################################
#
# This module works in OpenERP 5.0.0 (and probably higher).
# This module does not work in OpenERP version 4 and lower.
#
# Status 1.0 - tested on OpenERP 5.0.3
#
# Version IP_ADDRESS
# account.account.type
# Foundation laid for all account types
#
# account.account.template
# Foundation laid with all required general ledger accounts, which are linked
# through a menu structure to sections 1 through 9.
# The general ledger accounts are linked to the account.account.type.
# These links still need to be checked over carefully.
#
# account.chart.template
# Foundation laid for linking accounts to debtors, creditors, bank, purchase
# and sales journals, and the VAT configuration.
#
# Version IP_ADDRESS
# account.tax.code.template
# Foundation laid for the VAT configuration (structure).
# Used the VAT return form as the basis. Whether this works remains to be seen.
#
# account.tax.template
# Created the VAT accounts and linked them to the corresponding
# general ledger accounts.
#
# Version IP_ADDRESS
# Cleaned up the code and removed unused components.
# Version IP_ADDRESS
# Changed a_expense from 3000 -> 7000
# Set record id='btw_code_5b' to a negative value
# Version IP_ADDRESS
# VAT accounts were given a type designation for purchase or sale
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Corrected a small bug in l10n_nl_wizard.xml that kept the module from installing completely.
# Version IP_ADDRESS
# Properly defined Account Receivable and Payable.
# Version IP_ADDRESS
# Properly defined all user_type_xxx fields.
# Removed construction- and garage-specific ledger accounts to create a standard module.
# This module can then serve as a basis for creating modules for specific target groups.
# Version IP_ADDRESS
# Corrected account 7010 (it duplicated 7014, which broke the installation)
# Version IP_ADDRESS
# Corrected various account types from user_type_asset -> user_type_liability and user_type_equity
# Version IP_ADDRESS
# Small correction to VAT receivable (high rate): the id was the same for both, so the
# high rate was overwritten by the other rate. Clarified the descriptions in the tax
# codes for the VAT return overview.
# Version IP_ADDRESS
# Adjusted the VAT descriptions so that reports look better. Removed 2a, 5b and the
# like, and added a few descriptions.
# Version IP_ADDRESS - Switch to English
# Added properties_stock_xxx accounts for correct stock valuation, changed 7000-accounts from type cash to type expense
# Changed naming of 7020 and 7030 to Kostprijs omzet xxxx
|
#ff0023
#ff001e
#ff0018
#ff0012
#ff000c
#ff1100
#ff1700
#ff1d00
#ff2200
#ff2800
#ff2d00
#ff3200
#ff3600
#ff3b00
#ff4000
#ff4400
#ff4900
#ff4d00
#ff5100
#ff5600
#ff5a00
#ff5e00
#ff6200
#ff6600
#ff6a00
#ff6e00
#ff7200
#ff7600
#ff7a00
#ff7e00
#ff8200
#ff8600
#ff8a00
#ff8d00
#ff9100
#ff9500
#ff9900
#ff9c00
#ffa000
#ffa400
#ffa700
#ffab00
#ffae00
#ffb200
#ffb500
#ffb900
#ffbc00
#ffc000
#ffc300
#ffc700
#ffca00
#ffce00
#ffd100
#ffd500
#ffd800
#ffdb00
#ffdf00
#ffe200
#ffe500
#ffe900
#ffec00
#ffef00
#fff300
#fff600
#fff900
#fffc00
#feff00
#fbff00
#f8ff00
#f5ff00
#f1ff00
#eeff00
#ebff00
#e7ff00
#e4ff00
#e1ff00
#ddff00
#daff00
#d7ff00
#d3ff00
#d0ff00
#ccff00
#c9ff00
#c6ff00
#c2ff00
#bfff00
#bbff00
#b8ff00
#b4ff00
#b1ff00
#adff00
#a9ff00
#a6ff00
#a2ff00
#9fff00
#9bff00
#97ff00
#93ff00
#90ff00
#8cff00
#88ff00
#84ff00
#81ff00
#7dff00
#79ff00
#75ff00
#71ff00
#6dff00
#69ff00
#65ff00
#61ff00
#5dff00
#58ff00
#54ff00
#50ff00
#4bff00
#47ff00
#43ff00
#3eff00
#39ff00
#35ff00
#30ff00
#2bff00
#26ff00
#20ff00
#1bff00
#15ff00
#0eff00
#00ff0e
#00ff15
#00ff1b
#00ff20
#00ff26
#00ff2b
#00ff30
#00ff35
#00ff39
#00ff3e
#00ff43
#00ff47
#00ff4b
#00ff50
#00ff54
#00ff58
#00ff5d
#00ff61
#00ff65
#00ff69
#00ff6d
#00ff71
#00ff75
#00ff79
#00ff7d
#00ff81
#00ff84
#00ff88
#00ff8c
#00ff90
#00ff93
#00ff97
#00ff9b
#00ff9f
#00ffa2
#00ffa6
#00ffa9
#00ffad
#00ffb1
#00ffb4
#00ffb8
#00ffbb
#00ffbf
#00ffc2
#00ffc6
#00ffc9
#00ffcc
#00ffd0
#00ffd3
#00ffd7
#00ffda
#00ffdd
#00ffe1
#00ffe4
#00ffe7
#00ffeb
#00ffee
#00fff1
#00fff5
#00fff8
#00fffb
#00fffe
#00fcff
#00f9ff
#00f6ff
#00f3ff
#00efff
#00ecff
#00e9ff
#00e5ff
#00e2ff
#00dfff
#00dbff
#00d8ff
#00d5ff
#00d1ff
#00ceff
#00caff
#00c7ff
#00c3ff
#00c0ff
#00bcff
#00b9ff
#00b5ff
#00b2ff
#00aeff
#00abff
#00a7ff
#00a4ff
#00a0ff
#009cff
#0099ff
#0095ff
#0091ff
#008dff
#008aff
#0086ff
#0082ff
#007eff
#007aff
#0076ff
#0072ff
#006eff
#006aff
#0066ff
#0062ff
#005eff
#005aff
#0056ff
#0051ff
#004dff
#0049ff
#0044ff
#0040ff
#003bff
#0036ff
#0032ff
#002dff
#0028ff
#0022ff
#001dff
#0017ff
#0011ff
#0c00ff
#1200ff
#1800ff
#1e00ff
#2300ff
|
"""
Wrappers to LAPACK library
==========================
flapack -- wrappers for Fortran [*] LAPACK routines
clapack -- wrappers for ATLAS LAPACK routines
calc_lwork -- calculate optimal lwork parameters
get_lapack_funcs -- query for wrapper functions.
[*] If ATLAS libraries are available then Fortran routines
actually use ATLAS routines and should perform as well as
the ATLAS routines.
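For example, solving a linear system with the appropriately typed driver
(a brief sketch; it assumes these wrappers are exposed through
scipy.linalg, and the matrices are illustrative):
  import numpy as np
  from scipy.linalg import get_lapack_funcs
  a = np.array([[3., 1.], [1., 2.]])
  b = np.array([5., 5.])
  gesv, = get_lapack_funcs(('gesv',), (a, b))   # selects dgesv for float64
  lu, piv, x, info = gesv(a, b)                 # x solves a @ x = b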
Module flapack
++++++++++++++
In the following, all function names are shown without the
type prefix (s,d,c,z). Optimal values for lwork can
be computed using the calc_lwork module.
Linear Equations
----------------
Drivers::
lu,piv,x,info = gesv(a,b,overwrite_a=0,overwrite_b=0)
lub,piv,x,info = gbsv(kl,ku,ab,b,overwrite_ab=0,overwrite_b=0)
c,x,info = posv(a,b,lower=0,overwrite_a=0,overwrite_b=0)
Computational routines::
lu,piv,info = getrf(a,overwrite_a=0)
x,info = getrs(lu,piv,b,trans=0,overwrite_b=0)
inv_a,info = getri(lu,piv,lwork=min_lwork,overwrite_lu=0)
c,info = potrf(a,lower=0,clean=1,overwrite_a=0)
x,info = potrs(c,b,lower=0,overwrite_b=0)
inv_a,info = potri(c,lower=0,overwrite_c=0)
inv_c,info = trtri(c,lower=0,unitdiag=0,overwrite_c=0)
Linear Least Squares (LLS) Problems
-----------------------------------
Drivers::
v,x,s,rank,info = gelss(a,b,cond=-1.0,lwork=min_lwork,overwrite_a=0,overwrite_b=0)
Computational routines::
qr,tau,info = geqrf(a,lwork=min_lwork,overwrite_a=0)
q,info = orgqr|ungqr(qr,tau,lwork=min_lwork,overwrite_qr=0,overwrite_tau=1)
Generalized Linear Least Squares (LSE and GLM) Problems
-------------------------------------------------------
Standard Eigenvalue and Singular Value Problems
-----------------------------------------------
Drivers::
w,v,info = syev|heev(a,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0)
w,v,info = syevd|heevd(a,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0)
w,v,info = syevr|heevr(a,compute_v=1,lower=0,vrange=,irange=,atol=-1.0,lwork=min_lwork,overwrite_a=0)
t,sdim,(wr,wi|w),vs,info = gees(select,a,compute_v=1,sort_t=0,lwork=min_lwork,select_extra_args=(),overwrite_a=0)
wr,(wi,vl|w),vr,info = geev(a,compute_vl=1,compute_vr=1,lwork=min_lwork,overwrite_a=0)
u,s,vt,info = gesdd(a,compute_uv=1,lwork=min_lwork,overwrite_a=0)
Computational routines::
ht,tau,info = gehrd(a,lo=0,hi=n-1,lwork=min_lwork,overwrite_a=0)
ba,lo,hi,pivscale,info = gebal(a,scale=0,permute=0,overwrite_a=0)
Generalized Eigenvalue and Singular Value Problems
--------------------------------------------------
Drivers::
w,v,info = sygv|hegv(a,b,itype=1,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0,overwrite_b=0)
w,v,info = sygvd|hegvd(a,b,itype=1,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0,overwrite_b=0)
(alphar,alphai|alpha),beta,vl,vr,info = ggev(a,b,compute_vl=1,compute_vr=1,lwork=min_lwork,overwrite_a=0,overwrite_b=0)
Auxiliary routines
------------------
a,info = lauum(c,lower=0,overwrite_c=0)
a = laswp(a,piv,k1=0,k2=len(piv)-1,off=0,inc=1,overwrite_a=0)
Module clapack
++++++++++++++
Linear Equations
----------------
Drivers::
lu,piv,x,info = gesv(a,b,rowmajor=1,overwrite_a=0,overwrite_b=0)
c,x,info = posv(a,b,lower=0,rowmajor=1,overwrite_a=0,overwrite_b=0)
Computational routines::
lu,piv,info = getrf(a,rowmajor=1,overwrite_a=0)
x,info = getrs(lu,piv,b,trans=0,rowmajor=1,overwrite_b=0)
inv_a,info = getri(lu,piv,rowmajor=1,overwrite_lu=0)
c,info = potrf(a,lower=0,clean=1,rowmajor=1,overwrite_a=0)
x,info = potrs(c,b,lower=0,rowmajor=1,overwrite_b=0)
inv_a,info = potri(c,lower=0,rowmajor=1,overwrite_c=0)
inv_c,info = trtri(c,lower=0,unitdiag=0,rowmajor=1,overwrite_c=0)
Auxiliary routines
------------------
a,info = lauum(c,lower=0,rowmajor=1,overwrite_c=0)
Module calc_lwork
+++++++++++++++++
Optimal lwork is maxwrk. Default is minwrk.
minwrk,maxwrk = gehrd(prefix,n,lo=0,hi=n-1)
minwrk,maxwrk = gesdd(prefix,m,n,compute_uv=1)
minwrk,maxwrk = gelss(prefix,m,n,nrhs)
minwrk,maxwrk = getri(prefix,n)
minwrk,maxwrk = geev(prefix,n,compute_vl=1,compute_vr=1)
minwrk,maxwrk = heev(prefix,n,lower=0)
minwrk,maxwrk = syev(prefix,n,lower=0)
minwrk,maxwrk = gees(prefix,n,compute_v=1)
minwrk,maxwrk = geqrf(prefix,m,n)
minwrk,maxwrk = gqr(prefix,m,n)
""" |
"""
Basic functions used by several sub-packages and
useful to have in the main name-space.
Type Handling
-------------
================ ===================
iscomplexobj Test for complex object, scalar result
isrealobj Test for real object, scalar result
iscomplex Test for complex elements, array result
isreal Test for real elements, array result
imag Imaginary part
real Real part
real_if_close Turns complex number with tiny imaginary part to real
isneginf Tests for negative infinity, array result
isposinf Tests for positive infinity, array result
isnan Tests for nans, array result
isinf Tests for infinity, array result
isfinite Tests for finite numbers, array result
isscalar True if argument is a scalar
nan_to_num Replaces NaN's with 0 and infinities with large numbers
cast Dictionary of functions to force cast to each type
common_type Determine the minimum common type code for a group
of arrays
mintypecode Return minimal allowed common typecode.
================ ===================
Index Tricks
------------
================ ===================
mgrid Method which allows easy construction of N-d
'mesh-grids'
``r_`` Append and construct arrays: turns slice objects into
ranges and concatenates them, for 2d arrays appends rows.
index_exp Konrad Hinsen's index_expression class instance which
can be useful for building complicated slicing syntax.
================ ===================
Useful Functions
----------------
================ ===================
select Extension of where to multiple conditions and choices
extract Extract 1d array from flattened array according to mask
insert Insert 1d array of values into Nd array according to mask
linspace Evenly spaced samples in linear space
logspace Evenly spaced samples in logarithmic space
fix Round x to nearest integer towards zero
mod Modulo mod(x,y) = x % y except keeps sign of y
amax Array maximum along axis
amin Array minimum along axis
ptp Array max-min along axis
cumsum Cumulative sum along axis
prod Product of elements along axis
cumprod Cumulative product along axis
diff Discrete differences along axis
angle Returns angle of complex argument
unwrap Unwrap phase along given axis (1-d algorithm)
sort_complex Sort a complex-array (based on real, then imaginary)
trim_zeros Trim the leading and trailing zeros from 1D array.
vectorize A class that wraps a Python function taking scalar
arguments into a generalized function which can handle
arrays of arguments using numpy's broadcasting rules.
================ ===================
Shape Manipulation
------------------
================ ===================
squeeze Return an array with length-one dimensions removed.
atleast_1d Force arrays to be >= 1D
atleast_2d Force arrays to be >= 2D
atleast_3d Force arrays to be >= 3D
vstack Stack arrays vertically (row on row)
hstack Stack arrays horizontally (column on column)
column_stack Stack 1D arrays as columns into 2D array
dstack Stack arrays depthwise (along third dimension)
stack Stack arrays along a new axis
split Divide array into a list of sub-arrays
hsplit Split into columns
vsplit Split into rows
dsplit Split along third dimension
================ ===================
Matrix (2D Array) Manipulations
-------------------------------
================ ===================
fliplr 2D array with columns flipped
flipud 2D array with rows flipped
rot90 Rotate a 2D array a multiple of 90 degrees
eye Return a 2D array with ones down a given diagonal
diag Construct a 2D array from a vector, or return a given
diagonal from a 2D array.
mat Construct a Matrix
bmat Build a Matrix from blocks
================ ===================
Polynomials
-----------
================ ===================
poly1d A one-dimensional polynomial class
poly Return polynomial coefficients from roots
roots Find roots of polynomial given coefficients
polyint Integrate polynomial
polyder Differentiate polynomial
polyadd Add polynomials
polysub Subtract polynomials
polymul Multiply polynomials
polydiv Divide polynomials
polyval Evaluate polynomial at given argument
================ ===================
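For instance (a brief sketch assuming these functions are exposed in the
numpy namespace):
    import numpy as np
    p = np.poly1d([1, -3, 2])            # represents x**2 - 3*x + 2
    print(np.roots([1, -3, 2]))          # -> [2. 1.]
    print(np.polyval([1, -3, 2], 3))     # -> 2 (value at x = 3)
    print(np.polyder(p))                 # derivative: 2*x - 3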
Iterators
---------
================ ===================
Arrayterator A buffered iterator for big arrays.
================ ===================
Import Tricks
-------------
================ ===================
ppimport Postpone module import until trying to use it
ppimport_attr Postpone module import until trying to use its attribute
ppresolve Import postponed module and return it.
================ ===================
Machine Arithmetics
-------------------
================ ===================
machar_single Single precision floating point arithmetic parameters
machar_double Double precision floating point arithmetic parameters
================ ===================
Threading Tricks
----------------
================ ===================
ParallelExec Execute commands in parallel thread.
================ ===================
Array Set Operations
-----------------------
Set operations for numeric arrays based on sort() function.
================ ===================
unique Unique elements of an array.
isin Test whether each element of an ND array is present
anywhere within a second array.
ediff1d Array difference (auxiliary function).
intersect1d Intersection of 1D arrays with unique elements.
setxor1d Set exclusive-or of 1D arrays with unique elements.
in1d Test whether elements in a 1D array are also present in
another array.
union1d Union of 1D arrays with unique elements.
setdiff1d Set difference of 1D arrays with unique elements.
================ ===================
""" |
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), NAME <EMAIL>, 2012-2013
# Copyright (c), NAME <EMAIL>, 2015
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# The match_hostname function and supporting code is under the terms and
# conditions of the Python Software Foundation License. They were taken from
# the Python3 standard library and adapted for use in Python2. See comments in the
# source for which code precisely is under this License. PSF License text
# follows:
#
# PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
# --------------------------------------------
#
# 1. This LICENSE AGREEMENT is between the Python Software Foundation
# ("PSF"), and the Individual or Organization ("Licensee") accessing and
# otherwise using this software ("Python") in source or binary form and
# its associated documentation.
#
# 2. Subject to the terms and conditions of this License Agreement, PSF hereby
# grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
# analyze, test, perform and/or display publicly, prepare derivative works,
# distribute, and otherwise use Python alone or in any derivative version,
# provided, however, that PSF's License Agreement and PSF's notice of copyright,
# i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
# 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are
# retained in Python alone or in any derivative version prepared by Licensee.
#
# 3. In the event Licensee prepares a derivative work that is based on
# or incorporates Python or any part thereof, and wants to make
# the derivative work available to others as provided herein, then
# Licensee hereby agrees to include in any such work a brief summary of
# the changes made to Python.
#
# 4. PSF is making Python available to Licensee on an "AS IS"
# basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
# IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
# DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
# FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
# INFRINGE ANY THIRD PARTY RIGHTS.
#
# 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
# FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
# A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
# OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
#
# 6. This License Agreement will automatically terminate upon a material
# breach of its terms and conditions.
#
# 7. Nothing in this License Agreement shall be deemed to create any
# relationship of agency, partnership, or joint venture between PSF and
# Licensee. This License Agreement does not grant permission to use PSF
# trademarks or trade name in a trademark sense to endorse or promote
# products or services of Licensee, or any third party.
#
# 8. By copying, installing or otherwise using Python, Licensee
# agrees to be bound by the terms and conditions of this License
# Agreement.
|
"""
Author: NAME 3/19/2015
updated: 10/23/2015
Indexed priority queue (binary heap). Uses a hash table for fast random look-ups.
Supports:
-select key O(1)
-insert key O(log n)
-delete key O(log n)
-extract min O(log n)
-change priority O(log n)
-peek (select top element) O(1)
-heapify (transform list) O(n)
Usage:
1) Create a new (empty) heap instance:
>>> my_heap = MinIPQ()
2) Insert a key-priority value pair via the `insert()` method -- the heap
invariant will automatically be maintained:
>>> my_heap.insert('dee', 12)
3) We can delete item from heap by its key:
>>> my_heap.delete('genghis')
4) We can extract lowest priority item from heap, returning the key-priority
value pair:
>>> my_heap.extract_min()
5) We can change priority value of a key (NOTE: will raise error if multiple
instances of given key are detected):
>>> my_heap.change_priority('dee', 7)
6) We can return the top element (highest priority) of the heap without extracting it:
>>> my_heap.peek()
7) Finally, we can build a heap from an existing array in linear time:
>>> some_data_set = MinIPQ(some_list)
Limitations:
-Items inserted into heap must not be mutable objects (e.g. arrays,
dicts, etc.).
-Changing priorities is unsupported if multiple non-unique keys exist in heap.
In order to avoid confusion between the term "key" in a priority queue (i.e.
'priority key') and the term "key" in a hash/dict, we will refer to "priority
value" as the value that determines the placement of said item/object in the
heap.
Supports multiple keys/items of the same value. This is due to the
implementation of the internal data structures: the MinIPQ() object holds
two abstract collections, a heap of key-priority value pairs and a dict
that maps the keys to their positions in the heap.
The heap data structure is implemented as a 2D list whose elements are 2-lists,
the first element of the inner array being the key/item and the second element
being the priority value. For example, our heap could look like this:
>>> my_heap = [['shiva', 35], ['lakshmi', 164], ['dee', 684], ['vlad', 285], ['dee', 275], ['dee', 824], ['shiva', 1132]]
The internal dict mapping (called `position`) follows this format:
{ item: [index_in_heap [, index_in_heap]] }
So for example:
>>> {'vlad': [3], 'shiva': [0, 6], 'lakshmi': [1], 'dee': [5, 2, 4]}
If we wanted to delete key, say, 'dee' from our heap, we would do this:
>>> my_heap.delete('dee')
The delete() method will look up the key 'dee' in the internal hash; it will
find the key, take its associated value -- an array -- and pop the last
element from that array. If, after popping, the array is empty, the hash key
will be deleted as well. The popped value is the position (index) in the heap
-- which also happens to be an array. Using this index, the function will
pop the object -- ['dee', 275] -- from the heap array, then go on to maintain
the heap invariant as expected.
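As a minimal sketch of that bookkeeping (the helper names below are
hypothetical, not the actual MinIPQ internals; `heap` is the 2D list and
`position` the dict described above):
    def _swap(heap, position, i, j):
        # swap two heap slots and update both keys' recorded indices
        heap[i], heap[j] = heap[j], heap[i]
        position[heap[i][0]].remove(j); position[heap[i][0]].append(i)
        position[heap[j][0]].remove(i); position[heap[j][0]].append(j)
    def _sift_down(heap, position, i):
        n = len(heap)
        while True:
            child = 2*i + 1
            if child >= n:
                return
            if child + 1 < n and heap[child + 1][1] < heap[child][1]:
                child += 1                   # pick the smaller child
            if heap[i][1] <= heap[child][1]:
                return
            _swap(heap, position, i, child)
            i = child
    def _sift_up(heap, position, i):
        while i > 0 and heap[(i - 1) // 2][1] > heap[i][1]:
            _swap(heap, position, i, (i - 1) // 2)
            i = (i - 1) // 2
    def delete(heap, position, key):
        idx = position[key].pop()        # one recorded heap index for key
        if not position[key]:
            del position[key]            # no occurrences left
        tail = heap.pop()                # detach the final heap slot
        if idx == len(heap):             # the deleted entry was that slot
            return tail
        removed, heap[idx] = heap[idx], tail   # fill the hole with tail
        position[tail[0]].remove(len(heap))    # tail's index has changed
        position[tail[0]].append(idx)
        _sift_down(heap, position, idx)        # restore the invariant
        _sift_up(heap, position, idx)
        return removed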
TODO:
1) To avoid confusion, rename `position` to something else (e.g.
"occurrence stack")
2) Support for kwargs
3) Make argument input more strict -- i.e. don't support numbers, input args
must be collections (dicts, arrays)
4) Perhaps make all getters return a hash instead of array?
5) Add MaxIPQ() class.
6) Implement heapify as a loop rather than recursion.
7) Implement polymorphism for hash-like syntax.
8) Add more heap operations: update/replace, merge.
9) Implement magic methods.
10) Implement print function as described in docstring.
11) One solution to the 'how to change_priority() for multiple non-unique keys'
problem is to look up the key, and use this info to specify the particular
key-priority value pair to change.
""" |
"""
[2015-03-02] Challenge #204 [Easy] Remembering your lines
https://www.reddit.com/r/dailyprogrammer/comments/2xoxum/20150302_challenge_204_easy_remembering_your_lines/
#Description
I didn't always want to be a computer programmer, you know. I used to have dreams, dreams of standing on the world
stage, being one of the great actors of my generation!
Alas, my acting career was brief, lasting exactly as long as one high-school production of Macbeth. I played old King
Duncan, who gets brutally murdered by Macbeth in the beginning of Act II. It was just as well, really, because I had a
terribly hard time remembering all those lines!
For instance: I would remember that Act IV started with the three witches brewing up some sort of horrible potion,
filled with all sorts of nasty stuff, but except for "Eye of newt", I couldn't for the life of me remember what was in it!
Today, with our modern computers and internet, such a question is easy to settle: you simply open up [the full text of
the play](https://gist.githubusercontent.com/Quackmatic/f8deb2b64dd07ea0985d/raw/macbeth.txt) and press Ctrl-F (or
Cmd-F, if you're on a Mac) and search for "Eye of newt".
And, indeed, here's the passage:
Fillet of a fenny snake,
In the caldron boil and bake;
Eye of newt, and toe of frog,
Wool of bat, and tongue of dog,
Adder's fork, and blind-worm's sting,
Lizard's leg, and howlet's wing,—
For a charm of powerful trouble,
Like a hell-broth boil and bubble.
Sounds delicious!
In today's challenge, we will automate this process. You will be given the full text of Shakespeare's Macbeth, and then
a phrase that's used somewhere in it. You will then output the full passage of dialog where the phrase appears.
#Formal inputs & outputs
##Input description
First of all, you're going to need a full copy of the play, which you can find here:
[macbeth.txt](https://gist.githubusercontent.com/Quackmatic/f8deb2b64dd07ea0985d/raw/macbeth.txt). Either right click
and save it to your local computer, or open it and copy the contents into a local file.
This version of the play uses consistent formatting, and should be especially easy for computers to parse. I recommend
perusing it briefly to get a feel for how it's formatted, but in particular you should notice that all lines of dialog
are indented 4 spaces, and only dialog is indented that far.
(edit: thanks to /u/Elite6809 for spotting some formatting errors. I've replaced the link with the fixed version)
Second, you will be given a single line containing a phrase that appears exactly once somewhere in the text of the
play. You can assume that the phrase in the input uses the same case as the phrase in the source material, and that the
full input is contained in a single line.
##Output description
You will output the line containing the quote, as well as all the lines directly above and below it which are also
dialog lines. In other words, output the whole "passage".
All the dialog in the source material is indented 4 spaces; you can choose to keep that indent for your output, or you
can remove it, whichever you want.
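A straightforward sketch of this (the helper name is made up; it assumes
the play has been saved as macbeth.txt, as described above):
    def find_passage(lines, phrase):
        def is_dialog(line):
            return line.startswith('    ') and line.strip()
        hit = next(i for i, line in enumerate(lines) if phrase in line)
        start, end = hit, hit
        while start > 0 and is_dialog(lines[start - 1]):
            start -= 1
        while end + 1 < len(lines) and is_dialog(lines[end + 1]):
            end += 1
        return [line.strip() for line in lines[start:end + 1]]
    with open('macbeth.txt') as f:
        lines = f.read().splitlines()
    print('\n'.join(find_passage(lines, 'Eye of newt')))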
#Examples
##Input 1
Eye of newt
##Output 1
Fillet of a fenny snake,
In the caldron boil and bake;
Eye of newt, and toe of frog,
Wool of bat, and tongue of dog,
Adder's fork, and blind-worm's sting,
Lizard's leg, and howlet's wing,—
For a charm of powerful trouble,
Like a hell-broth boil and bubble.
##Input 2
rugged Russian bear
##Output 2
What man dare, I dare:
Approach thou like the rugged Russian bear,
The arm'd rhinoceros, or the Hyrcan tiger;
Take any shape but that, and my firm nerves
Shall never tremble: or be alive again,
And dare me to the desert with thy sword;
If trembling I inhabit then, protest me
The baby of a girl. Hence, horrible shadow!
Unreal mockery, hence!
#Challenge inputs
#Input 1
break this enterprise
#Input 2
Yet who would have thought
#Bonus
If you're itching to do a little bit more work on this, output some more information in addition to the passage: which
act and scene the quote appears in, all characters with speaking parts in that scene, as well as who spoke the quote. For
the second example input, it might look something like this:
ACT III
SCENE IV
Characters in scene: NAME ROSS, NAME NAME NAME NAME
Spoken by NAME
What man dare, I dare:
Approach thou like the rugged Russian bear,
The arm'd rhinoceros, or the Hyrcan tiger;
Take any shape but that, and my firm nerves
Shall never tremble: or be alive again,
And dare me to the desert with thy sword;
If trembling I inhabit then, protest me
The baby of a girl. Hence, horrible shadow!
Unreal mockery, hence!
#Notes
As always, if you wish to suggest a problem for future consideration, head on over to /r/dailyprogrammer_ideas and add
your suggestion there.
In closing, I'd like to mention that this is the first challenge I've posted since becoming a moderator for this
subreddit. I'd like to thank the rest of the mods for thinking I'm good enough to be part of the team. I hope you will
like my problems, and I'll hope I get to post many more fun challenges for you in the future!
""" |
"""Configuration file parser.
A configuration file consists of sections, led by a "[section]" header,
and followed by "name: value" entries, with continuations and such in
the style of RFC 822.
Intrinsic defaults can be specified by passing them into the
ConfigParser constructor as a dictionary.
class:
ConfigParser -- responsible for parsing a list of
configuration files, and managing the parsed database.
methods:
__init__(defaults=None, dict_type=_default_dict, allow_no_value=False,
delimiters=('=', ':'), comment_prefixes=('#', ';'),
inline_comment_prefixes=None, strict=True,
empty_lines_in_values=True):
Create the parser. When `defaults' is given, it is initialized into the
dictionary of intrinsic defaults. The keys must be strings, the values
must be appropriate for %()s string interpolation.
When `dict_type' is given, it will be used to create the dictionary
objects for the list of sections, for the options within a section, and
for the default values.
When `delimiters' is given, it will be used as the set of substrings
that divide keys from values.
When `comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in empty lines. Comments can be
indented.
When `inline_comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in non-empty lines.
When `strict` is True, the parser won't allow for any section or option
duplicates while reading from a single source (file, string or
dictionary). Default is True.
When `empty_lines_in_values' is False (default: True), each empty line
marks the end of an option. Otherwise, internal empty lines of
a multiline option are kept as part of the value.
When `allow_no_value' is True (default: False), options without
values are accepted; the value presented for these is None.
sections()
Return all the configuration section names, sans DEFAULT.
has_section(section)
Return whether the given section exists.
has_option(section, option)
Return whether the given option exists in the given section.
options(section)
Return list of configuration options for the named section.
read(filenames, encoding=None)
Read and parse the list of named configuration files, given by
name. A single filename is also allowed. Non-existing files
are ignored. Return list of successfully read files.
read_file(f, filename=None)
Read and parse one configuration file, given as a file object.
The filename defaults to f.name; it is only used in error
messages (if f has no `name' attribute, the string `<???>' is used).
read_string(string)
Read configuration from a given string.
read_dict(dictionary)
Read configuration from a dictionary. Keys are section names,
values are dictionaries with keys and values that should be present
in the section. If the used dictionary type preserves order, sections
and their keys will be added in order. Values are automatically
converted to strings.
get(section, option, raw=False, vars=None, fallback=_UNSET)
Return a string value for the named option. All % interpolations are
expanded in the return values, based on the defaults passed into the
constructor and the DEFAULT section. Additional substitutions may be
provided using the `vars' argument, which must be a dictionary whose
contents override any pre-existing defaults. If `option' is a key in
`vars', the value from `vars' is used.
getint(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to an integer.
getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a float.
getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a boolean (currently case
insensitively defined as 0, false, no, off for False, and 1, true,
yes, on for True). Returns False or True.
items(section=_UNSET, raw=False, vars=None)
If section is given, return a list of tuples with (name, value) for
each option in the section. Otherwise, return a list of tuples with
(section_name, section_proxy) for each section, including DEFAULTSECT.
remove_section(section)
Remove the given section and all its options.
remove_option(section, option)
Remove the given option from the given section.
set(section, option, value)
Set the given option.
write(fp, space_around_delimiters=True)
Write the configuration state in .ini format. If
`space_around_delimiters' is True (the default), delimiters
between keys and values are surrounded by spaces.
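A minimal usage sketch (the section and option names are illustrative,
and the module is assumed to be importable as `configparser'):
    from configparser import ConfigParser
    parser = ConfigParser()
    parser.read_string("[server]\nhost = localhost\nport = 8080\n")
    host = parser.get('server', 'host')       # 'localhost'
    port = parser.getint('server', 'port')    # 8080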
""" |
#
# XML-RPC CLIENT LIBRARY
# $Id$
#
# an XML-RPC client interface for Python.
#
# the marshalling and response parser code can also be used to
# implement XML-RPC servers.
#
# Notes:
# this version is designed to work with Python 2.1 or newer.
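#
# Example (a minimal sketch of typical client usage; the URL and the
# remote method name are illustrative, not part of this library):
#
#   import xmlrpclib
#   server = xmlrpclib.ServerProxy("http://localhost:8000")
#   print server.add(2, 3)      # invokes add() on the remote server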
#
# History:
# 1999-01-14 fl Created
# 1999-01-15 fl Changed dateTime to use localtime
# 1999-01-16 fl Added Binary/base64 element, default to RPC2 service
# 1999-01-19 fl Fixed array data element (from Skip Montanaro)
# 1999-01-21 fl Fixed dateTime constructor, etc.
# 1999-02-02 fl Added fault handling, handle empty sequences, etc.
# 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro)
# 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8)
# 2000-11-28 fl Changed boolean to check the truth value of its argument
# 2001-02-24 fl Added encoding/Unicode/SafeTransport patches
# 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1)
# 2001-03-28 fl Make sure response tuple is a singleton
# 2001-03-29 fl Don't require empty params element (from NAME)
# 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2)
# 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME)
# 2001-09-03 fl Allow Transport subclass to override getparser
# 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup)
# 2001-10-01 fl Remove containers from memo cache when done with them
# 2001-10-01 fl Use faster escape method (80% dumps speedup)
# 2001-10-02 fl More dumps microtuning
# 2001-10-04 fl Make sure import expat gets a parser (from NAME)
# 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow
# 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems)
# 2001-11-12 fl Use repr() to marshal doubles (from NAME)
# 2002-03-17 fl Avoid buffered read when possible (from NAME)
# 2002-04-07 fl Added pythondoc comments
# 2002-04-16 fl Added __str__ methods to datetime/binary wrappers
# 2002-05-15 fl Added error constants (from NAME)
# 2002-06-27 fl Merged with Python CVS version
# 2002-10-22 fl Added basic authentication (based on code from NAME)
# 2003-01-22 sm Add support for the bool type
# 2003-02-27 gvr Remove apply calls
# 2003-04-24 sm Use cStringIO if available
# 2003-04-25 ak Add support for nil
# 2003-06-15 gn Add support for time.struct_time
# 2003-07-12 gp Correct marshalling of Faults
# 2003-10-31 mvl Add multicall support
# 2004-08-20 mvl Bump minimum supported Python version to 2.1
#
# Copyright (c) 1999-2002 by Secret Labs AB.
# Copyright (c) 1999-2002 by NAME, EMAIL, http://www.pythonware.com
#
# --------------------------------------------------------------------
# The XML-RPC client interface is
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
#
# things to look into some day:
# TODO: sort out True/False/boolean issues for Python 2.3
|
"""
This is a procedural interface to the matplotlib object-oriented
plotting library.
The following plotting commands are provided; the majority have
Matlab(TM) analogs and similar arguments.
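For example (a minimal sketch assuming the procedural functions are
imported from the pylab namespace):
    from pylab import plot, title, show
    plot([0, 1, 2], [0, 1, 4])
    title('a simple line plot')
    show()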
_Plotting commands
acorr - plot the autocorrelation function
annotate - annotate something in the figure
arrow - add an arrow to the axes
axes - Create a new axes
axhline - draw a horizontal line across axes
axvline - draw a vertical line across axes
axhspan - draw a horizontal bar across axes
axvspan - draw a vertical bar across axes
axis - Set or return the current axis limits
bar - make a bar chart
barh - a horizontal bar chart
broken_barh - a set of horizontal bars with gaps
box - set the axes frame on/off state
boxplot - make a box and whisker plot
cla - clear current axes
clabel - label a contour plot
clf - clear a figure window
clim - adjust the color limits of the current image
close - close a figure window
colorbar - add a colorbar to the current figure
cohere - make a plot of coherence
contour - make a contour plot
contourf - make a filled contour plot
csd - make a plot of cross spectral density
delaxes - delete an axes from the current figure
draw - Force a redraw of the current figure
errorbar - make an errorbar graph
figlegend - make legend on the figure rather than the axes
figimage - make a figure image
figtext - add text in figure coords
figure - create or change active figure
fill - make filled polygons
findobj - recursively find all objects matching some criteria
gca - return the current axes
gcf - return the current figure
gci - get the current image, or None
getp - get a handle graphics property
grid - set whether gridding is on
hist - make a histogram
hold - set the axes hold state
ioff - turn interaction mode off
ion - turn interaction mode on
isinteractive - return True if interaction mode is on
imread - load image file into array
imshow - plot image data
ishold - return the hold state of the current axes
legend - make an axes legend
loglog - a log log plot
matshow - display a matrix in a new figure preserving aspect
pcolor - make a pseudocolor plot
pcolormesh - make a pseudocolor plot using a quadrilateral mesh
pie - make a pie chart
plot - make a line plot
plot_date - plot dates
plotfile - plot column data from an ASCII tab/space/comma delimited file
polar - make a polar plot on a PolarAxes
psd - make a plot of power spectral density
quiver - make a direction field (arrows) plot
rc - control the default params
rgrids - customize the radial grids and labels for polar
savefig - save the current figure
scatter - make a scatter plot
setp - set a handle graphics property
semilogx - log x axis
semilogy - log y axis
show - show the figures
specgram - a spectrogram plot
spy - plot sparsity pattern using markers or image
stem - make a stem plot
subplot - make a subplot (numrows, numcols, axesnum)
subplots_adjust - change the params controlling the subplot positions of current figure
subplot_tool - launch the subplot configuration tool
suptitle - add a figure title
table - add a table to the plot
text - add some text at location x,y to the current axes
thetagrids - customize the radial theta grids and labels for polar
title - add a title to the current axes
xcorr - plot the cross-correlation function of x and y
xlim - set/get the xlimits
ylim - set/get the ylimits
xticks - set/get the xticks
yticks - set/get the yticks
xlabel - add an xlabel to the current axes
ylabel - add a ylabel to the current axes
autumn - set the default colormap to autumn
bone - set the default colormap to bone
cool - set the default colormap to cool
copper - set the default colormap to copper
flag - set the default colormap to flag
gray - set the default colormap to gray
hot - set the default colormap to hot
hsv - set the default colormap to hsv
jet - set the default colormap to jet
pink - set the default colormap to pink
prism - set the default colormap to prism
spring - set the default colormap to spring
summer - set the default colormap to summer
winter - set the default colormap to winter
spectral - set the default colormap to spectral
_Event handling
connect - register an event handler
disconnect - remove a connected event handler
_Matrix commands
cumprod - the cumulative product along a dimension
cumsum - the cumulative sum along a dimension
detrend - remove the mean or best fit line from an array
diag - the k-th diagonal of matrix
diff - the n-th difference of an array
eig - the eigenvalues and eigenvectors of v
eye - a matrix where the k-th diagonal is ones, else zero
find - return the indices where a condition is nonzero
fliplr - flip the columns of a matrix left/right
flipud - flip the rows of a matrix up/down
linspace - a linear spaced vector of N values from min to max inclusive
logspace - a log spaced vector of N values from min to max inclusive
meshgrid - repeat x and y to make regular matrices
ones - an array of ones
rand - an array from the uniform distribution [0,1]
randn - an array from the normal distribution
rot90 - rotate matrix k*90 degrees counterclockwise
squeeze - squeeze an array removing any dimensions of length 1
tri - a triangular matrix
tril - a lower triangular matrix
triu - an upper triangular matrix
vander - the Vandermonde matrix of vector x
svd - singular value decomposition
zeros - a matrix of zeros
_Probability
levypdf - The levy probability density function from the char. func.
normpdf - The Gaussian probability density function
rand - random numbers from the uniform distribution
randn - random numbers from the normal distribution
_Statistics
corrcoef - correlation coefficient
cov - covariance matrix
amax - the maximum along dimension m
mean - the mean along dimension m
median - the median along dimension m
amin - the minimum along dimension m
norm - the norm of vector x
prod - the product along dimension m
ptp - the max-min along dimension m
std - the standard deviation along dimension m
asum - the sum along dimension m
_Time series analysis
bartlett - M-point Bartlett window
blackman - M-point Blackman window
cohere - the coherence using average periodogram
csd - the cross spectral density using average periodogram
fft - the fast Fourier transform of vector x
hamming - M-point Hamming window
hanning - M-point Hanning window
hist - compute the histogram of x
kaiser - M length Kaiser window
psd - the power spectral density using average periodogram
sinc - the sinc function of array x
_Dates
date2num - convert python datetimes to numeric representation
drange - create an array of numbers for date plots
num2date - convert numeric type (float days since 0001) to datetime
_Other
angle - the angle of a complex array
griddata - interpolate irregularly distributed data to a regular grid
load - load ASCII data into array
polyfit - fit x, y to an n-th order polynomial
polyval - evaluate an n-th order polynomial
roots - the roots of the polynomial coefficients in p
save - save an array to an ASCII file
trapz - trapezoidal integration
__end
""" |
#
##=========================================================================
#class ActionDialog(object):
# """ActionDialog wraps the dialog you are interacting with
#
# It provides support for finding controls using attribute access,
# item access and the _control(...) method.
#
# You can dump information from a dialog to XML using the write_() method
#
# A screenshot of the dialog can be taken using the underlying wrapped
# HWND ie. my_action_dlg.wrapped_win.CaptureAsImage().save("dlg.png").
# This is only available if you have PIL installed (fails silently
# otherwise).
# """
# def __init__(self, hwnd, app = None, props = None):
# """Initialises an ActionDialog object
#
# ::
# hwnd (required) The handle of the dialog
# app An instance of an Application Object
# props future use (when we have an XML file for reference)
#
# """
#
# #self.wrapped_win = controlactions.add_actions(
# # controls.WrapHandle(hwnd))
# self.wrapped_win = controls.WrapHandle(hwnd)
#
# self.app = app
#
# dlg_controls = [self.wrapped_win, ]
# dlg_controls.extend(self.wrapped_win.Children)
#
# def __getattr__(self, key):
# "Attribute access - defer to item access"
# return self[key]
#
# def __getitem__(self, attr):
# "find the control that best matches attr"
# # if it is an integer - just return the
# # child control at that index
# if isinstance(attr, (int, long)):
# return self.wrapped_win.Children[attr]
#
# # so it should be a string
# # check if it is an attribute of the wrapped win first
# try:
# return getattr(self.wrapped_win, attr)
# except (AttributeError, UnicodeEncodeError):
# pass
#
# # find the control that best matches our attribute
# ctrl = findbestmatch.find_best_control_match(
# attr, self.wrapped_win.Children)
#
# # add actions to the control and return it
# return ctrl
#
# def write_(self, filename):
# "Write the dialog an XML file (requires elementtree)"
# if self.app and self.app.xmlpath:
# filename = os.path.join(self.app.xmlpath, filename + ".xml")
#
# controls = [self.wrapped_win]
# controls.extend(self.wrapped_win.Children)
# props = [ctrl.GetProperties() for ctrl in controls]
#
# XMLHelpers.WriteDialogToFile(filename, props)
#
# def control_(self, **kwargs):
# "Find the control that matches the arguments and return it"
#
# # add the restriction for this particular process
# kwargs['parent'] = self.wrapped_win
# kwargs['process'] = self.app.process
# kwargs['top_level_only'] = False
#
# # try and find the dialog (waiting for a max of 1 second)
# ctrl = findwindows.find_window(**kwargs)
# #win = ActionDialog(win, self)
#
# return controls.WrapHandle(ctrl)
#
#
#
#
##=========================================================================
#def _WalkDialogControlAttribs(app, attr_path):
# "Try and resolve the dialog and 2nd attribute, return both"
# if len(attr_path) != 2:
# raise RuntimeError("Expecting only 2 items in the attribute path")
#
# # get items to select between
# # default options will filter hidden and disabled controls
# # and will default to top level windows only
# wins = findwindows.find_windows(process = app.process)
#
# # wrap each so that find_best_control_match works well
# wins = [controls.WrapHandle(win) for win in wins]
#
# # if an integer has been specified
# if isinstance(attr_path[0], (int, long)):
# dialogWin = wins[attr_path[0]]
# else:
# # try to find the item
# dialogWin = findbestmatch.find_best_control_match(attr_path[0], wins)
#
# # already wrapped
# dlg = ActionDialog(dialogWin, app)
#
# # for each of the other attributes, resolve it on the current value
# attr_value = dlg
# for attr in attr_path[1:]:
# try:
# attr_value = getattr(attr_value, attr)
# except UnicodeEncodeError:
# attr_value = attr_value[attr]
#
# return dlg, attr_value
#
#
##=========================================================================
#class _DynamicAttributes(object):
# "Class that builds attributes until they are ready to be resolved"
#
# def __init__(self, app):
# "Initialize the attributes"
# self.app = app
# self.attr_path = []
#
# def __getattr__(self, attr):
# "Attribute access - defer to item access"
# return self[attr]
#
# def __getitem__(self, attr):
# "Item access[] for getting dialogs and controls from an application"
#
# # do something with this one
# # and return a copy of ourselves with some
# # data related to that attribute
#
# self.attr_path.append(attr)
#
# # if we have a length of 2 then we have either
# # dialog.attribute
# # or
# # dialog.control
# # so go ahead and resolve
# if len(self.attr_path) == 2:
# dlg, final = _wait_for_function_success(
# _WalkDialogControlAttribs, self.app, self.attr_path)
#
# # seeing as we may already have a reference to the dialog
# # we need to strip off the control so that our dialog
# # reference is not messed up
# self.attr_path = self.attr_path[:-1]
#
# return final
#
# # we didn't hit the limit so continue collecting the
# # next attribute in the chain
# return self
#
#
##=========================================================================
#def _wait_for_function_success(func, *args, **kwargs):
# """Retry the dialog up to timeout trying every time_interval seconds
#
# timeout defaults to 1 second
# time_interval defaults to .09 of a second """
# if kwargs.has_key('time_interval'):
# time_interval = kwargs['time_interval']
# del kwargs['time_interval']
# else:
# time_interval = window_retry_interval
#
# if kwargs.has_key('timeout'):
# timeout = kwargs['timeout']
# del kwargs['timeout']
# else:
# timeout = window_find_timeout
#
#
# # keep going until we either hit the return (success)
# # or an exception is raised (timeout)
# while 1:
# try:
# return func(*args, **kwargs)
# except:
# if timeout > 0:
# time.sleep(time_interval)
# timeout -= time_interval
# else:
# raise
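#
## Example (hypothetical window and control names): retry resolving a
## dialog attribute for up to 5 seconds, checking every 0.1 seconds,
## instead of the module defaults:
##
## dlg, ctrl = _wait_for_function_success(
##     _WalkDialogControlAttribs, app, ['UntitledNotepad', 'Edit'],
##     timeout = 5, time_interval = 0.1)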
#
#
#
|
"""
# ggame
The simple cross-platform sprite and game platform for Brython Server (Pygame, Tkinter to follow?).
Ggame stands for a couple of things: "good game" (of course!) and also "git game" or "github game"
because it is designed to operate with [Brython Server](http://runpython.com) in concert with
Github as a backend file store.
Ggame is **not** intended to be a full-featured gaming API, with every bell and whistle. Ggame is
designed primarily as a tool for teaching computer programming, recognizing that the ability
to create engaging and interactive games is a powerful motivator for many programming students.
Accordingly, any functional or performance enhancements that *can* be reasonably implemented
by the user are left as an exercise.
## Functionality Goals
The ggame library is intended to be trivially easy to use. For example:
    from ggame import App, ImageAsset, Sprite

    # Create a displayed object at 100,100 using an image asset
    Sprite(ImageAsset("ggame/bunny.png"), (100,100))
    # Create the app, with a 500x500 pixel stage
    app = App(500,500)
    # Run the app
    app.run()
## Overview
There are three major components to the `ggame` system: Assets, Sprites and the App.
### Assets
Asset objects (i.e. `ggame.ImageAsset`, etc.) typically represent separate files that
are provided by the "art department". These might be background images, user interface
images, or images that represent objects in the game. In addition, `ggame.SoundAsset`
is used to represent sound files (`.wav` or `.mp3` format) that can be played in the
game.
Ggame also extends the asset concept to include graphics that are generated dynamically
at run-time, such as geometrical objects, e.g. rectangles, lines, etc.
### Sprites
All of the visual aspects of the game are represented by instances of `ggame.Sprite` or
subclasses of it.
### App
Every ggame application must create a single instance of the `ggame.App` class (or
a subclass of it). Creating an instance of the `ggame.App` class will initiate
creation of a pop-up window in your browser. Executing the app's `run` method will
begin the process of refreshing the visual assets on the screen.
### Events
No game is complete without a player, and players produce events. Your code
handles user input by registering to receive keyboard and mouse events via the
`ggame.App.listenKeyEvent` and `ggame.App.listenMouseEvent` methods.
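For example, a minimal sketch of hooking the space bar (the argument order and
the "keydown"/"space" strings are assumptions; check the `ggame.App` source for
the exact signature and event object):

    from ggame import App

    def spaceKey(event):
        # Called on each matching key event
        print("space pressed")

    app = App(500,500)
    app.listenKeyEvent("keydown", "space", spaceKey)
    app.run()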
## Execution Environment
Ggame is designed to be executed in a web browser using [Brython](http://brython.info/),
[Pixi.js](http://www.pixijs.com/) and [Buzz](http://buzz.jaysalvat.com/). The easiest
way to do this is by executing from [runpython](http://runpython.com), with source
code residing on [github](http://github.com).
When using [runpython](http://runpython.com), you will have to configure your browser
to allow popup windows.
To use Ggame in your own application, you will minimally need to create a folder called
`ggame` in your project. Within `ggame`, copy the `ggame.py`, `sysdeps.py` and
`__init__.py` files from the [ggame project](https://github.com/BrythonServer/ggame).
### Include Ggame as a Git Subtree
From the same directory as your own python sources (note: you must have an existing git
repository with committed files in order for the following to work properly),
execute the following terminal commands:
    git remote add -f ggame https://github.com/BrythonServer/ggame.git
    git merge -s ours --no-commit ggame/master
    mkdir ggame
    git read-tree --prefix=ggame/ -u ggame/master
    git commit -m "Merge ggame project as our subdirectory"

If you want to pull in updates from ggame in the future:

    git pull -s subtree ggame master
You can see an example of how a ggame subtree is used by examining the
[Brython Server Spacewar](https://github.com/BrythonServer/Spacewar) repo on Github.
## Geometry
When referring to screen coordinates, note that the x-axis of the computer screen
is *horizontal* with the zero position on the left hand side of the screen. The
y-axis is *vertical* with the zero position at the **top** of the screen.
Increasing positive y-coordinates correspond to the downward direction on the
computer screen. Note that this is **different** from the way you may have learned
about x and y coordinates in math class!
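If your game logic uses the conventional upward-pointing y-axis from math
class, a one-line helper can translate points into screen coordinates (a
sketch; `screen_height` would be the stage height you passed to `ggame.App`):

    def to_screen(x, y, screen_height):
        # Flip the y-axis: math coordinates grow upward,
        # screen coordinates grow downward.
        return (x, screen_height - y)

    to_screen(100, 100, 500)   # -> (100, 400)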
""" |
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#"""
#You must run this test module using nose (chant nosetests from the command line)
#** There are some issues with nose, offset by the fact that it does multi-thread and setup_module better than unittest
#* This is NOT a TestCase ... it could be except that unittest screws up setup_module
#* nosetests may hang in some ERROR conditions. SIGHUP, SIGINT and SIGTSTP are not noticed. SIGKILL (-9) works
#* You should NOT pass command line arguments to nosetests. You can pass them, but it causes trouble:
#* Nosetests passes them into the test environment which breaks socorro's configuration behavior
#* You can set NOSE_WHATEVER envariables prior to running if you need to. See nosetests --help
#* some useful envariables:
#* NOSE_VERBOSE=x where x in [0, # Prints only 'OK' at end of test run
#* 1, # default: Prints one '.' per test like unittest
#* x >= 2, # Prints first comment line if exists, else the function name per test
#* ]
#* NOSE_WHERE=directory_path[,directory_path[,...]] : run only tests in these directories. Note commas
#* NOSE_ATTR=attrspec[,attrspec ...] : run only tests for which at least one attrspec evaluates true.
#* Accepts '!attr' and 'attr=False'. Does NOT accept natural python syntax ('attr != True', 'not attr')
#* NOSE_NOCAPTURE=TrueValue : nosetests normally captures stdout and only displays it if the test fails or errors.
#* print debugging works with this envariable, or you can instead print to stderr or use a logger
#*
#* With NOSE_VERBOSE > 1, you may see "functionName(self): (slow=N)" for some tests. N is the max seconds waiting
#"""
#import copy
#import datetime as dt
#import errno
#import logging
#import logging.handlers
#import os
#import re
#import shutil
#import signal
#import threading
#import time
#import traceback
#import psycopg2
#from nose.tools import *
#import socorro.database.postgresql as soc_pg
#import socorro.database.database as sdatabase
#import socorro.lib.ConfigurationManager as configurationManager
#import socorro.monitor.monitor as monitor
#import socorro.unittest.testlib.createJsonDumpStore as createJDS
#import socorro.unittest.testlib.dbtestutil as dbtestutil
#from socorro.unittest.testlib.testDB import TestDB
#from socorro.unittest.testlib.util import runInOtherProcess
#import socorro.unittest.testlib.util as tutil
#from socorro.lib.datetimeutil import utc_now
#import monitorTestconfig as testConfig
#import socorro.database.schema as schema
#class Me: # not quite "self"
#"""
#I need stuff to be initialized once per module. Rather than having a bazillion globals, let's just have 'me'
#"""
#pass
#me = None
#loglineS = '^[1-9][0-9]{3}-[0-9]{2}-[0-9]{2}.*'
#loglineRE = re.compile(loglineS)
#def setup_module():
#global me
#if me:
#return
## else initialize
## print "MODULE setup"
#me = Me()
#me.markingTemplate = "MARK %s: %s"
#me.startMark = 'start'
#me.endMark = 'end'
#me.testDB = TestDB()
#me.config = configurationManager.newConfiguration(configurationModule = testConfig, applicationName='Testing Monitor')
#tutil.nosePrintModule(__file__)
#myDir = os.path.split(__file__)[0]
#if not myDir: myDir = '.'
#replDict = {'testDir':'%s'%myDir}
#for i in me.config:
#try:
#me.config[i] = me.config.get(i)%(replDict)
#except:
#pass
#knownTests = [x for x in dir(TestMonitor) if x.startswith('test')]
#me.logWasExtracted = {}
#for t in knownTests:
#me.logWasExtracted[t] = False
#me.logger = monitor.logger
#me.logger.setLevel(logging.DEBUG)
#me.logFilePathname = me.config.logFilePathname
#logfileDir = os.path.split(me.config.logFilePathname)[0]
#try:
#os.makedirs(logfileDir)
#except OSError,x:
#if errno.EEXIST != x.errno: raise
#f = open(me.config.logFilePathname,'w')
#f.close()
#fileLog = logging.FileHandler(me.logFilePathname, 'a')
#fileLog.setLevel(logging.DEBUG)
#fileLogFormatter = logging.Formatter(me.config.logFileLineFormatString)
#fileLog.setFormatter(fileLogFormatter)
#me.logger.addHandler(fileLog)
#me.database = sdatabase.Database(me.config)
##me.dsn = "host=%s dbname=%s user=%s password=%s" % (me.config.databaseHost,me.config.databaseName,
##me.config.databaseUserName,me.config.databasePassword)
#def teardown_module():
#global me
#logging.shutdown()
#try:
#os.unlink(me.logFilePathname)
#except OSError,x:
#if errno.ENOENT != x.errno:
#raise
#class TestMonitor:
#markingLog = False
#def setUp(self):
#global me
#self.connection = me.database.connection()
##self.connection = psycopg2.connect(me.dsn)
## just in case there was a crash on prior run
#me.testDB.removeDB(me.config,me.logger)
#me.testDB.createDB(me.config,me.logger)
#def tearDown(self):
#global me
##import socorro.database.postgresql as db_pg #DEBUG
##print "\ntearDown",db_pg.connectionStatus(self.connection)
#me.testDB.removeDB(me.config,me.logger)
##try:
##shutil.rmtree(me.config.storageRoot)
##except OSError,x:
##pass
##try:
##shutil.rmtree(me.config.deferredStorageRoot)
##except OSError,x:
##pass
##try:
##if me.config.saveSuccessfulMinidumpsTo:
##shutil.rmtree(me.config.saveSuccessfulMinidumpsTo)
##except OSError,x:
##pass
##try:
##if me.config.saveFailedMinidumpsTo:
##shutil.rmtree(me.config.saveFailedMinidumpsTo)
##except OSError,x:
##pass
#self.connection.close()
#def markLog(self):
#global me
#testName = traceback.extract_stack()[-2][2]
#if TestMonitor.markingLog:
#TestMonitor.markingLog = False
#me.logger.info(me.markingTemplate%(testName,me.endMark))
## print (' ==== <<%s>> '+me.markingTemplate)%(os.getpid(),testName,me.endMark) #DEBUG
#else:
#TestMonitor.markingLog = True
#me.logger.info(me.markingTemplate%(testName,me.startMark))
## print (' ==== <<%s>> '+me.markingTemplate)%(os.getpid(),testName,me.startMark) #DEBUG
#def extractLogSegment(self):
#global me
#testName = traceback.extract_stack()[-2][2]
## print ' ==== <<%s>> EXTRACTING: %s (%s)'%(os.getpid(),testName,me.logWasExtracted[testName]) #DEBUG
#if me.logWasExtracted[testName]:
#return []
#try:
#file = open(me.config.logFilePathname)
#except IOError,x:
#if errno.ENOENT != x.errno:
#raise
#else:
#return []
#me.logWasExtracted[testName] = True
#startTag = me.markingTemplate%(testName,me.startMark)
#stopTag = me.markingTemplate%(testName,me.endMark)
#lines = file.readlines()
#segment = []
#i = 0
#while i < len(lines):
#if not startTag in lines[i]:
#i += 1
#continue
#else:
#i += 1
#try:
#while not stopTag in lines[i]:
#segment.append(lines[i].strip())
#i += 1
#except IndexError:
#pass
#break
#return segment
#def testConstructor(self):
#"""
#testConstructor(self):
#Constructor must fail if any of a lot of configuration details are missing
#Constructor must succeed if all config is present
#Constructor should never log anything
#"""
## print 'TEST: testConstructor'
#global me
#requiredConfigs = [
#"databaseHost",
#"databaseName",
#"databaseUserName",
#"databasePassword",
##"storageRoot",
##"deferredStorageRoot",
##"jsonFileSuffix",
##"dumpFileSuffix",
#"processorCheckInTime",
#"standardLoopDelay",
#"cleanupJobsLoopDelay",
#"priorityLoopDelay",
##"saveSuccessfulMinidumpsTo",
##"saveFailedMinidumpsTo",
#]
#cc = copy.copy(me.config)
#self.markLog()
#for rc in requiredConfigs:
#del(cc[rc])
#try:
#m = monitor.Monitor(cc)
#assert False, "expected to raise some kind of exception for missing %s" % (rc)
#except Exception,x:
#pass
#cc[rc] = me.config[rc]
#monitor.Monitor(me.config) # expect this to work. If it raises an error, we'll see it
#self.markLog()
#seg = self.extractLogSegment()
#cleanSeg = []
#for line in seg:
#if 'Constructor has set the following values' in line:
#continue
#if line.startswith('self.'):
#continue
#if 'creating crashStorePool' in line:
#continue
#cleanSeg.append(line)
#assert [] == cleanSeg, 'expected no logging for constructor call (success or failure) but %s'%(str(cleanSeg))
#def runStartChild(self):
#global me
#try:
#m = monitor.Monitor(me.config)
#m.start()
#me.logger.fail("This line forces a wrong count in later assertions: We expected a SIGTERM before getting here.")
## following sequence of except: handles both 2.4.x and 2.5.x hierarchy
#except SystemExit,x:
#me.logger.info("CHILD SystemExit in %s: %s [%s]"%(threading.currentThread().getName(),type(x),x))
#os._exit(0)
#except KeyboardInterrupt,x:
#me.logger.info("CHILD KeyboardInterrupt in %s: %s [%s]"%(threading.currentThread().getName(),type(x),x))
#os._exit(0)
#except Exception,x:
#me.logger.info("CHILD Exception in %s: %s [%s]"%(threading.currentThread().getName(),type(x),x))
#os._exit(0)
#def testStart(self):
#"""
#testStart(self): (slow=2)
#This test may run for a second or two
#start does:
#a lot of logging ... and there really isn't much else to test, so we are testing that. Ugh.
#For this one, we won't pay attention to what stops the threads
#"""
#global me
#self.markLog()
#runInOtherProcess(self.runStartChild,logger=me.logger)
#self.markLog()
#seg = self.extractLogSegment()
#prior = ''
#dateWalk = 0
#connectionClosed = 0
#priorityConnect = 0
#priorityQuit = 0
#priorityDone = 0
#cleanupStart = 0
#cleanupQuit = 0
#cleanupDone = 0
#for i in seg:
#data = i.split(None,4)
#if 4 < len(data):
#date,tyme,level,dash,msg = i.split(None,4)
#else:
#msg = i
#if msg.startswith('MainThread'):
#if 'connection' in msg and 'closed' in msg: connectionClosed += 1
#if 'destructiveDateWalk' in msg: dateWalk += 1
#elif msg.startswith('priorityLoopingThread'):
#if 'connecting to database' in msg: priorityConnect += 1
#if 'detects quit' in msg: priorityQuit += 1
#if 'priorityLoop done' in msg: priorityDone += 1
#elif msg.startswith('jobCleanupThread'):
#if 'jobCleanupLoop starting' in msg: cleanupStart += 1
#if 'got quit' in msg: cleanupQuit += 1
#if 'jobCleanupLoop done' in msg: cleanupDone += 1
#assert 2 == dateWalk, 'expect logging for start and end of destructiveDateWalk, got %d'%(dateWalk)
#assert 2 == connectionClosed, 'expect two connection close messages, got %d' %(connectionClosed)
#assert 1 == priorityConnect, 'priorityLoop had better connect to database exactly once, got %d' %(priorityConnect)
#assert 1 == priorityQuit, 'priorityLoop should detect quit exactly once, got %d' %(priorityQuit)
#assert 1 == priorityDone, 'priorityLoop should report self done exactly once, got %d' %(priorityDone)
#assert 1 == cleanupStart, 'jobCleanup should report start exactly once, got %d' %(cleanupStart)
#assert 1 == cleanupQuit, 'jobCleanup should report quit exactly once, got %d' %(cleanupQuit)
#assert 1 == cleanupDone, 'jobCleanup should report done exactly once, got %d' %(cleanupDone)
#def testRespondToSIGHUP(self):
#"""
#testRespondToSIGHUP(self): (slow=1)
#This test may run for a second or two
#We should notice a SIGHUP and die nicely. This is exactly like testStart except that we look
#for different logging events (ugh)
#"""
#global me
#self.markLog()
#runInOtherProcess(self.runStartChild,logger=me.logger,signal=signal.SIGHUP)
#self.markLog()
#seg = self.extractLogSegment()
#kbd = 0
#sighup = 0
#sigterm = 0
#for line in seg:
#print line
#if loglineRE.match(line):
#date,tyme,level,dash,msg = line.split(None,4)
#if msg.startswith('MainThread'):
#if 'KeyboardInterrupt' in msg: kbd += 1
#if 'SIGHUP detected' in msg: sighup += 1
#if 'SIGTERM detected' in msg: sigterm += 1
#assert 1 == kbd, 'Better see exactly one keyboard interrupt, got %d' % (kbd)
#assert 1 == sighup, 'Better see exactly one sighup event, got %d' % (sighup)
#assert 0 == sigterm, 'Better not see sigterm event, got %d' % (sigterm)
#def testRespondToSIGTERM(self):
#"""
#testRespondToSIGTERM(self): (slow=1)
#This test may run for a second or two
#We should notice a SIGTERM and die nicely. This is exactly like testStart except that we look
#for different logging events (ugh)
#"""
#global me
#self.markLog()
#runInOtherProcess(self.runStartChild,signal=signal.SIGTERM)
#self.markLog()
#seg = self.extractLogSegment()
#kbd = 0
#sighup = 0
#sigterm = 0
#for line in seg:
#if loglineRE.match(line):
#date,tyme,level,dash,msg = line.split(None,4)
#if msg.startswith('MainThread'):
#if 'KeyboardInterrupt' in msg: kbd += 1
#if 'SIGTERM detected' in msg: sigterm += 1
#if 'SIGHUP detected' in msg: sighup += 1
#assert 1 == kbd, 'Better see exactly one keyboard interrupt, got %d' % (kbd)
#assert 1 == sigterm, 'Better see exactly one sigterm event, got %d' % (sigterm)
#assert 0 == sighup, 'Better not see sighup event, got %d' % (sighup)
#def testQuitCheck(self):
#"""
#testQuitCheck(self):
#This test makes sure that the main loop notices when it has been told to quit.
#"""
#global me
#mon = monitor.Monitor(me.config)
#mon.quit = True
#assert_raises(KeyboardInterrupt,mon.quitCheck)
#def quitter(self):
#time.sleep(self.timeTilQuit)
#self.mon.quit = True
#def testResponsiveSleep(self):
#"""
#testResponsiveSleep(self): (slow=4)
#This test may run for some few seconds. Shouldn't be more than 6 tops (and if so, it will have failed).
#Tests that the responsiveSleep method actually responds by raising KeyboardInterrupt.
#"""
#global me
#mon = monitor.Monitor(me.config)
#self.timeTilQuit = 2
#self.mon = mon
#quitter = threading.Thread(name='Quitter', target=self.quitter)
#quitter.start()
#assert_raises(KeyboardInterrupt,mon.responsiveSleep,5)
#quitter.join()
#def testGetDatabaseConnectionPair(self):
#"""
#testGetDatabaseConnectionPair(self):
#test that the wrapper for psycopghelper.DatabaseConnectionPool works as expected
#"""
#global me
#mon = monitor.Monitor(me.config)
#tcon,tcur = mon.getDatabaseConnectionPair()
#mcon,mcur = mon.databaseConnectionPool.connectionCursorPair()
#try:
#assert tcon == mcon
#assert tcur != mcur
#finally:
#mon.databaseConnectionPool.cleanup()
##def testGetStorageFor(self):
##"""
##testGetStorageFor(self):
##Test that the wrapper for JsonDumpStorage doesn't twist things incorrectly
##"""
##global me
##self.markLog()
##createJDS.createTestSet(createJDS.jsonFileData,jsonKwargs={'logger':me.logger},rootDir=me.config.storageRoot)
##createJDS.createTestSet(createJDS.jsonMoreData,jsonKwargs={'logger':me.logger},rootDir=me.config.deferredStorageRoot)
##mon = monitor.Monitor(me.config)
##assert_raises(monitor.UuidNotFoundException,mon.getStorageFor,'nothing')
##expected = me.config.storageRoot.rstrip(os.sep)
##got = mon.getStorageFor('0bba929f-8721-460c-dead-a43c20071025').root
##assert expected == got, 'Expected [%s] got [%s]'%(expected,got)
##expected = me.config.deferredStorageRoot.rstrip(os.sep)
##got = mon.getStorageFor('29adfb61-f75b-11dc-b6be-001320081225').root
##assert expected == got, 'Expected [%s] got [%s]'%(expected,got)
##self.markLog()
##seg = self.extractLogSegment()
##cleanSeg = []
##for lline in seg:
##line = lline.strip()
##if 'Constructor has set the following values' in line:
##continue
##if 'DEBUG - MainThread - getJson' in line:
##continue
##if line.startswith('self.'):
##continue
##cleanSeg.append(line)
##assert [] == cleanSeg, 'unexpected logging for this test: %s'%(str(cleanSeg))
##def testRemoveBadUuidFromJsonDumpStorage(self):
##"""
##testRemoveBadUuidFromJsonDumpStorage(self):
##This just wraps JsonDumpStorage. Assure we aren't futzing up the wrap (fail with non-exist uuid)
##"""
##global me
##createJDS.createTestSet(createJDS.jsonFileData,jsonKwargs={'logger':me.logger},rootDir=me.config.storageRoot)
##mon = monitor.Monitor(me.config)
##badUuid = '0bad0bad-0bad-6666-9999-0bad20001025'
##assert_raises(monitor.UuidNotFoundException,mon.removeUuidFromJsonDumpStorage,badUuid)
##def testRemoveGoodUuidFromJsonDumpStorage(self):
##"""
##testRemoveGoodUuidFromJsonDumpStorage(self):
##This really just wraps JsonDumpStorage call. Assure we aren't futzing up the wrap (succeed with existing uuids)
##"""
##global me
##createJDS.createTestSet(createJDS.jsonFileData,jsonKwargs={'logger':me.logger},rootDir=me.config.storageRoot)
##createJDS.createTestSet(createJDS.jsonMoreData,jsonKwargs={'logger':me.logger},rootDir=me.config.deferredStorageRoot)
##mon = monitor.Monitor(me.config)
##goodUuid = '0b781b88-ecbe-4cc4-dead-6bbb20081225';
### this should work the first time...
##mon.removeUuidFromJsonDumpStorage(goodUuid)
### ... and then fail the second time
##assert_raises(monitor.UuidNotFoundException,mon.removeUuidFromJsonDumpStorage, goodUuid)
#def testCompareSecondOfSequence(self):
#"""
#testCompareSecondOfSequence(self):
#Not much to test, but do it
#"""
#x = (1,10)
#y = (10,1)
#assert cmp(x,y) < 0 # check assumptions about cmp...
#assert monitor.Monitor.compareSecondOfSequence(x,y) > 0
#assert cmp(y,x) > 0
#assert monitor.Monitor.compareSecondOfSequence(y,x) < 0
#def testJobSchedulerIterNoProcs(self):
#"""
#testJobSchedulerIterNoProcs(self):
#Assure that attempts at balanced scheduling with no processor raises monitor.NoProcessorsRegisteredException
#"""
#global me
#m = monitor.Monitor(me.config)
#dbCon,dbCur = m.getDatabaseConnectionPair()
#try:
#iter = m.jobSchedulerIter(dbCur)
#assert_raises(SystemExit,iter.next)
#finally:
#m.databaseConnectionPool.cleanup()
## def testJobScheduleIter_AllOldProcessors(self):
## """
## testJobScheduleIter_AllOldProcessors(self):
## If we have only old processors, we should fail (but as of 2009-january, don't: Test is commented out)
## """
## global me
## m = monitor.Monitor(me.config)
## dbCon,dbCur = m.getDatabaseConnectionPair()
## stamp = utc_now() - dt.timedelta(minutes=10)
## dbtestutil.fillProcessorTable(dbCur, 5, stamp=stamp)
## iter = m.jobSchedulerIter(dbCur)
## assert_raises(WhatKind? iter.next)
#def testJobSchedulerIterGood(self):
#"""
#testJobSchedulerIterGood(self):
#Plain vanilla test of the balanced job scheduler.
#"""
#global me
#numProcessors = 15
#dbtestutil.fillProcessorTable(self.connection.cursor(),numProcessors)
#m = monitor.Monitor(me.config)
#dbCon,dbCur = m.getDatabaseConnectionPair()
#try:
#iter = m.jobSchedulerIter(dbCur)
#dbCon.commit()
#num = 0
#hits = dict(((1+x,0) for x in range (numProcessors)))
#for id in iter:
#num += 1
#hits[int(id)] += 1
#if num >= numProcessors: break
#for i in range(numProcessors):
#assert hits[i+1] == 1, 'At index %d, got count %d'%(i+1, hits[i+1])
#for id in iter:
#num += 1
#hits[int(id)] += 1
#if num >= 3*numProcessors: break
#finally:
#m.databaseConnectionPool.cleanup()
#for i in range(numProcessors):
#assert hits[i+1] == 3, 'At index %d, got count %d'%(i+1, hits[i+1])
#def getCurrentProcessorList(self):
#"""Useful for figuring out what is there before we call some method or other."""
#global me
#sql = "select p.id, count(j.*) from processors p left join (select owner from jobs where success is null) as j on p.id = j.owner group by p.id;"
#cur = self.connection.cursor()
#cur.execute(sql);
#self.connection.commit()
#return [(aRow[0], aRow[1]) for aRow in cur.fetchall()] #processorId, numberOfAssignedJobs
#def testJobScheduleIter_StartUnbalanced(self):
#"""
#testJobScheduleIter_StartUnbalanced(self):
#Assure that an unbalanced start eventually produces balanced result
#"""
#numProcessors = 5
#dbtestutil.fillProcessorTable(self.connection.cursor(),numProcessors)
#self.connection.commit()
#m = monitor.Monitor(me.config)
#dbCon,dbCur = m.getDatabaseConnectionPair()
#try:
#dbtestutil.addSomeJobs(dbCur,dict([(1+x,1+x) for x in range(numProcessors)]),logger=me.logger)
#iter = m.jobSchedulerIter(dbCur)
#num = 0
#hits = dict(((1+x,0) for x in range (numProcessors)))
#for id in iter:
#num += 1
#hits[int(id)] += 1
#me.logger.debug('HIT on %d: %d'%(id,hits[id]))
#if num >= 3*numProcessors: break
#for i in range(numProcessors):
#assert hits[i+1] == 5 - i, 'Expected num hits to be count down sequence from 5 to 1, but at idx %d, got %d'%(i+1,hits[i+1])
#me.logger.debug('ONE: At index %d, got count %d'%(i+1, hits[i+1]))
#finally:
#m.databaseConnectionPool.cleanup()
## def testJobScheduleIter_SomeOldProcessors(self):
## """
## testJobScheduleIter_SomeOldProcessors(self):
## If we have some old processors, be sure we don't see them in the iter
## As of 2009-January, that is not the case, so we have commented this test.
## """
## global me
## m = monitor.Monitor(me.config)
## dbCon,dbCur = m.etDatabaseConnectionPair() error: try:...(dbCon)...finally m.databaseConnectionPool.cleanup()
## now = utc_now() error: Use dbtestutil.datetimeNow(aCursor)
## then = now - dt.timedelta(minutes=10)
## dbtestutil.fillProcessorTable(dbCur, None, processorMap = {1:then,2:then,3:now,4:then,5:then })
## iter = m.jobScheduleIter(dbCur)
## hits = dict(((x,0) for x in range (1,6)))
## num = 0;
## for id in iter:
## num += 1
## hits[int(id)] += 1
## if num > 3: break
## for i in (1,2,4,5):
## assert hits[i] == 0, 'Expected that no old processors would be used in the iterator'
## assert hits[3] == 4, 'Expected that all the iterations would choose the one live processor'
#def testUnbalancedJobSchedulerIterNoProcs(self):
#"""
#testUnbalancedJobSchedulerIterNoProcs(self):
#With no processors, we will get a system exit
#"""
#global me
#m = monitor.Monitor(me.config)
#cur = self.connection.cursor()
#try:
#iter = m.unbalancedJobSchedulerIter(cur)
#assert_raises(SystemExit, iter.next)
#finally:
#self.connection.commit()
#def testUnbalancedJobSchedulerIter_AllOldProcs(self):
#"""
#testUnbalancedJobSchedulerIter_AllOldProcs(self):
#With only processors that are too old, we will get a system exit
#"""
#global me
#m = monitor.Monitor(me.config)
#cur = self.connection.cursor()
#try:
#stamp = dbtestutil.datetimeNow(cur) - dt.timedelta(minutes=10)
#dbtestutil.fillProcessorTable(cur, 5, stamp=stamp)
#iter = m.unbalancedJobSchedulerIter(cur)
#assert_raises(SystemExit, iter.next)
#finally:
#self.connection.commit()
#def testUnbalancedJobSchedulerIter_SomeOldProcs(self):
#"""
#testUnbalancedJobSchedulerIter_SomeOldProcs(self):
#With some processors that are too old, we will get only the young ones in some order
#"""
#global me
#m = monitor.Monitor(me.config)
#dbCon,dbCur = m.getDatabaseConnectionPair()
#try:
#now = dbtestutil.datetimeNow(dbCur)
#then = now - dt.timedelta(minutes=10)
#dbtestutil.fillProcessorTable(dbCur, None, processorMap = {1:then,2:then,3:now,4:then,5:then })
#iter = m.unbalancedJobSchedulerIter(dbCur)
#hits = dict(((x,0) for x in range (1,6)))
#num = 0;
#for id in iter:
#num += 1
#hits[int(id)] += 1
#if num > 3: break
#for i in (1,2,4,5):
#assert hits[i] == 0, 'Expected that no old processors would be used in the iterator'
#assert hits[3] == 4, 'Expected that all the iterations would choose the one live processor'
#finally:
#m.databaseConnectionPool.cleanup()
#def testUnbalancedJobSchedulerIter(self):
#"""
#testUnbalancedJobSchedulerIter(self):
#With an unbalanced load on the processors, each processor still gets the same number of hits
#"""
#global me
#numProcessors = 5
#loopCount = 3
#dbtestutil.fillProcessorTable(self.connection.cursor(),numProcessors)
#self.connection.commit()
#m = monitor.Monitor(me.config)
#dbCon,dbCur = m.getDatabaseConnectionPair()
#try:
#dbtestutil.addSomeJobs(dbCur,{1:12},logger=me.logger)
#iter = m.unbalancedJobSchedulerIter(dbCur)
#num = 0
#hits = dict(((1+x,0) for x in range (numProcessors)))
#for id in iter:
#num += 1
#hits[int(id)] += 1
#if num >= loopCount*numProcessors: break
#for i in range(numProcessors):
#assert hits[i+1] == loopCount, 'expected %d for processor %d, but got %d'%(loopCount,i+1,hits[i+1])
#finally:
#m.databaseConnectionPool.cleanup()
#def setJobSuccess(self, cursor, idTimesAndSuccessSeq):
#global me
#sql = "UPDATE jobs SET starteddatetime = %s, completeddatetime = %s, success = %s WHERE id = %s"
#for row in idTimesAndSuccessSeq:
#if row[2]: row[2] = True
#if not row[2]: row[2] = False
#cursor.executemany(sql,idTimesAndSuccessSeq)
#cursor.connection.commit()
#sql = 'SELECT id, uuid, success FROM jobs ORDER BY id'
#cursor.execute(sql)
#return cursor.fetchall()
#def jobsAllocated(self):
#global me
#m = monitor.Monitor(me.config)
#cur = self.connection.cursor()
#sql = "SELECT count(*) from jobs"
#cur.execute(sql)
#self.connection.commit()
#return cur.fetchone()[0]
##def testCleanUpCompletedAndFailedJobs_WithSaves(self):
##"""
##testCleanUpCompletedAndFailedJobs_WithSaves(self):
##The default config asks for successful and failed jobs to be saved
##"""
##global me
##cursor = self.connection.cursor()
##dbtestutil.fillProcessorTable(cursor,4)
##m = monitor.Monitor(me.config)
##createJDS.createTestSet(createJDS.jsonFileData,jsonKwargs={'logger':me.logger},rootDir=me.config.storageRoot)
##runInOtherProcess(m.standardJobAllocationLoop, stopCondition=(lambda : self.jobsAllocated() == 14),logger=me.logger)
##started = dbtestutil.datetimeNow(cursor)
##self.connection.commit()
##completed = started + dt.timedelta(microseconds=100)
##idTimesAndSuccessSeq = [
##[started,completed,True,1],
##[started,completed,True,3],
##[started,completed,True,5],
##[started,completed,True,11],
##[started,None,False,2],
##[started,None,False,4],
##[started,None,False,8],
##[started,None,False,12],
##]
##dbCon,dbCur = m.getDatabaseConnectionPair()
##try:
##jobdata = self.setJobSuccess(dbCur,idTimesAndSuccessSeq)
##m.cleanUpCompletedAndFailedJobs()
##finally:
##m.databaseConnectionPool.cleanup()
##successSave = set()
##failSave = set()
##expectSuccessSave = set()
##expectFailSave = set()
##remainBehind = set()
##for dir, dirs, files in os.walk(me.config.storageRoot):
##remainBehind.update(os.path.splitext(x)[0] for x in files)
##for d in idTimesAndSuccessSeq:
##if d[2]:
##expectSuccessSave.add(d[3])
##else:
##expectFailSave.add(d[3])
##for dir,dirs,files in os.walk(me.config.saveSuccessfulMinidumpsTo):
##successSave.update((os.path.splitext(x)[0] for x in files))
##for dir,dirs,files in os.walk(me.config.saveFailedMinidumpsTo):
##failSave.update((os.path.splitext(x)[0] for x in files))
##for x in jobdata:
##if None == x[2]:
##assert not x[1] in failSave and not x[1] in successSave, "if we didn't set success state for %s, then it wasn't copied"%(x[1])
##assert x[1] in remainBehind, "if we didn't set success state for %s, then it should remain behind"%(x[1])
##assert not x[0] in expectFailSave and not x[0] in expectSuccessSave, "database should match expectations for id=%s"%(x[0])
##elif True == x[2]:
##assert not x[1] in failSave and x[1] in successSave, "if we set success for %s, it is copied to %s"%(x[1],me.config.saveSuccessfulMinidumpsTo)
##assert not x[0] in expectFailSave and x[0] in expectSuccessSave, "database should match expectations for id=%s"%(x[0])
##assert not x[1] in remainBehind, "if we did set success state for %s, then it should not remain behind"%(x[1])
##elif False == x[2]:
##assert x[1] in failSave and not x[1] in successSave, "if we set failure for %s, it is copied to %s"%(x[1],me.config.saveFailedMinidumpsTo)
##assert x[0] in expectFailSave and not x[0] in expectSuccessSave, "database should match expectations for id=%s"%(x[0])
##assert not x[1] in remainBehind, "if we did set success state for %s, then it should not remain behind"%(x[1])
##def testCleanUpCompletedAndFailedJobs_WithoutSaves(self):
##"""
##testCleanUpCompletedAndFailedJobs_WithoutSaves(self):
##First, dynamically set config to not save successful or failed jobs. They are NOT removed from the file system
##"""
##global me
##cc = copy.copy(me.config)
##cursor = self.connection.cursor()
##dbtestutil.fillProcessorTable(cursor,4)
##for conf in ['saveSuccessfulMinidumpsTo','saveFailedMinidumpsTo']:
##cc[conf] = ''
##m = monitor.Monitor(cc)
##createJDS.createTestSet(createJDS.jsonFileData,jsonKwargs={'logger':me.logger},rootDir=me.config.storageRoot)
##runInOtherProcess(m.standardJobAllocationLoop, stopCondition=(lambda : self.jobsAllocated() == 14),logger=me.logger)
##started = dbtestutil.datetimeNow(cursor)
##self.connection.commit()
##completed = started + dt.timedelta(microseconds=100)
##idTimesAndSuccessSeq = [
##[started,completed,True,1],
##[started,completed,True,3],
##[started,completed,True,5],
##[started,completed,True,11],
##[started,None,False,2],
##[started,None,False,4],
##[started,None,False,8],
##[started,None,False,12],
##]
##dbCon,dbCur = m.getDatabaseConnectionPair()
##try:
##jobdata = self.setJobSuccess(dbCur,idTimesAndSuccessSeq)
##m.cleanUpCompletedAndFailedJobs()
##finally:
##m.databaseConnectionPool.cleanup()
##successSave = set()
##failSave = set()
##expectSuccessSave = set()
##expectFailSave = set()
##for d in idTimesAndSuccessSeq:
##if d[2]:
##expectSuccessSave.add(d[3])
##else:
##expectFailSave.add(d[3])
##for dir,dirs,files in os.walk(me.config.saveSuccessfulMinidumpsTo):
##successSave.update((os.path.splitext(x)[0] for x in files))
##for dir,dirs,files in os.walk(me.config.saveFailedMinidumpsTo):
##failSave.update((os.path.splitext(x)[0] for x in files))
##remainBehind = set()
##for dir, dirs, files in os.walk(me.config.storageRoot):
##remainBehind.update(os.path.splitext(x)[0] for x in files)
##assert len(successSave) == 0, "We expect not to save any successful jobs with this setting"
##assert len(failSave) == 0, "We expect not to save any failed jobs with this setting"
##for x in jobdata:
##if None == x[2]:
##assert not x[0] in expectFailSave and not x[0] in expectSuccessSave, "database should match expectations for id=%s"%(x[0])
##assert x[1] in remainBehind, "if we didn't set success state for %s, then it should remain behind"%(x[1])
##elif True == x[2]:
##assert not x[0] in expectFailSave and x[0] in expectSuccessSave, "database should match expectations for id=%s"%(x[0])
##elif False == x[2]:
##assert x[0] in expectFailSave and not x[0] in expectSuccessSave, "database should match expectations for id=%s"%(x[0])
#def testCleanUpDeadProcessors_AllDead(self):
#"""
#testCleanUpDeadProcessors_AllDead(self):
#As of 2009-01-xx, Monitor.cleanUpDeadProcessors(...) does nothing except write to a log file
#... and fail if there are no live processors
#"""
#global me
#m = monitor.Monitor(me.config)
#dbCon,dbCur = m.getDatabaseConnectionPair()
#try:
#now = dbtestutil.datetimeNow(dbCur)
#then = now - dt.timedelta(minutes=10)
#dbtestutil.fillProcessorTable(dbCur, None, processorMap = {1:then,2:then,3:then,4:then,5:then })
#assert_raises(SystemExit,m.cleanUpDeadProcessors, dbCur)
#finally:
#m.databaseConnectionPool.cleanup()
#def testQueueJob(self):
#"""
#testQueueJob(self):
#make sure jobs table starts empty
#make sure returned values reflect database values
#make sure assigned processors are correctly reflected
#make sure duplicate uuid is caught, reported, and work continues
#"""
#global me
#m = monitor.Monitor(me.config)
#sql = 'SELECT pathname,uuid,owner from jobs;'
#numProcessors = 4
#dbtestutil.fillProcessorTable(self.connection.cursor(),numProcessors)
#dbCon,dbCur = m.getDatabaseConnectionPair()
#try:
#procIdGenerator = m.jobSchedulerIter(dbCur)
#dbCur.execute(sql)
#beforeJobsData = dbCur.fetchall()
#assert 0 == len(beforeJobsData), 'There should be no queued jobs before we start our run'
#expectedHits = dict(((1+x,0) for x in range (numProcessors)))
#mapper = {}
#hits = dict(((1+x,0) for x in range (numProcessors)))
#for uuid,data in createJDS.jsonFileData.items():
#procId = m.queueJob(dbCur,uuid,procIdGenerator)
#expectedHits[procId] += 1;
#mapper[uuid] = procId
#dbCur.execute(sql)
#afterJobsData = dbCur.fetchall()
#for row in afterJobsData:
#hits[row[2]] += 1
##me.logger.debug("ASSERT %s == %s for index %s"%(mapper.get(row[1],'WHAT?'), row[2], row[1]))
#assert mapper[row[1]] == row[2], 'Expected %s from %s but got %s'%(mapper.get(row[1],"WOW"),row[1],row[2])
#for key in expectedHits.keys():
##me.logger.debug("ASSERTING %s == %s for index %s"%(expectedHits.get(key,'BAD KEY'),hits.get(key,'EVIL KEY'),key))
#assert expectedHits[key] == hits[key], "Expected count of %s for %s, but got %s"%(expectedHits[key],key,hits[key])
#self.markLog()
#dupUuid = createJDS.jsonFileData.keys()[0]
#try:
#procId = m.queueJob(dbCur,dupUuid,procIdGenerator)
#assert False, "Expected that IntegrityError would be raised queue-ing %s but it wasn't"%(dupUuid)
#except psycopg2.IntegrityError:
#pass
#except Exception,x:
#assert False, "Expected that only IntegrityError would be raised, but got %s: %s"%(type(x),x)
#self.markLog()
#finally:
#m.databaseConnectionPool.cleanup()
#def testQueuePriorityJob(self):
#"""
#testQueuePriorityJob(self):
#queuePriorityJob does:
#removes job uuid from priorityjobs table (if possible)
#add uuid to priority_jobs_NNN table for NNN the processor id
#add uuid, id, etc to jobs table with priority > 0
#"""
#global me
#m = monitor.Monitor(me.config)
#numProcessors = 4
#dbtestutil.fillProcessorTable(self.connection.cursor(),numProcessors)
#data = dbtestutil.makeJobDetails({1:2,2:2,3:3,4:3})
#dbCon,dbCur = m.getDatabaseConnectionPair()
#try:
#procIdGenerator = m.jobSchedulerIter(dbCur)
#insertSql = "INSERT into priorityjobs (uuid) VALUES (%s);"
#uuidToId = {}
#for tup in data:
#uuidToId[tup[1]] = tup[2]
#uuids = uuidToId.keys()
#for uuid in uuids:
#if uuidToId[uuid]%2:
#dbCur.execute(insertSql,[uuid])
#dbCon.commit()
#countSql = "SELECT count(*) from %s;"
#dbCur.execute(countSql%('priorityjobs'))
#priorityJobCount = dbCur.fetchone()[0]
#dbCur.execute(countSql%('jobs'))
#jobCount = dbCur.fetchone()[0]
#eachPriorityJobCount = {}
#for uuid in uuids:
#procId = m.queuePriorityJob(dbCur,uuid, procIdGenerator)
#dbCur.execute('SELECT count(*) from jobs where jobs.priority > 0')
#assert dbCur.fetchone()[0] == 1 + jobCount, 'Expect that each queuePriority will increase jobs table by one'
#jobCount += 1
#try:
#eachPriorityJobCount[procId] += 1
#except KeyError:
#eachPriorityJobCount[procId] = 1
#if uuidToId[uuid]%2:
#dbCur.execute(countSql%('priorityjobs'))
#curCount = dbCur.fetchone()[0]
#assert curCount == priorityJobCount -1, 'Expected to remove one job from priorityjobs for %s'%uuid
#priorityJobCount -= 1
#for id in eachPriorityJobCount.keys():
#dbCur.execute(countSql%('priority_jobs_%s'%id))
#count = dbCur.fetchone()[0]
#assert eachPriorityJobCount[id] == count, 'Expected that the count %s added to id %s matches %s found'%(eachPriorityJobCount[id],id,count)
#finally:
#m.databaseConnectionPool.cleanup()
#def testGetPriorityUuids(self):
#"""
#testGetPriorityUuids(self):
#Check that we find none if the priorityjobs table is empty
#Check that we find as many as we put into priorityjobs table
#"""
#global me
#m = monitor.Monitor(me.config)
#count = len(m.getPriorityUuids(self.connection.cursor()))
#assert 0 == count, 'Expect no priorityjobs unless they were added. Got %d'%(count)
#data = dbtestutil.makeJobDetails({1:2,2:2,3:3,4:3})
#insertSql = "INSERT into priorityjobs (uuid) VALUES (%s);"
#self.connection.cursor().executemany(insertSql,[ [x[1]] for x in data ])
#self.connection.commit()
#count = len(m.getPriorityUuids(self.connection.cursor()))
#self.connection.commit()
#assert len(data) == count,'expect same count in data as priorityJobs, got %d'%(count)
##def testLookForPriorityJobsAlreadyInQueue(self):
##"""
##testLookForPriorityJobsAlreadyInQueue(self):
##Check that we erase jobs from priorityjobs table if they are there
##Check that we increase by one the priority in jobs table
##Check that we add job (only) to appropriate priority_jobs_NNN table
##Check that attempting same uuid again raises IntegrityError
##"""
##global me
##numProcessors = 5
##dbtestutil.fillProcessorTable(self.connection.cursor(),numProcessors)
##m = monitor.Monitor(me.config)
##data = dbtestutil.makeJobDetails({1:2,2:2,3:3,4:3,5:2})
##dbCon,dbCur = m.getDatabaseConnectionPair()
##try:
##procIdGenerator = m.jobSchedulerIter(dbCur)
##insertSql = "INSERT into priorityjobs (uuid) VALUES (%s);"
##updateSql = "UPDATE jobs set priority = 1 where uuid = %s;"
##allUuids = [x[1] for x in data]
##priorityJobUuids = [];
##missingUuids = []
##uuidToProcId = {}
##for counter in range(len(allUuids)):
##uuid = allUuids[counter]
##if 0 == counter % 3: # add to jobs and priorityjobs table
##uuidToProcId[uuid] = m.queueJob(dbCur,data[counter][1],procIdGenerator)
##priorityJobUuids.append((uuid,))
##elif 1 == counter % 3: # add to jobs table only
##uuidToProcId[uuid] = m.queueJob(dbCur,data[counter][1],procIdGenerator)
##else: # 2== counter %3 # don't add anywhere
##missingUuids.append(uuid)
##dbCur.executemany(insertSql,priorityJobUuids)
##dbCon.commit()
##for uuid in priorityJobUuids:
##dbCur.execute(updateSql,(uuid,))
##self.markLog()
##m.lookForPriorityJobsAlreadyInQueue(dbCur,allUuids)
##self.markLog()
##seg = self.extractLogSegment()
##for line in seg:
##date,tyme,level,dash,thr,ddash,msg = line.split(None,6)
##assert thr == 'MainThread','Expected only MainThread log lines, got[%s]'%(line)
##uuid = msg.split()[2]
##assert not uuid in missingUuids, 'Found %s that should not be in missingUuids'%(uuid)
##assert uuid in uuidToProcId.keys(), 'Found %s that should be in uuidToProcId'%(uuid)
##countSql = "SELECT count(*) from %s;"
##dbCur.execute(countSql%('priorityjobs'))
##priCount = dbCur.fetchone()[0]
##assert 0 == priCount, 'Expect that all the priority jobs are removed, but found %s'%(priCount)
##countSql = "SELECT count(*) from priority_jobs_%s WHERE uuid = %%s;"
##for uuid,procid in uuidToProcId.items():
##dbCur.execute(countSql%(procid),(uuid,))
##priCount = dbCur.fetchone()[0]
##assert priCount == 1, 'Expect to find %s in priority_jobs_%s exactly once'%(uuid,procid)
##for badid in range(1,numProcessors+1):
##if badid == procid: continue
##dbCur.execute(countSql%(badid),(uuid,))
##badCount = dbCur.fetchone()[0]
##assert 0 == badCount, 'Expect to find %s ONLY in other priority_jobs_NNN, found it in priority_jobs_%s'%(uuid,badid)
##for uuid,procid in uuidToProcId.items():
##try:
##m.lookForPriorityJobsAlreadyInQueue(dbCur,(uuid,))
##assert False, 'Expected line above would raise IntegrityError or InternalError'
##except psycopg2.IntegrityError,x:
##dbCon.rollback()
##except:
##assert False, 'Expected only IntegrityError from the try block'
##finally:
##m.databaseConnectionPool.cleanup()
##def testUuidInJsonDumpStorage(self):
##"""
##testUuidInJsonDumpStorage(self):
##Test that the wrapper for JsonDumpStorage isn't all twisted up:
##assure we find something in normal and deferred storage, and miss something that isn't there
##do NOT test that the 'markAsSeen' actually works: That should be testJsonDumpStorage's job
##"""
##global me
##m = monitor.Monitor(me.config)
##createJDS.createTestSet(createJDS.jsonFileData,jsonKwargs={'logger':me.logger},rootDir=me.config.storageRoot)
##createJDS.createTestSet(createJDS.jsonMoreData,jsonKwargs={'logger':me.logger},rootDir=me.config.deferredStorageRoot)
##self.markLog()
##badUuid = '0bad0bad-0bad-6666-9999-0bad20001025'
##goodUuid = '0bba929f-8721-460c-dead-a43c20071025'
##defUuid = '29adfb61-f75b-11dc-b6be-001320081225'
##assert m.uuidInJsonDumpStorage(goodUuid), 'Dunno how that happened'
##assert m.uuidInJsonDumpStorage(defUuid), 'Dunno how that happened'
##assert not m.uuidInJsonDumpStorage(badUuid), 'Dunno how that happened'
##self.markLog()
##seg = self.extractLogSegment()
##cleanSeg = []
##for lline in seg:
##if 'DEBUG - MainThread - getJson ' in lline:
##continue
##cleanSeg.append(lline)
##assert [] == cleanSeg, "Shouldn't log for success or failure: %s"%cleanSeg
##def testLookForPriorityJobsInJsonDumpStorage(self):
##"""
##testLookForPriorityJobsInJsonDumpStorage(self):
##assure that we can find each uuid in standard and deferred storage
##assure that we do not find any bogus uuid
##assure that found uuids are added to jobs table with priority 1, and priority_jobs_NNN table for processor id NNN
##"""
##global me
##m = monitor.Monitor(me.config)
##createJDS.createTestSet(createJDS.jsonFileData,jsonKwargs={'logger':me.logger},rootDir=me.config.storageRoot)
##createJDS.createTestSet(createJDS.jsonMoreData,jsonKwargs={'logger':me.logger},rootDir=me.config.deferredStorageRoot)
##normUuids = createJDS.jsonFileData.keys()
##defUuids = createJDS.jsonMoreData.keys()
##allUuids = []
##allUuids.extend(normUuids)
##allUuids.extend(defUuids)
##badUuid = '0bad0bad-0bad-6666-9999-0bad20001025'
##dbCon,dbCur = m.getDatabaseConnectionPair()
##try:
##numProcessors = 5
##dbtestutil.fillProcessorTable(self.connection.cursor(),numProcessors)
##self.markLog()
##m.lookForPriorityJobsInJsonDumpStorage(dbCur,allUuids)
##assert [] == allUuids, 'Expect that all the uuids were found and removed from the looked for "set"'
##m.lookForPriorityJobsInJsonDumpStorage(dbCur,(badUuid,))
##self.markLog()
##seg = self.extractLogSegment()
##getIdAndPrioritySql = "SELECT owner,priority from jobs WHERE uuid = %s"
##getCountSql = "SELECT count(*) from %s"
##idCounts = dict( ( (x,0) for x in range(1,numProcessors+1) ) )
##allUuids.extend(normUuids)
##allUuids.extend(defUuids)
##for uuid in allUuids:
##dbCur.execute(getIdAndPrioritySql,(uuid,))
##procid,pri = dbCur.fetchone()
##assert 1 == pri, 'Expected priority of 1 for %s, but got %s'%(uuid,pri)
##idCounts[procid] += 1
##dbCur.execute(getIdAndPrioritySql,(badUuid,))
##assert not dbCur.fetchone(), "Expect to get None entries in jobs table for badUuid"
##for id,expectCount in idCounts.items():
##dbCur.execute(getCountSql%('priority_jobs_%s'%id))
##seenCount = dbCur.fetchone()[0]
##assert expectCount == seenCount, 'Expected %s, got %s as count in priority_jobs_%s'%(expectCount,seenCount,id)
##finally:
##m.databaseConnectionPool.cleanup()
##def testPriorityJobsNotFound(self):
##"""
##testPriorityJobsNotFound(self):
##for each uuid, log an error and remove the uuid from the provided table
##"""
##global me
##m = monitor.Monitor(me.config)
##dbCon,dbCur = m.getDatabaseConnectionPair()
##try:
##dropBogusSql = "DROP TABLE IF EXISTS bogus;"
##createBogusSql = "CREATE TABLE bogus (uuid varchar(55));"
##insertBogusSql = "INSERT INTO bogus (uuid) VALUES ('NOPE'), ('NEVERMIND');"
##countSql = "SELECT count(*) from %s"
##dbCur.execute(dropBogusSql)
##dbCon.commit()
##dbCur.execute(createBogusSql)
##dbCon.commit()
##dbCur.execute(insertBogusSql)
##dbCon.commit()
##dbCur.execute(countSql%('bogus'))
##bogusCount0 = dbCur.fetchone()[0]
##assert 2 == bogusCount0
##self.markLog()
##m.priorityJobsNotFound(dbCur,['NOPE','NEVERMIND'])
##dbCur.execute(countSql%('bogus'))
##bogusCount1 = dbCur.fetchone()[0]
##assert 2 == bogusCount1, 'Expect uuids deleted, if any, from priorityjobs by default'
##m.priorityJobsNotFound(dbCur,['NOPE','NEVERMIND'], 'bogus')
##dbCur.execute(countSql%('bogus'))
##bogusCount2 = dbCur.fetchone()[0]
##assert 0 == bogusCount2, 'Expect uuids deleted from bogus when it is specified'
##self.markLog()
##dbCur.execute(dropBogusSql)
##dbCon.commit()
##finally:
##m.databaseConnectionPool.cleanup()
##neverCount = 0
##nopeCount = 0
##seg = self.extractLogSegment()
##for line in seg:
##if " - MainThread - priority uuid" in line:
##if 'NOPE was never found' in line: nopeCount += 1
##if 'NEVERMIND was never found' in line: neverCount += 1
##assert 2 == neverCount
##assert 2 == nopeCount
|
"""
TestCmd.py: a testing framework for commands and scripts.
The TestCmd module provides a framework for portable automated testing
of executable commands and scripts (in any language, not just Python),
especially commands and scripts that require file system interaction.
In addition to running tests and evaluating conditions, the TestCmd
module manages and cleans up one or more temporary workspace
directories, and provides methods for creating files and directories in
those workspace directories from in-line data (here-documents), allowing
tests to be completely self-contained.
A TestCmd environment object is created via the usual invocation:
import TestCmd
test = TestCmd.TestCmd()
There are a bunch of keyword arguments available at instantiation:
test = TestCmd.TestCmd(description = 'string',
                       program = 'program_or_script_to_test',
                       interpreter = 'script_interpreter',
                       workdir = 'prefix',
                       subdir = 'subdir',
                       verbose = Boolean,
                       match = default_match_function,
                       diff = default_diff_function,
                       combine = Boolean)
There are a bunch of methods that let you do different things:
test.verbose_set(1)
test.description_set('string')
test.program_set('program_or_script_to_test')
test.interpreter_set('script_interpreter')
test.interpreter_set(['script_interpreter', 'arg'])
test.workdir_set('prefix')
test.workdir_set('')
test.workpath('file')
test.workpath('subdir', 'file')
test.subdir('subdir', ...)
test.rmdir('subdir', ...)
test.write('file', "contents\n")
test.write(['subdir', 'file'], "contents\n")
test.read('file')
test.read(['subdir', 'file'])
test.read('file', mode)
test.read(['subdir', 'file'], mode)
test.writable('dir', 1)
test.writable('dir', None)
test.preserve(condition, ...)
test.cleanup(condition)
test.command_args(program = 'program_or_script_to_run',
                  interpreter = 'script_interpreter',
                  arguments = 'arguments to pass to program')
test.run(program = 'program_or_script_to_run',
         interpreter = 'script_interpreter',
         arguments = 'arguments to pass to program',
         chdir = 'directory_to_chdir_to',
         stdin = 'input to feed to the program\n',
         universal_newlines = True)
p = test.start(program = 'program_or_script_to_run',
               interpreter = 'script_interpreter',
               arguments = 'arguments to pass to program',
               universal_newlines = None)
test.finish(self, p)
test.pass_test()
test.pass_test(condition)
test.pass_test(condition, function)
test.fail_test()
test.fail_test(condition)
test.fail_test(condition, function)
test.fail_test(condition, function, skip)
test.no_result()
test.no_result(condition)
test.no_result(condition, function)
test.no_result(condition, function, skip)
test.stdout()
test.stdout(run)
test.stderr()
test.stderr(run)
test.symlink(target, link)
test.banner(string)
test.banner(string, width)
test.diff(actual, expected)
test.match(actual, expected)
test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n")
test.match_exact(["actual 1\n", "actual 2\n"],
["expected 1\n", "expected 2\n"])
test.match_re("actual 1\nactual 2\n", regex_string)
test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes)
test.match_re_dotall("actual 1\nactual 2\n", regex_string)
test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes)
test.tempdir()
test.tempdir('temporary-directory')
test.sleep()
test.sleep(seconds)
test.where_is('foo')
test.where_is('foo', 'PATH1:PATH2')
test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
test.unlink('file')
test.unlink('subdir', 'file')
The TestCmd module provides pass_test(), fail_test(), and no_result()
unbound functions that report test results for use with the Aegis change
management system. These methods terminate the test immediately,
reporting PASSED, FAILED, or NO RESULT respectively, and exiting with
status 0 (success), 1, or 2, respectively. This allows for a distinction
between an actual failed test and a test that could not be properly
evaluated because of an external condition (such as a full file system
or incorrect permissions).
import TestCmd
TestCmd.pass_test()
TestCmd.pass_test(condition)
TestCmd.pass_test(condition, function)
TestCmd.fail_test()
TestCmd.fail_test(condition)
TestCmd.fail_test(condition, function)
TestCmd.fail_test(condition, function, skip)
TestCmd.no_result()
TestCmd.no_result(condition)
TestCmd.no_result(condition, function)
TestCmd.no_result(condition, function, skip)
The TestCmd module also provides unbound functions that handle matching
in the same way as the match_*() methods described above.
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_exact)
test = TestCmd.TestCmd(match = TestCmd.match_re)
test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)
The TestCmd module provides unbound functions that can be used for the
"diff" argument to TestCmd.TestCmd instantiation:
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_re,
                       diff = TestCmd.diff_re)
test = TestCmd.TestCmd(diff = TestCmd.simple_diff)
The "diff" argument can also be used with standard difflib functions:
import difflib
test = TestCmd.TestCmd(diff = difflib.context_diff)
test = TestCmd.TestCmd(diff = difflib.unified_diff)
Lastly, the where_is() method also exists in an unbound function
version.
import TestCmd
TestCmd.where_is('foo')
TestCmd.where_is('foo', 'PATH1:PATH2')
TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
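Putting a few of the pieces above together, a minimal self-contained test
might look like this (the program name and expected output are placeholders,
not part of TestCmd):

    import TestCmd
    test = TestCmd.TestCmd(program = 'my_tool', workdir = '')
    test.write('input.txt', "hello\n")
    test.run(arguments = 'input.txt')
    test.fail_test(test.stdout() != "hello\n")
    test.pass_test()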
""" |
#
# XML-RPC CLIENT LIBRARY
# $Id$
#
# an XML-RPC client interface for Python.
#
# the marshalling and response parser code can also be used to
# implement XML-RPC servers.
#
# Notes:
# this version is designed to work with Python 2.1 or newer.
#
# History:
# 1999-01-14 fl Created
# 1999-01-15 fl Changed dateTime to use localtime
# 1999-01-16 fl Added Binary/base64 element, default to RPC2 service
# 1999-01-19 fl Fixed array data element (from Skip Montanaro)
# 1999-01-21 fl Fixed dateTime constructor, etc.
# 1999-02-02 fl Added fault handling, handle empty sequences, etc.
# 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro)
# 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8)
# 2000-11-28 fl Changed boolean to check the truth value of its argument
# 2001-02-24 fl Added encoding/Unicode/SafeTransport patches
# 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1)
# 2001-03-28 fl Make sure response tuple is a singleton
# 2001-03-29 fl Don't require empty params element (from NAME)
# 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2)
# 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME)
# 2001-09-03 fl Allow Transport subclass to override getparser
# 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup)
# 2001-10-01 fl Remove containers from memo cache when done with them
# 2001-10-01 fl Use faster escape method (80% dumps speedup)
# 2001-10-02 fl More dumps microtuning
# 2001-10-04 fl Make sure import expat gets a parser (from NAME)
# 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow
# 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems)
# 2001-11-12 fl Use repr() to marshal doubles (from NAME)
# 2002-03-17 fl Avoid buffered read when possible (from NAME)
# 2002-04-07 fl Added pythondoc comments
# 2002-04-16 fl Added __str__ methods to datetime/binary wrappers
# 2002-05-15 fl Added error constants (from NAME)
# 2002-06-27 fl Merged with Python CVS version
# 2002-10-22 fl Added basic authentication (based on code from NAME)
# 2003-01-22 sm Add support for the bool type
# 2003-02-27 gvr Remove apply calls
# 2003-04-24 sm Use cStringIO if available
# 2003-04-25 ak Add support for nil
# 2003-06-15 gn Add support for time.struct_time
# 2003-07-12 gp Correct marshalling of Faults
# 2003-10-31 mvl Add multicall support
# 2004-08-20 mvl Bump minimum supported Python version to 2.1
#
# Copyright (c) 1999-2002 by Secret Labs AB.
# Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com
#
# --------------------------------------------------------------------
# The XML-RPC client interface is
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
#
# things to look into some day:
# TODO: sort out True/False/boolean issues for Python 2.3
|
"""
This page is in the table of contents.
Export is a craft tool to pick an export plugin, add information to the file name, and delete comments.
The export manual page is at:
http://fabmetheus.crsndoo.com/wiki/index.php/Skeinforge_Export
==Operation==
The default 'Activate Export' checkbox is on. When it is on, the functions described below will work; when it is off, they will not be called.
==Settings==
===Add Descriptive Extension===
Default is off.
When selected, key profile values will be added as an extension to the gcode file. For example:
test.04hx06w_03fill_2cx2r_33EL.gcode
would mean:
* . (Carve section.)
* 04h = 'Layer Height (mm):' 0.4
* x
* 06w = 0.6 width i.e. 0.4 times 'Edge Width over Height (ratio):' 1.5
* _ (Fill section.)
* 03fill = 'Infill Solidity (ratio):' 0.3
* _ (Multiply section; if there is one column and one row then this section is not shown.)
* 2c = 'Number of Columns (integer):' 2
* x
* 2r = 'Number of Rows (integer):' 2.
* _ (Speed section.)
* 33EL = 'Feed Rate (mm/s):' 33.0 and 'Flow Rate Setting (float):' 33.0. If either value has a positive value after the decimal place then this is also shown, but if it is zero it is hidden. Also, if the values differ (which they shouldn't with 5D volumetrics) then each should be displayed separately. For example, 35.2E30L = 'Feed Rate (mm/s):' 35.2 and 'Flow Rate Setting (float):' 30.0.
===Add Profile Extension===
Default is off.
When selected, the current profile will be added to the file extension. For example:
test.my_profile_name.gcode
===Add Timestamp Extension===
Default is off.
When selected, the current date and time are added as an extension in the format YYYYmmdd_HHMMSS (so it is sortable if one has many files). For example:
test.my_profile_name.20110613_220113.gcode
===Also Send Output To===
Default is empty.
Defines the output name for sending to a file or pipe. A common choice is stdout, which prints the output to the shell. Another common choice is stderr. With the empty default, nothing will be done. If the value is anything else, the output will be written to that file name.
===Analyze Gcode===
Default is on.
When selected, the penultimate gcode will be sent to the analyze plugins to be analyzed and viewed.
===Comment Choice===
Default is 'Delete All Comments'.
====Do Not Delete Comments====
When selected, export will not delete comments. Crafting comments slow down the processing in many firmware types, which leads to pauses and therefore a lower quality print.
====Delete Crafting Comments====
When selected, export will delete the time consuming crafting comments, but leave the initialization comments. Since the crafting comments are deleted, there are no pauses during extrusion. The remaining initialization comments provide some useful information for the analyze tools.
====Delete All Comments====
When selected, export will delete all comments. The comments are not necessary to run a fabricator, and some printers do not support comments at all, so the safest way is to choose this option.
===Export Operations===
Export presents the user with a choice of the export plugins in the export_plugins folder. The chosen plugin will then modify the gcode or translate it into another format. There is also the "Do Not Change Output" choice, which leaves the output unchanged. An export plugin is a script in the export_plugins folder which has the getOutput function and the globalIsReplaceable variable and, if its output is not replaceable, the writeOutput function.
===File Extension===
Default is gcode.
Defines the file extension added to the name of the output file. The output file will be named originalname_export.extension, so if you are processing XYZ.stl the output will by default be XYZ_export.gcode.
===Name of Replace File===
Default is replace.csv.
When export is exporting the code, if there is a tab separated file with the name given by the "Name of Replace File" setting, export will replace the string in the first column by its replacement in the second column. If there is nothing in the second column, the first column string will be deleted; if this leads to an empty line, the line will be deleted. If there are replacement columns after the second, they will be added as extra lines of text. There is an example file replace_example.csv to demonstrate the tab separated format, which can be edited in a text editor or a spreadsheet.
Export looks for the alteration file in the alterations folder in the .skeinforge folder in the home directory. Export does not care if the text file names are capitalized, but some file systems do not handle file name cases properly, so to be on the safe side you should give them lower case names. If it doesn't find the file it then looks in the alterations folder in the skeinforge_plugins folder.
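A minimal sketch of the replacement rule described above (the rule and gcode strings are illustrative, not taken from a real replace file):

    def apply_replace_rule(text, columns):
        # columns[0] is replaced by the remaining columns joined as lines;
        # an empty second column therefore deletes the string.
        text = text.replace(columns[0], '\n'.join(columns[1:]))
        # Drop blank lines (the real tool only drops lines emptied by the
        # replacement, but the idea is the same).
        return '\n'.join(line for line in text.split('\n') if line.strip())

    gcode = apply_replace_rule('M113 S1.0\nM108 S210\n', ['M113 S1.0', ''])
    # gcode is now 'M108 S210'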
===Save Penultimate Gcode===
Default is off.
When selected, export will save the gcode file with the suffix '_penultimate.gcode' just before it is exported. This is useful because the code after it is exported could be in a form which the viewers cannot display well.
==Examples==
The following examples export the file Screw Holder Bottom.stl. The examples are run in a terminal in the folder which contains Screw Holder Bottom.stl and export.py.
> python export.py
This brings up the export dialog.
> python export.py Screw Holder Bottom.stl
The export tool is parsing the file:
Screw Holder Bottom.stl
..
The export tool has created the file:
.. Screw Holder Bottom_export.gcode
""" |
# -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
# SKR03
# =====
# This module provides a German chart of accounts based on the SKR03.
# Under the current settings the company is not subject to VAT.
# This default is very easy to change and as a rule requires an initial
# assignment of tax accounts to products and/or general ledger accounts
# or to partners.
# The output taxes (full rate, reduced rate and tax-free) should be
# stored in the product master data (depending on the applicable tax
# rules). The assignment is made on the Accounting tab (category:
# Umsatzsteuer).
# The input taxes (full rate, reduced rate and tax-free) should likewise
# be stored in the product master data (depending on the applicable tax
# rules). The assignment is made on the Accounting tab (category:
# Vorsteuer).
# The assignment of taxes for imports from and exports to EU countries,
# as well as for purchases from and sales to third countries, should be
# stored on the partner (supplier/customer), depending on the country of
# origin of the supplier/customer. The assignment on the customer takes
# precedence over the assignment on products and overrides it in the
# individual case.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g. mapping
# 'Umsatzsteuer 19%' to 'tax-free imports from the EU') so that this
# mapping can be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# the tax base (excluding tax) is reported under the respective
# categories for the input-tax base amount (e.g. input-tax base amount,
# full rate 19%).
# The tax amount appears under the category 'Vorsteuern' (e.g. input tax
# 19%). Multidimensional hierarchies allow different positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# the tax base (excluding tax) is reported under the respective
# categories for the output-tax base amount (e.g. output-tax base
# amount, full rate 19%).
# The tax amount appears under the category 'Umsatzsteuer' (e.g. output
# tax 19%). Multidimensional hierarchies allow different positions to be
# aggregated.
# The assigned tax codes can be reviewed on the level of each individual
# invoice (incoming and outgoing) and adjusted there if necessary.
# Credit notes lead to a correction (counter-entry) of the tax posting,
# in the form of a mirror-image posting.
# SKR04
# =====
# This module provides a German chart of accounts based on the SKR04.
# Under the current settings the company is not subject to VAT, i.e. by
# default there is no assignment of products and general ledger accounts
# to tax keys.
# This default is very easy to change and as a rule requires an initial
# assignment of tax keys to products and/or general ledger accounts or
# to partners.
# The output taxes (full rate, reduced rate and tax-free) should be
# stored in the product master data (depending on the applicable tax
# rules). The assignment is made on the Accounting tab (category:
# Umsatzsteuer).
# The input taxes (full rate, reduced rate and tax-free) should likewise
# be stored in the product master data (depending on the applicable tax
# rules). The assignment is made on the Accounting tab (category:
# Vorsteuer).
# The assignment of taxes for imports from and exports to EU countries,
# as well as for purchases from and sales to third countries, should be
# stored on the partner (supplier/customer), depending on the country of
# origin of the supplier/customer. The assignment on the customer takes
# precedence over the assignment on products and overrides it in the
# individual case.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g. mapping
# 'Umsatzsteuer 19%' to 'tax-free imports from the EU') so that this
# mapping can be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# the tax base (excluding tax) is reported under the respective
# categories for the input-tax base amount (e.g. input-tax base amount,
# full rate 19%).
# The tax amount appears under the category 'Vorsteuern' (e.g. input tax
# 19%). Multidimensional hierarchies allow different positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# the tax base (excluding tax) is reported under the respective
# categories for the output-tax base amount (e.g. output-tax base
# amount, full rate 19%).
# The tax amount appears under the category 'Umsatzsteuer' (e.g. output
# tax 19%). Multidimensional hierarchies allow different positions to be
# aggregated.
# The assigned tax codes can be reviewed on the level of each individual
# invoice (incoming and outgoing) and adjusted there if necessary.
# Credit notes lead to a correction (counter-entry) of the tax posting,
# in the form of a mirror-image posting.
|
"""
==================================
Constants (:mod:`scipy.constants`)
==================================
.. currentmodule:: scipy.constants
Physical and mathematical constants and units.
Mathematical constants
======================
================ =================================================================
``pi`` Pi
``golden`` Golden ratio
``golden_ratio`` Golden ratio
================ =================================================================
Physical constants
==================
=========================== =================================================================
``c`` speed of light in vacuum
``speed_of_light`` speed of light in vacuum
``mu_0`` the magnetic constant :math:`\mu_0`
``epsilon_0`` the electric constant (vacuum permittivity), :math:`\epsilon_0`
``h`` the Planck constant :math:`h`
``Planck`` the Planck constant :math:`h`
``hbar`` :math:`\hbar = h/(2\pi)`
``G`` Newtonian constant of gravitation
``gravitational_constant`` Newtonian constant of gravitation
``g`` standard acceleration of gravity
``e`` elementary charge
``elementary_charge`` elementary charge
``R`` molar gas constant
``gas_constant`` molar gas constant
``alpha`` fine-structure constant
``fine_structure`` fine-structure constant
``N_A`` Avogadro constant
``Avogadro`` Avogadro constant
``k`` Boltzmann constant
``Boltzmann`` Boltzmann constant
``sigma`` Stefan-Boltzmann constant :math:`\sigma`
``Stefan_Boltzmann`` Stefan-Boltzmann constant :math:`\sigma`
``Wien`` Wien displacement law constant
``Rydberg`` Rydberg constant
``m_e`` electron mass
``electron_mass`` electron mass
``m_p`` proton mass
``proton_mass`` proton mass
``m_n`` neutron mass
``neutron_mass`` neutron mass
=========================== =================================================================
Constants database
------------------
In addition to the above variables, :mod:`scipy.constants` also contains the
2014 CODATA recommended values [CODATA2014]_ database containing more physical
constants.
.. autosummary::
:toctree: generated/
value -- Value in physical_constants indexed by key
unit -- Unit in physical_constants indexed by key
precision -- Relative precision in physical_constants indexed by key
find -- Return list of physical_constant keys with a given string
ConstantWarning -- Constant sought not in newest CODATA data set
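For example, a minimal sketch of these lookup helpers:

.. code-block:: python

    from scipy import constants

    constants.value('electron mass')   # 9.10938356e-31
    constants.unit('electron mass')    # 'kg'
    constants.find('Boltzmann')        # keys containing 'Boltzmann'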
.. data:: physical_constants
Dictionary of physical constants, of the format
``physical_constants[name] = (value, unit, uncertainty)``.
Available constants:
====================================================================== ====
%(constant_names)s
====================================================================== ====
Units
=====
SI prefixes
-----------
============ =================================================================
``yotta`` :math:`10^{24}`
``zetta`` :math:`10^{21}`
``exa`` :math:`10^{18}`
``peta`` :math:`10^{15}`
``tera`` :math:`10^{12}`
``giga`` :math:`10^{9}`
``mega`` :math:`10^{6}`
``kilo`` :math:`10^{3}`
``hecto`` :math:`10^{2}`
``deka`` :math:`10^{1}`
``deci`` :math:`10^{-1}`
``centi`` :math:`10^{-2}`
``milli`` :math:`10^{-3}`
``micro`` :math:`10^{-6}`
``nano`` :math:`10^{-9}`
``pico`` :math:`10^{-12}`
``femto`` :math:`10^{-15}`
``atto`` :math:`10^{-18}`
``zepto`` :math:`10^{-21}`
============ =================================================================
Binary prefixes
---------------
============ =================================================================
``kibi`` :math:`2^{10}`
``mebi`` :math:`2^{20}`
``gibi`` :math:`2^{30}`
``tebi`` :math:`2^{40}`
``pebi`` :math:`2^{50}`
``exbi`` :math:`2^{60}`
``zebi`` :math:`2^{70}`
``yobi`` :math:`2^{80}`
============ =================================================================
Weight
------
================= ============================================================
``gram`` :math:`10^{-3}` kg
``metric_ton`` :math:`10^{3}` kg
``grain`` one grain in kg
``lb``            one pound (avoirdupois) in kg
``pound``         one pound (avoirdupois) in kg
``oz`` one ounce in kg
``ounce`` one ounce in kg
``stone`` one stone in kg
``long_ton`` one long ton in kg
``short_ton`` one short ton in kg
``troy_ounce`` one Troy ounce in kg
``troy_pound`` one Troy pound in kg
``carat`` one carat in kg
``m_u`` atomic mass constant (in kg)
``u`` atomic mass constant (in kg)
``atomic_mass`` atomic mass constant (in kg)
================= ============================================================
Angle
-----
================= ============================================================
``degree`` degree in radians
``arcmin`` arc minute in radians
``arcminute`` arc minute in radians
``arcsec`` arc second in radians
``arcsecond`` arc second in radians
================= ============================================================
Time
----
================= ============================================================
``minute`` one minute in seconds
``hour`` one hour in seconds
``day`` one day in seconds
``week`` one week in seconds
``year`` one year (365 days) in seconds
``Julian_year`` one Julian year (365.25 days) in seconds
================= ============================================================
Length
------
===================== ============================================================
``inch`` one inch in meters
``foot`` one foot in meters
``yard`` one yard in meters
``mile`` one mile in meters
``mil`` one mil in meters
``pt`` one point in meters
``point`` one point in meters
``survey_foot`` one survey foot in meters
``survey_mile`` one survey mile in meters
``nautical_mile`` one nautical mile in meters
``fermi`` one Fermi in meters
``angstrom`` one Angstrom in meters
``micron`` one micron in meters
``au`` one astronomical unit in meters
``astronomical_unit`` one astronomical unit in meters
``light_year`` one light year in meters
``parsec`` one parsec in meters
===================== ============================================================
Pressure
--------
================= ============================================================
``atm`` standard atmosphere in pascals
``atmosphere`` standard atmosphere in pascals
``bar`` one bar in pascals
``torr`` one torr (mmHg) in pascals
``mmHg`` one torr (mmHg) in pascals
``psi`` one psi in pascals
================= ============================================================
Area
----
================= ============================================================
``hectare`` one hectare in square meters
``acre`` one acre in square meters
================= ============================================================
Volume
------
=================== ========================================================
``liter`` one liter in cubic meters
``litre`` one liter in cubic meters
``gallon`` one gallon (US) in cubic meters
``gallon_US`` one gallon (US) in cubic meters
``gallon_imp`` one gallon (UK) in cubic meters
``fluid_ounce`` one fluid ounce (US) in cubic meters
``fluid_ounce_US`` one fluid ounce (US) in cubic meters
``fluid_ounce_imp`` one fluid ounce (UK) in cubic meters
``bbl`` one barrel in cubic meters
``barrel`` one barrel in cubic meters
=================== ========================================================
Speed
-----
================== ==========================================================
``kmh`` kilometers per hour in meters per second
``mph`` miles per hour in meters per second
``mach`` one Mach (approx., at 15 C, 1 atm) in meters per second
``speed_of_sound`` one Mach (approx., at 15 C, 1 atm) in meters per second
``knot`` one knot in meters per second
================== ==========================================================
Temperature
-----------
===================== =======================================================
``zero_Celsius`` zero of Celsius scale in Kelvin
``degree_Fahrenheit`` one Fahrenheit (only differences) in Kelvins
===================== =======================================================
.. autosummary::
:toctree: generated/
convert_temperature
C2K
K2C
F2C
C2F
F2K
K2F
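For example, a minimal sketch:

.. code-block:: python

    from scipy.constants import convert_temperature

    convert_temperature(100.0, 'Celsius', 'Kelvin')  # 373.15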
Energy
------
==================== =======================================================
``eV`` one electron volt in Joules
``electron_volt`` one electron volt in Joules
``calorie`` one calorie (thermochemical) in Joules
``calorie_th`` one calorie (thermochemical) in Joules
``calorie_IT`` one calorie (International Steam Table calorie, 1956) in Joules
``erg`` one erg in Joules
``Btu`` one British thermal unit (International Steam Table) in Joules
``Btu_IT`` one British thermal unit (International Steam Table) in Joules
``Btu_th`` one British thermal unit (thermochemical) in Joules
``ton_TNT`` one ton of TNT in Joules
==================== =======================================================
Power
-----
==================== =======================================================
``hp`` one horsepower in watts
``horsepower`` one horsepower in watts
==================== =======================================================
Force
-----
==================== =======================================================
``dyn`` one dyne in newtons
``dyne`` one dyne in newtons
``lbf`` one pound force in newtons
``pound_force`` one pound force in newtons
``kgf`` one kilogram force in newtons
``kilogram_force`` one kilogram force in newtons
==================== =======================================================
Optics
------
.. autosummary::
:toctree: generated/
lambda2nu
nu2lambda
References
==========
.. [CODATA2014] CODATA Recommended Values of the Fundamental
Physical Constants 2014.
http://physics.nist.gov/cuu/Constants/index.html
""" |
#
# XML-RPC CLIENT LIBRARY
# $Id$
#
# an XML-RPC client interface for Python.
#
# the marshalling and response parser code can also be used to
# implement XML-RPC servers.
#
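#
# a minimal usage sketch (the server URL and remote method name below
# are illustrative assumptions, not part of this module):
#
#   import xmlrpclib
#   server = xmlrpclib.ServerProxy("http://localhost:8000")
#   print server.add(2, 3)      # calls add() on the remote server
#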
# Notes:
# this version is designed to work with Python 2.1 or newer.
#
# History:
# 1999-01-14 fl Created
# 1999-01-15 fl Changed dateTime to use localtime
# 1999-01-16 fl Added Binary/base64 element, default to RPC2 service
# 1999-01-19 fl Fixed array data element (from Skip Montanaro)
# 1999-01-21 fl Fixed dateTime constructor, etc.
# 1999-02-02 fl Added fault handling, handle empty sequences, etc.
# 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro)
# 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8)
# 2000-11-28 fl Changed boolean to check the truth value of its argument
# 2001-02-24 fl Added encoding/Unicode/SafeTransport patches
# 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1)
# 2001-03-28 fl Make sure response tuple is a singleton
# 2001-03-29 fl Don't require empty params element (from NAME)
# 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2)
# 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME)
# 2001-09-03 fl Allow Transport subclass to override getparser
# 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup)
# 2001-10-01 fl Remove containers from memo cache when done with them
# 2001-10-01 fl Use faster escape method (80% dumps speedup)
# 2001-10-02 fl More dumps microtuning
# 2001-10-04 fl Make sure import expat gets a parser (from NAME)
# 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow
# 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems)
# 2001-11-12 fl Use repr() to marshal doubles (from NAME)
# 2002-03-17 fl Avoid buffered read when possible (from NAME)
# 2002-04-07 fl Added pythondoc comments
# 2002-04-16 fl Added __str__ methods to datetime/binary wrappers
# 2002-05-15 fl Added error constants (from NAME)
# 2002-06-27 fl Merged with Python CVS version
# 2002-10-22 fl Added basic authentication (based on code from NAME)
# 2003-01-22 sm Add support for the bool type
# 2003-02-27 gvr Remove apply calls
# 2003-04-24 sm Use cStringIO if available
# 2003-04-25 ak Add support for nil
# 2003-06-15 gn Add support for time.struct_time
# 2003-07-12 gp Correct marshalling of Faults
# 2003-10-31 mvl Add multicall support
# 2004-08-20 mvl Bump minimum supported Python version to 2.1
#
# Copyright (c) 1999-2002 by Secret Labs AB.
# Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com
#
# --------------------------------------------------------------------
# The XML-RPC client interface is
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
#
# things to look into some day:
# TODO: sort out True/False/boolean issues for Python 2.3
|
"""Drag-and-drop support for Tkinter.
This is very preliminary. I currently only support dnd *within* one
application, between different windows (or within the same window).
I am trying to make this as generic as possible -- not dependent on
the use of a particular widget or icon type, etc. I also hope that
this will work with Pmw.
To enable an object to be dragged, you must create an event binding
for it that starts the drag-and-drop process. Typically, you should
bind <ButtonPress> to a callback function that you write. The function
should call Tkdnd.dnd_start(source, event), where 'source' is the
object to be dragged, and 'event' is the event that invoked the call
(the argument to your callback function). Even though this is a class
instantiation, the returned instance should not be stored -- it will
be kept alive automatically for the duration of the drag-and-drop.
When a drag-and-drop is already in process for the Tk interpreter, the
call is *ignored*; this normally averts starting multiple simultaneous
dnd processes, e.g. because different button callbacks all call
dnd_start().
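For example, a minimal source-side sketch (the canvas rectangle is an
illustrative choice of draggable object):

    import Tkinter
    import Tkdnd

    class Dragged:
        def __init__(self, canvas):
            item = canvas.create_rectangle(10, 10, 40, 40, fill='red')
            canvas.tag_bind(item, '<ButtonPress>', self.press)
        def press(self, event):
            # Hand control to the dnd machinery; the returned instance
            # need not be stored.
            Tkdnd.dnd_start(self, event)
        def dnd_end(self, target, event):
            # Called when the drag-and-drop process is over.
            pass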
The object is *not* necessarily a widget -- it can be any
application-specific object that is meaningful to potential
drag-and-drop targets.
Potential drag-and-drop targets are discovered as follows. Whenever
the mouse moves, and at the start and end of a drag-and-drop move, the
Tk widget directly under the mouse is inspected. This is the target
widget (not to be confused with the target object, yet to be
determined). If there is no target widget, there is no dnd target
object. If there is a target widget, and it has an attribute
dnd_accept, this should be a function (or any callable object). The
function is called as dnd_accept(source, event), where 'source' is the
object being dragged (the object passed to dnd_start() above), and
'event' is the most recent event object (generally a <Motion> event;
it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept()
function returns something other than None, this is the new dnd target
object. If dnd_accept() returns None, or if the target widget has no
dnd_accept attribute, the target widget's parent is considered as the
target widget, and the search for a target object is repeated from
there. If necessary, the search is repeated all the way up to the
root widget. If none of the target widgets can produce a target
object, there is no target object (the target object is None).
The target object thus produced, if any, is called the new target
object. It is compared with the old target object (or None, if there
was no old target widget). There are several cases ('source' is the
source object, and 'event' is the most recent event object):
- Both the old and new target objects are None. Nothing happens.
- The old and new target objects are the same object. Its method
dnd_motion(source, event) is called.
- The old target object was None, and the new target object is not
None. The new target object's method dnd_enter(source, event) is
called.
- The new target object is None, and the old target object is not
None. The old target object's method dnd_leave(source, event) is
called.
- The old and new target objects differ and neither is None. The old
target object's method dnd_leave(source, event), and then the new
target object's method dnd_enter(source, event) is called.
Once this is done, the new target object replaces the old one, and the
Tk mainloop proceeds. The return value of the methods mentioned above
is ignored; if they raise an exception, the normal exception handling
mechanisms take over.
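A minimal target-side sketch of the protocol just described (the empty
method bodies are placeholders for application behavior):

    class Target:
        def __init__(self, widget):
            # The discovery hook: the widget under the mouse is searched
            # for a dnd_accept attribute.
            widget.dnd_accept = self.dnd_accept
        def dnd_accept(self, source, event):
            return self                 # we are the target object
        def dnd_enter(self, source, event):
            pass
        def dnd_motion(self, source, event):
            pass
        def dnd_leave(self, source, event):
            pass
        def dnd_commit(self, source, event):
            pass                        # handle the actual drop here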
The drag-and-drop processes can end in two ways: a final target object
is selected, or no final target object is selected. When a final
target object is selected, it will always have been notified of the
potential drop by a call to its dnd_enter() method, as described
above, and possibly one or more calls to its dnd_motion() method; its
dnd_leave() method has not been called since the last call to
dnd_enter(). The target is notified of the drop by a call to its
method dnd_commit(source, event).
If no final target object is selected, and there was an old target
object, its dnd_leave(source, event) method is called to complete the
dnd sequence.
Finally, the source object is notified that the drag-and-drop process
is over, by a call to source.dnd_end(target, event), specifying either
the selected target object, or None if no target object was selected.
The source object can use this to implement the commit action; this is
sometimes simpler than to do it in the target's dnd_commit(). The
target's dnd_commit() method could then simply be aliased to
dnd_leave().
At any time during a dnd sequence, the application can cancel the
sequence by calling the cancel() method on the object returned by
dnd_start(). This will call dnd_leave() if a target is currently
active; it will never call dnd_commit().
""" |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# ***********************IMPORTANT NMAP LICENSE TERMS************************
# * *
# * The Nmap Security Scanner is (C) 1996-2013 Insecure.Com LLC. Nmap is *
# * also a registered trademark of Insecure.Com LLC. This program is free *
# * software; you may redistribute and/or modify it under the terms of the *
# * GNU General Public License as published by the Free Software *
# * Foundation; Version 2 ("GPL"), BUT ONLY WITH ALL OF THE CLARIFICATIONS *
# * AND EXCEPTIONS DESCRIBED HEREIN. This guarantees your right to use, *
# * modify, and redistribute this software under certain conditions. If *
# * you wish to embed Nmap technology into proprietary software, we sell *
# * alternative licenses (contact EMAIL Dozens of software *
# * vendors already license Nmap technology such as host discovery, port *
# * scanning, OS detection, version detection, and the Nmap Scripting *
# * Engine. *
# * *
# * Note that the GPL places important restrictions on "derivative works", *
# * yet it does not provide a detailed definition of that term. To avoid *
# * misunderstandings, we interpret that term as broadly as copyright law *
# * allows. For example, we consider an application to constitute a *
# * derivative work for the purpose of this license if it does any of the *
# * following with any software or content covered by this license *
# * ("Covered Software"): *
# * *
# * o Integrates source code from Covered Software. *
# * *
# * o Reads or includes copyrighted data files, such as Nmap's nmap-os-db *
# * or nmap-service-probes. *
# * *
# * o Is designed specifically to execute Covered Software and parse the *
# * results (as opposed to typical shell or execution-menu apps, which will *
# * execute anything you tell them to). *
# * *
# * o Includes Covered Software in a proprietary executable installer. The *
# * installers produced by InstallShield are an example of this. Including *
# * Nmap with other software in compressed or archival form does not *
# * trigger this provision, provided appropriate open source decompression *
# * or de-archiving software is widely available for no charge. For the *
# * purposes of this license, an installer is considered to include Covered *
# * Software even if it actually retrieves a copy of Covered Software from *
# * another source during runtime (such as by downloading it from the *
# * Internet). *
# * *
# * o Links (statically or dynamically) to a library which does any of the *
# * above. *
# * *
# * o Executes a helper program, module, or script to do any of the above. *
# * *
# * This list is not exclusive, but is meant to clarify our interpretation *
# * of derived works with some common examples. Other people may interpret *
# * the plain GPL differently, so we consider this a special exception to *
# * the GPL that we apply to Covered Software. Works which meet any of *
# * these conditions must conform to all of the terms of this license, *
# * particularly including the GPL Section 3 requirements of providing *
# * source code and allowing free redistribution of the work as a whole. *
# * *
# * As another special exception to the GPL terms, Insecure.Com LLC grants *
# * permission to link the code of this program with any version of the *
# * OpenSSL library which is distributed under a license identical to that *
# * listed in the included docs/licenses/OpenSSL.txt file, and distribute *
# * linked combinations including the two. *
# * *
# * Any redistribution of Covered Software, including any derived works, *
# * must obey and carry forward all of the terms of this license, including *
# * obeying all GPL rules and restrictions. For example, source code of *
# * the whole work must be provided and free redistribution must be *
# * allowed. All GPL references to "this License", are to be treated as *
# * including the special exceptions and conditions of this license text.  *
# * *
# * Because this license imposes special exceptions to the GPL, Covered *
# * Work may not be combined (even as part of a larger work) with plain GPL *
# * software. The terms, conditions, and exceptions of this license must *
# * be included as well. This license is incompatible with some other open *
# * source licenses as well. In some cases we can relicense portions of *
# * Nmap or grant special permissions to use it in other open source *
# * software. Please contact EMAIL with any such requests. *
# * Similarly, we don't incorporate incompatible open source software into *
# * Covered Software without special permission from the copyright holders. *
# * *
# * If you have any questions about the licensing restrictions on using *
# * Nmap in other works, we are happy to help. As mentioned above, we also *
# * offer an alternative license to integrate Nmap into proprietary        *
# * applications and appliances. These contracts have been sold to dozens *
# * of software vendors, and generally include a perpetual license as well *
# * as providing for priority support and updates. They also fund the *
# * continued development of Nmap. Please email EMAIL for *
# * further information. *
# * *
# * If you received these files with a written license agreement or *
# * contract stating terms other than the terms above, then that *
# * alternative license agreement takes precedence over these comments. *
# * *
# * Source is provided to this software because we believe users have a *
# * right to know exactly what a program is going to do before they run it. *
# * This also allows you to audit the software for security holes (none *
# * have been found so far). *
# * *
# * Source code also allows you to port Nmap to new platforms, fix bugs, *
# * and add new features. You are highly encouraged to send your changes *
# * to the EMAIL mailing list for possible incorporation into the *
# * main distribution. By sending these changes to Fyodor or one of the *
# * Insecure.Org development mailing lists, or checking them into the Nmap *
# * source code repository, it is understood (unless you specify otherwise) *
# * that you are offering the Nmap Project (Insecure.Com LLC) the *
# * unlimited, non-exclusive right to reuse, modify, and relicense the *
# * code. Nmap will always be available Open Source, but this is important *
# * because the inability to relicense code has caused devastating problems *
# * for other Free Software projects (such as KDE and NASM). We also *
# * occasionally relicense the code to third parties as discussed above. *
# * If you wish to specify special license conditions of your *
# * contributions, just say so when you send them. *
# * *
# * This program is distributed in the hope that it will be useful, but *
# * WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the Nmap *
# * license file for more details (it's in a COPYING file included with *
# * Nmap, and also available from https://svn.nmap.org/nmap/COPYING *
# * *
# ***************************************************************************/
|
# -*- coding: utf-8 -*-
# #
# Calling functions
# Python's built-in functions are documented at
# https://docs.python.org/3/library/functions.html
# function(arguments)
# If the number or the type of the arguments is wrong,
# Python raises a TypeError
# together with an error message.
# #
# #
# Defining functions
# def functionName(Variable...)
#     function body
# Note return:
# once return executes, the function stops running.
# If return is omitted, what actually executes is return None,
# and return None <=> return.
# Lesson9_Function_myAbs.py has already been written;
# it contains the myAbs() function,
# which is called as follows:
# #
# from Lesson9_Function_myAbs import myAbs
# c = myAbs('a')
# print(c)
# #
# Empty functions
# pass
# pass is usually used as a placeholder, so the program raises no syntax error.
# #
# def nop():
# pass
# #
# Type checking
# The error raised is always a
# TypeError:
# first, the number of arguments is checked;
# second, the types of the arguments are checked.
# Keep both points in mind when writing functions.
# #
# #
# A multiple return value is really a single tuple,
# so one variable can store the whole tuple;
# if several variables are used instead,
# the tuple's elements are assigned to the variables by position.
# #
# from Lesson9_Function_mySinAndCos import sinAndCos
# c = sinAndCos(0)
# print(c)
# # angle = input('Input angle : ')
# s,c = sinAndCos(45)
# print(s)
# print(c)
# # 0.8509035245341184
# # 0.5253219888177297
# Exercise: define a function quadratic(a, b, c) that takes 3 arguments
# and returns the two solutions of the quadratic equation
# ax^2 + bx + c = 0.
# from Lesson9_Function_SolutionToEquation import solutionToEquation
# a = float( input('Input a : '))
# b = float( input('Input b : '))
# c = float( input('Input c : '))
# root = solutionToEquation(a,b,c)
# print(root)
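# A minimal sketch of such a quadratic() function (assuming real roots;
# math.sqrt raises ValueError otherwise):
# import math
# def quadratic(a, b, c):
#     d = math.sqrt(b * b - 4 * a * c)
#     return (-b + d) / (2 * a), (-b - d) / (2 * a)
# >>> quadratic(2, 3, 1)
# (-0.5, -1.0)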
# #
# Function parameters
# #
# #
# Positional parameters
# As used earlier in
# solutionToEquation(a,b,c)
# a, b and c are all positional parameters,
# because arguments are matched to parameters one-to-one by position.
# #
# #
# Default parameters
# A default parameter may be omitted; when omitted, its default value is used.
# A value can also be passed for a default parameter, overriding the default.
# The power function below demonstrates default parameters.
# Default parameters must be immutable objects.
# #
# def myPower(x,n=2) :
# powerResult = 1
# while n > 0 :
# powerResult *= x
# n = n - 1
# return powerResult
# power1 = myPower(2)
# power2 = myPower(2,10)
# print( power1,'\n',power2)
# #
# Variadic parameters (variable number of arguments)
# Syntax: one extra * asterisk.
# The inputs are packed into a tuple,
# so you can pass several values directly, or unpack a list or tuple.
# def functionName(*variable)
#     function_pass
# Take a summation program as an example:
# #
# def mySum(*num) :
# sum = 0
# for x in num :
# sum += x
# return sum
# >>> mySum(1,2,3)
# 6
# >>> mySum(*range(50))
# 1225
# #
# Keyword parameters
# Variadic parameters let you pass zero or more arguments, which are automatically packed into a tuple at call time.
# Keyword parameters let you pass zero or more named arguments, which are automatically packed into a dict inside the function.
# In other words, a whole dict can be passed as keyword arguments.
# Definition (two ** asterisks):
# def functionName(variable1,**dictExample) :
#     function_pass
# Take a score-recording program as an example (see the sketch below):
# #
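# A minimal sketch of the Score function used below (an assumed
# definition matching the printed output):
# def Score(name, **score):
#     print('name=', name, 'score=', score)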
# >>> Score('Deng')
# name= Deng score= {}
# >>> Score('Deng',Math=100)
# name= Deng score= {'Math': 100}
# >>> newdict = {'Math':100,'Phisics':100}
# >>> Score('deng',**newdict)
# name= deng score= {'Phisics': 100, 'Math': 100}
# #
# Recursion
# recursion
# A function that calls itself inside its own body is a recursive function.
# Advantage: clear logic (every recursion can be rewritten as a loop, but the loop's logic is less clear than the recursion's).
# Disadvantage: possible stack overflow.
# Take the factorial as an example:
# #
# >>> def fact(x):
#         if x == 1:
#             return 1
#         return x * fact(x - 1)
# >>> fact(10)
# 3628800
# #
# Exercise: write a function move(n, a, b, c) that takes a parameter n, the number of disks on the first rod A of the three rods A, B and C, and prints how to move all the disks from A to C with the help of B, for example:
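# A minimal sketch of the classic recursive solution (the printing format
# is an illustrative assumption, since the expected output above was cut
# off):
# def move(n, a, b, c):
#     if n == 1:
#         print(a, '-->', c)
#         return
#     move(n - 1, a, c, b)
#     print(a, '-->', c)
#     move(n - 1, b, a, c)
# >>> move(3, 'A', 'B', 'C')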
|
"""Configuration file parser.
A configuration file consists of sections, led by a "[section]" header,
and followed by "name: value" entries, with continuations and such in
the style of RFC 822.
Intrinsic defaults can be specified by passing them into the
ConfigParser constructor as a dictionary.
class:
ConfigParser -- responsible for parsing a list of
configuration files, and managing the parsed database.
methods:
__init__(defaults=None, dict_type=_default_dict, allow_no_value=False,
delimiters=('=', ':'), comment_prefixes=('#', ';'),
inline_comment_prefixes=None, strict=True,
empty_lines_in_values=True):
Create the parser. When `defaults' is given, it is initialized into the
dictionary of intrinsic defaults. The keys must be strings, the values
must be appropriate for %()s string interpolation.
When `dict_type' is given, it will be used to create the dictionary
objects for the list of sections, for the options within a section, and
for the default values.
When `delimiters' is given, it will be used as the set of substrings
that divide keys from values.
When `comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in empty lines. Comments can be
indented.
When `inline_comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in non-empty lines.
When `strict` is True, the parser won't allow for any section or option
duplicates while reading from a single source (file, string or
dictionary). Default is True.
When `empty_lines_in_values' is False (default: True), each empty line
marks the end of an option. Otherwise, internal empty lines of
a multiline option are kept as part of the value.
When `allow_no_value' is True (default: False), options without
values are accepted; the value presented for these is None.
sections()
Return all the configuration section names, sans DEFAULT.
has_section(section)
Return whether the given section exists.
has_option(section, option)
Return whether the given option exists in the given section.
options(section)
Return list of configuration options for the named section.
read(filenames, encoding=None)
Read and parse the list of named configuration files, given by
name. A single filename is also allowed. Non-existing files
are ignored. Return list of successfully read files.
read_file(f, filename=None)
Read and parse one configuration file, given as a file object.
The filename defaults to f.name; it is only used in error
messages (if f has no `name' attribute, the string `<???>' is used).
read_string(string)
Read configuration from a given string.
read_dict(dictionary)
Read configuration from a dictionary. Keys are section names,
values are dictionaries with keys and values that should be present
in the section. If the used dictionary type preserves order, sections
and their keys will be added in order. Values are automatically
converted to strings.
get(section, option, raw=False, vars=None, fallback=_UNSET)
Return a string value for the named option. All % interpolations are
expanded in the return values, based on the defaults passed into the
constructor and the DEFAULT section. Additional substitutions may be
provided using the `vars' argument, which must be a dictionary whose
contents override any pre-existing defaults. If `option' is a key in
`vars', the value from `vars' is used.
getint(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to an integer.
getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a float.
getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a boolean (currently case
insensitively defined as 0, false, no, off for False, and 1, true,
yes, on for True). Returns False or True.
items(section=_UNSET, raw=False, vars=None)
If section is given, return a list of tuples with (name, value) for
each option in the section. Otherwise, return a list of tuples with
(section_name, section_proxy) for each section, including DEFAULTSECT.
remove_section(section)
Remove the given file section and all its options.
remove_option(section, option)
Remove the given option from the given section.
set(section, option, value)
Set the given option.
write(fp, space_around_delimiters=True)
Write the configuration state in .ini format. If
`space_around_delimiters' is True (the default), delimiters
between keys and values are surrounded by spaces.
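Example (a minimal sketch of the methods described above):

    from configparser import ConfigParser

    parser = ConfigParser()
    parser.read_string('[server]\nport = 8080')
    parser.sections()                   # ['server']
    parser.getint('server', 'port')     # 8080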
""" |
"""
================
Precision-Recall
================
Example of Precision-Recall metric to evaluate classifier output quality.
Precision-Recall is a useful measure of success of prediction when the
classes are very imbalanced. In information retrieval, precision is a
measure of result relevancy, while recall is a measure of how many truly
relevant results are returned.
The precision-recall curve shows the tradeoff between precision and
recall for different thresholds. A high area under the curve represents
both high recall and high precision, where high precision relates to a
low false positive rate, and high recall relates to a low false negative
rate. High scores for both show that the classifier is returning accurate
results (high precision), as well as returning a majority of all positive
results (high recall).
A system with high recall but low precision returns many results, but most of
its predicted labels are incorrect when compared to the training labels. A
system with high precision but low recall is just the opposite, returning very
few results, but most of its predicted labels are correct when compared to the
training labels. An ideal system with high precision and high recall will
return many results, with all results labeled correctly.
Precision (:math:`P`) is defined as the number of true positives (:math:`T_p`)
over the number of true positives plus the number of false positives
(:math:`F_p`).
:math:`P = \\frac{T_p}{T_p+F_p}`
Recall (:math:`R`) is defined as the number of true positives (:math:`T_p`)
over the number of true positives plus the number of false negatives
(:math:`F_n`).
:math:`R = \\frac{T_p}{T_p + F_n}`
These quantities are also related to the (:math:`F_1`) score, which is defined
as the harmonic mean of precision and recall.
:math:`F1 = 2\\frac{P \\times R}{P+R}`
Note that the precision may not decrease with recall. The
definition of precision (:math:`\\frac{T_p}{T_p + F_p}`) shows that lowering
the threshold of a classifier may increase the denominator, by increasing the
number of results returned. If the threshold was previously set too high, the
new results may all be true positives, which will increase precision. If the
previous threshold was about right or too low, further lowering the threshold
will introduce false positives, decreasing precision.
Recall is defined as :math:`\\frac{T_p}{T_p+F_n}`, where :math:`T_p+F_n` does
not depend on the classifier threshold. This means that lowering the classifier
threshold may increase recall, by increasing the number of true positive
results. It is also possible that lowering the threshold may leave recall
unchanged, while the precision fluctuates.
The relationship between recall and precision can be observed in the
stairstep area of the plot - at the edges of these steps a small change
in the threshold considerably reduces precision, with only a minor gain in
recall.
**Average precision** (AP) summarizes such a plot as the weighted mean of
precisions achieved at each threshold, with the increase in recall from the
previous threshold used as the weight:
:math:`\\text{AP} = \\sum_n (R_n - R_{n-1}) P_n`
where :math:`P_n` and :math:`R_n` are the precision and recall at the
nth threshold. A pair :math:`(R_k, P_k)` is referred to as an
*operating point*.
AP and the trapezoidal area under the operating points
(:func:`sklearn.metrics.auc`) are common ways to summarize a precision-recall
curve that lead to different results. Read more in the
:ref:`User Guide <precision_recall_f_measure_metrics>`.
Precision-recall curves are typically used in binary classification to study
the output of a classifier. In order to extend the precision-recall curve and
average precision to multi-class or multi-label classification, it is necessary
to binarize the output. One curve can be drawn per label, but one can also draw
a precision-recall curve by considering each element of the label indicator
matrix as a binary prediction (micro-averaging).
.. note::
See also :func:`sklearn.metrics.average_precision_score`,
:func:`sklearn.metrics.recall_score`,
:func:`sklearn.metrics.precision_score`,
:func:`sklearn.metrics.f1_score`
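A minimal sketch of computing these quantities (the toy labels and
scores are illustrative):

.. code-block:: python

    import numpy as np
    from sklearn.metrics import average_precision_score, precision_recall_curve

    y_true = np.array([0, 0, 1, 1])
    y_score = np.array([0.1, 0.4, 0.35, 0.8])

    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    average_precision_score(y_true, y_score)  # 0.83...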
""" |
"""
# ggame
The simple cross-platform sprite and game platform for Brython Server (Pygame, Tkinter to follow?).
Ggame stands for a couple of things: "good game" (of course!) and also "git game" or "github game"
because it is designed to operate with [Brython Server](http://runpython.com) in concert with
Github as a backend file store.
Ggame is **not** intended to be a full-featured gaming API, with every bell and whistle. Ggame is
designed primarily as a tool for teaching computer programming, recognizing that the ability
to create engaging and interactive games is a powerful motivator for many programming students.
Accordingly, any functional or performance enhancements that *can* be reasonably implemented
by the user are left as an exercise.
## Functionality Goals
The ggame library is intended to be trivially easy to use. For example:
from ggame import App, ImageAsset, Sprite
# Create a displayed object at 100,100 using an image asset
Sprite(ImageAsset("ggame/bunny.png"), (100,100))
# Create the app, with a 500x500 pixel stage
app = App(500,500)
# Run the app
app.run()
## Overview
There are three major components to the `ggame` system: Assets, Sprites and the App.
### Assets
Asset objects (i.e. `ggame.ImageAsset`, etc.) typically represent separate files that
are provided by the "art department". These might be background images, user interface
images, or images that represent objects in the game. In addition, `ggame.SoundAsset`
is used to represent sound files (`.wav` or `.mp3` format) that can be played in the
game.
Ggame also extends the asset concept to include graphics that are generated dynamically
at run-time, such as geometrical objects, e.g. rectangles, lines, etc.
### Sprites
All of the visual aspects of the game are represented by instances of `ggame.Sprite` or
subclasses of it.
### App
Every ggame application must create a single instance of the `ggame.App` class (or
a sub-class of it). Creating an instance of the `ggame.App` class will initiate
creation of a pop-up window on your browser. Executing the app's `run` method will
begin the process of refreshing the visual assets on the screen.
### Events
No game is complete without a player and players produce events. Your code handles user
input by registering to receive keyboard and mouse events using `ggame.App.listenKeyEvent` and
`ggame.App.listenMouseEvent` methods.
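For example, a minimal sketch of registering a keyboard handler (the
event-type and key names are illustrative assumptions):

    from ggame import App

    def spaceDown(event):
        print("space pressed")

    app = App(500,500)
    app.listenKeyEvent("keydown", "space", spaceDown)
    app.run()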
## Execution Environment
Ggame is designed to be executed in a web browser using [Brython](http://brython.info/),
[Pixi.js](http://www.pixijs.com/) and [Buzz](http://buzz.jaysalvat.com/). The easiest
way to do this is by executing from [runpython](http://runpython.com), with source
code residing on [github](http://github.com).
When using [runpython](http://runpython.com), you will have to configure your browser
to allow popup windows.
To use Ggame in your own application, you will minimally need to create a folder called
`ggame` in your project. Within `ggame`, copy the `ggame.py`, `sysdeps.py` and
`__init__.py` files from the [ggame project](https://github.com/BrythonServer/ggame).
### Include Ggame as a Git Subtree
From the same directory as your own python sources (note: you must have an existing git
repository with committed files in order for the following to work properly),
execute the following terminal commands:
git remote add -f ggame https://github.com/BrythonServer/ggame.git
git merge -s ours --no-commit ggame/master
mkdir ggame
git read-tree --prefix=ggame/ -u ggame/master
git commit -m "Merge ggame project as our subdirectory"
If you want to pull in updates from ggame in the future:
git pull -s subtree ggame master
You can see an example of how a ggame subtree is used by examining the
[Brython Server Spacewar](https://github.com/BrythonServer/Spacewar) repo on Github.
## Geometry
When referring to screen coordinates, note that the x-axis of the computer screen
is *horizontal* with the zero position on the left hand side of the screen. The
y-axis is *vertical* with the zero position at the **top** of the screen.
Increasing positive y-coordinates correspond to the downward direction on the
computer screen. Note that this is **different** from the way you may have learned
about x and y coordinates in math class!
""" |
"""
REACH is a biology-oriented machine reading system which uses a cascade of
grammars to extract biological mechanisms from free text.
To cover a wide range of use cases and scenarios, there are currently 4
different ways in which INDRA can use REACH.
1. INDRA communicating with a locally running REACH Server (:py:mod:`indra.sources.reach.api`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Setup and usage: Follow standard instructions to install
`SBT <http://www.scala-sbt.org>`_. Then clone REACH and run the REACH web server.
.. code-block:: bash
git clone https://github.com/clulab/reach.git
cd reach
sbt 'run-main org.clulab.reach.export.server.ApiServer'
Then read text by specifying the url parameter when using
`indra.sources.reach.process_text`.
.. code-block:: python
from indra.sources import reach
rp = reach.process_text('MEK binds ERK', url=reach.local_text_url)
It is also possible to read NXML (string or file) and process the text of a
paper given its PMC ID or PubMed ID using other API methods in
:py:mod:`indra.sources.reach.api`. Note that `reach.local_nxml_url` needs
to be used as `url` in case NXML content is being read.
Advantages:
* Does not require setting up the pyjnius Python-Java bridge.
* Does not require assembling a REACH JAR file.
* Allows local control of the REACH version and configuration used to run the
service.
* REACH is running in a separate process and therefore does not need to
be initialized if a new Python session is started.
Disadvantages:
* First request might be time-consuming as REACH is loading additional
resources.
* Only endpoints exposed by the REACH web server are available, i.e., no
full object-level access to REACH components.
2. INDRA communicating with the UA REACH Server (:py:mod:`indra.sources.reach.api`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Setup and usage: Does not require any additional setup after installing INDRA.
Read text using the default values for `offline` and `url` parameters.
.. code-block:: python
from indra.sources import reach
rp = reach.process_text('MEK binds ERK')
It is also possible to read NXML (string or file) and process the content of
a paper given its PMC ID or PubMed ID using other functions in
:py:mod:`indra.sources.reach.api`.
Advantages:
* Does not require setting up the pyjnius Python-Java bridge.
* Does not require assembling a REACH JAR file or installing REACH at all
locally.
* Suitable for initial prototyping or integration testing.
Disadvantages:
* Cannot handle high-throughput reading workflows due to limited server
resources.
* No control over which REACH version is used to run the service.
* Difficulties processing NXML-formatted text (request times out) have been
observed in the past.
3. INDRA using a REACH JAR through a Python-Java bridge (:py:mod:`indra.sources.reach.reader`)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Setup and usage:
Follow standard instructions for installing SBT. First, the REACH system and
its dependencies need to be packaged as a fat JAR:
.. code-block:: bash
git clone https://github.com/clulab/reach.git
cd reach
sbt assembly
This creates a JAR file in reach/target/scala[version]/reach-[version].jar.
Set the absolute path to this file on the REACHPATH environmental variable
and then append REACHPATH to the CLASSPATH environmental variable (entries
are separated by colons).
The `pyjnius` package needs to be set up and be operational. For more details,
see :ref:`pyjniussetup` setup instructions in the documentation.
Then, reading can be done using the `indra.sources.reach.process_text`
function with the offline option.
.. code-block:: python
from indra.sources import reach
rp = reach.process_text('MEK binds ERK', offline=True)
Other functions in :py:mod:`indra.sources.reach.api` can also be used
with the offline option to invoke local, JAR-based reading.
Advantages:
* Doesn't require running a separate process for REACH and INDRA.
* Having a single REACH JAR file makes this solution easily portable.
* Through jnius, all classes in REACH become available for programmatic
access.
Disadvantages:
* Requires configuring pyjnius, which is often difficult (e.g., on Windows);
therefore, this usage mode is generally not recommended.
* The ReachReader instance needs to be instantiated every time a new INDRA
session is started, which is time-consuming.
4. Use REACH separately to produce output files and then process those with INDRA
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this usage mode REACH is not directly invoked by INDRA. Rather, REACH
is set up and run independently of INDRA to produce output files
for a set of text content. For more information on running REACH on a set of
text or NXML files, see the REACH documentation at:
https://github.com/clulab/reach. Note that INDRA uses the `fries` output format
produced by REACH.
Once REACH output has been obtained in the `fries` JSON format, one can
use :py:func:`indra.sources.reach.api.process_json_file`
in INDRA to process each JSON file.
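For example, a minimal sketch (the output file name below is a placeholder):
.. code-block:: python
    from indra.sources.reach.api import process_json_file
    rp = process_json_file('reach_output.json')
    statements = rp.statements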
""" |
#!/usr/bin/python
# -*- encoding: utf-8; py-indent-offset: 4 -*-
# +------------------------------------------------------------------+
# | ____ _ _ __ __ _ __ |
# | / ___| |__ ___ ___| | __ | \/ | |/ / |
# | | | | '_ \ / _ \/ __| |/ / | |\/| | ' / |
# | | |___| | | | __/ (__| < | | | | . \ |
# | \____|_| |_|\___|\___|_|\_\___|_| |_|_|\_\ |
# | |
# | Copyright NAME 2014 EMAIL |
# +------------------------------------------------------------------+
#
# This file is part of Check_MK.
# The official homepage is at http://mathias-kettner.de/check_mk.
#
# check_mk is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation in version 2. check_mk is distributed
# in the hope that it will be useful, but WITHOUT ANY WARRANTY; with-
# out even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE. See the GNU General Public License for more de-
# tails. You should have received a copy of the GNU General Public
# License along with check_mk; see the file COPYING. If not, write
# to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
# Boston, MA 02110-1301 USA.
# .--README--------------------------------------------------------------.
# | ____ _ |
# | | _ \ ___ __ _ __| | _ __ ___ ___ |
# | | |_) / _ \/ _` |/ _` | | '_ ` _ \ / _ \ |
# | | _ < __/ (_| | (_| | | | | | | | __/ |
# | |_| \_\___|\__,_|\__,_| |_| |_| |_|\___| |
# | |
# +----------------------------------------------------------------------+
# | A few words about the implementation details of WATO. |
# `----------------------------------------------------------------------'
# [1] Files and Folders
# WATO organizes hosts in folders. A WATO folder is represented by an
# OS directory. If the folder contains host definitions, a file named
# "hosts.mk" is kept in that directory.
# The directory hierarchy of WATO is rooted at etc/check_mk/conf.d/wato.
# All files in and below that directory are kept by WATO. WATO does not
# touch any other files or directories in conf.d.
# A *path* in WATO means a relative folder path to that directory. The
# root folder has the empty path (""). Folders are separated by slashes.
# Each directory contains a file ".wato" which keeps information needed
# by WATO but not by Check_MK itself.
# [2] Global variables
# Yes. Global variables are bad. But we use them anyway. Please go away
# if you do not like this. Global variables - if properly used - can make
# implementation a lot easier and clearer. Of course we could pack everything
# into a class and use class variables. But what's the difference?
#
# g_folders -> A dictionary of all folders; the keys are their paths,
# the values are dictionaries. Keys beginning
# with a period are not persisted. Important keys are:
#
# ".folders" -> List of subfolders. This key is present even for leaf folders.
# ".parent" -> parent folder (not name, but Python reference!). Missing for the root folder
# ".name" -> OS name of the folder
# ".path" -> absolute path of folder
# ".hosts" -> Hosts in that folder. This key is present even if there are no hosts.
# If the hosts in the folder have not been loaded yet, then the key
# is missing.
# "title" -> Title/alias of that folder
# "attributes" -> Attributes to be inherited to subfolders and hosts
# "num_hosts" -> number of hosts in this folder (this is identical to
# to len() of the entry ".hosts" but is persisted for
# performance issues.
# ".total_hosts" -> recursive number of hosts, computed on demand by
# num_hosts_in()
# ".siteid" -> This attribute is mandatory for host objects and optional for folder
# objects. In case of hosts and single WATO setup it is always None.
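#
# A hypothetical example of a single g_folders entry (all values are purely
# illustrative):
#
# g_folders["servers/linux"] = {
#     ".name"      : "linux",
#     ".path"      : "servers/linux",
#     ".folders"   : [],
#     ".parent"    : g_folders["servers"],
#     "title"      : "Linux servers",
#     "attributes" : {},
#     "num_hosts"  : 2,
# }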
#
#
# g_folder -> The folder object representing the folder the user is
# currently operating in.
#
# g_root_folder -> The folder object representing the root folder
#
# At the beginning of each page, those three global variables are
# set. All folders are loaded, but only their meta-data, not the
# actual Check_MK files (hosts.mk). WATO is designed for managing
# 100,000 hosts, so operations on all hosts might take a while...
#
# g_configvars -> dictionary of variables in main.mk that can be configured
# via WATO.
#
# g_html_head_open -> True, if the HTML head has already been rendered.
#.
# .--Init----------------------------------------------------------------.
# | ___ _ _ |
# | |_ _|_ __ (_) |_ |
# | | || '_ \| | __| |
# | | || | | | | |_ |
# | |___|_| |_|_|\__| |
# | |
# +----------------------------------------------------------------------+
# | Importing, Permissions, global variables |
# `----------------------------------------------------------------------'
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
# SKR03
# =====
# This module provides a German chart of accounts based on the SKR03.
# According to the current settings, the company is not subject to VAT.
# This default is very easy to change and, as a rule, requires an initial
# assignment of tax accounts to products and/or general ledger accounts,
# or to partners.
# The sales taxes (full rate, reduced rate, and tax-exempt) should be
# stored in the product master data (depending on the applicable tax
# regulations). The assignment is made on the Financial Accounting tab
# (category: sales tax / Umsatzsteuer).
# The input taxes (full rate, reduced rate, and tax-exempt) should
# likewise be stored in the product master data (depending on the
# applicable tax regulations). The assignment is made on the Financial
# Accounting tab (category: input tax / Vorsteuer).
# The assignment of taxes for imports from and exports to EU countries,
# as well as for purchases from and sales to third countries, should be
# stored on the partner (supplier/customer), depending on the country of
# origin of the supplier/customer. The assignment on the customer takes
# precedence over the assignment on products and overrides it in
# individual cases.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g., mapping
# 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU') so that this
# mapping can be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the input tax base amount (e.g., input tax base amount, full rate 19%).
# The tax amount appears under the category 'Vorsteuern' (input taxes,
# e.g., input tax 19%). Multidimensional hierarchies allow different
# positions to be aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the sales tax base amount (e.g., sales tax base amount, full rate 19%).
# The tax amount appears under the category 'Umsatzsteuer' (sales tax,
# e.g., sales tax 19%). Multidimensional hierarchies allow different
# positions to be aggregated.
# The assigned tax codes can be reviewed at the level of each individual
# invoice (incoming and outgoing) and adjusted there if necessary.
# Credit notes result in a correction (offsetting entry) of the tax
# posting, in the form of a mirror-image posting.
# SKR04
# =====
# This module provides a German chart of accounts based on the SKR04.
# According to the current settings, the company is not subject to VAT,
# i.e., by default there is no assignment of products and general ledger
# accounts to tax keys.
# This default is very easy to change and, as a rule, requires an initial
# assignment of tax keys to products and/or general ledger accounts,
# or to partners.
# The sales taxes (full rate, reduced rate, and tax-exempt) should be
# stored in the product master data (depending on the applicable tax
# regulations). The assignment is made on the Financial Accounting tab
# (category: sales tax / Umsatzsteuer).
# The input taxes (full rate, reduced rate, and tax-exempt) should
# likewise be stored in the product master data (depending on the
# applicable tax regulations). The assignment is made on the Financial
# Accounting tab (category: input tax / Vorsteuer).
# The assignment of taxes for imports from and exports to EU countries,
# as well as for purchases from and sales to third countries, should be
# stored on the partner (supplier/customer), depending on the country of
# origin of the supplier/customer. The assignment on the customer takes
# precedence over the assignment on products and overrides it in
# individual cases.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g., mapping
# 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU') so that this
# mapping can be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the input tax base amount (e.g., input tax base amount, full rate 19%).
# The tax amount appears under the category 'Vorsteuern' (input taxes,
# e.g., input tax 19%). Multidimensional hierarchies allow different
# positions to be aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the sales tax base amount (e.g., sales tax base amount, full rate 19%).
# The tax amount appears under the category 'Umsatzsteuer' (sales tax,
# e.g., sales tax 19%). Multidimensional hierarchies allow different
# positions to be aggregated.
# The assigned tax codes can be reviewed at the level of each individual
# invoice (incoming and outgoing) and adjusted there if necessary.
# Credit notes result in a correction (offsetting entry) of the tax
# posting, in the form of a mirror-image posting.
|
"""
This module contains the machinery handling assumptions.
All symbolic objects have assumption attributes that can be accessed via
.is_<assumption name> attribute.
Assumptions determine certain properties of symbolic objects and can
have 3 possible values: True, False, None. True is returned if the
object has the property and False is returned if it doesn't or can't
(i.e. doesn't make sense):
>>> from sympy import I
>>> I.is_algebraic
True
>>> I.is_real
False
>>> I.is_prime
False
When the property cannot be determined (or when a method is not
implemented) None will be returned, e.g. a generic symbol, x, may or
may not be positive so a value of None is returned for x.is_positive.
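For example:
>>> from sympy import Symbol
>>> x = Symbol('x')
>>> print(x.is_positive)
None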
When no assumptions are specified, symbolic values are taken to lie in the
largest set consistent with the given context. For example, a symbol declared
to be an integer is thereby also real, complex, etc.
Here follows a list of possible assumption names:
.. glossary::
commutative
object commutes with any other object with
respect to multiplication operation.
complex
object can have only values from the set
of complex numbers.
imaginary
object value is a number that can be written as a real
number multiplied by the imaginary unit ``I``. See
[3]_. Please note, that ``0`` is not considered to be an
imaginary number, see
`issue #7649 <https://github.com/sympy/sympy/issues/7649>`_.
real
object can have only values from the set
of real numbers.
integer
object can have only values from the set
of integers.
odd
even
object can have only values from the set of
odd (even) integers [2]_.
prime
object is a natural number greater than ``1`` that has
no positive divisors other than ``1`` and itself. See [6]_.
composite
object is a positive integer that has at least one positive
divisor other than ``1`` or the number itself. See [4]_.
zero
object has the value of ``0``.
nonzero
object is a real number that is not zero.
rational
object can have only values from the set
of rationals.
algebraic
object can have only values from the set
of algebraic numbers [11]_.
transcendental
object can have only values from the set
of transcendental numbers [10]_.
irrational
object value cannot be represented exactly by Rational, see [5]_.
finite
infinite
object's absolute value is bounded (respectively, can be arbitrarily large).
See [7]_, [8]_, [9]_.
negative
nonnegative
object can have only negative (nonnegative)
values [1]_.
positive
nonpositive
object can have only positive (only
nonpositive) values.
hermitian
antihermitian
object belongs to the field of hermitian
(antihermitian) operators.
Examples
========
>>> from sympy import Symbol
>>> x = Symbol('x', real=True); x
x
>>> x.is_real
True
>>> x.is_complex
True
See Also
========
:py:class:`sympy.core.numbers.ImaginaryUnit`
:py:class:`sympy.core.numbers.Zero`
:py:class:`sympy.core.numbers.One`
Notes
=====
Assumption values are stored in the obj._assumptions dictionary, are
returned by getter methods (with property decorators), or are attributes of
objects/classes.
References
==========
.. [1] https://en.wikipedia.org/wiki/Negative_number
.. [2] https://en.wikipedia.org/wiki/Parity_%28mathematics%29
.. [3] https://en.wikipedia.org/wiki/Imaginary_number
.. [4] https://en.wikipedia.org/wiki/Composite_number
.. [5] https://en.wikipedia.org/wiki/Irrational_number
.. [6] https://en.wikipedia.org/wiki/Prime_number
.. [7] https://en.wikipedia.org/wiki/Finite
.. [8] https://docs.python.org/3/library/math.html#math.isfinite
.. [9] http://docs.scipy.org/doc/numpy/reference/generated/numpy.isfinite.html
.. [10] https://en.wikipedia.org/wiki/Transcendental_number
.. [11] https://en.wikipedia.org/wiki/Algebraic_number
""" |
"""
========
Glossary
========
.. glossary::
along an axis
Axes are defined for arrays with more than one dimension. A
2-dimensional array has two corresponding axes: the first running
vertically downwards across rows (axis 0), and the second running
horizontally across columns (axis 1).
Many operations can take place along one of these axes. For example,
we can sum each row of an array, in which case we operate along
columns, or axis 1::
>>> x = np.arange(12).reshape((3,4))
>>> x
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> x.sum(axis=1)
array([ 6, 22, 38])
array
A homogeneous container of numerical elements. Each element in the
array occupies a fixed amount of memory (hence homogeneous), and
can be a numerical element of a single type (such as float, int
or complex) or a combination (such as ``(float, int, float)``). Each
array has an associated data-type (or ``dtype``), which describes
the numerical type of its elements::
>>> x = np.array([1, 2, 3], float)
>>> x
array([ 1., 2., 3.])
>>> x.dtype # floating point number, 64 bits of memory per element
dtype('float64')
# More complicated data type: each array element is a combination of
# an integer and a floating point number
>>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)])
array([(1, 2.0), (3, 4.0)],
dtype=[('x', '<i4'), ('y', '<f8')])
Fast element-wise operations, called `ufuncs`_, operate on arrays.
array_like
Any sequence that can be interpreted as an ndarray. This includes
nested lists, tuples, scalars and existing arrays.
attribute
A property of an object that can be accessed using ``obj.attribute``,
e.g., ``shape`` is an attribute of an array::
>>> x = np.array([1, 2, 3])
>>> x.shape
(3,)
BLAS
`Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_
broadcast
NumPy can do operations on arrays whose shapes are mismatched::
>>> x = np.array([1, 2])
>>> y = np.array([[3], [4]])
>>> x
array([1, 2])
>>> y
array([[3],
[4]])
>>> x + y
array([[4, 5],
[5, 6]])
See `doc.broadcasting`_ for more information.
C order
See `row-major`
column-major
A way to represent items in an N-dimensional array in the 1-dimensional
computer memory. In column-major order, the leftmost index "varies the
fastest": for example the array::
[[1, 2, 3],
[4, 5, 6]]
is represented in the column-major order as::
[1, 4, 2, 5, 3, 6]
Column-major order is also known as the Fortran order, as the Fortran
programming language uses it.
decorator
An operator that transforms a function. For example, a ``log``
decorator may be defined to print debugging information upon
function execution::
>>> def log(f):
... def new_logging_func(*args, **kwargs):
... print "Logging call with parameters:", args, kwargs
... return f(*args, **kwargs)
...
... return new_logging_func
Now, when we define a function, we can "decorate" it using ``log``::
>>> @log
... def add(a, b):
... return a + b
Calling ``add`` then yields:
>>> add(1, 2)
Logging call with parameters: (1, 2) {}
3
dictionary
Resembling a language dictionary, which provides a mapping between
words and descriptions thereof, a Python dictionary is a mapping
between two objects::
>>> x = {1: 'one', 'two': [1, 2]}
Here, `x` is a dictionary mapping keys to values, in this case
the integer 1 to the string "one", and the string "two" to
the list ``[1, 2]``. The values may be accessed using their
corresponding keys::
>>> x[1]
'one'
>>> x['two']
[1, 2]
Note that dictionaries are not stored in any specific order. Also,
most mutable (see *immutable* below) objects, such as lists, may not
be used as keys.
For more information on dictionaries, read the
`Python tutorial <http://docs.python.org/tut>`_.
Fortran order
See `column-major`
flattened
Collapsed to a one-dimensional array. See `ndarray.flatten`_ for details.
immutable
An object that cannot be modified after execution is called
immutable. Two common examples are strings and tuples.
instance
A class definition gives the blueprint for constructing an object::
>>> class House(object):
... wall_colour = 'white'
Yet, we have to *build* a house before it exists::
>>> h = House() # build a house
Now, ``h`` is called a ``House`` instance. An instance is therefore
a specific realisation of a class.
iterable
A sequence that allows "walking" (iterating) over items, typically
using a loop such as::
>>> x = [1, 2, 3]
>>> [item**2 for item in x]
[1, 4, 9]
It is often used in combination with ``enumerate``::
>>> keys = ['a','b','c']
>>> for n, k in enumerate(keys):
... print "Key %d: %s" % (n, k)
...
Key 0: a
Key 1: b
Key 2: c
list
A Python container that can hold any number of objects or items.
The items do not have to be of the same type, and can even be
lists themselves::
>>> x = [2, 2.0, "two", [2, 2.0]]
The list `x` contains 4 items, each of which can be accessed individually::
>>> x[2] # the string 'two'
'two'
>>> x[3] # a list, containing an integer 2 and a float 2.0
[2, 2.0]
It is also possible to select more than one item at a time,
using *slicing*::
>>> x[0:2] # or, equivalently, x[:2]
[2, 2.0]
In code, arrays are often conveniently expressed as nested lists::
>>> np.array([[1, 2], [3, 4]])
array([[1, 2],
[3, 4]])
For more information, read the section on lists in the `Python
tutorial <http://docs.python.org/tut>`_. For a mapping
type (key-value), see *dictionary*.
mask
A boolean array, used to select only certain elements for an operation::
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
>>> mask = (x > 2)
>>> mask
array([False, False, False, True, True], dtype=bool)
>>> x[mask] = -1
>>> x
array([ 0, 1, 2, -1, -1])
masked array
Array that suppresses values indicated by a mask::
>>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True])
>>> x
masked_array(data = [-- 2.0 --],
mask = [ True False True],
fill_value = 1e+20)
<BLANKLINE>
>>> x + [1, 2, 3]
masked_array(data = [-- 4.0 --],
mask = [ True False True],
fill_value = 1e+20)
<BLANKLINE>
Masked arrays are often used when operating on arrays containing
missing or invalid entries.
matrix
A 2-dimensional ndarray that preserves its two-dimensional nature
throughout operations. It has certain special operations, such as ``*``
(matrix multiplication) and ``**`` (matrix power), defined::
>>> x = np.mat([[1, 2], [3, 4]])
>>> x
matrix([[1, 2],
[3, 4]])
>>> x**2
matrix([[ 7, 10],
[15, 22]])
method
A function associated with an object. For example, each ndarray has a
method called ``repeat``::
>>> x = np.array([1, 2, 3])
>>> x.repeat(2)
array([1, 1, 2, 2, 3, 3])
ndarray
See *array*.
reference
If ``a`` is a reference to ``b``, then ``(a is b) == True``. Therefore,
``a`` and ``b`` are different names for the same Python object.
row-major
A way to represent items in an N-dimensional array in the 1-dimensional
computer memory. In row-major order, the rightmost index "varies
the fastest": for example the array::
[[1, 2, 3],
[4, 5, 6]]
is represented in the row-major order as::
[1, 2, 3, 4, 5, 6]
Row-major order is also known as the C order, as the C programming
language uses it. New Numpy arrays are by default in row-major order.
self
Often seen in method signatures, ``self`` refers to the instance
of the associated class. For example:
>>> class Paintbrush(object):
... color = 'blue'
...
... def paint(self):
... print "Painting the city %s!" % self.color
...
>>> p = Paintbrush()
>>> p.color = 'red'
>>> p.paint() # self refers to 'p'
Painting the city red!
slice
Used to select only certain elements from a sequence::
>>> x = list(range(5))
>>> x
[0, 1, 2, 3, 4]
>>> x[1:3] # slice from 1 to 3 (excluding 3 itself)
[1, 2]
>>> x[1:5:2] # slice from 1 to 5, but skipping every second element
[1, 3]
>>> x[::-1] # slice a sequence in reverse
[4, 3, 2, 1, 0]
Arrays may have more than one dimension, each of which can be sliced
individually::
>>> x = np.array([[1, 2], [3, 4]])
>>> x
array([[1, 2],
[3, 4]])
>>> x[:, 1]
array([2, 4])
tuple
A sequence that may contain a variable number of types of any
kind. A tuple is immutable, i.e., once constructed it cannot be
changed. Similar to a list, it can be indexed and sliced::
>>> x = (1, 'one', [1, 2])
>>> x
(1, 'one', [1, 2])
>>> x[0]
1
>>> x[:2]
(1, 'one')
A useful concept is "tuple unpacking", which allows variables to
be assigned to the contents of a tuple::
>>> x, y = (1, 2)
>>> x, y = 1, 2
This is often used when a function returns multiple values:
>>> def return_many():
... return 1, 'alpha', None
>>> a, b, c = return_many()
>>> a, b, c
(1, 'alpha', None)
>>> a
1
>>> b
'alpha'
ufunc
Universal function. A fast element-wise array operation. Examples include
``add``, ``sin`` and ``logical_or``.
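For example::
>>> np.add([1, 2], [3, 4])
array([4, 6])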
view
An array that does not own its data, but refers to another array's
data instead. For example, we may create a view that only shows
every second element of another array::
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
>>> y = x[::2]
>>> y
array([0, 2, 4])
>>> x[0] = 3 # changing x changes y as well, since y is a view on x
>>> y
array([3, 2, 4])
wrapper
Python is a high-level (highly abstracted, or English-like) language.
This abstraction comes at a price in execution speed, and sometimes
it becomes necessary to use lower level languages to do fast
computations. A wrapper is code that provides a bridge between the
high- and low-level languages, allowing, e.g., Python to execute
code written in C or Fortran.
Examples include ctypes, SWIG and Cython (which wraps C and C++)
and f2py (which wraps Fortran).
""" |
"""Stuff to parse AIFF-C and AIFF files.
Unless explicitly stated otherwise, the description below is true
both for AIFF-C files and AIFF files.
An AIFF-C file has the following structure.
+-----------------+
| FORM |
+-----------------+
| <size> |
+----+------------+
| | AIFC |
| +------------+
| | <chunks> |
| | . |
| | . |
| | . |
+----+------------+
An AIFF file has the string "AIFF" instead of "AIFC".
A chunk consists of an identifier (4 bytes) followed by a size (4 bytes,
big endian order), followed by the data. The size field does not include
the size of the 8 byte header.
The following chunk types are recognized.
FVER
<version number of AIFF-C defining document> (AIFF-C only).
MARK
<# of markers> (2 bytes)
list of markers:
<marker ID> (2 bytes, must be > 0)
<position> (4 bytes)
<marker name> ("pstring")
COMM
<# of channels> (2 bytes)
<# of sound frames> (4 bytes)
<size of the samples> (2 bytes)
<sampling frequency> (10 bytes, IEEE 80-bit extended
floating point)
in AIFF-C files only:
<compression type> (4 bytes)
<human-readable version of compression type> ("pstring")
SSND
<offset> (4 bytes, not used by this program)
<blocksize> (4 bytes, not used by this program)
<sound data>
A pstring consists of 1 byte length, a string of characters, and 0 or 1
byte pad to make the total length even.
Usage.
Reading AIFF files:
f = aifc.open(file, 'r')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods read(), seek(), and close().
In some types of audio files, if the setpos() method is not used,
the seek() method is not necessary.
This returns an instance of a class with the following public methods:
getnchannels() -- returns number of audio channels (1 for
mono, 2 for stereo)
getsampwidth() -- returns sample width in bytes
getframerate() -- returns sampling frequency
getnframes() -- returns number of audio frames
getcomptype() -- returns compression type ('NONE' for AIFF files)
getcompname() -- returns human-readable version of
compression type ('not compressed' for AIFF files)
getparams() -- returns a tuple consisting of all of the
above in the above order
getmarkers() -- get the list of marks in the audio file or None
if there are no marks
getmark(id) -- get mark with the specified id (raises an error
if the mark does not exist)
readframes(n) -- returns at most n frames of audio
rewind() -- rewind to the beginning of the audio stream
setpos(pos) -- seek to the specified position
tell() -- return the current position
close() -- close the instance (make it unusable)
The position returned by tell(), the position given to setpos() and
the position of marks are all compatible and have nothing to do with
the actual position in the file.
The close() method is called automatically when the class instance
is destroyed.
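A minimal reading sketch ('sound.aiff' is a placeholder file name):
    import aifc
    f = aifc.open('sound.aiff', 'r')
    print(f.getnchannels(), f.getsampwidth(), f.getframerate())
    data = f.readframes(1024)   # read at most 1024 audio frames
    f.close()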
Writing AIFF files:
f = aifc.open(file, 'w')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods write(), tell(), seek(), and
close().
This returns an instance of a class with the following public methods:
aiff() -- create an AIFF file (AIFF-C default)
aifc() -- create an AIFF-C file
setnchannels(n) -- set the number of channels
setsampwidth(n) -- set the sample width
setframerate(n) -- set the frame rate
setnframes(n) -- set the number of frames
setcomptype(type, name)
-- set the compression type and the
human-readable compression type
setparams(tuple)
-- set all parameters at once
setmark(id, pos, name)
-- add specified mark to the list of marks
tell() -- return current position in output file (useful
in combination with setmark())
writeframesraw(data)
-- write audio frames without patching up the
file header
writeframes(data)
-- write audio frames and patch up the file header
close() -- patch up the file header and close the
output file
You should set the parameters before the first writeframesraw or
writeframes. The total number of frames does not need to be set,
but when it is set to the correct value, the header does not have to
be patched up.
It is best to first set all parameters, except possibly the
compression type, and then write audio frames using writeframesraw.
When all frames have been written, either call writeframes(b'') or
close() to patch up the sizes in the header.
Marks can be added anytime. If there are any marks, you must call
close() after all frames have been written.
The close() method is called automatically when the class instance
is destroyed.
When a file is opened with the extension '.aiff', an AIFF file is
written, otherwise an AIFF-C file is written. This default can be
changed by calling aiff() or aifc() before the first writeframes or
writeframesraw.
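A minimal writing sketch (the file name and parameter values are placeholders):
    import aifc
    g = aifc.open('out.aifc', 'w')
    g.setnchannels(1)        # mono
    g.setsampwidth(2)        # 2 bytes (16 bits) per sample
    g.setframerate(44100)    # frames per second
    g.writeframes(b'\x00\x00' * 44100)   # one second of silence
    g.close()                # patches up the header sizes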
""" |
"""
Wrappers to LAPACK library
==========================
flapack -- wrappers for Fortran [*] LAPACK routines
clapack -- wrappers for ATLAS LAPACK routines
calc_lwork -- calculate optimal lwork parameters
get_lapack_funcs -- query for wrapper functions.
[*] If ATLAS libraries are available, then the Fortran routines
actually use ATLAS routines and should perform as well
as the ATLAS routines.
Module flapack
++++++++++++++
In the following, all function names are shown without
the type prefix (s,d,c,z). Optimal values for lwork can
be computed using the calc_lwork module.
Linear Equations
----------------
Drivers::
lu,piv,x,info = gesv(a,b,overwrite_a=0,overwrite_b=0)
lub,piv,x,info = gbsv(kl,ku,ab,b,overwrite_ab=0,overwrite_b=0)
c,x,info = posv(a,b,lower=0,overwrite_a=0,overwrite_b=0)
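For example, a hedged sketch using the gesv driver (modern SciPy exposes
these wrappers via scipy.linalg.lapack; the matrix values are arbitrary)::
    import numpy as np
    from scipy.linalg import lapack
    a = np.array([[3., 1.], [1., 2.]])
    b = np.array([9., 8.])
    lu, piv, x, info = lapack.dgesv(a, b)   # solves a @ x = b; info == 0 on success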
Computational routines::
lu,piv,info = getrf(a,overwrite_a=0)
x,info = getrs(lu,piv,b,trans=0,overwrite_b=0)
inv_a,info = getri(lu,piv,lwork=min_lwork,overwrite_lu=0)
c,info = potrf(a,lower=0,clean=1,overwrite_a=0)
x,info = potrs(c,b,lower=0,overwrite_b=0)
inv_a,info = potri(c,lower=0,overwrite_c=0)
inv_c,info = trtri(c,lower=0,unitdiag=0,overwrite_c=0)
Linear Least Squares (LLS) Problems
-----------------------------------
Drivers::
v,x,s,rank,info = gelss(a,b,cond=-1.0,lwork=min_lwork,overwrite_a=0,overwrite_b=0)
Computational routines::
qr,tau,info = geqrf(a,lwork=min_lwork,overwrite_a=0)
q,info = orgqr|ungqr(qr,tau,lwork=min_lwork,overwrite_qr=0,overwrite_tau=1)
Generalized Linear Least Squares (LSE and GLM) Problems
-------------------------------------------------------
Standard Eigenvalue and Singular Value Problems
-----------------------------------------------
Drivers::
w,v,info = syev|heev(a,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0)
w,v,info = syevd|heevd(a,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0)
w,v,info = syevr|heevr(a,compute_v=1,lower=0,vrange=,irange=,atol=-1.0,lwork=min_lwork,overwrite_a=0)
t,sdim,(wr,wi|w),vs,info = gees(select,a,compute_v=1,sort_t=0,lwork=min_lwork,select_extra_args=(),overwrite_a=0)
wr,(wi,vl|w),vr,info = geev(a,compute_vl=1,compute_vr=1,lwork=min_lwork,overwrite_a=0)
u,s,vt,info = gesdd(a,compute_uv=1,lwork=min_lwork,overwrite_a=0)
Computational routines::
ht,tau,info = gehrd(a,lo=0,hi=n-1,lwork=min_lwork,overwrite_a=0)
ba,lo,hi,pivscale,info = gebal(a,scale=0,permute=0,overwrite_a=0)
Generalized Eigenvalue and Singular Value Problems
--------------------------------------------------
Drivers::
w,v,info = sygv|hegv(a,b,itype=1,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0,overwrite_b=0)
w,v,info = sygvd|hegvd(a,b,itype=1,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0,overwrite_b=0)
(alphar,alphai|alpha),beta,vl,vr,info = ggev(a,b,compute_vl=1,compute_vr=1,lwork=min_lwork,overwrite_a=0,overwrite_b=0)
Auxiliary routines
------------------
a,info = lauum(c,lower=0,overwrite_c=0)
a = laswp(a,piv,k1=0,k2=len(piv)-1,off=0,inc=1,overwrite_a=0)
Module clapack
++++++++++++++
Linear Equations
----------------
Drivers::
lu,piv,x,info = gesv(a,b,rowmajor=1,overwrite_a=0,overwrite_b=0)
c,x,info = posv(a,b,lower=0,rowmajor=1,overwrite_a=0,overwrite_b=0)
Computational routines::
lu,piv,info = getrf(a,rowmajor=1,overwrite_a=0)
x,info = getrs(lu,piv,b,trans=0,rowmajor=1,overwrite_b=0)
inv_a,info = getri(lu,piv,rowmajor=1,overwrite_lu=0)
c,info = potrf(a,lower=0,clean=1,rowmajor=1,overwrite_a=0)
x,info = potrs(c,b,lower=0,rowmajor=1,overwrite_b=0)
inv_a,info = potri(c,lower=0,rowmajor=1,overwrite_c=0)
inv_c,info = trtri(c,lower=0,unitdiag=0,rowmajor=1,overwrite_c=0)
Auxiliary routines
------------------
a,info = lauum(c,lower=0,rowmajor=1,overwrite_c=0)
Module calc_lwork
+++++++++++++++++
Optimal lwork is maxwrk. Default is minwrk.
minwrk,maxwrk = gehrd(prefix,n,lo=0,hi=n-1)
minwrk,maxwrk = gesdd(prefix,m,n,compute_uv=1)
minwrk,maxwrk = gelss(prefix,m,n,nrhs)
minwrk,maxwrk = getri(prefix,n)
minwrk,maxwrk = geev(prefix,n,compute_vl=1,compute_vr=1)
minwrk,maxwrk = heev(prefix,n,lower=0)
minwrk,maxwrk = syev(prefix,n,lower=0)
minwrk,maxwrk = gees(prefix,n,compute_v=1)
minwrk,maxwrk = geqrf(prefix,m,n)
minwrk,maxwrk = gqr(prefix,m,n)
""" |
"""
Define a simple format for saving numpy arrays to disk with the full
information about them.
The ``.npy`` format is the standard binary file format in NumPy for
persisting a *single* arbitrary NumPy array on disk. The format stores all
of the shape and dtype information necessary to reconstruct the array
correctly even on another machine with a different architecture.
The format is designed to be as simple as possible while achieving
its limited goals.
The ``.npz`` format is the standard format for persisting *multiple* NumPy
arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy``
files, one for each array.
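In practice these files are usually written and read with ``np.save``,
``np.savez`` and ``np.load``; a short sketch (the file names are
placeholders)::
    import numpy as np
    a = np.arange(6).reshape(2, 3)
    np.save('a.npy', a)                # one array per .npy file
    np.savez('ab.npz', a=a, b=a.T)     # several arrays in one .npz archive
    a2 = np.load('a.npy')
    b2 = np.load('ab.npz')['b']        # dict-like access into the archive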
Capabilities
------------
- Can represent all NumPy arrays including nested record arrays and
object arrays.
- Represents the data in its native binary form.
- Supports Fortran-contiguous arrays directly.
- Stores all of the necessary information to reconstruct the array
including shape and dtype on a machine of a different
architecture. Both little-endian and big-endian arrays are
supported, and a file with little-endian numbers will yield
a little-endian array on any machine reading the file. The
types are described in terms of their actual sizes. For example,
if a machine with a 64-bit C "long int" writes out an array with
"long ints", a reading machine with 32-bit C "long ints" will yield
an array with 64-bit integers.
- Is straightforward to reverse engineer. Datasets often live longer than
the programs that created them. A competent developer should be
able to create a solution in their preferred programming language to
read most ``.npy`` files that they have been given without much
documentation.
- Allows memory-mapping of the data. See `open_memmap`.
- Can be read from a filelike stream object instead of an actual file.
- Stores object arrays, i.e. arrays containing elements that are arbitrary
Python objects. Files with object arrays are not mmapable, but they can
still be read from and written to disk.
Limitations
-----------
- Arbitrary subclasses of numpy.ndarray are not completely preserved.
Subclasses will be accepted for writing, but only the array data will
be written out. A regular numpy.ndarray object will be created
upon reading the file.
.. warning::
Due to limitations in the interpretation of structured dtypes, dtypes
with fields with empty names will have the names replaced by 'f0', 'f1',
etc. Such arrays will not round-trip through the format entirely
accurately. The data is intact; only the field names will differ. We are
working on a fix for this. This fix will not require a change in the
file format. The arrays with such structures can still be saved and
restored, and the correct dtype may be restored by using the
``loadedarray.view(correct_dtype)`` method.
File extensions
---------------
We recommend using the ``.npy`` and ``.npz`` extensions for files saved
in this format. This is by no means a requirement; applications may wish
to use these file formats but use an extension specific to the
application. In the absence of an obvious alternative, however,
we suggest using ``.npy`` and ``.npz``.
Version numbering
-----------------
The version numbering of these formats is independent of NumPy version
numbering. If the format is upgraded, the code in `numpy.io` will still
be able to read and write Version 1.0 files.
Format Version 1.0
------------------
The first 6 bytes are a magic string: exactly ``\\x93NUMPY``.
The next 1 byte is an unsigned byte: the major version number of the file
format, e.g. ``\\x01``.
The next 1 byte is an unsigned byte: the minor version number of the file
format, e.g. ``\\x00``. Note: the version of the file format is not tied
to the version of the numpy package.
The next 2 bytes form a little-endian unsigned short int: the length of
the header data HEADER_LEN.
The next HEADER_LEN bytes form the header data describing the array's
format. It is an ASCII string which contains a Python literal expression
of a dictionary. It is terminated by a newline (``\\n``) and padded with
spaces (``\\x20``) to make the total length of
``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment
purposes.
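As an illustration, a hedged sketch that parses a version 1.0 header by hand
(the file name is a placeholder)::
    import struct
    with open('example.npy', 'rb') as f:
        assert f.read(6) == b'\\x93NUMPY'                # magic string
        major, minor = f.read(1)[0], f.read(1)[0]        # format version
        header_len = struct.unpack('<H', f.read(2))[0]   # little-endian ushort
        header = f.read(header_len).decode('ascii')      # the dict literal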
The dictionary contains three keys:
"descr" : dtype.descr
An object that can be passed as an argument to the `numpy.dtype`
constructor to create the array's dtype.
"fortran_order" : bool
Whether the array data is Fortran-contiguous or not. Since
Fortran-contiguous arrays are a common form of non-C-contiguity,
we allow them to be written directly to disk for efficiency.
"shape" : tuple of int
The shape of the array.
For repeatability and readability, the dictionary keys are sorted in
alphabetic order. This is for convenience only. A writer SHOULD implement
this if possible. A reader MUST NOT depend on this.
Following the header comes the array data. If the dtype contains Python
objects (i.e. ``dtype.hasobject is True``), then the data is a Python
pickle of the array. Otherwise the data is the contiguous (either C-
or Fortran-, depending on ``fortran_order``) bytes of the array.
Consumers can figure out the number of bytes by multiplying the number
of elements given by the shape (noting that ``shape=()`` means there is
1 element) by ``dtype.itemsize``.
Notes
-----
The ``.npy`` format, including reasons for creating it and a comparison of
alternatives, is described fully in the "npy-format" NEP.
""" |
# (c) 2014 NAME <EMAIL>
# https://github.com/timraasveld/ansible-string-split-filter/
# (c) 2014 NAME <EMAIL>
# https://debops.org/
# SPDX-License-Identifier: CC0-1.0
# License: CC0 1.0 Universal
#
# Statement of Purpose
#
# The laws of most jurisdictions throughout the world automatically confer
# exclusive Copyright and Related Rights (defined below) upon the creator and
# subsequent owner(s) (each and all, an "owner") of an original work of
# authorship and/or a database (each, a "Work").
#
# Certain owners wish to permanently relinquish those rights to a Work for the
# purpose of contributing to a commons of creative, cultural and scientific
# works ("Commons") that the public can reliably and without fear of later
# claims of infringement build upon, modify, incorporate in other works, reuse
# and redistribute as freely as possible in any form whatsoever and for any
# purposes, including without limitation commercial purposes. These owners may
# contribute to the Commons to promote the ideal of a free culture and the
# further production of creative, cultural and scientific works, or to gain
# reputation or greater distribution for their Work in part through the use and
# efforts of others.
#
# For these and/or other purposes and motivations, and without any expectation
# of additional consideration or compensation, the person associating CC0 with
# a Work (the "Affirmer"), to the extent that he or she is an owner of
# Copyright and Related Rights in the Work, voluntarily elects to apply CC0 to
# the Work and publicly distribute the Work under its terms, with knowledge of
# his or her Copyright and Related Rights in the Work and the meaning and
# intended legal effect of CC0 on those rights.
#
# 1. Copyright and Related Rights. A Work made available under CC0 may be
# protected by copyright and related or neighboring rights ("Copyright and
# Related Rights"). Copyright and Related Rights include, but are not limited
# to, the following:
#
# i. the right to reproduce, adapt, distribute, perform, display,
# communicate, and translate a Work;
#
# ii. moral rights retained by the original author(s) and/or performer(s);
#
# iii. publicity and privacy rights pertaining to a person's image or
# likeness depicted in a Work;
#
# iv. rights protecting against unfair competition in regards to a Work,
# subject to the limitations in paragraph 4(a), below;
#
# v. rights protecting the extraction, dissemination, use and reuse of data
# in a Work;
#
# vi. database rights (such as those arising under Directive 96/9/EC of the
# European Parliament and of the Council of 11 March 1996 on the legal
# protection of databases, and under any national implementation thereof,
# including any amended or successor version of such directive); and
#
# vii. other similar, equivalent or corresponding rights throughout the world
# based on applicable law or treaty, and any national implementations
# thereof.
#
# 2. Waiver. To the greatest extent permitted by, but not in contravention of,
# applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and
# unconditionally waives, abandons, and surrenders all of Affirmer's Copyright
# and Related Rights and associated claims and causes of action, whether now
# known or unknown (including existing as well as future claims and causes of
# action), in the Work (i) in all territories worldwide, (ii) for the maximum
# duration provided by applicable law or treaty (including future time
# extensions), (iii) in any current or future medium and for any number of
# copies, and (iv) for any purpose whatsoever, including without limitation
# commercial, advertising or promotional purposes (the "Waiver"). Affirmer
# makes the Waiver for the benefit of each member of the public at large and to
# the detriment of Affirmer's heirs and successors, fully intending that such
# Waiver shall not be subject to revocation, rescission, cancellation,
# termination, or any other legal or equitable action to disrupt the quiet
# enjoyment of the Work by the public as contemplated by Affirmer's express
# Statement of Purpose.
#
# 3. Public License Fallback. Should any part of the Waiver for any reason be
# judged legally invalid or ineffective under applicable law, then the Waiver
# shall be preserved to the maximum extent permitted taking into account
# Affirmer's express Statement of Purpose. In addition, to the extent the
# Waiver is so judged Affirmer hereby grants to each affected person
# a royalty-free, non transferable, non sublicensable, non exclusive,
# irrevocable and unconditional license to exercise Affirmer's Copyright and
# Related Rights in the Work (i) in all territories worldwide, (ii) for the
# maximum duration provided by applicable law or treaty (including future time
# extensions), (iii) in any current or future medium and for any number of
# copies, and (iv) for any purpose whatsoever, including without limitation
# commercial, advertising or promotional purposes (the "License"). The License
# shall be deemed effective as of the date CC0 was applied by Affirmer to the
# Work. Should any part of the License for any reason be judged legally invalid
# or ineffective under applicable law, such partial invalidity or
# ineffectiveness shall not invalidate the remainder of the License, and in
# such case Affirmer hereby affirms that he or she will not (i) exercise any of
# his or her remaining Copyright and Related Rights in the Work or (ii) assert
# any associated claims and causes of action with respect to the Work, in
# either case contrary to Affirmer's express Statement of Purpose.
#
# 4. Limitations and Disclaimers.
#
# a. No trademark or patent rights held by Affirmer are waived, abandoned,
# surrendered, licensed or otherwise affected by this document.
#
# b. Affirmer offers the Work as-is and makes no representations or
# warranties of any kind concerning the Work, express, implied, statutory or
# otherwise, including without limitation warranties of title,
# merchantability, fitness for a particular purpose, non infringement, or the
# absence of latent or other defects, accuracy, or the presence or absence of
# errors, whether or not discoverable, all to the greatest extent permissible
# under applicable law.
#
# c. Affirmer disclaims responsibility for clearing rights of other persons
# that may apply to the Work or any use thereof, including without limitation
# any person's Copyright and Related Rights in the Work. Further, Affirmer
# disclaims responsibility for obtaining any necessary consents, permissions
# or other rights required for any use of the Work.
#
# d. Affirmer understands and acknowledges that Creative Commons is not a
# party to this document and has no duty or obligation with respect to this
# CC0 or use of the Work.
#
# For more information, please see
# <http://creativecommons.org/publicdomain/zero/1.0/>
|
"""Provides an API for creation of custom ClauseElements and compilers.
Synopsis
========
Usage involves the creation of one or more :class:`~sqlalchemy.sql.expression.ClauseElement`
subclasses and one or more callables defining its compilation::
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ColumnClause
class MyColumn(ColumnClause):
pass
@compiles(MyColumn)
def compile_mycolumn(element, compiler, **kw):
return "[%s]" % element.name
Above, ``MyColumn`` extends :class:`~sqlalchemy.sql.expression.ColumnClause`,
the base expression element for named column objects. The ``compiles``
decorator registers itself with the ``MyColumn`` class so that it is invoked
when the object is compiled to a string::
from sqlalchemy import select
s = select([MyColumn('x'), MyColumn('y')])
print(str(s))
Produces::
SELECT [x], [y]
Dialect-specific compilation rules
==================================
Compilers can also be made dialect-specific. The appropriate compiler will be
invoked for the dialect in use::
from sqlalchemy.schema import DDLElement
class AlterColumn(DDLElement):
def __init__(self, column, cmd):
self.column = column
self.cmd = cmd
@compiles(AlterColumn)
def visit_alter_column(element, compiler, **kw):
return "ALTER COLUMN %s ..." % element.column.name
@compiles(AlterColumn, 'postgresql')
def visit_alter_column(element, compiler, **kw):
return "ALTER TABLE %s ALTER COLUMN %s ..." % (element.table.name, element.column.name)
The second ``visit_alter_column`` will be invoked when any ``postgresql`` dialect is used.
Compiling sub-elements of a custom expression construct
=======================================================
The ``compiler`` argument is the :class:`~sqlalchemy.engine.base.Compiled`
object in use. This object can be inspected for any information about the
in-progress compilation, including ``compiler.dialect``,
``compiler.statement`` etc. The :class:`~sqlalchemy.sql.compiler.SQLCompiler`
and :class:`~sqlalchemy.sql.compiler.DDLCompiler` both include a ``process()``
method which can be used for compilation of embedded attributes::
from sqlalchemy.sql.expression import Executable, ClauseElement
class InsertFromSelect(Executable, ClauseElement):
def __init__(self, table, select):
self.table = table
self.select = select
@compiles(InsertFromSelect)
def visit_insert_from_select(element, compiler, **kw):
return "INSERT INTO %s (%s)" % (
compiler.process(element.table, asfrom=True),
compiler.process(element.select)
)
insert = InsertFromSelect(t1, select([t1]).where(t1.c.x>5))
print(insert)
Produces::
"INSERT INTO mytable (SELECT mytable.x, mytable.y, mytable.z FROM mytable WHERE mytable.x > :x_1)"
Cross Compiling between SQL and DDL compilers
---------------------------------------------
SQL and DDL constructs are each compiled using different base compilers - ``SQLCompiler``
and ``DDLCompiler``. A common need is to access the compilation rules of SQL expressions
from within a DDL expression. The ``DDLCompiler`` includes an accessor ``sql_compiler`` for this reason, as in the example below, where we generate a CHECK
constraint that embeds a SQL expression::
@compiles(MyConstraint)
def compile_my_constraint(constraint, ddlcompiler, **kw):
return "CONSTRAINT %s CHECK (%s)" % (
constraint.name,
ddlcompiler.sql_compiler.process(constraint.expression)
)
Changing the default compilation of existing constructs
=======================================================
The compiler extension applies just as well to existing constructs. When overriding
the compilation of a built-in SQL construct, the @compiles decorator is invoked upon
the appropriate class (be sure to use the class, i.e. ``Insert`` or ``Select``, instead of the creation function such as ``insert()`` or ``select()``).
Within the new compilation function, to get at the "original" compilation routine,
use the appropriate visit_XXX method; this is because compiler.process() will call upon the
overriding routine and cause an endless loop. For example, to add a "prefix" to all insert statements::
from sqlalchemy.sql.expression import Insert
@compiles(Insert)
def prefix_inserts(insert, compiler, **kw):
return compiler.visit_insert(insert.prefix_with("some prefix"), **kw)
The above compiler will prefix all INSERT statements with "some prefix" when compiled.
.. _type_compilation_extension:
Changing Compilation of Types
=============================
``compiler`` works for types, too, as in the example below, where we implement the MS-SQL specific 'max' keyword for ``String``/``VARCHAR``::
@compiles(String, 'mssql')
@compiles(VARCHAR, 'mssql')
def compile_varchar(element, compiler, **kw):
if element.length == 'max':
return "VARCHAR('max')"
else:
return compiler.visit_VARCHAR(element, **kw)
foo = Table('foo', metadata,
Column('data', VARCHAR('max'))
)
Subclassing Guidelines
======================
A big part of using the compiler extension is subclassing SQLAlchemy expression constructs. To make this easier, the expression and schema packages feature a set of "bases" intended for common tasks. A synopsis is as follows:
* :class:`~sqlalchemy.sql.expression.ClauseElement` - This is the root
expression class. Any SQL expression can be derived from this base, and is
probably the best choice for longer constructs such as specialized INSERT
statements.
* :class:`~sqlalchemy.sql.expression.ColumnElement` - The root of all
"column-like" elements. Anything that you'd place in the "columns" clause of
a SELECT statement (as well as order by and group by) can derive from this -
the object will automatically have Python "comparison" behavior.
:class:`~sqlalchemy.sql.expression.ColumnElement` classes want to have a
``type`` member which is the expression's return type. This can be established
at the instance level in the constructor, or at the class level if it's
generally constant::
class timestamp(ColumnElement):
type = TIMESTAMP()
* :class:`~sqlalchemy.sql.expression.FunctionElement` - This is a hybrid of a
``ColumnElement`` and a "from clause" like object, and represents a SQL
function or stored procedure type of call. Since most databases support
statements along the lines of "SELECT FROM <some function>",
``FunctionElement`` adds in the ability to be used in the FROM clause of a
``select()`` construct::
from sqlalchemy.sql.expression import FunctionElement
class coalesce(FunctionElement):
name = 'coalesce'
@compiles(coalesce)
def compile(element, compiler, **kw):
return "coalesce(%s)" % compiler.process(element.clauses)
@compiles(coalesce, 'oracle')
def compile(element, compiler, **kw):
if len(element.clauses) > 2:
raise TypeError("coalesce only supports two arguments on Oracle")
return "nvl(%s)" % compiler.process(element.clauses)
* :class:`~sqlalchemy.schema.DDLElement` - The root of all DDL expressions,
like CREATE TABLE, ALTER TABLE, etc. Compilation of ``DDLElement``
subclasses is issued by a ``DDLCompiler`` instead of a ``SQLCompiler``.
``DDLElement`` also features ``Table`` and ``MetaData`` event hooks via the
``execute_at()`` method, allowing the construct to be invoked during CREATE
TABLE and DROP TABLE sequences.
* :class:`~sqlalchemy.sql.expression.Executable` - This is a mixin which should be
used with any expression class that represents a "standalone" SQL statement that
can be passed directly to an ``execute()`` method. It is already implicit
within ``DDLElement`` and ``FunctionElement``.
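As a usage illustration for the constructs above (a hypothetical sketch; it
assumes a table ``t1`` with columns ``x`` and ``y`` and the ``coalesce`` class
defined earlier)::
    from sqlalchemy import select
    s = select([coalesce(t1.c.x, t1.c.y)])
    print(s)   # renders coalesce(...) or, on the Oracle dialect, nvl(...)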
""" |
# #!/usr/bin/env python
#
# """
# @package ion.agents.platform.util.test.test_network_util
# @file ion/agents/platform/util/test/test_network_util.py
# @author NAME @brief Test cases for network_util.
# """
#
# __author__ = 'Carlos NAME __license__ = 'Apache 2.0'
#
# #
# # bin/nosetests -sv ion.agents.platform.util.test.test_network_util:Test.test_create_node_network
# # bin/nosetests -sv ion.agents.platform.util.test.test_network_util:Test.test_serialization_deserialization
# # bin/nosetests -sv ion.agents.platform.util.test.test_network_util:Test.test_compute_checksum
# # bin/nosetests -sv ion.agents.platform.util.test.test_network_util:Test.test_create_network_definition_from_ci_config_bad
# # bin/nosetests -sv ion.agents.platform.util.test.test_network_util:Test.test_create_network_definition_from_ci_config
# #
#
# from pyon.public import log
# import logging
#
# from ion.agents.platform.util.network_util import NetworkUtil
# from ion.agents.platform.exceptions import PlatformDefinitionException
#
# from pyon.util.containers import DotDict
#
# from pyon.util.unit_test import IonUnitTestCase
# from nose.plugins.attrib import attr
#
# import unittest
#
#
# @attr('UNIT', group='sa')
# class Test(IonUnitTestCase):
#
# def test_create_node_network(self):
#
# # small valid map:
# plat_map = [('R', ''), ('a', 'R'), ]
# pnodes = NetworkUtil.create_node_network(plat_map)
# for p, q in plat_map: self.assertTrue(p in pnodes and q in pnodes)
#
# # duplicate 'a' but valid (same parent)
# plat_map = [('R', ''), ('a', 'R'), ('a', 'R')]
# pnodes = NetworkUtil.create_node_network(plat_map)
# for p, q in plat_map: self.assertTrue(p in pnodes and q in pnodes)
#
# with self.assertRaises(PlatformDefinitionException):
# # invalid empty map
# plat_map = []
# NetworkUtil.create_node_network(plat_map)
#
# with self.assertRaises(PlatformDefinitionException):
# # no dummy root (id = '')
# plat_map = [('R', 'x')]
# NetworkUtil.create_node_network(plat_map)
#
# with self.assertRaises(PlatformDefinitionException):
# # multiple regular roots
# plat_map = [('R1', ''), ('R2', ''), ]
# NetworkUtil.create_node_network(plat_map)
#
# with self.assertRaises(PlatformDefinitionException):
# # duplicate 'a' but invalid (diff parents)
# plat_map = [('R', ''), ('a', 'R'), ('a', 'x')]
# NetworkUtil.create_node_network(plat_map)
#
# def test_serialization_deserialization(self):
# # create NetworkDefinition object by de-serializing the simulated network:
# ndef = NetworkUtil.deserialize_network_definition(
# open('ion/agents/platform/rsn/simulator/network.yml'))
#
# # serialize object to string
# serialization = NetworkUtil.serialize_network_definition(ndef)
#
# # recreate object by de-serializing the string:
# ndef2 = NetworkUtil.deserialize_network_definition(serialization)
#
# # verify the objects are equal:
# diff = ndef.diff(ndef2)
# self.assertIsNone(diff, "deserialized version must be equal to original."
# " DIFF=\n%s" % diff)
#
# def test_compute_checksum(self):
# # create NetworkDefinition object by de-serializing the simulated network:
# ndef = NetworkUtil.deserialize_network_definition(
# open('ion/agents/platform/rsn/simulator/network.yml'))
#
# checksum = ndef.compute_checksum()
# if log.isEnabledFor(logging.DEBUG):
# log.debug("NetworkDefinition checksum = %s", checksum)
#
# #
# # Basic tests regarding conversion from CI agent configuration to a
# # corresponding network definition.
# #
#
# def test_create_network_definition_from_ci_config_bad(self):
#
# CFG = DotDict({
# 'device_type' : "bad_device_type",
# })
#
# # device_type
# with self.assertRaises(PlatformDefinitionException):
# NetworkUtil.create_network_definition_from_ci_config(CFG)
#
# CFG = DotDict({
# 'device_type' : "PlatformDevice",
# })
#
# # missing platform_id
# with self.assertRaises(PlatformDefinitionException):
# NetworkUtil.create_network_definition_from_ci_config(CFG)
#
# CFG = DotDict({
# 'device_type' : "PlatformDevice",
#
# 'platform_config': {
# 'platform_id': 'Node1D'
# },
# })
#
# # missing driver_config
# with self.assertRaises(PlatformDefinitionException):
# NetworkUtil.create_network_definition_from_ci_config(CFG)
#
# def test_create_network_definition_from_ci_config(self):
#
# CFG = DotDict({
# 'device_type' : "PlatformDevice",
#
# 'platform_config': {
# 'platform_id': 'Node1D'
# },
#
# 'driver_config': {'attributes': {'MVPC_pressure_1': {'attr_id': 'MVPC_pressure_1',
# 'group': 'pressure',
# 'max_val': 33.8,
# 'min_val': -3.8,
# 'monitor_cycle_seconds': 10,
# 'precision': 0.04,
# 'read_write': 'read',
# 'type': 'float',
# 'units': 'PSI'},
# 'MVPC_temperature': {'attr_id': 'MVPC_temperature',
# 'group': 'temperature',
# 'max_val': 58.5,
# 'min_val': -1.5,
# 'monitor_cycle_seconds': 10,
# 'precision': 0.06,
# 'read_write': 'read',
# 'type': 'float',
# 'units': 'Degrees C'},
# 'input_bus_current': {'attr_id': 'input_bus_current',
# 'group': 'power',
# 'max_val': 50,
# 'min_val': -50,
# 'monitor_cycle_seconds': 5,
# 'precision': 0.1,
# 'read_write': 'write',
# 'type': 'float',
# 'units': 'Amps'},
# 'input_voltage': {'attr_id': 'input_voltage',
# 'group': 'power',
# 'max_val': 500,
# 'min_val': -500,
# 'monitor_cycle_seconds': 5,
# 'precision': 1,
# 'read_write': 'read',
# 'type': 'float',
# 'units': 'Volts'}},
# 'dvr_cls': 'RSNPlatformDriver',
# 'dvr_mod': 'ion.agents.platform.rsn.rsn_platform_driver',
# 'oms_uri': 'embsimulator',
# 'ports': {'Node1D_port_1': {'port_id': 'Node1D_port_1'},
# 'Node1D_port_2': {'port_id': 'Node1D_port_2'}},
# },
#
#
# 'children': {'d7877d832cf94c388089b141043d60de': {'agent': {'resource_id': 'd7877d832cf94c388089b141043d60de'},
# 'device_type': 'PlatformDevice',
# 'platform_config': {'platform_id': 'MJ01C'},
# 'driver_config': {'attributes': {'MJ01C_attr_1': {'attr_id': 'MJ01C_attr_1',
# 'group': 'power',
# 'max_val': 10,
# 'min_val': -2,
# 'monitor_cycle_seconds': 5,
# 'read_write': 'read',
# 'type': 'int',
# 'units': 'xyz'},
# 'MJ01C_attr_2': {'attr_id': 'MJ01C_attr_2',
# 'group': 'power',
# 'max_val': 10,
# 'min_val': -2,
# 'monitor_cycle_seconds': 5,
# 'read_write': 'write',
# 'type': 'int',
# 'units': 'xyz'}},
# 'dvr_cls': 'RSNPlatformDriver',
# 'dvr_mod': 'ion.agents.platform.rsn.rsn_platform_driver',
# 'oms_uri': 'embsimulator',
# 'ports': {'MJ01C_port_1': {'port_id': 'MJ01C_port_1'},
# 'MJ01C_port_2': {'port_id': 'MJ01C_port_2'}}},
#
# 'children': {'d0203cb9eb844727b7a8eea77db78e89': {'agent': {'resource_id': 'd0203cb9eb844727b7a8eea77db78e89'},
# 'platform_config': {'platform_id': 'LJ01D'},
# 'device_type': 'PlatformDevice',
# 'driver_config': {'attributes': {'MVPC_pressure_1': {'attr_id': 'MVPC_pressure_1',
# 'group': 'pressure',
# 'max_val': 33.8,
# 'min_val': -3.8,
# 'monitor_cycle_seconds': 10,
# 'precision': 0.04,
# 'read_write': 'read',
# 'type': 'float',
# 'units': 'PSI'},
# 'MVPC_temperature': {'attr_id': 'MVPC_temperature',
# 'group': 'temperature',
# 'max_val': 58.5,
# 'min_val': -1.5,
# 'monitor_cycle_seconds': 10,
# 'precision': 0.06,
# 'read_write': 'read',
# 'type': 'float',
# 'units': 'Degrees C'},
# 'input_bus_current': {'attr_id': 'input_bus_current',
# 'group': 'power',
# 'max_val': 50,
# 'min_val': -50,
# 'monitor_cycle_seconds': 5,
# 'precision': 0.1,
# 'read_write': 'write',
# 'type': 'float',
# 'units': 'Amps'},
# 'input_voltage': {'attr_id': 'input_voltage',
# 'group': 'power',
# 'max_val': 500,
# 'min_val': -500,
# 'monitor_cycle_seconds': 5,
# 'precision': 1,
# 'read_write': 'read',
# 'type': 'float',
# 'units': 'Volts'}},
# 'dvr_cls': 'RSNPlatformDriver',
# 'dvr_mod': 'ion.agents.platform.rsn.rsn_platform_driver',
# 'oms_uri': 'embsimulator',
# 'ports': {'LJ01D_port_1': {'port_id': '1'},
# 'LJ01D_port_2': {'port_id': '2'}}},
# 'children': {},
# }
# }
# }
# }
# })
#
# ndef = NetworkUtil.create_network_definition_from_ci_config(CFG)
#
# if log.isEnabledFor(logging.TRACE):
# serialization = NetworkUtil.serialize_network_definition(ndef)
# log.trace("serialization = \n%s", serialization)
#
# self.assertIn('Node1D', ndef.pnodes)
# Node1D = ndef.pnodes['Node1D']
#
# common_attr_names = ['MVPC_pressure_1|0', 'MVPC_temperature|0',
# 'input_bus_current|0', 'input_voltage|0', ]
#
# for attr_name in common_attr_names:
# self.assertIn(attr_name, Node1D.attrs)
#
# # TODO complete the network definition: align the ports definition with the internal representation.
# #for port_name in ['Node1D_port_1', 'Node1D_port_2']:
# # self.assertIn(port_name, Node1D.ports)
#
# for subplat_name in ['MJ01C', ]:
# self.assertIn(subplat_name, Node1D.subplatforms)
#
# MJ01C = Node1D.subplatforms['MJ01C']
#
# for subplat_name in ['LJ01D', ]:
# self.assertIn(subplat_name, MJ01C.subplatforms)
#
# LJ01D = MJ01C.subplatforms['LJ01D']
#
# for attr_name in common_attr_names:
# self.assertIn(attr_name, LJ01D.attrs)
#
|
"""
========================
Broadcasting over arrays
========================
The term broadcasting describes how numpy treats arrays with different
shapes during arithmetic operations. Subject to certain constraints,
the smaller array is "broadcast" across the larger array so that they
have compatible shapes. Broadcasting provides a means of vectorizing
array operations so that looping occurs in C instead of Python. It does
this without making needless copies of data and usually leads to
efficient algorithm implementations. There are, however, cases where
broadcasting is a bad idea because it leads to inefficient use of memory
that slows computation.
NumPy operations are usually done on pairs of arrays on an
element-by-element basis. In the simplest case, the two arrays must
have exactly the same shape, as in the following example:
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = np.array([2.0, 2.0, 2.0])
>>> a * b
array([ 2., 4., 6.])
NumPy's broadcasting rule relaxes this requirement when the arrays'
shapes meet certain constraints. The simplest broadcasting example occurs
when an array and a scalar value are combined in an operation:
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = 2.0
>>> a * b
array([ 2., 4., 6.])
The result is equivalent to the previous example where ``b`` was an array.
We can think of the scalar ``b`` being *stretched* during the arithmetic
operation into an array with the same shape as ``a``. The new elements in
``b`` are simply copies of the original scalar. The stretching analogy is
only conceptual. NumPy is smart enough to use the original scalar value
without actually making copies, so that broadcasting operations are as
memory and computationally efficient as possible.
The code in the second example is more efficient than that in the first
because broadcasting moves less memory around during the multiplication
(``b`` is a scalar rather than an array).
General Broadcasting Rules
==========================
When operating on two arrays, NumPy compares their shapes element-wise.
It starts with the trailing dimensions, and works its way forward. Two
dimensions are compatible when
1) they are equal, or
2) one of them is 1
If these conditions are not met, a
``ValueError: frames are not aligned`` exception is thrown, indicating that
the arrays have incompatible shapes. The size of the resulting array
is the maximum size along each dimension of the input arrays.
Arrays do not need to have the same *number* of dimensions. For example,
if you have a ``256x256x3`` array of RGB values, and you want to scale
each color in the image by a different value, you can multiply the image
by a one-dimensional array with 3 values. Lining up the sizes of the
trailing axes of these arrays according to the broadcast rules, shows that
they are compatible::
Image (3d array): 256 x 256 x 3
Scale (1d array): 3
Result (3d array): 256 x 256 x 3
When either of the dimensions compared is one, the other is
used. In other words, dimensions with size 1 are stretched or "copied"
to match the other.
In the following example, both the ``A`` and ``B`` arrays have axes with
length one that are expanded to a larger size during the broadcast
operation::
A (4d array): 8 x 1 x 6 x 1
B (3d array): 7 x 1 x 5
Result (4d array): 8 x 7 x 6 x 5
Here are some more examples::
A (2d array): 5 x 4
B (1d array): 1
Result (2d array): 5 x 4
A (2d array): 5 x 4
B (1d array): 4
Result (2d array): 5 x 4
A (3d array): 15 x 3 x 5
B (3d array): 15 x 1 x 5
Result (3d array): 15 x 3 x 5
A (3d array): 15 x 3 x 5
B (2d array): 3 x 5
Result (3d array): 15 x 3 x 5
A (3d array): 15 x 3 x 5
B (2d array): 3 x 1
Result (3d array): 15 x 3 x 5
Here are examples of shapes that do not broadcast::
A (1d array): 3
B (1d array): 4 # trailing dimensions do not match
A (2d array): 2 x 1
B (3d array): 8 x 4 x 3 # second from last dimensions mismatched
An example of broadcasting in practice::
>>> x = np.arange(4)
>>> xx = x.reshape(4,1)
>>> y = np.ones(5)
>>> z = np.ones((3,4))
>>> x.shape
(4,)
>>> y.shape
(5,)
>>> x + y
<type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape
>>> xx.shape
(4, 1)
>>> y.shape
(5,)
>>> (xx + y).shape
(4, 5)
>>> xx + y
array([[ 1., 1., 1., 1., 1.],
[ 2., 2., 2., 2., 2.],
[ 3., 3., 3., 3., 3.],
[ 4., 4., 4., 4., 4.]])
>>> x.shape
(4,)
>>> z.shape
(3, 4)
>>> (x + z).shape
(3, 4)
>>> x + z
array([[ 1., 2., 3., 4.],
[ 1., 2., 3., 4.],
[ 1., 2., 3., 4.]])
Broadcasting provides a convenient way of taking the outer product (or
any other outer operation) of two arrays. The following example shows an
outer addition operation of two 1-d arrays::
>>> a = np.array([0.0, 10.0, 20.0, 30.0])
>>> b = np.array([1.0, 2.0, 3.0])
>>> a[:, np.newaxis] + b
array([[ 1., 2., 3.],
[ 11., 12., 13.],
[ 21., 22., 23.],
[ 31., 32., 33.]])
Here the ``newaxis`` index operator inserts a new axis into ``a``,
making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array
with ``b``, which has shape ``(3,)``, yields a ``4x3`` array.
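The broadcast shape can also be computed without doing the arithmetic.
A small sketch using ``np.broadcast``, which evaluates the broadcast of
its arguments without allocating a result:
>>> a = np.ones((8, 1, 6, 1))
>>> b = np.ones((7, 1, 5))
>>> np.broadcast(a, b).shape
(8, 7, 6, 5)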
See `this article <http://wiki.scipy.org/EricsBroadcastingDoc>`_
for illustrations of broadcasting concepts.
""" |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2014 NAME
#
# This file is part of Condiment.
#
# Condiment is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Condiment is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# def django_deployserver():
# """
# """
# env.condiment_static_dir = get_path([BASEDIR, 'condiment', 'data', 'static'])
# env.condiment_supervisor_config = get_path([CONFDIR, 'data',
# 'condiment.supervisor.conf'])
# env.condiment_uwsgi_config = get_path([CONFDIR, 'data',
# 'condiment.uwsgi.ini'])
# env.condiment_nginx_config = get_path([CONFDIR, 'data',
# 'condiment.nginx.conf'])
# docker_kill_all_containers()
# local(('echo "'
# 'upstream uwsgi {\n'
# '\tserver\t\t\t\tunix:///var/run/condiment/uwsgi.sock;\n'
# '}\n'
# '\n'
# 'server {\n'
# '\tlisten\t\t\t\t8000;\n'
# '\tserver_name\t\t\tIP_ADDRESS;\n'
# '\tcharset\t\t\t\tutf-8;\n'
# '\n'
# '\tlocation /static {\n'
# '\t\talias\t\t\t%(condiment_static_dir)s;\n'
# '\t}\n'
# '\n'
# '\tlocation / {\n'
# '\t\tuwsgi_pass\t\tuwsgi;\n'
# '\t\tinclude\t\t\t/etc/nginx/uwsgi_params;\n'
# '\t}\n'
# '}'
# '" > %(condiment_nginx_config)s') % env, capture=False)
# local(('echo "'
# '[program:condiment-celery]\n'
# 'command = /usr/bin/python %(basedir)s/manage.py celeryd\n'
# 'directory = %(basedir)s\n'
# 'user = www-data\n'
# 'numprocs = 1\n'
# 'stdout_logfile = /var/log/condiment/celeryd.log\n'
# 'stderr_logfile = /var/log/condiment/celeryd.log\n'
# 'autostart = true\n'
# 'autorestart = true\n'
# 'startsecs = 10\n'
# 'stopwaitsecs = 30\n'
# '\n'
# '[program:condiment-celerybeat]\n'
# 'command = /usr/bin/python %(basedir)s/manage.py celerybeat\n'
# 'directory = %(basedir)s\n'
# 'user = www-data\n'
# 'numprocs = 1\n'
# 'stdout_logfile = /var/log/condiment/celerybeat.log\n'
# 'stderr_logfile = /var/log/condiment/celerybeat.log\n'
# 'autostart = true\n'
# 'autorestart = true\n'
# 'startsecs = 10\n'
# 'stopwaitsecs = 30\n'
# '" > %(condiment_supervisor_config)s') % env, capture=False)
# local(('echo "'
# '[uwsgi]\n'
# 'chdir = %(basedir)s\n'
# 'env = DJANGO_SETTINGS_MODULE=condiment.config.web\n'
# 'wsgi-file = %(basedir)s/condiment/web/wsgi.py\n'
# 'logto = /var/log/condiment/uwsgi.log\n'
# 'pidfile = /var/run/condiment/uwsgi.pid\n'
# 'socket = /var/run/condiment/uwsgi.sock\n'
# 'plugin = python\n'
# '" > %(condiment_uwsgi_config)s') % env, capture=False)
# local(('echo "#!/usr/bin/env bash\n'
# 'ln -fs /proc/self/fd /dev/fd\n'
# 'ln -fs %(condiment_nginx_config)s /etc/nginx/sites-enabled/\n'
# 'ln -fs %(condiment_uwsgi_config)s /etc/uwsgi/apps-enabled/\n'
# 'ln -fs %(condiment_supervisor_config)s /etc/supervisor/conf.d/\n'
# '%(start_services)s\n'
# 'sleep 1200\n'
# 'exit 0'
# '" > %(condiment_django_runserver_script)s') % env, capture=False)
# local(('sudo bash -c '
# '"%(docker)s run -d -p IP_ADDRESS:8000:8000 '
# '--name="%(condiment_runtime_container)s" '
# '%(mounts)s %(condiment_runtime_image)s '
# 'bash %(condiment_django_runserver_script)s"') % env)
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
# SKR03
# =====
# This module provides a German chart of accounts based on the SKR03.
# Under the current settings, the company is not subject to VAT.
# This default is very easy to change, and as a rule it requires an
# initial assignment of tax accounts to products and/or general ledger
# accounts or to partners.
# The output taxes (full rate, reduced rate, and tax-exempt) should be
# stored with the product master data (depending on the applicable tax
# rules). The assignment is made on the Accounting tab (category:
# Umsatzsteuer).
# The input taxes (full rate, reduced rate, and tax-exempt) should
# likewise be stored with the product master data (depending on the
# applicable tax rules). The assignment is made on the Accounting tab
# (category: Vorsteuer).
# The assignment of taxes for imports from and exports to EU countries,
# as well as for purchases from and sales to third countries, should be
# stored with the partner (supplier/customer), depending on the country
# of origin of the supplier/customer. The assignment on the customer
# takes precedence over the assignment on products and overrides it in
# the individual case.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g. mapping
# 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU') so that this
# mapping can be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# the tax base (excluding tax) is reported under the respective categories
# for the input tax base amount (e.g. 'Vorsteuer Steuermessbetrag Voller
# Steuersatz 19%'), and the tax amount appears under the category
# 'Vorsteuern' (e.g. 'Vorsteuer 19%'). Multidimensional hierarchies allow
# different positions to be aggregated and then output as a report.
#
# Posting a sales invoice has the following effect:
# the tax base (excluding tax) is reported under the respective categories
# for the output tax base amount (e.g. 'Umsatzsteuer Steuermessbetrag
# Voller Steuersatz 19%'), and the tax amount appears under the category
# 'Umsatzsteuer' (e.g. 'Umsatzsteuer 19%'). Multidimensional hierarchies
# allow different positions to be aggregated.
# The assigned tax codes can be reviewed, and adjusted where necessary, on
# each individual invoice (incoming and outgoing).
# Credit notes result in a correction (counter-entry) of the tax posting,
# in the form of a mirror-image posting.
# SKR04
# =====
# This module provides a German chart of accounts based on the SKR04.
# Under the current settings, the company is not subject to VAT, i.e. by
# default there is no assignment of products and general ledger accounts
# to tax codes.
# This default is very easy to change, and as a rule it requires an
# initial assignment of tax codes to products and/or general ledger
# accounts or to partners.
# The remaining notes - storing output and input taxes with the product
# master data, assigning import/export taxes to partners, the tax mapping
# for foreign partners, and the effect of posting purchase invoices,
# sales invoices, and credit notes - are the same as for SKR03 above.
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2009 Veritos - NAME - www.veritos.nl
#
# WARNING: This program as such is intended to be used by professional
# programmers who take on the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company like Veritos.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
##############################################################################
#
# This module works in OpenERP 5.0.0 (and probably higher).
# This module does not work in OpenERP version 4 and lower.
#
# Status 1.0 - tested on OpenERP 5.0.3
#
# Version IP_ADDRESS
# account.account.type
# Laid the basis for all account types.
#
# account.account.template
# Laid the basis with all required general ledger accounts, linked via a
# menu structure to sections 1 through 9.
# The general ledger accounts are linked to the account.account.type.
# These links still need to be checked carefully.
#
# account.chart.template
# Laid the basis for linking accounts to receivables, payables, bank,
# purchase and sales journals, and the VAT configuration.
#
# Version IP_ADDRESS
# account.tax.code.template
# Laid the basis for the VAT configuration (structure).
# Used the VAT return form as the basis. Whether this works remains to be seen.
#
# account.tax.template
# Created the VAT accounts and linked them to the corresponding general
# ledger accounts.
#
# Version IP_ADDRESS
# Cleaned up the code and removed unused components.
# Version IP_ADDRESS
# Changed a_expense from 3000 -> 7000.
# Set record id='btw_code_5b' to a negative value.
# Version IP_ADDRESS
# Gave the VAT accounts a type indication for purchase or sale.
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Corrected a small error in l10n_nl_wizard.xml that kept the module from
# installing completely.
# Version IP_ADDRESS
# Properly defined Account Receivable and Payable.
# Version IP_ADDRESS
# Properly defined all user_type_xxx fields.
# Removed construction- and garage-specific ledgers to create a standard
# module, which can then be used as a basis for creating modules for
# specific target groups.
# Version IP_ADDRESS
# Corrected account 7010 (it duplicated 7014, which broke the installation).
# Version IP_ADDRESS
# Corrected various account types from user_type_asset -> user_type_liability
# and user_type_equity.
# Version IP_ADDRESS
# Small correction on high-rate VAT receivable ('BTW te vorderen hoog'): the
# id was the same for both codes, so 'high' was overwritten by 'other'.
# Clarified descriptions in the tax codes for the return overview.
# Version IP_ADDRESS
# Adjusted the VAT descriptions so that reports look better. Removed 2a, 5b
# and the like, and added some descriptions.
# Version IP_ADDRESS - Switch to English
# Added properties_stock_xxx accounts for correct stock valuation, changed
# 7000-accounts from type cash to type expense
# Changed naming of 7020 and 7030 to Kostprijs omzet xxxx
|
"""
=============
Miscellaneous
=============
IEEE 754 Floating Point Special Values:
-----------------------------------------------
Special values defined in numpy: nan, inf.
NaNs can be used as a poor-man's mask (if you don't care what the
original value was)
Note: cannot use equality to test NaNs. E.g.: ::
>>> myarr = np.array([1., 0., np.nan, 3.])
>>> np.where(myarr == np.nan)
>>> np.nan == np.nan # is always False! Use special numpy functions instead.
False
>>> myarr[myarr == np.nan] = 0. # doesn't work
>>> myarr
array([ 1., 0., NaN, 3.])
>>> myarr[np.isnan(myarr)] = 0. # use this instead
>>> myarr
array([ 1., 0., 0., 3.])
Other related special value functions: ::
isinf(): True if value is inf
isfinite(): True if not nan or inf
nan_to_num(): Map nan to 0, inf to max float, -inf to min float
The following corresponds to the usual functions except that nans are excluded
from the results: ::
nansum()
nanmax()
nanmin()
nanargmax()
nanargmin()
>>> x = np.arange(10.)
>>> x[3] = np.nan
>>> x.sum()
nan
>>> np.nansum(x)
42.0
How numpy handles numerical exceptions:
------------------------------------------
The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow``
and ``'ignore'`` for ``underflow``. But this can be changed, and it can be
set individually for different kinds of exceptions. The different behaviors
are:
- 'ignore' : Take no action when the exception occurs.
- 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module).
- 'raise' : Raise a `FloatingPointError`.
- 'call' : Call a function specified using the `seterrcall` function.
- 'print' : Print a warning directly to ``stdout``.
- 'log' : Record error in a Log object specified by `seterrcall`.
These behaviors can be set for all kinds of errors or specific ones:
- all : apply to all numeric exceptions
- invalid : when NaNs are generated
- divide : divide by zero (for integers as well!)
- overflow : floating point overflows
- underflow : floating point underflows
Note that integer divide-by-zero is handled by the same machinery.
These behaviors are set on a per-thread basis.
Examples:
------------
::
>>> oldsettings = np.seterr(all='warn')
>>> np.zeros(5,dtype=np.float32)/0.
invalid value encountered in divide
>>> j = np.seterr(under='ignore')
>>> np.array([1.e-100])**10
>>> j = np.seterr(invalid='raise')
>>> np.sqrt(np.array([-1.]))
FloatingPointError: invalid value encountered in sqrt
>>> def errorhandler(errstr, errflag):
... print "saw stupid error!"
>>> np.seterrcall(errorhandler)
<function err_handler at 0x...>
>>> j = np.seterr(all='call')
>>> np.zeros(5, dtype=np.int32)/0
FloatingPointError: invalid value encountered in divide
saw stupid error!
>>> j = np.seterr(**oldsettings) # restore previous
... # error-handling settings
Interfacing to C:
-----------------
Only a survey of the choices. Little detail on how each works.
1) Bare metal, wrap your own C-code manually.
- Plusses:
- Efficient
- No dependencies on other tools
- Minuses:
- Lots of learning overhead:
- need to learn basics of Python C API
- need to learn basics of numpy C API
- need to learn how to handle reference counting and love it.
- Reference counting often difficult to get right.
- getting it wrong leads to memory leaks, and worse, segfaults
- API will change for Python 3.0!
2) pyrex
- Plusses:
- avoid learning C API's
- no dealing with reference counting
- can code in pseudo python and generate C code
- can also interface to existing C code
- should shield you from changes to Python C api
- has become pretty popular within the Python community
- Minuses:
- Can write code in non-standard form which may become obsolete
- Not as flexible as manual wrapping
- Maintainers not easily adaptable to new features
Thus:
3) cython - fork of pyrex to allow needed features for SAGE
- being considered as the standard scipy/numpy wrapping tool
- fast indexing support for arrays
4) ctypes
- Plusses:
- part of Python standard library
- good for interfacing to existing sharable libraries, particularly
Windows DLLs
- avoids API/reference counting issues
- good numpy support: arrays have all these in their ctypes
attribute (see the sketch after this survey): ::
a.ctypes.data a.ctypes.get_strides
a.ctypes.data_as a.ctypes.shape
a.ctypes.get_as_parameter a.ctypes.shape_as
a.ctypes.get_data a.ctypes.strides
a.ctypes.get_shape a.ctypes.strides_as
- Minuses:
- can't use for writing code to be turned into C extensions, only a wrapper
tool.
5) SWIG (automatic wrapper generator)
- Plusses:
- around a long time
- multiple scripting language support
- C++ support
- Good for wrapping large (many functions) existing C libraries
- Minuses:
- generates lots of code between Python and the C code
- can cause performance problems that are nearly impossible to optimize
out
- interface files can be hard to write
- doesn't necessarily avoid reference counting issues or needing to know
API's
6) Weave
- Plusses:
- Phenomenal tool
- can turn many numpy expressions into C code
- dynamic compiling and loading of generated C code
- can embed pure C code in Python module and have weave extract, generate
interfaces and compile, etc.
- Minuses:
- Future uncertain--lacks a champion
7) Psyco
- Plusses:
- Turns pure python into efficient machine code through jit-like
optimizations
- very fast when it optimizes well
- Minuses:
- Only on intel (windows?)
- Doesn't do much for numpy?
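As a quick illustration of the ctypes option above (a minimal sketch;
the array contents are arbitrary)::

    import ctypes
    import numpy as np

    a = np.arange(5, dtype=np.float64)
    # Expose the array's buffer as a C double* for a foreign function.
    ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
    print ptr[2]        # -> 2.0, read back through the raw pointer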
Interfacing to Fortran:
-----------------------
Fortran: Clear choice is f2py. (Pyfort is an older alternative, but not
supported any longer)
Interfacing to C++:
-------------------
1) CXX
2) Boost.python
3) SWIG
4) Sage has used cython to wrap C++ (not pretty, but it can be done)
5) SIP (used mainly in PyQT)
""" |
# Defines classes that provide synchronization objects. Note that use of
# this module requires that your Python support threads.
#
# condition(lock=None) # a POSIX-like condition-variable object
# barrier(n) # an n-thread barrier
# event() # an event object
# semaphore(n=1) # a semaphore object, with initial count n
# mrsw() # a multiple-reader single-writer lock
#
# CONDITIONS
#
# A condition object is created via
# import this_module
# your_condition_object = this_module.condition(lock=None)
#
# As explained below, a condition object has a lock associated with it,
# used in the protocol to protect condition data. You can specify a
# lock to use in the constructor, else the constructor will allocate
# an anonymous lock for you. Specifying a lock explicitly can be useful
# when more than one condition keys off the same set of shared data.
#
# Methods:
# .acquire()
# acquire the lock associated with the condition
# .release()
# release the lock associated with the condition
# .wait()
# block the thread until such time as some other thread does a
# .signal or .broadcast on the same condition, and release the
# lock associated with the condition. The lock associated with
# the condition MUST be in the acquired state at the time
# .wait is invoked.
# .signal()
# wake up exactly one thread (if any) that previously did a .wait
# on the condition; that thread will awaken with the lock associated
# with the condition in the acquired state. If no threads are
# .wait'ing, this is a nop. If more than one thread is .wait'ing on
# the condition, any of them may be awakened.
# .broadcast()
# wake up all threads (if any) that are .wait'ing on the condition;
# the threads are woken up serially, each with the lock in the
# acquired state, so should .release() as soon as possible. If no
# threads are .wait'ing, this is a nop.
#
# Note that if a thread does a .wait *while* a signal/broadcast is
# in progress, it's guaranteed to block until a subsequent
# signal/broadcast.
#
# Secret feature: `broadcast' actually takes an integer argument,
# and will wake up exactly that many waiting threads (or the total
# number waiting, if that's less). Use of this is dubious, though,
# and probably won't be supported if this form of condition is
# reimplemented in C.
#
# DIFFERENCES FROM POSIX
#
# + A separate mutex is not needed to guard condition data. Instead, a
# condition object can (must) be .acquire'ed and .release'ed directly.
# This eliminates a common error in using POSIX conditions.
#
# + Because of implementation difficulties, a POSIX `signal' wakes up
# _at least_ one .wait'ing thread. Race conditions make it difficult
# to stop that. This implementation guarantees to wake up only one,
# but you probably shouldn't rely on that.
#
# PROTOCOL
#
# Condition objects are used to block threads until "some condition" is
# true. E.g., a thread may wish to wait until a producer pumps out data
# for it to consume, or a server may wish to wait until someone requests
# its services, or perhaps a whole bunch of threads want to wait until a
# preceding pass over the data is complete. Early models for conditions
# relied on some other thread figuring out when a blocked thread's
# condition was true, and made the other thread responsible both for
# waking up the blocked thread and guaranteeing that it woke up with all
# data in a correct state. This proved to be very delicate in practice,
# and gave conditions a bad name in some circles.
#
# The POSIX model addresses these problems by making a thread responsible
# for ensuring that its own state is correct when it wakes, and relies
# on a rigid protocol to make this easy; so long as you stick to the
# protocol, POSIX conditions are easy to "get right":
#
# A) The thread that's waiting for some arbitrarily-complex condition
# (ACC) to become true does:
#
# condition.acquire()
# while not (code to evaluate the ACC):
# condition.wait()
# # That blocks the thread, *and* releases the lock. When a
# # condition.signal() happens, it will wake up some thread that
# # did a .wait, *and* acquire the lock again before .wait
# # returns.
# #
# # Because the lock is acquired at this point, the state used
# # in evaluating the ACC is frozen, so it's safe to go back &
# # reevaluate the ACC.
#
# # At this point, ACC is true, and the thread has the condition
# # locked.
# # So code here can safely muck with the shared state that
# # went into evaluating the ACC -- if it wants to.
# # When done mucking with the shared state, do
# condition.release()
#
# B) Threads that are mucking with shared state that may affect the
# ACC do:
#
# condition.acquire()
# # muck with shared state
# condition.release()
# if it's possible that ACC is true now:
# condition.signal() # or .broadcast()
#
# Note: You may prefer to put the "if" clause before the release().
# That's fine, but do note that anyone waiting on the signal will
# stay blocked until the release() is done (since acquiring the
# condition is part of what .wait() does before it returns).
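#
# For instance, the same protocol expressed with the standard library's
# threading module (a minimal sketch; threading.Condition's notify()
# plays the role of .signal() here, and it must be called while the
# lock is held):
#
#     import threading
#
#     cv = threading.Condition()
#     items = []                          # shared state guarded by cv
#
#     def consumer():
#         cv.acquire()
#         while not items:                # re-evaluate the ACC on each wake-up
#             cv.wait()
#         print items.pop(0)              # safe: the lock is held again here
#         cv.release()
#
#     def producer():
#         cv.acquire()
#         items.append("work")            # muck with the shared state
#         cv.notify()                     # wake one waiter, if any
#         cv.release()
#
#     threading.Thread(target=consumer).start()
#     producer()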
#
# TRICK OF THE TRADE
#
# With simpler forms of conditions, it can be impossible to know when
# a thread that's supposed to do a .wait has actually done it. But
# because this form of condition releases a lock as _part_ of doing a
# wait, the state of that lock can be used to guarantee it.
#
# E.g., suppose thread A spawns thread B and later wants to wait for B to
# complete:
#
# In A: In B:
#
# B_done = condition() ... do work ...
# B_done.acquire() B_done.acquire(); B_done.release()
# spawn B B_done.signal()
# ... some time later ... ... and B exits ...
# B_done.wait()
#
# Because B_done was in the acquire'd state at the time B was spawned,
# B's attempt to acquire B_done can't succeed until A has done its
# B_done.wait() (which releases B_done). So B's B_done.signal() is
# guaranteed to be seen by the .wait(). Without the lock trick, B
# may signal before A .waits, and then A would wait forever.
#
# BARRIERS
#
# A barrier object is created via
# import this_module
# your_barrier = this_module.barrier(num_threads)
#
# Methods:
# .enter()
# the thread blocks until num_threads threads in all have done
# .enter(). Then the num_threads threads that .enter'ed resume,
# and the barrier resets to capture the next num_threads threads
# that .enter it.
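#
# For example (a sketch using the interface described above):
#
#     b = this_module.barrier(3)
#
#     def worker():
#         # ... per-thread setup ...
#         b.enter()    # blocks until all 3 threads have called .enter()
#         # ... all 3 threads resume together ...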
#
# EVENTS
#
# An event object is created via
# import this_module
# your_event = this_module.event()
#
# An event has two states, `posted' and `cleared'. An event is
# created in the cleared state.
#
# Methods:
#
# .post()
# Put the event in the posted state, and resume all threads
# .wait'ing on the event (if any).
#
# .clear()
# Put the event in the cleared state.
#
# .is_posted()
# Returns 0 if the event is in the cleared state, or 1 if the event
# is in the posted state.
#
# .wait()
# If the event is in the posted state, returns immediately.
# If the event is in the cleared state, blocks the calling thread
# until the event is .post'ed by another thread.
#
# Note that an event, once posted, remains posted until explicitly
# cleared. Relative to conditions, this is both the strength & weakness
# of events. It's a strength because the .post'ing thread doesn't have to
# worry about whether the threads it's trying to communicate with have
# already done a .wait (a condition .signal is seen only by threads that
# do a .wait _prior_ to the .signal; a .signal does not persist). But
# it's a weakness because .clear'ing an event is error-prone: it's easy
# to mistakenly .clear an event before all the threads you intended to
# see the event get around to .wait'ing on it. But so long as you don't
# need to .clear an event, events are easy to use safely.
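#
# For example (a sketch using the interface described above):
#
#     ready = this_module.event()
#
#     # in a waiting thread:
#     ready.wait()          # returns once the event is posted
#
#     # in the posting thread:
#     ready.post()          # resumes all current (and future) waiters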
#
# SEMAPHORES
#
# A semaphore object is created via
# import this_module
# your_semaphore = this_module.semaphore(count=1)
#
# A semaphore has an integer count associated with it. The initial value
# of the count is specified by the optional argument (which defaults to
# 1) passed to the semaphore constructor.
#
# Methods:
#
# .p()
# If the semaphore's count is greater than 0, decrements the count
# by 1 and returns.
# Else if the semaphore's count is 0, blocks the calling thread
# until a subsequent .v() increases the count. When that happens,
# the count will be decremented by 1 and the calling thread resumed.
#
# .v()
# Increments the semaphore's count by 1, and wakes up a thread (if
# any) blocked by a .p(). It's an (detected) error for a .v() to
# increase the semaphore's count to a value larger than the initial
# count.
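#
# For example (a sketch using the interface described above):
#
#     pool = this_module.semaphore(3)   # at most 3 concurrent holders
#
#     pool.p()                          # acquire a slot (may block)
#     # ... use the limited resource ...
#     pool.v()                          # release the slot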
#
# MULTIPLE-READER SINGLE-WRITER LOCKS
#
# A mrsw lock is created via
# import this_module
# your_mrsw_lock = this_module.mrsw()
#
# This kind of lock is often useful with complex shared data structures.
# The object lets any number of "readers" proceed, so long as no thread
# wishes to "write". When a (one or more) thread declares its intention
# to "write" (e.g., to update a shared structure), all current readers
# are allowed to finish, and then a writer gets exclusive access; all
# other readers & writers are blocked until the current writer completes.
# Finally, if some thread is waiting to write and another is waiting to
# read, the writer takes precedence.
#
# Methods:
#
# .read_in()
# If no thread is writing or waiting to write, returns immediately.
# Else blocks until no thread is writing or waiting to write. So
# long as some thread has completed a .read_in but not a .read_out,
# writers are blocked.
#
# .read_out()
# Use sometime after a .read_in to declare that the thread is done
# reading. When all threads complete reading, a writer can proceed.
#
# .write_in()
# If no thread is writing (has completed a .write_in, but hasn't yet
# done a .write_out) or reading (similarly), returns immediately.
# Else blocks the calling thread, and threads waiting to read, until
# the current writer completes writing or all the current readers
# complete reading; if then more than one thread is waiting to
# write, one of them is allowed to proceed, but which one is not
# specified.
#
# .write_out()
# Use sometime after a .write_in to declare that the thread is done
# writing. Then if some other thread is waiting to write, it's
# allowed to proceed. Else all threads (if any) waiting to read are
# allowed to proceed.
#
# .write_to_read()
# Use instead of a .write_in to declare that the thread is done
# writing but wants to continue reading without other writers
# intervening. If there are other threads waiting to write, they
# are allowed to proceed only if the current thread calls
# .read_out; threads waiting to read are only allowed to proceed
# if there are no threads waiting to write. (This is a
# weakness of the interface!)
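#
# For example (a sketch using the interface described above):
#
#     rw = this_module.mrsw()
#
#     # in a reader:                    # in a writer:
#     rw.read_in()                      rw.write_in()
#     # ... read shared data ...        # ... update shared data ...
#     rw.read_out()                     rw.write_out()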
|
"""Drag-and-drop support for Tkinter.
This is very preliminary. I currently only support dnd *within* one
application, between different windows (or within the same window).
I am trying to make this as generic as possible -- not dependent on
the use of a particular widget or icon type, etc. I also hope that
this will work with Pmw.
To enable an object to be dragged, you must create an event binding
for it that starts the drag-and-drop process. Typically, you should
bind <ButtonPress> to a callback function that you write. The function
should call Tkdnd.dnd_start(source, event), where 'source' is the
object to be dragged, and 'event' is the event that invoked the call
(the argument to your callback function). Even though this is a class
instantiation, the returned instance should not be stored -- it will
be kept alive automatically for the duration of the drag-and-drop.
When a drag-and-drop is already in process for the Tk interpreter, the
call is *ignored*; this normally averts starting multiple simultaneous
dnd processes, e.g. because different button callbacks all call
dnd_start().
The object is *not* necessarily a widget -- it can be any
application-specific object that is meaningful to potential
drag-and-drop targets.
Potential drag-and-drop targets are discovered as follows. Whenever
the mouse moves, and at the start and end of a drag-and-drop move, the
Tk widget directly under the mouse is inspected. This is the target
widget (not to be confused with the target object, yet to be
determined). If there is no target widget, there is no dnd target
object. If there is a target widget, and it has an attribute
dnd_accept, this should be a function (or any callable object). The
function is called as dnd_accept(source, event), where 'source' is the
object being dragged (the object passed to dnd_start() above), and
'event' is the most recent event object (generally a <Motion> event;
it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept()
function returns something other than None, this is the new dnd target
object. If dnd_accept() returns None, or if the target widget has no
dnd_accept attribute, the target widget's parent is considered as the
target widget, and the search for a target object is repeated from
there. If necessary, the search is repeated all the way up to the
root widget. If none of the target widgets can produce a target
object, there is no target object (the target object is None).
The target object thus produced, if any, is called the new target
object. It is compared with the old target object (or None, if there
was no old target widget). There are several cases ('source' is the
source object, and 'event' is the most recent event object):
- Both the old and new target objects are None. Nothing happens.
- The old and new target objects are the same object. Its method
dnd_motion(source, event) is called.
- The old target object was None, and the new target object is not
None. The new target object's method dnd_enter(source, event) is
called.
- The new target object is None, and the old target object is not
None. The old target object's method dnd_leave(source, event) is
called.
- The old and new target objects differ and neither is None. The old
target object's method dnd_leave(source, event), and then the new
target object's method dnd_enter(source, event) is called.
Once this is done, the new target object replaces the old one, and the
Tk mainloop proceeds. The return value of the methods mentioned above
is ignored; if they raise an exception, the normal exception handling
mechanisms take over.
The drag-and-drop processes can end in two ways: a final target object
is selected, or no final target object is selected. When a final
target object is selected, it will always have been notified of the
potential drop by a call to its dnd_enter() method, as described
above, and possibly one or more calls to its dnd_motion() method; its
dnd_leave() method has not been called since the last call to
dnd_enter(). The target is notified of the drop by a call to its
method dnd_commit(source, event).
If no final target object is selected, and there was an old target
object, its dnd_leave(source, event) method is called to complete the
dnd sequence.
Finally, the source object is notified that the drag-and-drop process
is over, by a call to source.dnd_end(target, event), specifying either
the selected target object, or None if no target object was selected.
The source object can use this to implement the commit action; this is
sometimes simpler than to do it in the target's dnd_commit(). The
target's dnd_commit() method could then simply be aliased to
dnd_leave().
At any time during a dnd sequence, the application can cancel the
sequence by calling the cancel() method on the object returned by
dnd_start(). This will call dnd_leave() if a target is currently
active; it will never call dnd_commit().
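A minimal sketch of both sides of the protocol (the widget layout and
the Source/Target class names are illustrative, not part of this module):

    import Tkinter
    import Tkdnd

    class Source:
        def __init__(self, label):
            # Start a dnd sequence when the label is pressed.
            label.bind('<ButtonPress>',
                       lambda event: Tkdnd.dnd_start(self, event))
        def dnd_end(self, target, event):
            print 'drag ended on', target

    class Target:
        def dnd_accept(self, source, event):
            return self                    # we are the target object
        def dnd_enter(self, source, event): pass
        def dnd_motion(self, source, event): pass
        def dnd_leave(self, source, event): pass
        def dnd_commit(self, source, event):
            print 'dropped', source

    root = Tkinter.Tk()
    label = Tkinter.Label(root, text='drag me')
    label.pack()
    Source(label)
    # Any widget (here the root) becomes a drop target by growing a
    # dnd_accept attribute:
    root.dnd_accept = Target().dnd_accept
    root.mainloop()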
""" |
"""Generic socket server classes.
This module tries to capture the various aspects of defining a server:
For socket-based servers:
- address family:
- AF_INET{,6}: IP (Internet Protocol) sockets (default)
- AF_UNIX: Unix domain sockets
- others, e.g. AF_DECNET are conceivable (see <socket.h>
- socket type:
- SOCK_STREAM (reliable stream, e.g. TCP)
- SOCK_DGRAM (datagrams, e.g. UDP)
For request-based servers (including socket-based):
- client address verification before further looking at the request
(This is actually a hook for any processing that needs to look
at the request before anything else, e.g. logging)
- how to handle multiple requests:
- synchronous (one request is handled at a time)
- forking (each request is handled by a new process)
- threading (each request is handled by a new thread)
The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server. This is bad class design, but
save some typing. (There's also the issue that a deep class hierarchy
slows down method lookups.)
There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:
+------------+
| BaseServer |
+------------+
|
v
+-----------+ +------------------+
| TCPServer |------->| UnixStreamServer |
+-----------+ +------------------+
|
v
+-----------+ +--------------------+
| UDPServer |------->| UnixDatagramServer |
+-----------+ +--------------------+
Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.
Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes. For
instance, a threading UDP server class is created as follows:
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.
To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method. You can then run
various versions of the service by combining one of the server classes
with your request handler class.
The request handler class must be different for datagram or stream
services. This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
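For example, a minimal threading echo service might look like this (a
sketch assuming the Python 2 module name SocketServer; the handler name
and port are illustrative):

    import SocketServer

    class EchoHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            # rfile/wfile wrap the stream socket as file-like objects.
            line = self.rfile.readline()
            self.wfile.write(line)

    server = SocketServer.ThreadingTCPServer(('localhost', 9999), EchoHandler)
    server.serve_forever()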
Of course, you still have to use your head!
For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child). In this case,
you can use a threading server, but you will probably have to use
locks to avoid two requests that come in nearly simultaneous to apply
conflicting changes to the server state.
On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested. Here a threading or forking
server is appropriate.
In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data. This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.
Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request). This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
and encryption schemes
- Standard framework for select-based multiplexing
XXX Open problems:
- What to do with out-of-band data?
BaseServer:
- split generic "request" functionality out into BaseServer class.
Copyright (C) 2000 NAME <EMAIL>
example: read entries from a SQL database (requires overriding
get_request() to return a table entry from the database).
entry is processed by a RequestHandlerClass.
""" |
#
# XML-RPC CLIENT LIBRARY
# $Id$
#
# an XML-RPC client interface for Python.
#
# the marshalling and response parser code can also be used to
# implement XML-RPC servers.
#
# Notes:
# this version is designed to work with Python 2.1 or newer.
#
# History:
# 1999-01-14 fl Created
# 1999-01-15 fl Changed dateTime to use localtime
# 1999-01-16 fl Added Binary/base64 element, default to RPC2 service
# 1999-01-19 fl Fixed array data element (from Skip Montanaro)
# 1999-01-21 fl Fixed dateTime constructor, etc.
# 1999-02-02 fl Added fault handling, handle empty sequences, etc.
# 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro)
# 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8)
# 2000-11-28 fl Changed boolean to check the truth value of its argument
# 2001-02-24 fl Added encoding/Unicode/SafeTransport patches
# 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1)
# 2001-03-28 fl Make sure response tuple is a singleton
# 2001-03-29 fl Don't require empty params element (from NAME)
# 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2)
# 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME)
# 2001-09-03 fl Allow Transport subclass to override getparser
# 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup)
# 2001-10-01 fl Remove containers from memo cache when done with them
# 2001-10-01 fl Use faster escape method (80% dumps speedup)
# 2001-10-02 fl More dumps microtuning
# 2001-10-04 fl Make sure import expat gets a parser (from NAME)
# 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow
# 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems)
# 2001-11-12 fl Use repr() to marshal doubles (from NAME)
# 2002-03-17 fl Avoid buffered read when possible (from NAME)
# 2002-04-07 fl Added pythondoc comments
# 2002-04-16 fl Added __str__ methods to datetime/binary wrappers
# 2002-05-15 fl Added error constants (from NAME)
# 2002-06-27 fl Merged with Python CVS version
# 2002-10-22 fl Added basic authentication (based on code from NAME)
# 2003-01-22 sm Add support for the bool type
# 2003-02-27 gvr Remove apply calls
# 2003-04-24 sm Use cStringIO if available
# 2003-04-25 ak Add support for nil
# 2003-06-15 gn Add support for time.struct_time
# 2003-07-12 gp Correct marshalling of Faults
# 2003-10-31 mvl Add multicall support
# 2004-08-20 mvl Bump minimum supported Python version to 2.1
# 2014-12-02 ch/doko Add workaround for gzip bomb vulnerability
#
# Copyright (c) 1999-2002 by Secret Labs AB.
# Copyright (c) 1999-2002 by NAME Lundh.
#
# EMAIL http://www.pythonware.com
#
# --------------------------------------------------------------------
# The XML-RPC client interface is
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by NAME Lundh
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
#
# things to look into some day:
# TODO: sort out True/False/boolean issues for Python 2.3
|
"""
******************************************
Hierarchical clustering (``hierarchical``)
******************************************
.. index::
single: clustering, hierarchical, dendrogram
.. index:: agglomerative clustering
The following example shows clustering of the Iris data, with the distance
matrix computed with the :class:`Orange.distance.Euclidean` distance measure
and clustered with average linkage.
.. literalinclude:: code/hierarchical-example-2.py
:lines: 1-12
Data instances belonging to the top-most four clusters
(obtained with :obj:`top_clusters`) could be printed out
with:
.. literalinclude:: code/hierarchical-example-2.py
:lines: 14-19
It could be more informative to
print out the class distributions for each cluster.
.. literalinclude:: code/hierarchical-example-2.py
:lines: 21-26
Here is the output.
::
Iris-setosa: 0 Iris-versicolor: 50 Iris-virginica: 17
Iris-setosa: 49 Iris-versicolor: 0 Iris-virginica: 0
Iris-setosa: 0 Iris-versicolor: 0 Iris-virginica: 33
Iris-setosa: 1 Iris-versicolor: 0 Iris-virginica: 0
The same results could also be obtained with:
.. literalinclude:: code/hierarchical-example-3.py
:lines: 1-7
Basic functionality
-------------------
.. autofunction:: clustering
.. class:: HierarchicalClustering
.. attribute:: linkage
Specifies the linkage method, which can be :obj:`SINGLE`, :obj:`AVERAGE`,
:obj:`COMPLETE` or :obj:`WARD` (see below). Default is :obj:`SINGLE`.
.. attribute:: overwrite_matrix
If True (default is False), the algorithm will save memory
by working on the original distance matrix, destroying it in
the process.
.. attribute:: progress_callback
A callback function (None by default), which will be called 101 times.
The function only gets called if the number of objects is at least 1000.
.. method:: __call__(matrix)
Return the root of the clustering hierarchy (an instance of
:class:`HierarchicalCluster`).
The distance matrix must contain no negative elements, as
this helps the algorithm to run faster. The elements on the
diagonal are ignored. The method works in approximately O(n^2)
time (with the worst case O(n^3)).
:param matrix: A distance matrix to perform the clustering on.
:type matrix: :class:`Orange.misc.SymMatrix`
.. rubric:: Linkage methods
.. data:: SINGLE
Distance between groups is defined as the distance between the closest
pair of objects, one from each group.
.. data:: AVERAGE
Distance between two clusters is defined as the average of distances
between all pairs of objects, where each pair is made up of one
object from each group.
.. data:: COMPLETE
Distance between groups is defined as the distance between the most
distant pair of objects, one from each group. Complete linkage is
also called farthest neighbor.
.. data:: WARD
Ward's distance.
Drawing
--------------
.. autofunction:: dendrogram_draw(file, cluster, attr_cluster=None, labels=None, data=None, width=None, height=None, tree_height=None, heatmap_width=None, text_width=None, spacing=2, cluster_colors={}, color_palette=ColorPalette([(255, 0, 0), (0, 255, 0)]), maxv=None, minv=None, gamma=None, format=None)
.. rubric:: Example
The following script clusters a subset of 20 instances from the Iris data set.
The leaves are labelled with the class value.
.. literalinclude:: code/hierarchical-draw.py
:lines: 1-8
The resulting dendrogram is shown below.
.. image:: files/hclust-dendrogram.png
The following code, which produces the dendrogram below, also colors the
three topmost branches and represents attribute values with a custom color
scheme (spanning red - black - green, with custom gamma, minv and maxv).
.. literalinclude:: code/hierarchical-draw.py
:lines: 10-16
.. image:: files/hclust-colored-dendrogram.png
Cluster analysis
-----------------
.. autofunction:: cluster_to_list
.. autofunction:: top_clusters
.. autofunction:: top_cluster_membership
.. autofunction:: order_leaves
.. autofunction:: postorder
.. autofunction:: preorder
.. autofunction:: prune
.. autofunction:: pruned
.. autofunction:: clone
.. autofunction:: cluster_depths
.. autofunction:: cophenetic_distances
.. autofunction:: cophenetic_correlation
.. autofunction:: joining_cluster
HierarchicalCluster hierarchy
-----------------------------
Results of clustering are stored in a hierarchy of
:obj:`HierarchicalCluster` objects.
.. class:: HierarchicalCluster
A node in the clustering tree, as returned by
:obj:`HierarchicalClustering`.
.. attribute:: branches
A list of sub-clusters (:class:`HierarchicalCluster` instances). If this
is a leaf node, this attribute is `None`.
.. attribute:: left
The left sub-cluster (defined only if there are only two branches).
Same as ``branches[0]``.
.. attribute:: right
The right sub-cluster (defined only if there are only two branches).
Same as ``branches[1]``.
.. attribute:: height
Height of the cluster (distance between the sub-clusters).
.. attribute:: mapping
A list of indices to the original distance matrix. It is the same
for all clusters in the hierarchy - it simply represents the indices
ordered according to the clustering.
.. attribute:: mapping.objects
A sequence describing objects - an :obj:`Orange.data.Table`, a
list of instance, a list of features (when clustering features),
or even a string of the same length as the number of elements.
If objects are given, the cluster's elements, as obtained by indexing
or iteration, are not indices but the corresponding objects. If we
put an :obj:`Orange.data.Table` into objects, ``root.left[-1]``
is the last instance of the first left cluster.
.. attribute:: first
.. attribute:: last
``first`` and ``last`` are indices into the elements of ``mapping`` that
belong to that cluster.
.. method:: __len__()
Asking for the length of the cluster gives the number of objects
belonging to it. This equals ``last - first``.
.. method:: __getitem__(index)
By indexing the cluster we address its elements; these are either
indices or objects.
For instance, ``cluster[2]`` gives the third element of the cluster, and
``list(cluster)`` will return the cluster elements as a list. The cluster
elements are read-only.
.. method:: swap()
Swaps the ``left`` and the ``right`` subcluster; it will
report an error when the cluster has more than two subclusters. This
function changes the mapping and first and last of all clusters below
this one and thus needs O(len(cluster)) time.
.. method:: permute(permutation)
Permutes the subclusters. Permutation gives the order in which the
subclusters will be arranged. As for swap, this function changes the
mapping and first and last of all clusters below this one.
Subclusters are ordered so that ``cluster.left.last`` always equals
``cluster.right.first`` or, in general, ``cluster.branches[i].last``
equals ``cluster.branches[i+1].first``.
Swapping and permutation change the order of
elements in ``branches``, permute the corresponding regions in
:obj:`~HierarchicalCluster.mapping` and adjust the ``first`` and ``last``
for all the clusters below.
.. rubric:: An example
The following example constructs a simple distance matrix and runs clustering
on it.
>>> import Orange
>>> m = [[],
... [ 3],
... [ 2, 4],
... [17, 5, 4],
... [ 2, 8, 3, 8],
... [ 7, 5, 10, 11, 2],
... [ 8, 4, 1, 5, 11, 13],
... [ 4, 7, 12, 8, 10, 1, 5],
... [13, 9, 14, 15, 7, 8, 4, 6],
... [12, 10, 11, 15, 2, 5, 7, 3, 1]]
>>> matrix = Orange.misc.SymMatrix(m)
>>> root = Orange.clustering.hierarchical.HierarchicalClustering(matrix,
... linkage=Orange.clustering.hierarchical.AVERAGE)
``root`` is the root of the cluster hierarchy. We can print it with a
simple recursive function.
>>> def print_clustering(cluster):
... if cluster.branches:
... return "(%s %s)" % (print_clustering(cluster.left), print_clustering(cluster.right))
... else:
... return str(cluster[0])
The clustering looks like this:
>>> print_clustering(root)
'(((0 4) ((5 7) (8 9))) ((1 (2 6)) 3))'
The elements form two groups, the first with elements 0, 4, 5, 7, 8, 9,
and the second with 1, 2, 6, 3. The distance between them equals
>>> print root.height
9.0
The first cluster is further divided into 0 and 4 in one subcluster, and
5, 7, 8, 9 in the other.
The following code prints the left subcluster of root.
>>> for el in root.left:
... print el,
0 4 5 7 8 9
Object descriptions can be added with
>>> root.mapping.objects = ["Ann", "Bob", "Curt", "Danny", "Eve",
... "Fred", "Greg", "Hue", "NAME", "Jon"]
As before, let us print out the elements of the first left cluster
>>> for el in root.left:
... print el,
Ann Eve Fred Hue NAME Jon
``root.left.swap`` reverses the order of subclusters of ``root.left``:
>>> print_clustering(root)
'(((Ann Eve) ((Fred Hue) (NAME Jon))) ((Bob (Curt Greg)) Danny))'
>>> root.left.swap()
>>> print_clustering(root)
'((((Fred Hue) (NAME Jon)) (Ann Eve)) ((Bob (Curt Greg)) Danny))'
Let us write a function for cluster pruning.
>>> def prune(cluster, h):
... if cluster.branches:
... if cluster.height < h:
... cluster.branches = None
... else:
... for branch in cluster.branches:
... prune(branch, h)
Here we need a function that can print leaves with multiple elements.
>>> def print_clustering2(cluster):
... if cluster.branches:
... return "(%s %s)" % (print_clustering2(cluster.left), print_clustering2(cluster.right))
... else:
... return str(tuple(cluster))
After pruning at height 5, four clusters remain.
>>> prune(root, 5)
>>> print print_clustering2(root)
(((('Fred', 'Hue') ('NAME', 'Jon')) ('Ann', 'Eve')) ('Bob', 'Curt', 'Greg', 'Danny'))
The following functions return the elements of the leaf clusters as a list of lists.
>>> def list_of_clusters0(cluster, alist):
... if not cluster.branches:
... alist.append(list(cluster))
... else:
... for branch in cluster.branches:
... list_of_clusters0(branch, alist)
...
>>> def list_of_clusters(root):
... l = []
... list_of_clusters0(root, l)
... return l
In our case, the result is
>>> list_of_clusters(root)
[['Fred', 'Hue'], ['NAME', 'Jon'], ['Ann', 'Eve'], ['Bob', 'Curt', 'Greg', 'Danny']]
If :obj:`~HierarchicalCluster.mapping.objects` were not defined, the list
would contain indices instead of names.
>>> root.mapping.objects = None
>>> print list_of_clusters(root)
[[5, 7], [8, 9], [0, 4], [1, 2, 6, 3]]
Utility Functions
-----------------
.. autofunction:: clustering_features
.. autofunction:: feature_distance_matrix
.. autofunction:: dendrogram_layout
""" |
"""
PHYLIP multiple sequence alignment format (:mod:`skbio.io.format.phylip`)
=========================================================================
.. currentmodule:: skbio.io.format.phylip
The PHYLIP file format stores a multiple sequence alignment. The format was
originally defined and used in NAME PHYLIP package [1]_, and has
since been supported by several other bioinformatics tools (e.g., RAxML [2]_).
See [3]_ for the original format description, and [4]_ and [5]_ for additional
descriptions.
An example PHYLIP-formatted file taken from [3]_::
5 42
Turkey AAGCTNGGGC ATTTCAGGGT GAGCCCGGGC AATACAGGGT AT
Salmo gairAAGCCTTGGC AGTGCAGGGT GAGCCGTGGC CGGGCACGGT AT
H. SapiensACCGGTTGGC CGTTCAGGGT ACAGGTTGGC CGTTCAGGGT AA
Chimp AAACCCTTGC CGTTACGCTT AAACCGAGGC CGGGACACTC AT
Gorilla AAACCCTTGC CGGTACGCTT AAACCATTGC CGGTACGCTT AA
.. note:: Original copyright notice for the above PHYLIP file:
*(c) Copyright 1986-2008 by The University of Washington. Written by
NAME. Permission is granted to copy this document provided that no
fee is charged for it and that this copyright notice is not removed.*
Format Support
--------------
**Has Sniffer: Yes**
+------+------+---------------------------------------------------------------+
|Reader|Writer| Object Class |
+======+======+===============================================================+
|Yes |Yes |:mod:`skbio.alignment.Alignment` |
+------+------+---------------------------------------------------------------+
Format Specification
--------------------
PHYLIP format is a plain text format containing exactly two sections: a header
describing the dimensions of the alignment, followed by the multiple sequence
alignment itself.
The format described here is "strict" PHYLIP, as described in [4]_. Strict
PHYLIP requires that each sequence identifier is exactly 10 characters long
(padded with spaces as necessary). Other bioinformatics tools (e.g., RAxML) may
relax this rule to allow for longer sequence identifiers. See the
**Alignment Section** below for more details.
The format described here is "sequential" format. The original PHYLIP format
specification [3]_ describes both sequential and interleaved formats.
.. note:: scikit-bio currently only supports writing strict, sequential
PHYLIP-formatted files from an ``skbio.alignment.Alignment``. It does not
yet support reading PHYLIP-formatted files, nor does it support relaxed or
interleaved PHYLIP formats.
Header Section
^^^^^^^^^^^^^^
The header consists of a single line describing the dimensions of the
alignment. It **must** be the first line in the file. The header consists of
optional spaces, followed by two positive integers (``n`` and ``m``) separated
by one or more spaces. The first integer (``n``) specifies the number of
sequences (i.e., the number of rows) in the alignment. The second integer
(``m``) specifies the length of the sequences (i.e., the number of columns) in
the alignment. The smallest supported alignment dimensions are 1x1.
.. note:: scikit-bio will write the PHYLIP format header *without* preceding
spaces, and with only a single space between ``n`` and ``m``.
PHYLIP format *does not* support blank line(s) between the header and the
alignment.
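For illustration, a conforming header line could be composed like this
(a sketch, not scikit-bio's actual implementation)::

    n, m = 3, 16                     # number of sequences and their length
    header = '{0} {1}'.format(n, m)  # '3 16': no leading spaces, one space between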
Alignment Section
^^^^^^^^^^^^^^^^^
The alignment section immediately follows the header. It consists of ``n``
lines (rows), one for each sequence in the alignment. Each row consists of a
sequence identifier (ID) and characters in the sequence, in fixed width format.
The sequence ID can be up to 10 characters long. IDs less than 10 characters
must have spaces appended to them to reach the 10 character fixed width. Within
an ID, all characters except newlines are supported, including spaces,
underscores, and numbers.
.. note:: While not explicitly stated in the original PHYLIP format
description, scikit-bio only supports writing unique sequence identifiers
(i.e., duplicates are not allowed). Uniqueness is required because an
``skbio.alignment.Alignment`` cannot be created with duplicate IDs.
scikit-bio supports the empty string (``''``) as a valid sequence ID. An
empty ID will be padded with 10 spaces.
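For illustration, the fixed-width padding described above amounts to
(a sketch, not scikit-bio's internal code)::

    '{0:<10}'.format('seq1')    # -> 'seq1      ' (padded to width 10)
    '{0:<10}'.format('')        # -> '          ' (an empty ID becomes 10 spaces)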
Sequence characters immediately follow the sequence ID. They *must* start at
the 11th character in the line, as the first 10 characters are reserved for the
sequence ID. While PHYLIP format does not explicitly restrict the set of
supported characters that may be used to represent a sequence, the original
format description [3]_ specifies the IUPAC nucleic acid lexicon for DNA or RNA
sequences, and the IUPAC protein lexicon for protein sequences. The original
PHYLIP specification uses ``-`` as a gap character, though older versions also
supported ``.``. The sequence characters may contain optional spaces (e.g., to
improve readability), and both upper and lower case characters are supported.
.. note:: scikit-bio will write a PHYLIP-formatted file even if the alignment's
sequence characters are not valid IUPAC characters. This differs from the
PHYLIP specification, which states that a PHYLIP-formatted file can only
contain valid IUPAC characters. To check whether all characters are valid
before writing, the user can call ``Alignment.is_valid()``.
Since scikit-bio supports both ``-`` and ``.`` as gap characters (e.g., in
``skbio.alignment.Alignment``), both are supported when writing a
PHYLIP-formatted file.
When writing a PHYLIP-formatted file, scikit-bio will split up each sequence
into chunks that are 10 characters long. Each chunk will be separated by a
single space. The sequence will always appear on a single line (sequential
format). It will *not* be wrapped across multiple lines. Sequences are
chunked in this manner for improved readability, and because most example
PHYLIP files are chunked in a similar way (e.g., see the example file
above). Note that this chunking is not required by the PHYLIP format.
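A minimal sketch of this chunking (``chunk_sequence`` is a hypothetical
helper, not part of scikit-bio's API)::

    def chunk_sequence(seq, size=10):
        # split the sequence into fixed-size chunks joined by single spaces
        chunks = [seq[i:i + size] for i in range(0, len(seq), size)]
        return ' '.join(chunks)

    chunk_sequence('ACCGTTGTA-GTAGCT')   # -> 'ACCGTTGTA- GTAGCT'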
Examples
--------
Let's create an alignment with three DNA sequences of equal length:
>>> from skbio import Alignment, DNA
>>> seqs = [DNA('ACCGTTGTA-GTAGCT', metadata={'id':'seq1'}),
... DNA('A--GTCGAA-GTACCT', metadata={'id':'sequence-2'}),
... DNA('AGAGTTGAAGGTATCT', metadata={'id':'3'})]
>>> aln = Alignment(seqs)
>>> aln
<Alignment: n=3; mean +/- std length=16.00 +/- 0.00>
Now let's write the alignment to file in PHYLIP format, and take a look at the
output:
>>> from io import StringIO
>>> fh = StringIO()
>>> print(aln.write(fh, format='phylip').getvalue())
3 16
seq1 ACCGTTGTA- GTAGCT
sequence-2A--GTCGAA- GTACCT
3 AGAGTTGAAG GTATCT
<BLANKLINE>
>>> fh.close()
Notice that the 16-character sequences were split into two chunks, and that
each sequence appears on a single line (sequential format). Also note that each
sequence ID is padded with spaces to 10 characters in order to produce a fixed
width column.
If the sequence IDs in an alignment surpass the 10-character limit, an error
will be raised when we try to write a PHYLIP file:
>>> long_id_seqs = [DNA('ACCGT', metadata={'id':'seq1'}),
... DNA('A--GT', metadata={'id':'long-sequence-2'}),
... DNA('AGAGT', metadata={'id':'seq3'})]
>>> long_id_aln = Alignment(long_id_seqs)
>>> fh = StringIO()
>>> long_id_aln.write(fh, format='phylip')
Traceback (most recent call last):
...
skbio.io._exception.PhylipFormatError: Alignment can only be written in \
PHYLIP format if all sequence IDs have 10 or fewer characters. Found sequence \
with ID 'long-sequence-2' that exceeds this limit. Use Alignment.update_ids \
to assign shorter IDs.
>>> fh.close()
One way to work around this is to update the IDs to be shorter. The recommended
way of accomplishing this is via ``Alignment.update_ids``, which provides a
flexible way of creating a new ``Alignment`` with updated IDs. For example, to
remap each of the IDs to integer-based IDs:
>>> short_id_aln, _ = long_id_aln.update_ids()
>>> short_id_aln.ids()
['1', '2', '3']
We can now write the new alignment in PHYLIP format:
>>> fh = StringIO()
>>> print(short_id_aln.write(fh, format='phylip').getvalue())
3 5
1 ACCGT
2 A--GT
3 AGAGT
<BLANKLINE>
>>> fh.close()
References
----------
.. [1] http://evolution.genetics.washington.edu/phylip.html
.. [2] "RAxML Version 8: A tool for Phylogenetic Analysis and
Post-Analysis of Large Phylogenies". In Bioinformatics, 2014
.. [3] http://evolution.genetics.washington.edu/phylip/doc/sequence.html
.. [4] http://www.phylo.org/tools/obsolete/phylip.html
.. [5] http://www.bioperl.org/wiki/PHYLIP_multiple_alignment_format
""" |
"""
..
>>> from datetime import datetime, date
>>> from cube.models import Cube, Dimension
>>> from cube.views import table_from_cube
>>> import copy
.. currentmodule:: cube
Some fixtures for the examples ...
Some models
>>> class Instrument(models.Model):
... name = models.CharField(max_length=100)
...
>>> class Musician(models.Model):
... firstname = models.CharField(max_length=100)
... lastname = models.CharField(max_length=100)
... instrument = models.ForeignKey(Instrument)
...
>>> class Song(models.Model):
... title = models.CharField(max_length=100)
... release_date = models.DateField()
... author = models.ForeignKey(Musician)
...
Some instruments
>>> trumpet = Instrument(name='trumpet')
>>> piano = Instrument(name='piano')
>>> sax = Instrument(name='sax')
..
>>> trumpet.save() ; piano.save() ; sax.save()
Some musicians
>>> miles_davis = Musician(firstname='Miles', lastname='Davis', instrument=trumpet)
>>> freddie_hubbard = Musician(firstname='Freddie', lastname='Hubbard', instrument=trumpet)
>>> erroll_garner = Musician(firstname='Erroll', lastname='Garner', instrument=piano)
>>> bill_evans_p = Musician(firstname='Bill', lastname='Evans', instrument=piano)
>>> thelonious_monk = Musician(firstname='Thelonious', lastname='NAME', instrument=piano)
>>> bill_evans_s = Musician(firstname='Bill', lastname='Evans', instrument=sax)
..
>>> miles_davis.save() ; freddie_hubbard.save() ; erroll_garner.save() ; bill_evans_p.save() ; thelonious_monk.save() ; bill_evans_s.save()
Some songs
>>> so_what = Song(title='So What', author=miles_davis, release_date=date(1959, 8, 17))
>>> all_blues = Song(title='All Blues', author=miles_davis, release_date=date(1959, 8, 17))
>>> blue_in_green = Song(title='Blue In Green', author=bill_evans_p, release_date=date(1959, 8, 17))
>>> south_street_stroll = Song(title='South Street Stroll', author=freddie_hubbard, release_date=date(1969, 1, 21))
>>> well_you_neednt = Song(title='Well You Needn\\'t', author=thelonious_monk, release_date=date(1944, 2, 1))
>>> blue_monk = Song(title='Blue NAME', author=thelonious_monk, release_date=date(1945, 2, 1))
..
>>> so_what.save() ; all_blues.save() ; blue_in_green.save() ; south_street_stroll.save() ; well_you_neednt.save() ; blue_monk.save()
Dimension
===========
..
----- Deep copy
>>> d = Dimension(field='attribute__date__absmonth', queryset=[1, 2, 3], sample_space=[89, 99])
>>> d_copy = copy.deepcopy(d)
>>> id(d_copy) != id(d)
True
>>> d_copy.field == d.field
True
>>> id(d_copy.sample_space) != id(d.sample_space) ; d_copy.sample_space == d.sample_space
True
True
>>> id(d_copy.queryset) != id(d.queryset) ; d_copy.queryset == d.queryset
True
True
----- Formatting datetimes constraint
>>> d = Dimension(field='attribute__date__absmonth')
>>> d.constraint = date(3000, 7, 1)
>>> d.to_queryset_filter() == {'attribute__date__month': 7, 'attribute__date__year': 3000}
True
>>> d = Dimension(field='attribute__date__absday')
>>> d.constraint = datetime(1990, 8, 23, 0, 0, 0)
>>> d.to_queryset_filter() == {'attribute__date__day': 23, 'attribute__date__month': 8, 'attribute__date__year': 1990}
True
>>> d = Dimension()
>>> d._name = 'myname'
>>> d.constraint = 'coucou'
>>> d.to_queryset_filter() == {'myname': 'coucou'}
True
Setting a dimension's sample space
---------------------------------------
You can explicitly set the sample space for a dimension by passing to the constructor a keyword *sample_space* that is an iterable. It works with lists:
>>> d = Dimension(field='instrument__name', sample_space=['trumpet', 'piano'])
>>> d.get_sample_space(sort=True) == sorted(['trumpet', 'piano'])
True
But also with querysets (any iterable):
>>> d = Dimension(field='instrument', sample_space=Instrument.objects.filter(name__contains='a').order_by('name'))
>>> d.get_sample_space() == [piano, sax]
True
Default sample space for a dimension
-----------------------------------------------
If you didn't explicitly give the sample space of a dimension, the method :meth:`get_sample_space` will return a default sample space taken from the dimension's queryset.
>>> d = Dimension(field='title', queryset=Song.objects.all())
>>> set(d.get_sample_space()) == set([
... 'So What', 'All Blues', 'Blue In Green',
... 'South Street Stroll', 'Well You Needn\\'t', 'Blue NAME'
... ])
True
It also works with field names that use Django field-lookup syntax
>>> d = Dimension(field='release_date__year', queryset=Song.objects.all())
>>> d.get_sample_space() == sorted([1944, 1969, 1959, 1945])
True
And you can also use the special "field-lookups" *absmonth* or *absday*:
>>> d = Dimension(field='release_date__absmonth', queryset=Song.objects.all())
>>> d.get_sample_space() == sorted([
... datetime(1969, 1, 1, 0, 0), datetime(1945, 2, 1, 0, 0),
... datetime(1944, 2, 1, 0, 0), datetime(1959, 8, 1, 0, 0)
... ])
True
>>> d = Dimension(field='release_date__absday', queryset=Song.objects.all())
>>> d.get_sample_space() == sorted([
... datetime(1969, 1, 21, 0, 0), datetime(1945, 2, 1, 0, 0),
... datetime(1944, 2, 1, 0, 0), datetime(1959, 8, 17, 0, 0)
... ])
True
You can traverse foreign keys,
>>> d = Dimension(field='author__firstname', queryset=Song.objects.all())
>>> d.get_sample_space(sort=True) == sorted(['Bill', 'Miles', 'Thelonious', 'Freddie'])
True
>>> d = Dimension(field='author__instrument__name', queryset=Song.objects.all())
>>> d.get_sample_space(sort=True) == sorted(['piano', 'trumpet'])
True
and refer to any type of field, even a Django object:
>>> d = Dimension(field='author__instrument', queryset=Song.objects.all())
>>> d.get_sample_space(sort=True) == [trumpet, piano] # django objects are ordered by their pk
True
>>> d = Dimension(field='author', queryset=Song.objects.all())
>>> d.get_sample_space(sort=True) == [
... miles_davis, freddie_hubbard,
... bill_evans_p, thelonious_monk,
... ]
True
Giving dimension's sample space as a callable
---------------------------------------------
You can pass a callable to the dimension's constructor to set its sample space. This callable takes a queryset as a parameter and returns the sample space. For example:
>>> def select_contains_s(queryset):
... #This function returns all musicians that wrote a song
... #and whose last name contains at least one 's'
... s_queryset = queryset.filter(author__lastname__icontains='s').distinct().select_related()
... m_queryset = Musician.objects.filter(pk__in=s_queryset.values_list('author', flat=True))
... return list(m_queryset)
>>> d = Dimension(field='author', queryset=Song.objects.all(), sample_space=select_contains_s)
>>> d.get_sample_space() == [
... miles_davis, bill_evans_p
... ]
True
Overriding the display of dimension's value
---------------------------------------------
:class:`Dimension` provides a property :meth:`Dimension.pretty_constraint` which gives a pretty version of the dimension's value (AKA its constraint). To customize this display, just declare a new subclass of :class:`Dimension` and override the :meth:`pretty_constraint` property. For example, this displays an Instrument object as its capitalized name:
>>> class InstrumentDimension(Dimension):
... @property
... def pretty_constraint(self):
... return self.constraint.name.capitalize()
Cube
======
..
Metaclass
-----------
>>> class MyCube(Cube):
... dim1 = Dimension()
... dim2 = Dimension()
>>> set([dim.name for dim in MyCube._meta.dimensions.values()]) == set(['dim1', 'dim2'])
True
Inheritance
--------------
>>> class ParentCube(Cube):
... dim1 = Dimension()
... dim2 = Dimension()
>>> class ChildCube(ParentCube):
... pass
>>> set([dim.name for dim in ChildCube._meta.dimensions.values()]) == set(['dim1', 'dim2'])
True
>>> set(ChildCube._meta.dimensions.values()) == set(ParentCube._meta.dimensions.values())
False
Declaring cubes
-----------------
Declaring a cube is very similar to declaring a Django model, with dimensions instead of fields. Notice that you have to override the static method :meth:`aggregation`, which calculates the aggregation on a given queryset.
>>> class SongCube(Cube):
... author = Dimension()
... auth_name = Dimension(field='author__lastname')
... date = Dimension(field='release_date')
... date_absmonth = Dimension(field='release_date__absmonth')
... date_month = Dimension(field='release_date__month')
... date_year = Dimension(field='release_date__year')
...
... @staticmethod
... def aggregation(queryset):
... return queryset.count()
...
>>> class MusicianCube(Cube):
... instrument_name = Dimension(field='instrument__name')
... instrument_cat = Dimension(field='instrument__name__in',
... sample_space=[('trumpet', 'piano'), ('trumpet', 'sax'), ('sax', 'piano')])
... instrument = InstrumentDimension()
... firstname = Dimension()
... lastname = Dimension()
...
... @staticmethod
... def aggregation(queryset):
... return queryset.count()
..
----- Deep copy
>>> c = MusicianCube(Musician.objects.all())
>>> c_copy = copy.deepcopy(c)
>>> id(c_copy) != id(c)
True
>>> set(c_copy.dimensions.keys()) == set(c.dimensions.keys())
True
>>> c_copy.constraint == c.constraint
True
>>> id(c_copy.queryset) != id(c.queryset) ; list(c_copy.queryset) == list(c.queryset)
True
True
Get a cube's sample space
----------------------------
On the cube, you can get the sample space for one dimension like this:
>>> c.get_sample_space('firstname', format='flat') == ['Bill', 'Erroll', 'Freddie', 'Miles', 'Thelonious']
True
and the cube's sample space for several dimensions like this:
>>> c.get_sample_space('firstname', 'instrument_name') == [
... {'firstname': 'Bill', 'instrument_name': 'piano'},
... {'firstname': 'Bill', 'instrument_name': 'sax'},
... {'firstname': 'Bill', 'instrument_name': 'trumpet'},
... {'firstname': 'Erroll', 'instrument_name': 'piano'},
... {'firstname': 'Erroll', 'instrument_name': 'sax'},
... {'firstname': 'Erroll', 'instrument_name': 'trumpet'},
... {'firstname': 'Freddie', 'instrument_name': 'piano'},
... {'firstname': 'Freddie', 'instrument_name': 'sax'},
... {'firstname': 'Freddie', 'instrument_name': 'trumpet'},
... {'firstname': 'Miles', 'instrument_name': 'piano'},
... {'firstname': 'Miles', 'instrument_name': 'sax'},
... {'firstname': 'Miles', 'instrument_name': 'trumpet'},
... {'firstname': 'Thelonious', 'instrument_name': 'piano'},
... {'firstname': 'Thelonious', 'instrument_name': 'sax'},
... {'firstname': 'Thelonious', 'instrument_name': 'trumpet'},
... ]
True
And note that if one dimension is already constrained, the sample space for the cube on this dimension is the constraint value:
>>> c = c.constrain(firstname='Bill')
>>> c.get_sample_space('firstname', 'instrument_name') == [
... {'firstname': 'Bill', 'instrument_name': 'piano'},
... {'firstname': 'Bill', 'instrument_name': 'sax'},
... {'firstname': 'Bill', 'instrument_name': 'trumpet'},
... ]
True
Getting a measure from the cube
--------------------------------
Once you have instantiated a cube with a base queryset, you can access a measure at any set of valid coordinates.
>>> c = MusicianCube(Musician.objects.all())
>>> c.measure(firstname='Miles')
1
>>> c.measure(firstname='Bill')
2
>>> c.measure(firstname='Miles', instrument_name='trumpet')
1
>>> c.measure(firstname='Miles', instrument_name='piano')
0
>>> c.measure()
6
Iterating over cube's subcubes
---------------------------------
If your cube has no constrained dimension, querying its subcubes will yield as many subcubes as there are combinations of elements from the dimensions' sample spaces. For example:
>>> ['%s' % subcube for subcube in c.subcubes('firstname', 'instrument_name')] == [
... 'Cube(instrument, instrument_cat, lastname, firstname=Bill, instrument_name=piano)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Bill, instrument_name=sax)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Bill, instrument_name=trumpet)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Erroll, instrument_name=piano)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Erroll, instrument_name=sax)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Erroll, instrument_name=trumpet)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Freddie, instrument_name=piano)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Freddie, instrument_name=sax)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Freddie, instrument_name=trumpet)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Miles, instrument_name=piano)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Miles, instrument_name=sax)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Miles, instrument_name=trumpet)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Thelonious, instrument_name=piano)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Thelonious, instrument_name=sax)',
... 'Cube(instrument, instrument_cat, lastname, firstname=Thelonious, instrument_name=trumpet)'
... ]
True
On the other hand, if your cube is constrained, all the subcubes yielded will be constrained as well:
>>> c = MusicianCube(Musician.objects.all()).constrain(firstname='Miles')
>>> ['%s' % subcube for subcube in c.subcubes('firstname')] == [
... 'Cube(instrument, instrument_cat, instrument_name, lastname, firstname=Miles)',
... ]
True
List of measures as dictionaries
----------------------------------
Using :meth:`Cube.measures`, you can get a list of measures, very similar to what is returned by the `.values()` method on a Django queryset.
>>> c = MusicianCube(Musician.objects.filter(instrument__name__in=['piano', 'trumpet']))
>>> c.measures('firstname', 'instrument_name') == [
... {'firstname': 'Bill', 'instrument_name': 'piano', '__measure': 1},
... {'firstname': 'Bill', 'instrument_name': 'trumpet', '__measure': 0},
... {'firstname': 'Erroll', 'instrument_name': 'piano', '__measure': 1},
... {'firstname': 'Erroll', 'instrument_name': 'trumpet', '__measure': 0},
... {'firstname': 'Freddie', 'instrument_name': 'piano', '__measure': 0},
... {'firstname': 'Freddie', 'instrument_name': 'trumpet', '__measure': 1},
... {'firstname': 'Miles', 'instrument_name': 'piano', '__measure': 0},
... {'firstname': 'Miles', 'instrument_name': 'trumpet', '__measure': 1},
... {'firstname': 'Thelonious', 'instrument_name': 'piano', '__measure': 1},
... {'firstname': 'Thelonious', 'instrument_name': 'trumpet', '__measure': 0},
... ]
True
Multidimensional dictionary of measures
-------------------------------------------
Using :meth:`Cube.measures_dict`, you can get a dictionary of all the measures, organized by dimensions:
>>> c = MusicianCube(Musician.objects.filter(instrument__name__in=['piano', 'trumpet']))
>>> c.measures_dict('firstname', 'instrument_name') == {
... 'subcubes': {
... 'Bill': {
... 'subcubes': {
... 'piano': {'measure': 1},
... 'trumpet': {'measure': 0},
... },
... 'measure': 1
... },
... 'Miles': {
... 'subcubes': {
... 'piano': {'measure': 0},
... 'trumpet': {'measure': 1},
... },
... 'measure': 1
... },
... 'Thelonious': {
... 'subcubes': {
... 'piano': {'measure': 1},
... 'trumpet': {'measure': 0},
... },
... 'measure': 1
... },
... 'Freddie': {
... 'subcubes': {
... 'piano': {'measure': 0},
... 'trumpet': {'measure': 1},
... },
... 'measure': 1
... },
... 'Erroll': {
... 'subcubes': {
... 'piano': {'measure': 1},
... 'trumpet': {'measure': 0},
... },
... 'measure': 1
... },
... },
... 'measure': 5
... }
True
You can do the same thing but calculate measures only for the subcubes in which all the dimensions passed to :meth:`measures_dict` are fixed.
>>> c.measures_dict('firstname', 'instrument_name', full=False) == {
... 'Bill': {
... 'piano': {'measure': 1},
... 'trumpet': {'measure': 0},
... },
... 'Miles': {
... 'piano': {'measure': 0},
... 'trumpet': {'measure': 1},
... },
... 'Thelonious': {
... 'piano': {'measure': 1},
... 'trumpet': {'measure': 0},
... },
... 'Freddie': {
... 'piano': {'measure': 0},
... 'trumpet': {'measure': 1},
... },
... 'Erroll': {
... 'piano': {'measure': 1},
... 'trumpet': {'measure': 0},
... },
... }
True
Multidimensional list of measures
------------------------------------
Using :meth:`Cube.measures_list`, you can get a list of measures organized by dimension:
>>> c.measures_list('firstname', 'instrument_name') == [
... [1, 0], #Bill: piano, trumpet
... [1, 0], #Erroll ...
... [0, 1], #Freddie ...
... [0, 1], #Miles ...
... [1, 0], #Thelonious ...
... ]
True
>>> other_c = MusicianCube(Musician.objects.filter(instrument__name__in=['piano']))
>>> other_c.measures_list('firstname', 'instrument_name', 'lastname') == [
... [[1, 0, 0]], #Bill: piano: NAME NAME NAME
... [[0, 1, 0]], #Erroll ...
... [[0, 0, 1]], #Thelonious ...
... ]
True
Getting a subcube
------------------
You can get a subcube of a cube by constraining it:
>>> subcube = c.constrain(instrument_name='trumpet')
>>> subcube.measures_dict('firstname', 'instrument_name', full=False) == {
... 'Bill': {
... 'trumpet': {'measure': 0},
... },
... 'Erroll': {
... 'trumpet': {'measure': 0},
... },
... 'Freddie': {
... 'trumpet': {'measure': 1},
... },
... 'Miles': {
... 'trumpet': {'measure': 1},
... },
... 'Thelonious': {
... 'trumpet': {'measure': 0},
... },
... }
True
Using Django field-lookup syntax for date dimensions (see the dimensions declaration) works pretty well too:
>>> c = SongCube(Song.objects.all())
>>> subcube = c.constrain(date_month=2)
>>> subcube.measures_dict('date_month', 'date_year', 'auth_name', full=False) == {
... 2: {
... 1945: {
... 'Davis': {'measure': 0},
... 'Hubbard': {'measure': 0},
... 'Evans': {'measure': 0},
... 'NAME': {'measure': 1}
... },
... 1944: {
... 'Davis': {'measure': 0},
... 'Hubbard': {'measure': 0},
... 'Evans': {'measure': 0},
... 'NAME': {'measure': 1}
... },
... 1969: {
... 'Davis': {'measure': 0},
... 'Hubbard': {'measure': 0},
... 'Evans': {'measure': 0},
... 'NAME': {'measure': 0}
... },
... 1959: {
... 'Davis': {'measure': 0},
... 'Hubbard': {'measure': 0},
... 'Evans': {'measure': 0},
... 'NAME': {'measure': 0}
... },
... }
... }
True
The same goes for Django field-lookup syntax for relations (see the dimensions declaration):
>>> c = MusicianCube(Musician.objects.all())
>>> c.measures_dict('instrument_cat', 'firstname', full=False) == {
... ('trumpet', 'piano'): {
... 'Bill': {'measure': 1},
... 'Erroll': {'measure': 1},
... 'Miles': {'measure': 1},
... 'Freddie': {'measure': 1},
... 'Thelonious': {'measure': 1},
... },
... ('trumpet', 'sax'): {
... 'Bill': {'measure': 1},
... 'Erroll': {'measure': 0},
... 'Miles': {'measure': 1},
... 'Freddie': {'measure': 1},
... 'Thelonious': {'measure': 0},
... },
... ('sax', 'piano'): {
... 'Bill': {'measure': 2},
... 'Erroll': {'measure': 1},
... 'Miles': {'measure': 0},
... 'Freddie': {'measure': 0},
... 'Thelonious': {'measure': 1},
... },
... }
True
Sorting results
---------------------
We declare a cube that overrides *sort_key* to provide custom sorting.
>>> class SortedCube(Cube):
... instrument_name = Dimension(field='instrument__name')
... firstname = Dimension()
... lastname = Dimension()
...
... @staticmethod
... def sort_key(coordinates):
... coordinates = dict(coordinates)
... if coordinates.get('firstname'):
... return coordinates.pop('firstname') + ''.join(coordinates.values())
...
... @staticmethod
... def aggregation(queryset):
... return queryset.count()
Now, every time the dimension *firstname* is used, it takes priority over the other dimensions for sorting.
>>> ['%s' % c for c in SortedCube(Musician.objects.all()).subcubes('instrument_name', 'firstname')] == [
... u'Cube(lastname, firstname=Bill, instrument_name=piano)',
... u'Cube(lastname, firstname=Bill, instrument_name=sax)',
... u'Cube(lastname, firstname=Bill, instrument_name=trumpet)',
... u'Cube(lastname, firstname=Erroll, instrument_name=piano)',
... u'Cube(lastname, firstname=Erroll, instrument_name=sax)',
... u'Cube(lastname, firstname=Erroll, instrument_name=trumpet)',
... u'Cube(lastname, firstname=Freddie, instrument_name=piano)',
... u'Cube(lastname, firstname=Freddie, instrument_name=sax)',
... u'Cube(lastname, firstname=Freddie, instrument_name=trumpet)',
... u'Cube(lastname, firstname=Miles, instrument_name=piano)',
... u'Cube(lastname, firstname=Miles, instrument_name=sax)',
... u'Cube(lastname, firstname=Miles, instrument_name=trumpet)',
... u'Cube(lastname, firstname=Thelonious, instrument_name=piano)',
... u'Cube(lastname, firstname=Thelonious, instrument_name=sax)',
... u'Cube(lastname, firstname=Thelonious, instrument_name=trumpet)'
... ]
True
>>> ['%s' % c for c in SortedCube(Musician.objects.all()).subcubes('firstname', 'instrument_name')] == [
... u'Cube(lastname, firstname=Bill, instrument_name=piano)',
... u'Cube(lastname, firstname=Bill, instrument_name=sax)',
... u'Cube(lastname, firstname=Bill, instrument_name=trumpet)',
... u'Cube(lastname, firstname=Erroll, instrument_name=piano)',
... u'Cube(lastname, firstname=Erroll, instrument_name=sax)',
... u'Cube(lastname, firstname=Erroll, instrument_name=trumpet)',
... u'Cube(lastname, firstname=Freddie, instrument_name=piano)',
... u'Cube(lastname, firstname=Freddie, instrument_name=sax)',
... u'Cube(lastname, firstname=Freddie, instrument_name=trumpet)',
... u'Cube(lastname, firstname=Miles, instrument_name=piano)',
... u'Cube(lastname, firstname=Miles, instrument_name=sax)',
... u'Cube(lastname, firstname=Miles, instrument_name=trumpet)',
... u'Cube(lastname, firstname=Thelonious, instrument_name=piano)',
... u'Cube(lastname, firstname=Thelonious, instrument_name=sax)',
... u'Cube(lastname, firstname=Thelonious, instrument_name=trumpet)'
... ]
True
Template tags and filters
============================
..
>>> from cube.templatetags import cube_templatetags
>>> from django.template import Template, Context, Variable
>>> import re
Iterating over cube's subcubes
-------------------------------
Let's create a cube
>>> c = MusicianCube(Musician.objects.filter(firstname__in=['Bill', 'Miles']))
Here's how to use the template tag *subcubes* to iterate over subcubes:
>>> context = Context({'my_cube': c, 'dim1': 'firstname'})
>>> template = Template(
... '{% load cube_templatetags %}'
... '{% subcubes my_cube by dim1, "instrument_name" as subcube1 %}'
... '{{ subcube1 }}:{{ subcube1.measure }}'
... '{% subcubes subcube1 by "lastname" as subcube2 %}'
... '{{ subcube2 }}:{{ subcube2.measure }}'
... '{% endsubcubes %}'
... '{% endsubcubes %}'
... )
Here is what the rendering gives:
>>> awaited = ''\\
... 'Cube(instrument, instrument_cat, lastname, firstname=Bill, instrument_name=piano):1'\\
... 'Cube(instrument, instrument_cat, firstname=Bill, instrument_name=piano, lastname=Davis):0'\\
... 'Cube(instrument, instrument_cat, firstname=Bill, instrument_name=piano, lastname=Evans):1'\\
... 'Cube(instrument, instrument_cat, lastname, firstname=Bill, instrument_name=sax):1'\\
... 'Cube(instrument, instrument_cat, firstname=Bill, instrument_name=sax, lastname=Davis):0'\\
... 'Cube(instrument, instrument_cat, firstname=Bill, instrument_name=sax, lastname=Evans):1'\\
... 'Cube(instrument, instrument_cat, lastname, firstname=Bill, instrument_name=trumpet):0'\\
... 'Cube(instrument, instrument_cat, firstname=Bill, instrument_name=trumpet, lastname=Davis):0'\\
... 'Cube(instrument, instrument_cat, firstname=Bill, instrument_name=trumpet, lastname=Evans):0'\\
... 'Cube(instrument, instrument_cat, lastname, firstname=Miles, instrument_name=piano):0'\\
... 'Cube(instrument, instrument_cat, firstname=Miles, instrument_name=piano, lastname=Davis):0'\\
... 'Cube(instrument, instrument_cat, firstname=Miles, instrument_name=piano, lastname=Evans):0'\\
... 'Cube(instrument, instrument_cat, lastname, firstname=Miles, instrument_name=sax):0'\\
... 'Cube(instrument, instrument_cat, firstname=Miles, instrument_name=sax, lastname=Davis):0'\\
... 'Cube(instrument, instrument_cat, firstname=Miles, instrument_name=sax, lastname=Evans):0'\\
... 'Cube(instrument, instrument_cat, lastname, firstname=Miles, instrument_name=trumpet):1'\\
... 'Cube(instrument, instrument_cat, firstname=Miles, instrument_name=trumpet, lastname=Davis):1'\\
... 'Cube(instrument, instrument_cat, firstname=Miles, instrument_name=trumpet, lastname=Evans):0'\\
..
>>> awaited == template.render(context)
True
Get a pretty display of a dimension's constraint
----------------------------------------------------
In your templates, you can access the pretty value of a dimension's constraint by using the filter `prettyconstraint`. This will call the method :meth:`Dimension.pretty_constraint` on the dimension whose name is passed as an argument.
>>> c = MusicianCube(Musician.objects.all()).constrain(
... firstname='John',
... instrument=sax,
... )
>>> context = Context({'my_cube': c})
>>> template = Template(
... '{% load cube_templatetags %}'
... '>FUNKY<{{ my_cube|prettyconstraint:\\'instrument\\' }}>FUNKY<'
... )
>>> template.render(context)
u'>FUNKY<Sax>FUNKY<'
..
----- Test creation of table from cube context
>>> c = MusicianCube(Musician.objects.all())
>>> c.table_helper('firstname', 'instrument') == {
... 'col_names': [
... ('Bill', 'Bill'),
... ('Erroll', 'Erroll'),
... ('Freddie', 'Freddie'),
... ('Miles', 'Miles'),
... ('Thelonious', 'Thelonious'),
... ],
... 'cols': [
... {'name': 'Bill', 'pretty_name': 'Bill', 'values': [0, 1, 1], 'overall': 2},
... {'name': 'Erroll', 'pretty_name': 'Erroll', 'values': [0, 1, 0], 'overall': 1},
... {'name': 'Freddie', 'pretty_name': 'Freddie', 'values': [1, 0, 0], 'overall': 1},
... {'name': 'Miles', 'pretty_name': 'Miles', 'values': [1, 0, 0], 'overall': 1},
... {'name': 'Thelonious', 'pretty_name': 'Thelonious', 'values': [0, 1, 0], 'overall': 1}
... ],
... 'col_overalls': [2, 1, 1, 1, 1],
... 'row_names': [
... (trumpet, 'Trumpet'),
... (piano, 'Piano'),
... (sax, 'Sax'),
... ],
... 'rows': [
... {'name': trumpet, 'pretty_name': 'Trumpet', 'values': [0, 0, 1, 1, 0], 'overall': 2},
... {'name': piano, 'pretty_name': 'Piano', 'values': [1, 1, 0, 0, 1], 'overall': 3},
... {'name': sax, 'pretty_name': 'Sax', 'values': [1, 0, 0, 0, 0], 'overall': 1},
... ],
... 'row_overalls': [2, 3, 1],
... 'col_dim_name': 'firstname',
... 'row_dim_name': 'instrument',
... 'overall': 6,
... }
True
Insert a table
----------------
Let's create a cube
>>> c = MusicianCube(Musician.objects.all())
Here's how to use the inclusion tag *tablefromcube* to insert a table in your template:
>>> context = Context({'my_cube': c, 'dim1': 'firstname', 'template_name': 'table_from_cube.html'})
>>> template = Template(
... '{% load cube_templatetags %}'
... '{% tablefromcube my_cube by dim1, "instrument_name" using template_name %}'
... )
It will render 'template_name' with a context built from :meth:`models.Cube.table_helper`.
Here is what the rendering gives:
>>> awaited = ''\\
... '<table>'\\
... '<theader>'\\
... '<tr>'\\
... '<th></th>'\\
... '<th>Bill</th>'\\
... '<th>Erroll</th>'\\
... '<th>Freddie</th>'\\
... '<th>Miles</th>'\\
... '<th>Thelonious</th>'\\
... '<th>OVERALL</th>'\\
... '</tr>'\\
... '</theader>'\\
... '<tbody>'\\
... '<tr>'\\
... '<th>piano</th>'\\
... '<td>1</td>'\\
... '<td>1</td>'\\
... '<td>0</td>'\\
... '<td>0</td>'\\
... '<td>1</td>'\\
... '<td>3</td>'\\
... '</tr>'\\
... '<tr>'\\
... '<th>sax</th>'\\
... '<td>1</td>'\\
... '<td>0</td>'\\
... '<td>0</td>'\\
... '<td>0</td>'\\
... '<td>0</td>'\\
... '<td>1</td>'\\
... '</tr>'\\
... '<tr>'\\
... '<th>trumpet</th>'\\
... '<td>0</td>'\\
... '<td>0</td>'\\
... '<td>1</td>'\\
... '<td>1</td>'\\
... '<td>0</td>'\\
... '<td>2</td>'\\
... '</tr>'\\
... '</tbody>'\\
... '<tfoot>'\\
... '<tr>'\\
... '<th>OVERALL</th>'\\
... '<td>2</td>'\\
... '<td>1</td>'\\
... '<td>1</td>'\\
... '<td>1</td>'\\
... '<td>1</td>'\\
... '<td>6</td>'\\
... '</tr>'\\
... '</tfoot>'\\
... '</table>'
..
>>> awaited == re.sub(' |\\n', '', template.render(context))
True
Views
=======
Get a table from a cube
-------------------------
Let's create a cube
>>> c = MusicianCube(Musician.objects.all())
..
>>> from django.http import HttpRequest
>>> request = HttpRequest()
Let's use the view :func:`views.table_from_cube`, which will render the template with a context built from :meth:`models.Cube.table_helper`:
>>> response = table_from_cube(request, cube=c, dimensions=['firstname', 'instrument_name'])
Here is what the rendering gives:
>>> awaited = ''\\
... 'Content-Type:text/html;charset=utf-8'\\
... '<table>'\\
... '<theader>'\\
... '<tr>'\\
... '<th></th>'\\
... '<th>Bill</th>'\\
... '<th>Erroll</th>'\\
... '<th>Freddie</th>'\\
... '<th>Miles</th>'\\
... '<th>Thelonious</th>'\\
... '<th>OVERALL</th>'\\
... '</tr>'\\
... '</theader>'\\
... '<tbody>'\\
... '<tr>'\\
... '<th>piano</th>'\\
... '<td>1</td>'\\
... '<td>1</td>'\\
... '<td>0</td>'\\
... '<td>0</td>'\\
... '<td>1</td>'\\
... '<td>3</td>'\\
... '</tr>'\\
... '<tr>'\\
... '<th>sax</th>'\\
... '<td>1</td>'\\
... '<td>0</td>'\\
... '<td>0</td>'\\
... '<td>0</td>'\\
... '<td>0</td>'\\
... '<td>1</td>'\\
... '</tr>'\\
... '<tr>'\\
... '<th>trumpet</th>'\\
... '<td>0</td>'\\
... '<td>0</td>'\\
... '<td>1</td>'\\
... '<td>1</td>'\\
... '<td>0</td>'\\
... '<td>2</td>'\\
... '</tr>'\\
... '</tbody>'\\
... '<tfoot>'\\
... '<tr>'\\
... '<th>OVERALL</th>'\\
... '<td>2</td>'\\
... '<td>1</td>'\\
... '<td>1</td>'\\
... '<td>1</td>'\\
... '<td>1</td>'\\
... '<td>6</td>'\\
... '</tr>'\\
... '</tfoot>'\\
... '</table>'
..
>>> awaited == re.sub(' |\\n|<BLANKLINE>', '', str(response))
True
""" |
"""
Discrete Fourier Transform (:mod:`numpy.fft`)
=============================================
.. currentmodule:: numpy.fft
Standard FFTs
-------------
.. autosummary::
:toctree: generated/
fft Discrete Fourier transform.
ifft Inverse discrete Fourier transform.
fft2 Discrete Fourier transform in two dimensions.
ifft2 Inverse discrete Fourier transform in two dimensions.
fftn Discrete Fourier transform in N-dimensions.
ifftn Inverse discrete Fourier transform in N dimensions.
Real FFTs
---------
.. autosummary::
:toctree: generated/
rfft Real discrete Fourier transform.
irfft Inverse real discrete Fourier transform.
rfft2 Real discrete Fourier transform in two dimensions.
irfft2 Inverse real discrete Fourier transform in two dimensions.
rfftn Real discrete Fourier transform in N dimensions.
irfftn Inverse real discrete Fourier transform in N dimensions.
Hermitian FFTs
--------------
.. autosummary::
:toctree: generated/
hfft Hermitian discrete Fourier transform.
ihfft Inverse Hermitian discrete Fourier transform.
Helper routines
---------------
.. autosummary::
:toctree: generated/
fftfreq Discrete Fourier Transform sample frequencies.
rfftfreq DFT sample frequencies (for usage with rfft, irfft).
fftshift Shift zero-frequency component to center of spectrum.
ifftshift Inverse of fftshift.
Background information
----------------------
Fourier analysis is fundamentally a method for expressing a function as a
sum of periodic components, and for recovering the function from those
components. When both the function and its Fourier transform are
replaced with discretized counterparts, it is called the discrete Fourier
transform (DFT). The DFT has become a mainstay of numerical computing in
part because of a very fast algorithm for computing it, called the Fast
Fourier Transform (FFT), which was known to Gauss (1805) and was brought
to light in its current form by NAME and NAME [CT]_. Press et al. [NR]_
provide an accessible introduction to Fourier analysis and its
applications.
Because the discrete Fourier transform separates its input into
components that contribute at discrete frequencies, it has a great number
of applications in digital signal processing, e.g., for filtering, and in
this context the discretized input to the transform is customarily
referred to as a *signal*, which exists in the *time domain*. The output
is called a *spectrum* or *transform* and exists in the *frequency
domain*.
Implementation details
----------------------
There are many ways to define the DFT, varying in the sign of the
exponent, normalization, etc. In this implementation, the DFT is defined
as
.. math::
A_k = \\sum_{m=0}^{n-1} a_m \\exp\\left\\{-2\\pi i{mk \\over n}\\right\\}
\\qquad k = 0,\\ldots,n-1.
The DFT is in general defined for complex inputs and outputs, and a
single-frequency component at linear frequency :math:`f` is
represented by a complex exponential
:math:`a_m = \\exp\\{2\\pi i\\,f m\\Delta t\\}`, where :math:`\\Delta t`
is the sampling interval.
The values in the result follow so-called "standard" order: If ``A =
fft(a, n)``, then ``A[0]`` contains the zero-frequency term (the mean of
the signal), which is always purely real for real inputs. Then ``A[1:n/2]``
contains the positive-frequency terms, and ``A[n/2+1:]`` contains the
negative-frequency terms, in order of decreasingly negative frequency.
For an even number of input points, ``A[n/2]`` represents both positive and
negative Nyquist frequency, and is also purely real for real input. For
an odd number of input points, ``A[(n-1)/2]`` contains the largest positive
frequency, while ``A[(n+1)/2]`` contains the largest negative frequency.
The routine ``np.fft.fftfreq(n)`` returns an array giving the frequencies
of corresponding elements in the output. The routine
``np.fft.fftshift(A)`` shifts transforms and their frequencies to put the
zero-frequency components in the middle, and ``np.fft.ifftshift(A)`` undoes
that shift.
When the input `a` is a time-domain signal and ``A = fft(a)``, ``np.abs(A)``
is its amplitude spectrum and ``np.abs(A)**2`` is its power spectrum.
The phase spectrum is obtained by ``np.angle(A)``.
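For instance, a small illustrative sketch of the "standard" order and the
helper routines::

    import numpy as np

    A = np.fft.fft(np.exp(2j * np.pi * np.arange(8) / 8))
    freqs = np.fft.fftfreq(8)
    # freqs -> [ 0., 0.125, 0.25, 0.375, -0.5, -0.375, -0.25, -0.125]
    # A[0] holds the zero-frequency term; fftshift centers the spectrum
    np.fft.fftshift(freqs)
    # -> [-0.5, -0.375, -0.25, -0.125, 0., 0.125, 0.25, 0.375]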
The inverse DFT is defined as
.. math::
a_m = \\frac{1}{n}\\sum_{k=0}^{n-1}A_k\\exp\\left\\{2\\pi i{mk\\over n}\\right\\}
\\qquad m = 0,\\ldots,n-1.
It differs from the forward transform by the sign of the exponential
argument and the default normalization by :math:`1/n`.
Normalization
-------------
The default normalization has the direct transforms unscaled and the inverse
transforms are scaled by :math:`1/n`. It is possible to obtain unitary
transforms by setting the keyword argument ``norm`` to ``"ortho"`` (default is
`None`) so that both direct and inverse transforms will be scaled by
:math:`1/\\sqrt{n}`.
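A brief sketch of the effect of the normalization options::

    import numpy as np

    a = np.ones(4)
    np.fft.fft(a)                 # [4., 0., 0., 0.]: forward transform unscaled
    np.fft.ifft(np.fft.fft(a))    # recovers a; the inverse is scaled by 1/n
    np.fft.fft(a, norm="ortho")   # [2., 0., 0., 0.]: scaled by 1/sqrt(n)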
Real and Hermitian transforms
-----------------------------
When the input is purely real, its transform is Hermitian, i.e., the
component at frequency :math:`f_k` is the complex conjugate of the
component at frequency :math:`-f_k`, which means that for real
inputs there is no information in the negative frequency components that
is not already available from the positive frequency components.
The family of `rfft` functions is
designed to operate on real inputs, and exploits this symmetry by
computing only the positive frequency components, up to and including the
Nyquist frequency. Thus, ``n`` input points produce ``n/2+1`` complex
output points. The inverses of this family assume the same symmetry of
their input, and for an output of ``n`` points use ``n/2+1`` input points.
Correspondingly, when the spectrum is purely real, the signal is
Hermitian. The `hfft` family of functions exploits this symmetry by
using ``n/2+1`` complex points in the input (time) domain for ``n`` real
points in the frequency domain.
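For example, a small sketch of the real-input family::

    import numpy as np

    x = np.arange(8.0)
    np.fft.rfft(x).shape                # (5,): n/2 + 1 positive-frequency terms
    np.fft.irfft(np.fft.rfft(x), n=8)   # recovers the original 8 real points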
In higher dimensions, FFTs are used, e.g., for image analysis and
filtering. The computational efficiency of the FFT means that it can
also be a faster way to compute large convolutions, using the property
that a convolution in the time domain is equivalent to a point-by-point
multiplication in the frequency domain.
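As a sketch, a circular convolution computed via pointwise multiplication
in the frequency domain (zero-pad the inputs, as here, to obtain the linear
convolution)::

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 0.0, 0.0])
    h = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
    np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real
    # -> [1., 3., 5., 3., 0.]; compare np.convolve([1, 2, 3], [1, 1])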
Higher dimensions
-----------------
In two dimensions, the DFT is defined as
.. math::
A_{kl} = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1}
a_{mn}\\exp\\left\\{-2\\pi i \\left({mk\\over M}+{nl\\over N}\\right)\\right\\}
\\qquad k = 0, \\ldots, M-1;\\quad l = 0, \\ldots, N-1,
which extends in the obvious way to higher dimensions, and the inverses
in higher dimensions also extend in the same way.
References
----------
.. [CT] NAME, NAME and John W. NAME, 1965, "An algorithm for the
machine calculation of complex Fourier series," *Math. Comput.*
19: 297-301.
.. [NR] NAME NAME NAME and NAME
2007, *Numerical Recipes: The Art of Scientific Computing*, ch.
12-13. Cambridge Univ. Press, Cambridge, UK.
Examples
--------
For examples, see the various functions.
""" |
"""Module for interactive demos using IPython.
This module implements a few classes for running Python scripts interactively
in IPython for demonstrations. With very simple markup (a few tags in
comments), you can control points where the script stops executing and returns
control to IPython.
Provided classes
----------------
The classes are (see their docstrings for further details):
- Demo: pure python demos
- IPythonDemo: demos with input to be processed by IPython as if it had been
typed interactively (so magics work, as well as any other special syntax you
may have added via input prefilters).
- LineDemo: single-line version of the Demo class. These demos are executed
one line at a time, and require no markup.
- IPythonLineDemo: IPython version of the LineDemo class (the demo is
executed a line at a time, but processed via IPython).
- ClearMixin: mixin to make Demo classes with less visual clutter. It
declares an empty marquee and a pre_cmd that clears the screen before each
block (see Subclassing below).
- ClearDemo, ClearIPDemo: mixin-enabled versions of the Demo and IPythonDemo
classes.
Inheritance diagram:
.. inheritance-diagram:: IPython.lib.demo
:parts: 3
Subclassing
-----------
The classes here all include a few methods meant to make customization by
subclassing more convenient. Their docstrings below have some more details:
- marquee(): generates a marquee to provide visible on-screen markers at each
block start and end.
- pre_cmd(): run right before the execution of each block.
- post_cmd(): run right after the execution of each block. If the block
raises an exception, this is NOT called.
Operation
---------
The file is run in its own empty namespace (though you can pass it a string of
arguments as if in a command line environment, and it will see those as
sys.argv). But at each stop, the global IPython namespace is updated with the
current internal demo namespace, so you can work interactively with the data
accumulated so far.
By default, each block of code is printed (with syntax highlighting) before
executing it and you have to confirm execution. This is intended to show the
code to an audience first so you can discuss it, and only proceed with
execution once you agree. There are a few tags which allow you to modify this
behavior.
The supported tags are:
# <demo> stop
Defines block boundaries, the points where IPython stops execution of the
file and returns to the interactive prompt.
You can optionally mark the stop tag with extra dashes before and after the
word 'stop', to help visually distinguish the blocks in a text editor:
# <demo> --- stop ---
# <demo> silent
Make a block execute silently (and hence automatically). Typically used in
cases where you have some boilerplate or initialization code which you need
executed but do not want to be seen in the demo.
# <demo> auto
Make a block execute automatically, but still be printed. Useful for
simple code which does not warrant discussion, since it avoids the extra
manual confirmation.
# <demo> auto_all
This tag can _only_ be in the first block, and if given it overrides the
individual auto tags to make the whole demo fully automatic (no block asks
for confirmation). It can also be given at creation time (or the attribute
set later) to override what's in the file.
While _any_ python file can be run as a Demo instance, if there are no stop
tags the whole file will run in a single block (no different than calling
first %pycat and then %run). The minimal markup to make this useful is to
place a set of stop tags; the other tags are only there to let you fine-tune
the execution.
This is probably best explained with the simple example file below. You can
copy this into a file named ex_demo.py, and try running it via::
from IPython.demo import Demo
d = Demo('ex_demo.py')
d()
Each time you call the demo object, it runs the next block. The demo object
has a few useful methods for navigation, like again(), edit(), jump(), seek()
and back(). It can be reset for a new run via reset() or reloaded from disk
(in case you've edited the source) via reload(). See their docstrings below.
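A typical interactive session might then look like this (a sketch; the
method behavior is summarized above)::

    from IPython.demo import Demo

    d = Demo('ex_demo.py')
    d()           # run the next block
    d.again()     # step back one block and re-execute it
    d.back()      # move the pointer back one block
    d.seek(0)     # position at the first block
    d.reset()     # rewind everything for a fresh run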
Note: To make this simpler to explore, a file called "demo-exercizer.py" has
been added to the "docs/examples/core" directory. Just cd to this directory in
an IPython session, and type::
%run demo-exercizer.py
and then follow the directions.
Example
-------
The following is a very simple example of a valid demo file.
::
#################### EXAMPLE DEMO <ex_demo.py> ###############################
'''A simple interactive demo to illustrate the use of IPython's Demo class.'''
print 'Hello, welcome to an interactive IPython demo.'
# The mark below defines a block boundary, which is a point where IPython will
# stop execution and return to the interactive prompt. The dashes are actually
# optional and used only as a visual aid to clearly separate blocks while
# editing the demo code.
# <demo> stop
x = 1
y = 2
# <demo> stop
# the mark below marks this block as silent
# <demo> silent
print 'This is a silent block, which gets executed but not printed.'
# <demo> stop
# <demo> auto
print 'This is an automatic block.'
print 'It is executed without asking for confirmation, but printed.'
z = x+y
print 'z=',z
# <demo> stop
# This is just another normal block.
print 'z is now:', z
print 'bye!'
################### END EXAMPLE DEMO <ex_demo.py> ############################
""" |
"""
===============
Array Internals
===============
Internal organization of numpy arrays
=====================================
It helps to understand a bit about how numpy arrays are handled under the covers. This section will not go into great detail; those wishing to understand the full details are referred to Travis Oliphant's book "Guide to Numpy".
Numpy arrays consist of two major components, the raw array data (from now on,
referred to as the data buffer), and the information about the raw array data.
The data buffer is typically what people think of as arrays in C or Fortran,
a contiguous (and fixed) block of memory containing fixed-size data items.
Numpy also contains a significant set of data that describes how to interpret
the data in the data buffer. This extra information contains (among other things):
1) The basic data element's size in bytes
2) The start of the data within the data buffer (an offset relative to the
beginning of the data buffer).
3) The number of dimensions and the size of each dimension
4) The separation between elements for each dimension (the 'stride'). This
does not have to be a multiple of the element size.
5) The byte order of the data (which may not be the native byte order)
6) Whether the buffer is read-only
7) Information (via the dtype object) about the interpretation of the basic
data element. The basic data element may be as simple as an int or a float,
or it may be a compound object (e.g., struct-like), a fixed character field,
or Python object pointers.
8) Whether the array is to be interpreted as C-order or Fortran-order.
This arrangement allows for very flexible use of arrays. One thing that it
allows is simple changes to the metadata to change the interpretation of the
array buffer. Changing the byteorder of the array is a simple change involving
no rearrangement of the data. The shape of the array can be changed very easily
without changing anything in the data buffer and without any data copying at all.
Among other things, this makes it possible to create a new array metadata
object that uses the same data buffer: a new view of that data buffer with a
different interpretation (e.g., different shape, offset, byte order, strides,
etc.) but sharing the same data bytes. Many operations in numpy do just this,
slices for example. Other operations, such as transpose, don't move data
elements around in the array, but rather change the information about the
shape and strides so that the indexing of the array changes, but the data in
the buffer doesn't move. These new combinations of array metadata with the
same data buffer are 'views' into the data buffer: each view is a different
ndarray object, but it uses the same data buffer. This is why it is necessary
to force copies through use of the .copy() method if one really wants to make
a new and independent copy of the data buffer.
New views into arrays mean that the object reference count for the data buffer
increases. Simply doing away with the original array object will not remove the
data buffer if other views of it still exist.
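A short illustrative session (using the conventional ``np`` alias for numpy)::

    >>> import numpy as np
    >>> a = np.arange(6).reshape(2, 3)
    >>> v = a[:, 1:]           # slicing creates a view: new metadata, same buffer
    >>> v.base is a
    True
    >>> v[0, 0] = 99           # writing through the view is visible in 'a'
    >>> a[0, 1]
    99
    >>> a[:, 1:].copy().base is None   # .copy() allocates an independent buffer
    True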
Multidimensional Array Indexing Order Issues
============================================
What is the right way to index
multi-dimensional arrays? Before you jump to conclusions about the one and
true way to index multi-dimensional arrays, it pays to understand why this is
a confusing issue. This section will try to explain in detail how numpy
indexing works and why we adopt the convention we do for images, and when it
may be appropriate to adopt other conventions.
The first thing to understand is
that there are two conflicting conventions for indexing 2-dimensional arrays.
Matrix notation uses the first index to indicate which row is being selected and
the second index to indicate which column is selected. This is opposite the
geometrically oriented convention for images, where people generally think the
first index represents x position (i.e., column) and the second represents y
position (i.e., row). This alone is the source of much confusion;
matrix-oriented users and image-oriented users expect two different things with
regard to indexing.
The second issue to understand is how indices correspond
to the order the array is stored in memory. In Fortran the first index is the
most rapidly varying index when moving through the elements of a two
dimensional array as it is stored in memory. If you adopt the matrix
convention for indexing, then this means the matrix is stored one column at a
time (since the first index moves to the next row as it changes). Thus Fortran
is considered a Column-major language. C has just the opposite convention. In
C, the last index changes most rapidly as one moves through the array as
stored in memory. Thus C is a Row-major language. The matrix is stored by
rows. Note that in both cases it presumes that the matrix convention for
indexing is being used, i.e., for both Fortran and C, the first index is the
row. Note this convention implies that the indexing convention is invariant
and that the data order changes to keep that so.
But that's not the only way
to look at it. Suppose one has large two-dimensional arrays (images or
matrices) stored in data files. Suppose the data are stored by rows rather than
by columns. If we are to preserve our index convention (whether matrix or
image) that means that depending on the language we use, we may be forced to
reorder the data if it is read into memory to preserve our indexing
convention. For example if we read row-ordered data into memory without
reordering, it will match the matrix indexing convention for C, but not for
Fortran. Conversely, it will match the image indexing convention for Fortran,
but not for C. For C, if one is using data stored in row order, and one wants
to preserve the image index convention, the data must be reordered when
reading into memory.
In the end, which you do for Fortran or C depends on
which is more important, not reordering data or preserving the indexing
convention. For large images, reordering data is potentially expensive, and
often the indexing convention is inverted to avoid that.
The situation with
numpy makes this issue yet more complicated. The internal machinery of numpy
arrays is flexible enough to accept any ordering of indices. One can simply
reorder indices by manipulating the internal stride information for arrays
without reordering the data at all. Numpy will know how to map the new index
order to the data without moving the data.
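For example, a transpose is exactly such a metadata change (strides shown for
8-byte float64 elements; the exact values are platform dependent)::

    >>> a = np.zeros((3, 4))
    >>> a.strides
    (32, 8)
    >>> a.T.strides          # same buffer, swapped strides; no data is copied
    (8, 32)
    >>> a.T.base is a
    True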
So if this is true, why not choose
the index order that matches what you most expect? In particular, why not define
row-ordered images to use the image convention? (This is sometimes referred
to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
order options for array ordering in numpy.) The drawback of doing this is
potential performance penalties. It's common to access the data sequentially,
either implicitly in array operations or explicitly by looping over rows of an
image. When that is done, then the data will be accessed in non-optimal order.
As the first index is incremented, what is actually happening is that elements
spaced far apart in memory are being sequentially accessed, with usually poor
memory access speeds. For example, consider a two-dimensional image 'im'
defined so that im[0, 10] represents the value at x=0, y=10. To be consistent
with usual Python behavior, im[0] would then represent a column at x=0. Yet
that data would be spread over the whole array, since the data are stored in
row order. Despite the flexibility of numpy's indexing, it can't really paper
over the fact that basic operations are rendered inefficient because of data
order, or that getting contiguous subarrays is still awkward (e.g., im[:,0]
for the first row, vs im[0]). Thus one can't use an idiom such as
'for row in im'; 'for col in im' does work, but doesn't yield contiguous
column data.
As it turns out, numpy is
smart enough when dealing with ufuncs to determine which index is the most
rapidly varying one in memory and uses that for the innermost loop. Thus for
ufuncs there is no large intrinsic advantage to either approach in most cases.
On the other hand, use of .flat with a FORTRAN-ordered array will lead to
non-optimal memory access, as adjacent elements in the flattened array
(iterator, actually) are not contiguous in memory.
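The layout difference is easy to inspect (again assuming 8-byte float64
elements; exact stride values are platform dependent)::

    >>> c = np.zeros((1000, 1000))             # C order: last index varies fastest
    >>> f = np.zeros((1000, 1000), order='F')  # Fortran order: first index varies fastest
    >>> c.strides, f.strides
    ((8000, 8), (8, 8000))
    >>> f.flags['C_CONTIGUOUS']   # flattening f in C order cannot be contiguous
    False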
Indeed, the fact is that Python
indexing on lists and other sequences naturally leads to an outside-to-inside
ordering (the first index gets the largest grouping, the next the next largest,
and the last gets the smallest element). Since image data are normally stored
by rows, this corresponds to position within rows being the last item indexed.
If you do want to use Fortran ordering, realize that there are two approaches
to consider: 1) accept that the first index is just not the most rapidly
changing in memory, and have all your I/O routines reorder your data when going
from memory to disk or vice versa; or 2) use numpy's mechanism for mapping the
first index to the most rapidly varying data. We recommend the former if
possible. The disadvantage of the latter is that many of numpy's functions will
yield arrays without Fortran ordering unless you are careful to use the 'order'
keyword. Doing this would be highly inconvenient.
Otherwise we recommend simply learning to reverse the usual order of indices
when accessing elements of an array. Granted, it goes against the grain, but
it is more in line with Python semantics and the natural order of the data.
""" |
# -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
# SKR03
# =====
# This module provides a German chart of accounts based on the SKR03.
# With the current settings, the company is not subject to VAT.
# This default setting is very easy to change and as a rule requires an
# initial assignment of tax accounts to products and/or G/L accounts, or
# to partners.
# The output taxes (full rate, reduced rate and tax-exempt) should be
# stored on the product master data (depending on the applicable tax
# regulations). The assignment is made on the Accounting tab
# (category: Umsatzsteuer / output tax).
# The input taxes (full rate, reduced rate and tax-exempt) should
# likewise be stored on the product master data (depending on the
# applicable tax regulations). The assignment is made on the Accounting
# tab (category: Vorsteuer / input tax).
# The tax assignment for imports from and exports to EU countries, as
# well as for purchases from and sales to third countries, should be
# stored on the partner (supplier/customer), depending on the country of
# origin of the supplier/customer. The assignment on the customer takes
# precedence over the assignment on products and overrides it in the
# individual case.
#
# To simplify tax reporting and posting for foreign transactions,
# OpenERP allows a general mapping of tax codes and tax accounts
# (e.g. mapping 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU')
# so that this mapping can be assigned to the foreign partner
# (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective
# categories for the input tax base amount (e.g. input tax base amount,
# full rate 19%).
# The tax amount appears under the category 'Vorsteuern' (input taxes,
# e.g. input tax 19%). Multidimensional hierarchies allow different
# positions to be aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective
# categories for the output tax base amount
# (e.g. output tax base amount, full rate 19%).
# The tax amount appears under the category 'Umsatzsteuer' (output
# taxes, e.g. output tax 19%). Multidimensional hierarchies allow
# different positions to be aggregated.
# The assigned tax codes can be reviewed on the level of each individual
# invoice (incoming and outgoing) and adjusted there if necessary.
# Credit notes lead to a correction (counter-entry) of the tax posting,
# in the form of a mirror-image posting.
# SKR04
# =====
# This module provides a German chart of accounts based on the SKR04.
# With the current settings, the company is not subject to VAT, i.e. by
# default there is no assignment of products and G/L accounts to tax
# keys.
# This default setting is very easy to change and as a rule requires an
# initial assignment of tax keys to products and/or G/L accounts, or to
# partners.
# The output taxes (full rate, reduced rate and tax-exempt) should be
# stored on the product master data (depending on the applicable tax
# regulations). The assignment is made on the Accounting tab
# (category: Umsatzsteuer / output tax).
# The input taxes (full rate, reduced rate and tax-exempt) should
# likewise be stored on the product master data (depending on the
# applicable tax regulations). The assignment is made on the Accounting
# tab (category: Vorsteuer / input tax).
# The tax assignment for imports from and exports to EU countries, as
# well as for purchases from and sales to third countries, should be
# stored on the partner (supplier/customer), depending on the country of
# origin of the supplier/customer. The assignment on the customer takes
# precedence over the assignment on products and overrides it in the
# individual case.
#
# To simplify tax reporting and posting for foreign transactions,
# OpenERP allows a general mapping of tax codes and tax accounts
# (e.g. mapping 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU')
# so that this mapping can be assigned to the foreign partner
# (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective
# categories for the input tax base amount (e.g. input tax base amount,
# full rate 19%).
# The tax amount appears under the category 'Vorsteuern' (input taxes,
# e.g. input tax 19%). Multidimensional hierarchies allow different
# positions to be aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective
# categories for the output tax base amount
# (e.g. output tax base amount, full rate 19%).
# The tax amount appears under the category 'Umsatzsteuer' (output
# taxes, e.g. output tax 19%). Multidimensional hierarchies allow
# different positions to be aggregated.
# The assigned tax codes can be reviewed on the level of each individual
# invoice (incoming and outgoing) and adjusted there if necessary.
# Credit notes lead to a correction (counter-entry) of the tax posting,
# in the form of a mirror-image posting.
|
#__all__ = ['matlab_to_complex','complex_to_matlab','sparse_to_ijv']
#
#from pydec.dec import SimplicialComplex,d,star,delta
#from pydec.dec.cochain import Cochain
#
##from scipy import concatenate,random,rand,sparse,zeros,shape,Int,array,matrix,arange,ArrayType
#from scipy import *
#from scipy.io.mio import loadmat,savemat
#
#
#def matlab_to_complex(filename,vertex_array_name = 'v',simplex_array_name='s'):
# """
# Load a complex from a MAT file
# SciPy only supports MAT v4, so save from Matlab with the -V4 option
#
# save filename var1 var2 -V4
# """
#
# data = {}
# loadmat(filename,data)
# v = data[vertex_array_name]
# s = data[simplex_array_name]
#
# s = s.astype(int32)
# s -= 1
#
# for name,arr in data.iteritems():
# print name,shape(arr)
#
#
# return SimplicialComplex(v,s)
#
#def complex_to_matlab(filename,complex):
# """
# Write a complex and all associated operators to a MAT file
# """
#
# mat_dict = {}
#
# ## Export Operators in IJV format
# for dim in range(complex.complex_dimension + 1):
# primal_f = complex.get_cochain_basis(dim,True)
# dual_f = complex.get_cochain_basis(dim,False)
#
# mat_dict['primal_d'+str(dim)] = sparse_to_ijv(d(primal_f).v)
# mat_dict['dual_d'+str(complex.complex_dimension - dim)] = sparse_to_ijv(d(dual_f).v)
#
# mat_dict['primal_star'+str(dim)] = sparse_to_ijv(star(primal_f).v)
# mat_dict['dual_star'+str(complex.complex_dimension - dim)] = sparse_to_ijv(star(dual_f).v)
#
#
# ##Change 0-based indexing to 1-based
# for ijv in mat_dict.itervalues():
# ijv[:,[0,1]] += 1
#
#
# for dim in range(1,complex.complex_dimension):
# i_to_s = complex[dim].index_to_simplex
# s_to_i = complex[dim].simplex_to_index
#
# i2s = concatenate([ array([list(i_to_s[x])]) for x in sorted(i_to_s.keys())])
# i2s += 1
# mat_dict['sigma'+str(dim)] = i2s.astype(float64)
#
#
# ## 0 and N handled as special cases,
# ## 0 because not all vertices are the face of some simplex
# ## N because the topmost simplices may have an orientation that should be preserved
# mat_dict['sigma0'] = array(matrix(arange(1,len(complex.vertices)+1)).transpose().astype(float64))
# mat_dict['sigma'+str(complex.complex_dimension)] = complex.simplices.astype(float64) + 1
#
# mat_dict['v'] = complex.vertices
# mat_dict['s'] = complex.simplices.astype(float64) + 1
#
#
#
#
# savemat(filename,mat_dict)
#
#
#def sparse_to_ijv(sparse_matrix):
# """
# Convert a sparse matrix to an ijv representation.
# For a matrix with N non-zeros, an N by 3 matrix will be returned
#
# Row and Column indices start at 0
#
# If the row and column entries do not span the matrix dimensions, an additional
# zero entry is added for the lower right corner of the matrix
# """
# csr_matrix = sparse_matrix.tocsr()
# ijv = zeros((csr_matrix.size,3))
#
# max_row = -1
# max_col = -1
# for ii in xrange(csr_matrix.size):
# ir, ic = csr_matrix.rowcol(ii)
# data = csr_matrix.getdata(ii)
# ijv[ii] = (ir,ic,data)
# max_row = max(max_row,ir)
# max_col = max(max_col,ic)
#
#
# rows,cols = shape(csr_matrix)
# if max_row != (rows - 1) or max_col != (cols - 1):
# ijv = concatenate((ijv,array([[rows-1,cols-1,0]])))
#
# return ijv
#
#
#
#
#import unittest
#
#class Test_sparse_to_ijv(unittest.TestCase):
# def setUp(self):
# random.seed(0) #make tests repeatable
#
# def testsparse_to_ijv(self):
# cases = []
# cases.append(((1,1),[(0,0)]))
# cases.append(((1,3),[(0,0),(0,2)]))
# cases.append(((7,1),[(5,0),(2,0),(4,0),(6,0)]))
# cases.append(((5,5),[(0,0),(1,3),(0,4),(0,3),(3,2),(2,0),(4,3)]))
#
# for dim,l in cases:
# s = sparse.lil_matrix(dim)
# for r,c in l:
# s[r,c] = 1
# ijv = sparse_to_ijv(s)
#
# self.assertEqual(shape(ijv),(len(l),3))
#
# for i,j,v in ijv:
# self.assert_((i,j) in l)
# self.assertEqual(v,1)
#
#
#class TestFile(unittest.TestCase):
# def setUp(self):
# pass
#
# def testMatlab(self):
# sc = matlab_to_complex("../resources/matlab/meshes/unitSqr14")
# complex_to_matlab("/home/nathan/Desktop/unitSqr14_out",sc)
#
#
#
#if __name__ == '__main__':
# unittest.main()
|
"""
========================
Broadcasting over arrays
========================
The term broadcasting describes how numpy treats arrays with different
shapes during arithmetic operations. Subject to certain constraints,
the smaller array is "broadcast" across the larger array so that they
have compatible shapes. Broadcasting provides a means of vectorizing
array operations so that looping occurs in C instead of Python. It does
this without making needless copies of data and usually leads to
efficient algorithm implementations. There are, however, cases where
broadcasting is a bad idea because it leads to inefficient use of memory
that slows computation.
NumPy operations are usually done on pairs of arrays on an
element-by-element basis. In the simplest case, the two arrays must
have exactly the same shape, as in the following example:
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = np.array([2.0, 2.0, 2.0])
>>> a * b
array([ 2., 4., 6.])
NumPy's broadcasting rule relaxes this constraint when the arrays'
shapes meet certain conditions. The simplest broadcasting example occurs
when an array and a scalar value are combined in an operation:
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = 2.0
>>> a * b
array([ 2., 4., 6.])
The result is equivalent to the previous example where ``b`` was an array.
We can think of the scalar ``b`` being *stretched* during the arithmetic
operation into an array with the same shape as ``a``. The new elements in
``b`` are simply copies of the original scalar. The stretching analogy is
only conceptual. NumPy is smart enough to use the original scalar value
without actually making copies, so that broadcasting operations are as
memory and computationally efficient as possible.
The code in the second example is more efficient than that in the first
because broadcasting moves less memory around during the multiplication
(``b`` is a scalar rather than an array).
General Broadcasting Rules
==========================
When operating on two arrays, NumPy compares their shapes element-wise.
It starts with the trailing dimensions, and works its way forward. Two
dimensions are compatible when
1) they are equal, or
2) one of them is 1
If these conditions are not met, a
``ValueError: frames are not aligned`` exception is thrown, indicating that
the arrays have incompatible shapes. The size of the resulting array
is the maximum size along each dimension of the input arrays.
Arrays do not need to have the same *number* of dimensions. For example,
if you have a ``256x256x3`` array of RGB values, and you want to scale
each color in the image by a different value, you can multiply the image
by a one-dimensional array with 3 values. Lining up the sizes of the
trailing axes of these arrays according to the broadcast rules, shows that
they are compatible::
Image (3d array): 256 x 256 x 3
Scale (1d array): 3
Result (3d array): 256 x 256 x 3
When either of the dimensions compared is one, the larger of the two is
used. In other words, the smaller of two axes is stretched or "copied"
to match the other.
In the following example, both the ``A`` and ``B`` arrays have axes with
length one that are expanded to a larger size during the broadcast
operation::
A (4d array): 8 x 1 x 6 x 1
B (3d array): 7 x 1 x 5
Result (4d array): 8 x 7 x 6 x 5
Here are some more examples::
A (2d array): 5 x 4
B (1d array): 1
Result (2d array): 5 x 4
A (2d array): 5 x 4
B (1d array): 4
Result (2d array): 5 x 4
A (3d array): 15 x 3 x 5
B (3d array): 15 x 1 x 5
Result (3d array): 15 x 3 x 5
A (3d array): 15 x 3 x 5
B (2d array): 3 x 5
Result (3d array): 15 x 3 x 5
A (3d array): 15 x 3 x 5
B (2d array): 3 x 1
Result (3d array): 15 x 3 x 5
Here are examples of shapes that do not broadcast::
A (1d array): 3
B (1d array): 4 # trailing dimensions do not match
A (2d array): 2 x 1
B (3d array): 8 x 4 x 3 # second from last dimensions mismatched
An example of broadcasting in practice::
>>> x = np.arange(4)
>>> xx = x.reshape(4,1)
>>> y = np.ones(5)
>>> z = np.ones((3,4))
>>> x.shape
(4,)
>>> y.shape
(5,)
>>> x + y
<type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape
>>> xx.shape
(4, 1)
>>> y.shape
(5,)
>>> (xx + y).shape
(4, 5)
>>> xx + y
array([[ 1., 1., 1., 1., 1.],
[ 2., 2., 2., 2., 2.],
[ 3., 3., 3., 3., 3.],
[ 4., 4., 4., 4., 4.]])
>>> x.shape
(4,)
>>> z.shape
(3, 4)
>>> (x + z).shape
(3, 4)
>>> x + z
array([[ 1., 2., 3., 4.],
[ 1., 2., 3., 4.],
[ 1., 2., 3., 4.]])
Broadcasting provides a convenient way of taking the outer product (or
any other outer operation) of two arrays. The following example shows an
outer addition operation of two 1-d arrays::
>>> a = np.array([0.0, 10.0, 20.0, 30.0])
>>> b = np.array([1.0, 2.0, 3.0])
>>> a[:, np.newaxis] + b
array([[ 1., 2., 3.],
[ 11., 12., 13.],
[ 21., 22., 23.],
[ 31., 32., 33.]])
Here the ``newaxis`` index operator inserts a new axis into ``a``,
making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array
with ``b``, which has shape ``(3,)``, yields a ``4x3`` array.
See `this article <http://www.scipy.org/EricsBroadcastingDoc>`_
for illustrations of broadcasting concepts.
""" |
# -*- coding: utf-8 -*-
#
# Copyright (C) 2011-2018 NAME <EMAIL>
# Copyright (C) 2011 xt <EMAIL>
# Copyright (C) 2012 NAME "FiXato" NAME <EMAIL>
# Copyright (C) 2012 USERNAME <EMAIL>
# Copyright (C) 2013 NAME <EMAIL>
# Copyright (C) 2013 NAME <EMAIL>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#
# Shorten URLs with own HTTP server.
# (this script requires Python >= 2.6)
#
# How does it work?
#
# 1. The URLs displayed in buffers are shortened and stored in memory (saved in
# a file when the script is unloaded).
# 2. Shortened URLs can be displayed below messages, in a dedicated buffer, or
# as an HTML page in your browser.
# 3. This script embeds an HTTP server, which will redirect shortened URLs
# to the real URL and display a list of all URLs if you browse the address
# without a URL key.
# 4. It is recommended to customize/protect the HTTP server using script
# options (see /help urlserver).
#
# List of URLs:
# - in WeeChat: /urlserver
# - in browser: http://myhost.org:1234/
#
# History:
#
# 2018-09-30, NAME <EMAIL>:
# v2.3: fix regex in help of option "http_allowed_ips"
# 2017-07-26, NAME <EMAIL>:
# v2.2: fix write on socket with python 3.x
# 2016-11-01, NAME <EMAIL>:
# v2.1: add option "msg_filtered"
# 2016-01-20, NAME <EMAIL>:
# v2.0: add option "http_open_in_new_page"
# 2015-05-16, NAME <EMAIL>:
# v1.9: add option "http_auth_redirect", fix flake8 warnings
# 2015-04-14, NAME <EMAIL>:
# v1.8: evaluate option "http_auth" (to use secured data)
# 2013-12-09, WakiMiko
# v1.7: use HTTPS for youtube embedding
# 2013-12-09, NAME <EMAIL>:
# v1.6: add reason phrase after HTTP code 302 and empty line at the end
# 2013-12-05, NAME <EMAIL>:
# v1.5: replace HTTP 301 by 302
# 2013-12-05, NAME <EMAIL>:
# v1.4: use HTTP 301 instead of meta for the redirection when
# there is no referer in request
# 2013-11-29, NAME <EMAIL>
# v1.3: - make it possible to run reverse proxy in a subdirectory by
# generating relative links and using the <base> tag. to use this,
# set http_hostname_display to 'domain.tld/subdir'.
# - mention favicon explicitly (now works in subdirectories, too).
# - update favicon to new weechat logo.
# - set meta referrer to never in redirect page, so chrome users'
# referrers are hidden, too
# - fix http_auth in chrome and other browsers which send header
# names in lower case
# 2013-05-04, NAME <EMAIL>
# v1.2: added a "http_scheme_display" option. This makes it possible to run
# the server behind a reverse proxy with https:// URLs.
# 2013-03-25, NAME (@irc.freenode.net):
# v1.1: made links relative in the html, so that they can be followed when
# accessing the listing remotely using the weechat box's IP directly.
# 2012-12-12, USERNAME <EMAIL>:
# v1.0: add options "http_time_format", "display_msg_in_url" (works with
# relay/irc), "color_in_msg", "separators"
# 2012-04-18, NAME "FiXato" NAME <EMAIL>:
# v0.9: add options "http_autostart", "http_port_display"
# "url_min_length" can now be set to -1 to auto-detect minimal url
# length; also, if port is 80 now, :80 will no longer be added to the
# shortened url.
# 2012-04-17, NAME "FiXato" NAME <EMAIL>:
# v0.8: add more CSS support by adding options "http_fg_color",
# "http_css_url" and "http_title", add descriptive classes to most
# html elements.
# 2012-04-11, NAME <EMAIL>:
# v0.7: fix truncated HTML page (thanks to xt), fix base64 decoding with
# Python 3.x
# 2012-01-19, NAME <EMAIL>:
# v0.6: add option "http_hostname_display"
# 2012-01-03, NAME <EMAIL>:
# v0.5: make script compatible with Python 3.x
# 2011-10-31, NAME <EMAIL>:
# v0.4: add options "http_embed_youtube_size" and "http_bg_color",
# add extensions jpeg/bmp/svg for embedded images
# 2011-10-30, NAME <EMAIL>:
# v0.3: escape HTML chars for page with list of URLs, add option
# "http_prefix_suffix", disable highlights on urlserver buffer
# 2011-10-30, NAME <EMAIL>:
# v0.2: fix error on loading of file "urlserver_list.txt" when it is empty
# 2011-10-30, NAME <EMAIL>:
# v0.1: initial release
#
|
#
# tested on | Windows native | Linux cross-compilation
# ------------------------+-------------------+---------------------------
# MSVS C++ 2010 Express | WORKS | n/a
# Mingw-w64 | WORKS | WORKS
# Mingw-w32 | WORKS | WORKS
# MinGW | WORKS | untested
#
#####
# Notes about MSVS C++ :
#
# - MSVC2010-Express compiles to 32bits only.
#
#####
# Notes about Mingw-w64 and Mingw-w32 under Windows :
#
# - both can be installed using the official installer :
# http://mingw-w64.sourceforge.net/download.php#mingw-builds
#
# - if you want to compile both 32bits and 64bits, don't forget to
# run the installer twice to install them both.
#
# - install them into a path that does not contain spaces
# ( example : "C:/Mingw-w32", "C:/Mingw-w64" )
#
# - if you want to compile faster using the "-j" option, don't forget
# to install the appropriate version of the Pywin32 python extension
# available from : http://sourceforge.net/projects/pywin32/files/
#
# - before running scons, you must add into the environment path
# the path to the "/bin" directory of the Mingw version you want
# to use :
#
# set PATH=C:/Mingw-w32/bin;%PATH%
#
# - then, scons should be able to detect gcc.
# - Mingw-w32 only compiles 32bits.
# - Mingw-w64 only compiles 64bits.
#
# - it is possible to add them both at the same time into the PATH env,
# if you also define the MINGW32_PREFIX and MINGW64_PREFIX environment
# variables.
# For instance, you could store that set of commands into a .bat script
# that you would run just before scons :
#
# set PATH=C:\mingw-w32\bin;%PATH%
# set PATH=C:\mingw-w64\bin;%PATH%
# set MINGW32_PREFIX=C:\mingw-w32\bin\
# set MINGW64_PREFIX=C:\mingw-w64\bin\
#
#####
# Notes about Mingw, Mingw-w64 and Mingw-w32 under Linux :
#
# - default toolchain prefixes are :
# "i586-mingw32msvc-" for MinGW
# "i686-w64-mingw32-" for Mingw-w32
# "x86_64-w64-mingw32-" for Mingw-w64
#
# - if both MinGW and Mingw-w32 are installed on your system,
# Mingw-w32 should take priority over MinGW.
#
# - it is possible to manually override prefixes by defining
# the MINGW32_PREFIX and MINGW64_PREFIX environment variables.
#
#####
# Notes about Mingw under Windows :
#
# - this is the MinGW version from http://mingw.org/
# - install it into a path that does not contain spaces
# ( example : "C:/MinGW" )
# - several DirectX headers might be missing. You can copy them into
# the "C:/MinGW/include" directory from this page :
# https://code.google.com/p/mingw-lib/source/browse/trunk/working/avcodec_to_widget_5/directx_include/
# - before running scons, add the path to the "/bin" directory :
# set PATH=C:/MinGW/bin;%PATH%
# - scons should be able to detect gcc.
#
#####
# TODO :
#
# - finish cleaning up this script to remove all the remains of previous hacks and workarounds
# - make it work with the Windows7 SDK that is supposed to enable 64bits compilation for MSVC2010-Express
# - confirm it works well with other Visual Studio versions.
# - update the wiki about the pywin32 extension required for the "-j" option under Windows.
# - update the wiki to document MINGW32_PREFIX and MINGW64_PREFIX
#
|
{
'name': 'Web',
'category': 'Hidden',
'version': 'IP_ADDRESS',
'description':
"""
OpenERP Web core module.
========================
This module provides the core of the OpenERP Web Client.
""",
'depends': [],
'auto_install': True,
'post_load': 'wsgi_postload',
'js' : [
"static/src/fixbind.js",
"static/lib/datejs/globalization/en-US.js",
"static/lib/datejs/core.js",
"static/lib/datejs/parser.js",
"static/lib/datejs/sugarpak.js",
"static/lib/datejs/extras.js",
"static/lib/jquery/jquery-1.8.3.js",
"static/lib/jquery.MD5/jquery.md5.js",
"static/lib/jquery.form/jquery.form.js",
"static/lib/jquery.validate/jquery.validate.js",
"static/lib/jquery.ba-bbq/jquery.ba-bbq.js",
"static/lib/spinjs/spin.js",
"static/lib/jquery.autosize/jquery.autosize.js",
"static/lib/jquery.blockUI/jquery.blockUI.js",
"static/lib/jquery.placeholder/jquery.placeholder.js",
"static/lib/jquery.ui/js/jquery-ui-1.9.1.custom.js",
"static/lib/jquery.ui.timepicker/js/jquery-ui-timepicker-addon.js",
"static/lib/jquery.ui.notify/js/jquery.notify.js",
"static/lib/jquery.deferred-queue/jquery.deferred-queue.js",
"static/lib/jquery.scrollTo/jquery.scrollTo-min.js",
"static/lib/jquery.tipsy/jquery.tipsy.js",
"static/lib/jquery.textext/jquery.textext.js",
"static/lib/jquery.printarea/jquery.PrintArea.js",
"static/lib/jquery.timeago/jquery.timeago.js",
"static/lib/qweb/qweb2.js",
"static/lib/underscore/underscore.js",
"static/lib/underscore/underscore.string.js",
"static/lib/backbone/backbone.js",
"static/lib/cleditor/jquery.cleditor.js",
"static/lib/py.js/lib/py.js",
"static/src/js/boot.js",
"static/src/js/testing.js",
"static/src/js/pyeval.js",
"static/src/js/corelib.js",
"static/src/js/coresetup.js",
"static/src/js/dates.js",
"static/src/js/formats.js",
"static/src/js/chrome.js",
"static/src/js/views.js",
"static/src/js/data.js",
"static/src/js/data_export.js",
"static/src/js/search.js",
"static/src/js/view_form.js",
"static/src/js/view_list.js",
"static/src/js/view_list_editable.js",
"static/src/js/view_tree.js",
],
'css' : [
"static/lib/jquery.ui.bootstrap/css/custom-theme/jquery-ui-1.9.0.custom.css",
"static/lib/jquery.ui.timepicker/css/jquery-ui-timepicker-addon.css",
"static/lib/jquery.ui.notify/css/ui.notify.css",
"static/lib/jquery.tipsy/tipsy.css",
"static/lib/jquery.textext/jquery.textext.css",
"static/src/css/base.css",
"static/src/css/data_export.css",
"static/lib/cleditor/jquery.cleditor.css",
],
'qweb' : [
"static/src/xml/*.xml",
],
'test': [
"static/test/testing.js",
"static/test/class.js",
"static/test/registry.js",
"static/test/form.js",
"static/test/data.js",
"static/test/list-utils.js",
"static/test/formats.js",
"static/test/rpc.js",
"static/test/evals.js",
"static/test/search.js",
"static/test/Widget.js",
"static/test/list.js",
"static/test/list-editable.js",
"static/test/mutex.js"
],
'bootstrap': True,
} |
# -*- coding: utf-8 -*-
# -- Dual Licence ----------------------------------------------------------
############################################################################
# GPL License #
# #
# This file is a SCons (http://www.scons.org/) builder #
# Copyright (c) 2012-14, NAME <EMAIL> #
# This program is free software: you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as #
# published by the Free Software Foundation, either version 3 of the #
# License, or (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program. If not, see <http://www.gnu.org/licenses/>. #
############################################################################
# --------------------------------------------------------------------------
############################################################################
# BSD 3-Clause License #
# #
# This file is a SCons (http://www.scons.org/) builder #
# Copyright (c) 2012-14, NAME <EMAIL> #
# All rights reserved. #
# #
# Redistribution and use in source and binary forms, with or without #
# modification, are permitted provided that the following conditions are #
# met: #
# #
# 1. Redistributions of source code must retain the above copyright #
# notice, this list of conditions and the following disclaimer. #
# #
# 2. Redistributions in binary form must reproduce the above copyright #
# notice, this list of conditions and the following disclaimer in the #
# documentation and/or other materials provided with the distribution. #
# #
# 3. Neither the name of the copyright holder nor the names of its #
# contributors may be used to endorse or promote products derived from #
# this software without specific prior written permission. #
# #
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS #
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT #
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A #
# PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT #
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, #
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED #
# TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR #
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF #
# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING #
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS #
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #
############################################################################
# The Unpack builder can be used for unpacking archives (e.g. Zip, TGZ, BZ, ...).
# The emitter of the builder reads the archive data and creates the returned file list;
# the builder extracts the archive. The environment stores a dictionary "UNPACK"
# for configuring the different extractors (subdict "EXTRACTOR"):
# {
# PRIORITY => a value for setting the extractor order (lower numbers = extractor is used earlier)
# SUFFIX => defines a list with file suffixes, which should be handled with this extractor
# EXTRACTSUFFIX => suffix of the extract command
# EXTRACTFLAGS => a string parameter for the RUN command for extracting the data
# EXTRACTCMD => full extract command of the builder
# RUN => the main program which will be started (if the parameter is empty, the extractor will be ignored)
# LISTCMD => the listing command for the emitter
# LISTFLAGS => the string options for the RUN command for showing a list of files
# LISTSUFFIX => suffix of the list command
# LISTEXTRACTOR => an optional Python function that is called on each output line of the
# LISTCMD for extracting file & dir names; the function needs two parameters (first: line number,
# second: line content) and must return a string with the file / dir path (other value types
# will be ignored)
# }
# Other options in the UNPACK dictionary are:
# STOPONEMPTYFILE => bool variable for stopping if the file is empty (default True)
# VIWEXTRACTOUTPUT => shows the output messages of the extraction command (default False)
# EXTRACTDIR => path into which the data will be extracted (default #)
#
# A file is handled by the first extractor whose suffix matches; the extractor list can be
# appended to for other file types. The order of the extractor dictionary determines the
# listing & extract commands, e.g. the file extension .tar.gz should come before .gz,
# because a .tar.gz is extracted in one shot.
#
# Under *nix system these tools are supported: tar, bzip2, gzip, unzip
# Under Windows only 7-Zip (http://www.7-zip.org/) is supported
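#
# A minimal usage sketch (illustrative only: it assumes this builder module is
# importable as "unpack" and follows the usual SCons tool convention of
# providing a generate(env) function):
#
#   import unpack
#   env = Environment()
#   unpack.generate(env)                 # register the Unpack builder and the UNPACK dict
#   files = env.Unpack('data.tar.gz')    # the emitter lists the archive contents as targets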
|
# from cfme.modeling.base import parent_of_type
# from cfme.utils.appliance import ViaREST, MiqImplementationContext
# from . import RegionCollection, ZoneCollection, ServerCollection, Server, Zone, Region
# @MiqImplementationContext.external_for(RegionCollection.all, ViaREST)
# def region_all(self):
# self.appliance.rest_api.collections.regions.reload()
# region_collection = self.appliance.rest_api.collections.regions
# regions = [self.instantiate(region.region) for region in region_collection]
# return regions
# @MiqImplementationContext.external_for(ZoneCollection.all, ViaREST)
# def zone_all(self):
# zone_collection = self.appliance.rest_api.collections.zones
# zones = []
# parent = self.filters.get('parent')
# for zone in zone_collection:
# zone.reload(attributes=['region_number'])
# if parent and zone.region_number != parent.number:
# continue
# zones.append(self.instantiate(
# name=zone.name, description=zone.description, id=zone.id
# ))
# # TODO: This code needs a refactor once the attributes can be loaded from the collection
# return zones
# @MiqImplementationContext.external_for(ServerCollection.all, ViaREST)
# def server_all(self):
# server_collection = self.appliance.rest_api.collections.servers
# servers = []
# parent = self.filters.get('parent')
# slave_only = self.filters.get('slave', False)
# for server in server_collection:
# server.reload(attributes=['zone_id'])
# if parent and server.zone_id != parent.id:
# continue
# if slave_only and server.is_master:
# continue
# servers.append(self.instantiate(name=server.name, sid=server.id))
# # TODO: This code needs a refactor once the attributes can be loaded from the collection
# return servers
# @MiqImplementationContext.external_for(ServerCollection.get_master, ViaREST)
# def get_master(self):
# server_collection = self.appliance.rest_api.collections.servers
# server = server_collection.find_by(is_master=True)[0]
# return self.instantiate(name=server.name, sid=server.id)
# @MiqImplementationContext.external_for(Server.zone, ViaREST)
# def zone(self):
# possible_parent = parent_of_type(self, Zone)
# if self._zone:
# return self._zone
# elif possible_parent:
# self._zone = possible_parent
# else:
# server_res = self.appliance.rest_api.collections.servers.find_by(id=self.sid)
# server = server_res[0]
# server.reload(attributes=['zone'])
# zone = server.zone
# zone_obj = self.appliance.collections.zones.instantiate(
# name=zone.name, description=zone.description, id=zone.id
# )
# self._zone = zone_obj
# return self._zone
# @MiqImplementationContext.external_for(Server.slave_servers, ViaREST)
# def slave_servers(self):
# return self.zone.collections.servers.filter({'slave': True}).all()
# @MiqImplementationContext.external_for(Zone.region, ViaREST)
# def region(self):
# possible_parent = parent_of_type(self, Region)
# if self._region:
# return self._region
# elif possible_parent:
# self._region = possible_parent
# else:
# zone_res = self.appliance.rest_api.collections.zones.find_by(id=self.id)
# zone = zone_res[0]
# zone.reload(attributes=['region_number'])
# region_obj = self.appliance.collections.regions.instantiate(number=zone.region_number)
# self._region = region_obj
# return self._region
|
# # -*- coding:utf-8 -*-
# '''
# Member back-office management
# '''
#
# from pycate.model.shoucang_model import MShoucang
# import core.base_handler as base_handler
#
#
# class TuiHandler(base_handler.PycateBaseHandler):
# def initialize(self, hinfo=''):
# self.init_condition()
# self.mshoucang = MShoucang()
#
# def get(self, url_str=''):
# if len(url_str) > 0:
# par_arr = url_str.split('/')
# if self.user_name is None or self.user_name == '':
# self.redirect('/member/login')
# if url_str == '':
# self.set_status(400)
# self.render('404.html')
# elif len(par_arr) > 0:
# self.listcity(par_arr)
# else:
# self.set_status(400)
# self.render('404.html')
#
#
# def get_condition(self, switch):
# '''
# Used by listcity() to get the listing condition.
# '''
# if switch == 'all':
# condition = {'userid': self.user_name}
# elif switch == 'notrefresh':
# # expired
# condition = {'userid': self.user_name, 'def_refresh': 0, 'def_banned': 0, 'def_valid': 1}
# elif switch == 'normal':
# # published normally
# condition = {'userid': self.user_name, 'def_refresh': 1, 'def_banned': 0, 'def_valid': 1}
# elif switch == 'banned':
# # banned
# condition = {'userid': self.user_name, 'def_banned': 1}
# elif switch == 'novalid':
# # unreviewed information
# condition = {'userid': self.user_name, 'def_banned': 0, 'def_valid': 0}
# elif switch == 'tuiguang':
# condition = {"catid": {"$in": self.muser_info.get_vip_cats()}, 'userid': self.user_name}
# elif switch == 'notg':
# condition = {"catid": {"$in": self.muser_info.get_vip_cats()},
# 'userid': self.user_name,
# 'def_tuiguang': 0}
# elif switch == 'jianli':
# condition = {'userid': self.user_name, 'parentid': '0900'}
# elif switch == 'zhaopin':
# condition = {'userid': self.user_name, 'parentid': '0700'}
# return (condition)
#
# def get_vip_menu(self, pararr):
# parentid = pararr[0]
# switch = pararr[1]
# head_menu = ''
# ac1 = ''
# ac2 = ''
# ac3 = ''
# if switch == 'all':
# ac1 = 'activemenu'
# elif switch == 'notrefresh':
# ac2 = 'activemenu'
# elif switch == 'notg':
# ac3 = 'activemenu'
# if len(pararr) == 2:
# head_menu = '''<ul class="vipmenu">
# <li><a onclick="js_show_page('/tui/{0}/all')" class="{1}">All messages</a></li>
# <li><a onclick="js_show_page('/tui/{0}/notrefresh')" class="{2}">Expired messages</a></li>
# <li><a onclick="js_show_page('/tui/{0}/notg')" class="{3}">Not promoted</a></li></ul>
# '''.format(parentid, ac1, ac2, ac3)
# return (head_menu)
#
# def listcity(self, pararr):
# # all of these are under the list view
# parentid = pararr[0]
# switch = pararr[1]
# if parentid in self.muser_info.get_vip_cats():
# pass
# else:
# self.write('<span class="red">Contact the administrator to enable VIP promotion for this category.</span>')
# return
# condition = self.get_condition(switch)
# condition['parentid'] = pararr[0]
#
# user_published_infos = self.minfo.get_by_condition(condition)
# kwd = {
# 'cityid': self.city_name,
# 'cityname': self.mcity.get_cityname_by_id(self.city_name),
# 'vip_cat': self.muser_info.get_vip_cats(),
# 'action': switch,
# 'parentid': parentid,
# 'head_menu': self.get_vip_menu(pararr)
# }
# wuserinfo = self.muser_info.get_by_username()
# wuservip = self.muser_vip.get_by_parentid(parentid)
# print(switch)
# if parentid == 'zhaopin':
# self.render('tpl_user/p_list_jianli.html',
# user_published_infos=user_published_infos,
# kwd=kwd,
# wuserinfo=wuserinfo,
# wuservip=wuservip,
# )
# elif parentid == '0700':
# self.render('tui/tui_listcity.html',
# user_published_infos=user_published_infos,
# kwd=kwd,
# wuserinfo=wuserinfo,
# wuservip=wuservip,
# )
# elif parentid == '0300':
# self.render('tui/tui_0300.html',
# user_published_infos=user_published_infos,
# kwd=kwd,
# wuserinfo=wuserinfo,
# wuservip=wuservip,
# )
# else:
# self.render('tui/tui_listcity.html',
# user_published_infos=user_published_infos,
# kwd=kwd,
# wuserinfo=wuserinfo,
# wuservip=wuservip,
# )
#
|
"""
The pyscript module provides functionality for transpiling Python code
to JavaScript.
Quick intro
-----------
This is a brief intro for using PyScript. For more details see the
sections below.
PyScript is a tool to write JavaScript using (a subset) of the Python
language. All relevant builtins, and the methods of list, dict and str
are supported. Not supported are set, slicing with steps,
``**kwargs``, ``with``, ``yield``. Importing is currently limited to
some names in the ``time`` and ``sys`` modules. Other than that, most
Python code should work as expected, though if you pry hard enough the
JavaScript may shine through. As a rule of thumb, the code should behave
as expected when correct, but error reporting may not be very Pythonic.
The most important functions you need to know about are
:func:`py2js <flexx.pyscript.py2js>` and
:func:`evalpy <flexx.pyscript.evalpy>`.
In principle you do not need knowledge of JavaScript to write PyScript
code.
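A minimal sketch of their use (the import path follows the references above;
the exact JavaScript produced may differ between versions)::

    from flexx.pyscript import py2js, evalpy

    js = py2js('for i in range(3): print(i)')  # transpile a snippet to a JS string
    print(js)                                  # inspect the generated JavaScript
    # evalpy() transpiles *and* runs a snippet, returning its output as a
    # string, e.g. evalpy('3 + 4') should give '7' (requires a JS runtime)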
Goals
-----
There is an increase in Python projects that target web technology to
handle visualization and user interaction.
PyScript grew out of a desire to allow writing JavaScript callbacks in
Python, to allow user-defined interaction to be flexible, fast, and
stand-alone.
This resulted in the following two main goals:
* To make writing JavaScript easier and less frustrating, by letting
people write it with the Python syntax and builtins, and fixing some
of JavaScript's quirks.
* To allow JavaScript snippets to be defined naturally inside a Python
program.
Code produced by PyScript works standalone. Any (PyScript-compatible)
Python snippet can be converted to JS; you don't need another JS library
to run it.
PyScript can also be used to develop standalone JavaScript (AMD) modules,
although ``import`` is currently not supported. We'll have to see
how that works out.
PyScript is just JavaScript
---------------------------
The purpose of projects like Skulpt or PyJS is to enable full Python
support in the browser. This approach will always be plagued by a
fundamental limitation: libraries that are not pure Python (like numpy)
will not work.
PyScript takes a more modest approach; it is a tool that allows one to
write JavaScript with a Python syntax. PyScript is just JavaScript.
This means that depending on what you want to achieve, you may still need
to know a thing or two about how JavaScript works. Further, not all Python
code can be converted (e.g. ``**kwargs`` are not supported), and
lists and dicts are really just JavaScript arrays and objects, respectively.
Pythonic
--------
PyScript makes writing JS more "Pythonic". Apart from allowing Python syntax
for loops, classes, etc, all relevant Python builtins are supported,
as well as the methods of list, dict and str. E.g. you can use
``print()``, ``range()``, ``L.append()``, ``D.update()``, etc.
The empty list and dict evaluate to false (whereas in JS it's
true), and ``isinstance()`` just works (whereas JS' ``typeof`` is
broken).
Deep comparisons are supported (e.g. for ``==`` and ``in``), so you can
actually compare two lists or dicts, or even a structure of nested
lists/dicts. Lists can be combined with the plus operator, and lists
and strings can be repeated with the multiply (star) operator. Class
methods are bound functions.
.. _pyscript-caveats:
Caveats
-------
PyScript fixes some of JS's quirks, but it's still just JavaScript.
Here's a list of things to keep an eye out for. This list is likely
incomplete. We recommend familiarizing yourself with JavaScript if you
plan to make heavy use of PyScript.
* JavaScript has a concept of ``null`` (i.e. ``None``), as well as
``undefined``. Sometimes you may want to use ``if x is None or x is
undefined: ...``.
* Accessing an attribute that does not exist will not raise an
AttributeError but yield ``undefined``.
* Magic functions on classes (e.g. for operator overloading) do not work.
* Calling an object that starts with a capital letter is assumed to be
a class instantiation (using ``new``): PyScript classes *must* start
with a capital letter, and any other callables must not.
PyScript is valid Python
------------------------
Unlike e.g. RapydScript, PyScript is valid Python. This allows
creating modules that are a mix of real Python and PyScript. You can easily
write code that runs correctly both as Python and PyScript. Raw JS can
be included by defining a function with only a docstring.
PyScript itself (the compiler) is written in Python. Perhaps PyScript can
at some point compile itself, so that it becomes possible to define
PyScript inside HTML documents.
Performance
-----------
Because PyScript produces relatively bare JavaScript, it is pretty fast.
Faster than CPython, and significantly faster than Brython and friends.
Check out ``examples/app/benchmark.py``.
Nevertheless, the overhead to realize the more Pythonic behavior can
have a negative impact on performance in tight loops (in comparison to
writing the JS by hand). The recommended approach is to write
performance critical code in pure JavaScript if necessary. This can be
done by defining a function with only a docstring (containing the JS
code).
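For example, a function whose body is only a docstring is emitted as raw
JavaScript (a sketch; the JS body here is illustrative)::

    def dot(a, b):
        ''' var d = 0;
            for (var i=0; i<a.length; i++) { d += a[i] * b[i]; }
            return d;
        '''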
.. _pyscript-support:
Support
-------
This is an overview of the language features that PyScript
supports/lacks.
Not currently supported:
* importing is limited (maybe we should translate an import to a ``require()``?)
* the ``set`` class (JS has no set, but we could create one?)
* slicing with steps (JS does not support this)
* support for ``**kwargs`` (maps badly to JS call mechanism)
* The ``with`` statement (no equivalent in JS)
* Generators, i.e. ``yield`` (not widely supported in JS)
Supported basics:
* numbers, strings, lists, dicts (the latter become JS arrays and objects)
* operations: binary, unary, boolean, power, integer division, ``in`` operator
* comparisons (``==`` -> ``==``, ``is`` -> ``===``)
* tuple packing and unpacking
* basic string formatting
* slicing with start and end (though not with step)
* if-statements and single-line if-expressions
* while-loops and for-loops supporting continue, break, and else-clauses
* for-loops using ``range()``
* for-loop over arrays
* for-loop over dict/object using ``.keys()``, ``.values()`` and ``.items()``
* function calls can have ``*args``
* function defs can have default arguments and ``*args``
* lambda expressions
* list comprehensions
* classes, with (single) inheritance, and the use of ``super()``
* raising and catching exceptions, assertions
* creation of "modules"
* globals / nonlocal
* preliminary support for importing module (only ``time`` and ``sys`` for now).
Supported Python conveniences:
* use of ``self`` is translated to ``this``
* ``print()`` becomes ``console.log()`` (also supports ``sep`` and ``end``)
* ``isinstance()`` Just Works (for primitive types as well as
user-defined classes)
* an empty list or dict evaluates to False as in Python.
* all Python buildin functions that make sense in JS are supported:
isinstance, issubclass, callable, hasattr, getattr, setattr, delattr,
print, len, max, min, chr, ord, dict, list, tuple, range, pow, sum,
round, int, float, str, bool, abs, divmod, all, any, enumerate, zip,
reversed, sorted, filter, map.
* all methods of list, dict and str are supported (except a few string
methods: encode format format_map isdecimal isdigit isprintable maketrans)
* the default return value of a function is ``None``/``null`` instead
of ``undefined``.
* list concatenation using the plus operator, and list/str repeating
using the star operator.
* deep comparisons.
* class methods are bound functions (i.e. ``this`` is fixed to the
instance).
* functions that are defined in another function and that do not have
self/this as a first argument are bound to the same instance as the
function in which they are defined.
""" |
"""
========
Glossary
========
.. glossary::
along an axis
Axes are defined for arrays with more than one dimension. A
2-dimensional array has two corresponding axes: the first running
vertically downwards across rows (axis 0), and the second running
horizontally across columns (axis 1).
Many operations can take place along one of these axes. For example,
we can sum each row of an array, in which case we operate along
columns, or axis 1::
>>> x = np.arange(12).reshape((3,4))
>>> x
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> x.sum(axis=1)
array([ 6, 22, 38])
array
A homogeneous container of numerical elements. Each element in the
array occupies a fixed amount of memory (hence homogeneous), and
can be a numerical element of a single type (such as float, int
or complex) or a combination (such as ``(float, int, float)``). Each
array has an associated data-type (or ``dtype``), which describes
the numerical type of its elements::
>>> x = np.array([1, 2, 3], float)
>>> x
array([ 1., 2., 3.])
>>> x.dtype # floating point number, 64 bits of memory per element
dtype('float64')
# More complicated data type: each array element is a combination of
# an integer and a floating point number
>>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)])
array([(1, 2.0), (3, 4.0)],
dtype=[('x', '<i4'), ('y', '<f8')])
Fast element-wise operations, called `ufuncs`_, operate on arrays.
array_like
Any sequence that can be interpreted as an ndarray. This includes
nested lists, tuples, scalars and existing arrays.
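For example, ``np.asarray`` converts any array_like object to an
ndarray (a minimal illustration)::
>>> np.asarray([(1, 2), (3, 4)])
array([[1, 2],
       [3, 4]])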
attribute
A property of an object that can be accessed using ``obj.attribute``,
e.g., ``shape`` is an attribute of an array::
>>> x = np.array([1, 2, 3])
>>> x.shape
(3,)
BLAS
`Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_
broadcast
NumPy can do operations on arrays whose shapes are mismatched::
>>> x = np.array([1, 2])
>>> y = np.array([[3], [4]])
>>> x
array([1, 2])
>>> y
array([[3],
[4]])
>>> x + y
array([[4, 5],
[5, 6]])
See `doc.broadcasting`_ for more information.
C order
See `row-major`
column-major
A way to represent items in an N-dimensional array in the 1-dimensional
computer memory. In column-major order, the leftmost index "varies the
fastest": for example the array::
[[1, 2, 3],
[4, 5, 6]]
is represented in the column-major order as::
[1, 4, 2, 5, 3, 6]
Column-major order is also known as the Fortran order, as the Fortran
programming language uses it.
decorator
An operator that transforms a function. For example, a ``log``
decorator may be defined to print debugging information upon
function execution::
>>> def log(f):
...     def new_logging_func(*args, **kwargs):
...         print("Logging call with parameters:", args, kwargs)
...         return f(*args, **kwargs)
...
...     return new_logging_func
Now, when we define a function, we can "decorate" it using ``log``::
>>> @log
... def add(a, b):
... return a + b
Calling ``add`` then yields:
>>> add(1, 2)
Logging call with parameters: (1, 2) {}
3
dictionary
Resembling a language dictionary, which provides a mapping between
words and descriptions thereof, a Python dictionary is a mapping
between two objects::
>>> x = {1: 'one', 'two': [1, 2]}
Here, `x` is a dictionary mapping keys to values, in this case
the integer 1 to the string "one", and the string "two" to
the list ``[1, 2]``. The values may be accessed using their
corresponding keys::
>>> x[1]
'one'
>>> x['two']
[1, 2]
Note that dictionaries are not stored in any specific order. Also,
most mutable (see *immutable* below) objects, such as lists, may not
be used as keys.
For more information on dictionaries, read the
`Python tutorial <http://docs.python.org/tut>`_.
Fortran order
See `column-major`
flattened
Collapsed to a one-dimensional array. See `ndarray.flatten`_ for details.
immutable
An object that cannot be modified after construction is called
immutable. Two common examples are strings and tuples.
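For example, attempting to change a tuple element raises an error::
>>> t = (1, 2)
>>> t[0] = 3
Traceback (most recent call last):
    ...
TypeError: 'tuple' object does not support item assignment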
instance
A class definition gives the blueprint for constructing an object::
>>> class House(object):
...     wall_colour = 'white'
Yet, we have to *build* a house before it exists::
>>> h = House() # build a house
Now, ``h`` is called a ``House`` instance. An instance is therefore
a specific realisation of a class.
iterable
A sequence that allows "walking" (iterating) over items, typically
using a loop such as::
>>> x = [1, 2, 3]
>>> [item**2 for item in x]
[1, 4, 9]
It is often used in combination with ``enumerate``::
>>> keys = ['a','b','c']
>>> for n, k in enumerate(keys):
... print("Key %d: %s" % (n, k))
...
Key 0: a
Key 1: b
Key 2: c
list
A Python container that can hold any number of objects or items.
The items do not have to be of the same type, and can even be
lists themselves::
>>> x = [2, 2.0, "two", [2, 2.0]]
The list `x` contains 4 items, each of which can be accessed individually::
>>> x[2] # the string 'two'
'two'
>>> x[3] # a list, containing an integer 2 and a float 2.0
[2, 2.0]
It is also possible to select more than one item at a time,
using *slicing*::
>>> x[0:2] # or, equivalently, x[:2]
[2, 2.0]
In code, arrays are often conveniently expressed as nested lists::
>>> np.array([[1, 2], [3, 4]])
array([[1, 2],
[3, 4]])
For more information, read the section on lists in the `Python
tutorial <http://docs.python.org/tut>`_. For a mapping
type (key-value), see *dictionary*.
mask
A boolean array, used to select only certain elements for an operation::
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
>>> mask = (x > 2)
>>> mask
array([False, False, False, True, True], dtype=bool)
>>> x[mask] = -1
>>> x
array([ 0, 1, 2, -1, -1])
masked array
An array that suppresses values indicated by a mask::
>>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True])
>>> x
masked_array(data = [-- 2.0 --],
mask = [ True False True],
fill_value = 1e+20)
<BLANKLINE>
>>> x + [1, 2, 3]
masked_array(data = [-- 4.0 --],
mask = [ True False True],
fill_value = 1e+20)
<BLANKLINE>
Masked arrays are often used when operating on arrays containing
missing or invalid entries.
matrix
A 2-dimensional ndarray that preserves its two-dimensional nature
throughout operations. It has certain special operations, such as ``*``
(matrix multiplication) and ``**`` (matrix power), defined::
>>> x = np.mat([[1, 2], [3, 4]])
>>> x
matrix([[1, 2],
[3, 4]])
>>> x**2
matrix([[ 7, 10],
[15, 22]])
method
A function associated with an object. For example, each ndarray has a
method called ``repeat``::
>>> x = np.array([1, 2, 3])
>>> x.repeat(2)
array([1, 1, 2, 2, 3, 3])
ndarray
See *array*.
record array
An `ndarray`_ with `structured data type`_ which has been subclassed as
np.recarray and whose dtype is of type np.record, making the
fields of its data type accessible by attribute.
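For example (a minimal sketch; the field names are invented)::
>>> r = np.rec.array([(1, 2.0)], dtype=[('x', int), ('y', float)])
>>> r.x
array([1])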
reference
If ``a`` is a reference to ``b``, then ``(a is b) == True``. Therefore,
``a`` and ``b`` are different names for the same Python object.
row-major
A way to represent items in an N-dimensional array in the 1-dimensional
computer memory. In row-major order, the rightmost index "varies
the fastest": for example the array::
[[1, 2, 3],
[4, 5, 6]]
is represented in the row-major order as::
[1, 2, 3, 4, 5, 6]
Row-major order is also known as the C order, as the C programming
language uses it. New Numpy arrays are by default in row-major order.
self
Often seen in method signatures, ``self`` refers to the instance
of the associated class. For example:
>>> class Paintbrush(object):
...     color = 'blue'
...
...     def paint(self):
...         print("Painting the city %s!" % self.color)
...
>>> p = Paintbrush()
>>> p.color = 'red'
>>> p.paint() # self refers to 'p'
Painting the city red!
slice
Used to select only certain elements from a sequence::
>>> x = range(5)
>>> x
[0, 1, 2, 3, 4]
>>> x[1:3] # slice from 1 to 3 (excluding 3 itself)
[1, 2]
>>> x[1:5:2] # slice from 1 to 5, but skipping every second element
[1, 3]
>>> x[::-1] # slice a sequence in reverse
[4, 3, 2, 1, 0]
Arrays may have more than one dimension, each of which can be sliced
individually::
>>> x = np.array([[1, 2], [3, 4]])
>>> x
array([[1, 2],
[3, 4]])
>>> x[:, 1]
array([2, 4])
structured data type
A data type composed of other data types.
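For example (the exact type codes are platform-dependent)::
>>> np.dtype([('x', int), ('y', float)])
dtype([('x', '<i4'), ('y', '<f8')])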
tuple
A sequence that may contain a variable number of types of any
kind. A tuple is immutable, i.e., once constructed it cannot be
changed. Similar to a list, it can be indexed and sliced::
>>> x = (1, 'one', [1, 2])
>>> x
(1, 'one', [1, 2])
>>> x[0]
1
>>> x[:2]
(1, 'one')
A useful concept is "tuple unpacking", which allows variables to
be assigned to the contents of a tuple::
>>> x, y = (1, 2)
>>> x, y = 1, 2
This is often used when a function returns multiple values:
>>> def return_many():
... return 1, 'alpha', None
>>> a, b, c = return_many()
>>> a, b, c
(1, 'alpha', None)
>>> a
1
>>> b
'alpha'
ufunc
Universal function. A fast element-wise array operation. Examples include
``add``, ``sin`` and ``logical_or``.
view
An array that does not own its data, but refers to another array's
data instead. For example, we may create a view that only shows
every second element of another array::
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
>>> y = x[::2]
>>> y
array([0, 2, 4])
>>> x[0] = 3 # changing x changes y as well, since y is a view on x
>>> y
array([3, 2, 4])
wrapper
Python is a high-level (highly abstracted, or English-like) language.
This abstraction comes at a price in execution speed, and sometimes
it becomes necessary to use lower level languages to do fast
computations. A wrapper is code that provides a bridge between
high and the low level languages, allowing, e.g., Python to execute
code written in C or Fortran.
Examples include ctypes, SWIG and Cython (which wraps C and C++)
and f2py (which wraps Fortran).
""" |
"""
Writing Plugins
---------------
nose supports plugins for test collection, selection, observation and
reporting. There are two basic rules for plugins:
* Plugin classes should subclass :class:`nose.plugins.Plugin`.
* Plugins may implement any of the methods described in the class
:doc:`IPluginInterface <interface>` in nose.plugins.base. Please note that
this class is for documentary purposes only; plugins may not subclass
IPluginInterface.
Hello World
===========
Here's a basic plugin. It doesn't do much, so read on for more ideas or dive
into the :doc:`IPluginInterface <interface>` to see all available hooks.
.. code-block:: python
import logging
import os
from nose.plugins import Plugin
log = logging.getLogger('nose.plugins.helloworld')
class HelloWorld(Plugin):
    name = 'helloworld'

    def options(self, parser, env=os.environ):
        super(HelloWorld, self).options(parser, env=env)

    def configure(self, options, conf):
        super(HelloWorld, self).configure(options, conf)
        if not self.enabled:
            return

    def finalize(self, result):
        log.info('Hello pluginized world!')
Registering
===========
.. Note::
Important note: the following applies only to the default
plugin manager. Other plugin managers may use different means to
locate and load plugins.
For nose to find a plugin, it must be part of a package that uses
setuptools_, and the plugin must be included in the entry points defined
in the setup.py for the package:
.. code-block:: python
setup(name='Some plugin',
      # ...
      entry_points = {
          'nose.plugins.0.10': [
              'someplugin = someplugin:SomePlugin'
              ]
          },
      # ...
      )
Once the package is installed with ``setup.py install`` or ``setup.py
develop``, nose will be able to load the plugin.
.. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools
Registering a plugin without setuptools
=======================================
It is currently possible to register a plugin programmatically by
creating a custom nose runner like this:
.. code-block:: python
import nose
from yourplugin import YourPlugin
if __name__ == '__main__':
    nose.main(addplugins=[YourPlugin()])
Defining options
================
All plugins must implement the methods ``options(self, parser, env)``
and ``configure(self, options, conf)``. Subclasses of nose.plugins.Plugin
that want the standard options should call the superclass methods.
nose uses optparse.OptionParser from the standard library to parse
arguments. A plugin's ``options()`` method receives a parser
instance. It's good form for a plugin to use that instance only to add
additional arguments that take only long arguments (--like-this). Most
of nose's built-in arguments get their default value from an environment
variable.
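For example, a plugin might add one long option whose default comes from
an environment variable (a sketch; the option and variable names are
invented for illustration):
.. code-block:: python
    def options(self, parser, env=os.environ):
        parser.add_option('--with-helloworld',
                          action='store_true',
                          dest='helloworld',
                          default=env.get('NOSE_WITH_HELLOWORLD', False),
                          help='Enable the helloworld plugin')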
A plugin's ``configure()`` method receives the parsed ``OptionParser`` options
object, as well as the current config object. Plugins should configure their
behavior based on the user-selected settings, and may raise exceptions
if the configured behavior is nonsensical.
Logging
=======
nose uses the logging classes from the standard library. To enable users
to view debug messages easily, plugins should use ``logging.getLogger()`` to
acquire a logger in the ``nose.plugins`` namespace.
Recipes
=======
* Writing a plugin that monitors or controls test result output
Implement any or all of ``addError``, ``addFailure``, etc., to monitor test
results. If you also want to monitor output, implement
``setOutputStream`` and keep a reference to the output stream. If you
want to prevent the builtin ``TextTestResult`` output, implement
``setOutputStream`` and *return a dummy stream*. The default output will go
to the dummy stream, while you send your desired output to the real stream.
Example: `examples/html_plugin/htmlplug.py`_
* Writing a plugin that handles exceptions
Subclass :doc:`ErrorClassPlugin <errorclasses>`.
Examples: :doc:`nose.plugins.deprecated <deprecated>`,
:doc:`nose.plugins.skip <skip>`
* Writing a plugin that adds detail to error reports
Implement ``formatError`` and/or ``formatFailure``. The error tuple
you return (error class, error message, traceback) will replace the
original error tuple.
Examples: :doc:`nose.plugins.capture <capture>`,
:doc:`nose.plugins.failuredetail <failuredetail>`
* Writing a plugin that loads tests from files other than python modules
Implement ``wantFile`` and ``loadTestsFromFile``. In ``wantFile``,
return True for files that you want to examine for tests. In
``loadTestsFromFile``, for those files, return an iterable
containing TestCases (or yield them as you find them;
``loadTestsFromFile`` may also be a generator).
Example: :doc:`nose.plugins.doctests <doctests>`
* Writing a plugin that prints a report
Implement ``begin`` if you need to perform setup before testing
begins. Implement ``report`` and output your report to the provided stream.
Examples: :doc:`nose.plugins.cover <cover>`, :doc:`nose.plugins.prof <prof>`
* Writing a plugin that selects or rejects tests
Implement any or all ``want*`` methods. Return False to reject the test
candidate, True to accept it -- which means that the test candidate
will pass through the rest of the system, so you must be prepared to
load tests from it if tests can't be loaded by the core loader or
another plugin -- and None if you don't care.
Examples: :doc:`nose.plugins.attrib <attrib>`,
:doc:`nose.plugins.doctests <doctests>`, :doc:`nose.plugins.testid <testid>`
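For example, a minimal selector sketch (the plugin name and the 'smoke'
naming convention are invented for illustration):
.. code-block:: python
    from nose.plugins import Plugin
    class SmokeOnly(Plugin):
        name = 'smokeonly'
        def wantFunction(self, function):
            # Accept functions whose name mentions 'smoke'; return None
            # to let the core loader or other plugins decide the rest.
            return True if 'smoke' in function.__name__ else None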
More Examples
=============
See any builtin plugin or example plugin in the examples_ directory in
the nose source distribution. There is a list of third-party plugins
`on jottit`_.
.. _examples/html_plugin/htmlplug.py: http://python-nose.googlecode.com/svn/trunk/examples/html_plugin/htmlplug.py
.. _examples: http://python-nose.googlecode.com/svn/trunk/examples
.. _on jottit: http://nose-plugins.jottit.com/
""" |
# (c) 2013, NAME <EMAIL>, Red Hat, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# take a list of files and (optionally) a list of paths
# return the first existing file found in the paths
# [file1, file2, file3], [path1, path2, path3]
# search order is:
# path1/file1
# path1/file2
# path1/file3
# path2/file1
# path2/file2
# path2/file3
# path3/file1
# path3/file2
# path3/file3
# first file found with os.path.exists() is returned
# if no file matches, an AnsibleError is raised
# EXAMPLES
# - name: copy first existing file found to /some/file
# action: copy src=$item dest=/some/file
# with_first_found:
# - files: foo ${inventory_hostname} bar
# paths: /tmp/production /tmp/staging
# that will look for files in this order:
# /tmp/production/foo
# /tmp/production/${inventory_hostname}
# /tmp/production/bar
# /tmp/staging/foo
# /tmp/staging/${inventory_hostname}
# /tmp/staging/bar
# - name: copy first existing file found to /some/file
# action: copy src=$item dest=/some/file
# with_first_found:
# - files: /some/place/foo ${inventory_hostname} /some/place/else
# that will look for files in this order:
# /some/place/foo
# $relative_path/${inventory_hostname}
# /some/place/else
# example - including tasks:
# tasks:
# - include: $item
# with_first_found:
# - files: generic
# paths: tasks/staging tasks/production
# this will include the tasks in the file generic where it is found first (staging or production)
# example simple file lists
#tasks:
#- name: first found file
# action: copy src=$item dest=/etc/file.cfg
# with_first_found:
# - files: foo.${inventory_hostname} foo
# example skipping if no matched files
# First_found also offers the ability to control whether or not failing
# to find a file returns an error or not
#
#- name: first found file - or skip
# action: copy src=$item dest=/etc/file.cfg
# with_first_found:
# - files: foo.${inventory_hostname}
# skip: true
# example a role with default configuration and configuration per host
# you can set multiple terms with their own files and paths to look through.
# consider a role that sets some configuration per host falling back on a default config.
#
#- name: some configuration template
# template: src={{ item }} dest=/etc/file.cfg mode=0444 owner=root group=root
# with_first_found:
# - files:
# - ${inventory_hostname}/etc/file.cfg
# paths:
# - ../../../templates.overwrites
# - ../../../templates
# - files:
# - etc/file.cfg
# paths:
# - templates
# the above will return an empty list if the files cannot be found at all.
# if skip is unspecified or if it is set to false, it will instead raise an
# error, which can be caught by ignore_errors: true for that action.
# finally - if you want, you can use with_first_found as a drop-in
# replacement for first_available_file: simply replace
# first_available_file with with_first_found and leave the file listing
# in place. note that in this form you cannot use the files, paths or
# skip options.
#
#
# - name: with_first_found like first_available_file
# action: copy src=$item dest=/tmp/faftest
# with_first_found:
# - ../files/foo
# - ../files/bar
# - ../files/baz
# ignore_errors: true
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (C) 2009-2014:
# NAME EMAIL
# NAME EMAIL
# NAME EMAIL
# NAME EMAIL
#
# This file is part of Shinken.
#
# Shinken is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Shinken is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Shinken. If not, see <http://www.gnu.org/licenses/>.
# Calendar date
# -------------
# '(\d{4})-(\d{2})-(\d{2}) - (\d{4})-(\d{2})-(\d{2}) / (\d+) ([0-9:, -]+)'
# => len = 8 => CALENDAR_DATE
#
# '(\d{4})-(\d{2})-(\d{2}) / (\d+) ([0-9:, -]+)'
# => len = 5 => CALENDAR_DATE
#
# '(\d{4})-(\d{2})-(\d{2}) - (\d{4})-(\d{2})-(\d{2}) ([0-9:, -]+)'
# => len = 7 => CALENDAR_DATE
#
# '(\d{4})-(\d{2})-(\d{2}) ([0-9:, -]+)'
# => len = 4 => CALENDAR_DATE
#
# Month week day
# --------------
# '([a-z]*) (\d+) ([a-z]*) - ([a-z]*) (\d+) ([a-z]*) / (\d+) ([0-9:, -]+)'
# => len = 8 => MONTH WEEK DAY
# e.g.: wednesday 1 january - thursday 2 july / 3
#
# '([a-z]*) (\d+) - ([a-z]*) (\d+) / (\d+) ([0-9:, -]+)' => len = 6
# e.g.: february 1 - march 15 / 3 => MONTH DATE
# e.g.: monday 2 - thursday 3 / 2 => WEEK DAY
# e.g.: day 2 - day 6 / 3 => MONTH DAY
#
# '([a-z]*) (\d+) - (\d+) / (\d+) ([0-9:, -]+)' => len = 5
# e.g.: february 1 - 15 / 3 => MONTH DATE
# e.g.: thursday 2 - 4 => WEEK DAY
# e.g.: day 1 - 4 => MONTH DAY
#
# '([a-z]*) (\d+) ([a-z]*) - ([a-z]*) (\d+) ([a-z]*) ([0-9:, -]+)' => len = 7
# e.g.: wednesday 1 january - thursday 2 july => MONTH WEEK DAY
#
# '([a-z]*) (\d+) - (\d+) ([0-9:, -]+)' => len = 4
# e.g.: thursday 2 - 4 => WEEK DAY
# e.g.: february 1 - 15 / 3 => MONTH DATE
# e.g.: day 1 - 4 => MONTH DAY
#
# '([a-z]*) (\d+) - ([a-z]*) (\d+) ([0-9:, -]+)' => len = 5
# e.g.: february 1 - march 15 => MONTH DATE
# e.g.: monday 2 - thursday 3 => WEEK DAY
# e.g.: day 2 - day 6 => MONTH DAY
#
# '([a-z]*) (\d+) ([0-9:, -]+)' => len = 3
# e.g.: february 3 => MONTH DATE
# e.g.: thursday 2 => WEEK DAY
# e.g.: day 3 => MONTH DAY
#
# '([a-z]*) (\d+) ([a-z]*) ([0-9:, -]+)' => len = 4
# e.g.: thursday 3 february => MONTH WEEK DAY
#
# '([a-z]*) ([0-9:, -]+)' => len = 2
# e.g.: thursday => normal values
#
# Types: CALENDAR_DATE
# MONTH WEEK DAY
# WEEK DAY
# MONTH DATE
# MONTH DAY
#
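# A minimal, self-contained illustration (hypothetical, not from the
# parser itself) of the simplest calendar-date form, using Python's re:
#
#   import re
#
#   CALENDAR_DATE = re.compile(r'(\d{4})-(\d{2})-(\d{2}) ([0-9:, -]+)')
#   m = CALENDAR_DATE.match('2014-01-01 00:00-24:00')
#   assert m.groups() == ('2014', '01', '01', '00:00-24:00')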
|
"""
===================
Universal Functions
===================
Ufuncs are, generally speaking, mathematical functions or operations that are
applied element-by-element to the contents of an array. That is, the result
in each output array element only depends on the value in the corresponding
input array (or arrays) and on no other array elements. Numpy comes with a
large suite of ufuncs, and scipy extends that suite substantially. The simplest
example is the addition operator: ::
>>> np.array([0,2,3,4]) + np.array([1,1,-1,2])
array([1, 3, 2, 6])
The ufunc module lists all the available ufuncs in numpy. Documentation on
the specific ufuncs may be found in those modules. This documentation is
intended to address the more general aspects of ufuncs common to most of
them. All of the ufuncs that make use of Python operators (e.g., +, -, etc.)
have equivalent functions defined (e.g. ``add()`` for +).
Type coercion
=============
What happens when a binary operator (e.g., +,-,\\*,/, etc) deals with arrays of
two different types? What is the type of the result? Typically, the result is
the higher of the two types. For example: ::
float32 + float64 -> float64
int8 + int32 -> int32
int16 + float32 -> float32
float32 + complex64 -> complex64
There are some less obvious cases generally involving mixes of types
(e.g. uints, ints and floats) where equal bit sizes for each are not
capable of saving all the information in a different type of equivalent
bit size. Some examples are int32 vs float32 or uint32 vs int32.
Generally, the result is the higher type of larger size than both
(if available). So: ::
int32 + float32 -> float64
uint32 + int32 -> int64
Finally, the type coercion behavior when expressions involve Python
scalars is different than that seen for arrays. Since Python has a
limited number of types, combining a Python int with a dtype=np.int8
array does not coerce to the higher type but instead, the type of the
array prevails. So the rule for Python scalars combined with arrays is
that the result will be that of the array equivalent of the Python scalar
if the Python scalar is of a higher 'kind' than the array (e.g., float
vs. int); otherwise the resultant type will be that of the array.
For example: ::
Python int + int8 -> int8
Python float + int8 -> float64
ufunc methods
=============
Binary ufuncs support 4 methods.
**.reduce(arr)** applies the binary operator to elements of the array in
sequence. For example: ::
>>> np.add.reduce(np.arange(10)) # adds all elements of array
45
For multidimensional arrays, the first dimension is reduced by default: ::
>>> np.add.reduce(np.arange(10).reshape(2,5))
array([ 5, 7, 9, 11, 13])
The axis keyword can be used to specify different axes to reduce: ::
>>> np.add.reduce(np.arange(10).reshape(2,5),axis=1)
array([10, 35])
**.accumulate(arr)** applies the binary operator and generates an
equivalently shaped array that includes the accumulated amount for each
element of the array. A couple of examples: ::
>>> np.add.accumulate(np.arange(10))
array([ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45])
>>> np.multiply.accumulate(np.arange(1,9))
array([ 1, 2, 6, 24, 120, 720, 5040, 40320])
The behavior for multidimensional arrays is the same as for .reduce(),
as is the use of the axis keyword.
**.reduceat(arr,indices)** allows one to apply reduce to selected parts
of an array. It is a difficult method to understand; see the
``np.ufunc.reduceat`` documentation for the full rules.
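A brief illustration (each index in ``indices`` starts a new reduction
segment): ::
>>> np.add.reduceat(np.arange(8), [0, 4, 6])
array([ 6,  9, 13])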
**.outer(arr1,arr2)** generates an outer operation on the two arrays arr1 and
arr2. It will work on multidimensional arrays (the shape of the result is
the concatenation of the two input shapes): ::
>>> np.multiply.outer(np.arange(3),np.arange(4))
array([[0, 0, 0, 0],
[0, 1, 2, 3],
[0, 2, 4, 6]])
Output arguments
================
All ufuncs accept an optional output array. The array must be of the expected
output shape. Beware that if the type of the output array is of a different
(and lower) type than the output result, the results may be silently truncated
or otherwise corrupted in the downcast to the lower type. This usage is useful
when one wants to avoid creating large temporary arrays and instead allows one
to reuse the same array memory repeatedly (at the expense of not being able to
use more convenient operator notation in expressions). Note that when the
output argument is used, the ufunc still returns a reference to the
result. For example: ::
>>> x = np.arange(2)
>>> np.add(np.arange(2),np.arange(2.),x)
array([0, 2])
>>> x
array([0, 2])
and & or as ufuncs
==================
Invariably people try to use the python 'and' and 'or' as logical operators
(and quite understandably). But these operators do not behave as normal
operators since Python treats these quite differently. They cannot be
overloaded with array equivalents. Thus using 'and' or 'or' with an array
results in an error. There are two alternatives:
1) use the ufunc functions logical_and() and logical_or().
2) use the bitwise operators & and \\|. The drawback of these is that if
the arguments to these operators are not boolean arrays, the result is
likely incorrect. On the other hand, most usages of logical_and and
logical_or are with boolean arrays. As long as one is careful, this is
a convenient way to apply these operators.
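For example (a minimal illustration): ::
>>> a = np.array([True, False, True])
>>> b = np.array([True, True, False])
>>> np.logical_and(a, b)
array([ True, False, False], dtype=bool)
>>> a & b
array([ True, False, False], dtype=bool)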
""" |
"""Configuration file parser.
A configuration file consists of sections, led by a "[section]" header,
and followed by "name: value" entries, with continuations and such in
the style of RFC 822.
Intrinsic defaults can be specified by passing them into the
ConfigParser constructor as a dictionary.
class:
ConfigParser -- responsible for parsing a list of
configuration files, and managing the parsed database.
methods:
__init__(defaults=None, dict_type=_default_dict, allow_no_value=False,
delimiters=('=', ':'), comment_prefixes=('#', ';'),
inline_comment_prefixes=None, strict=True,
empty_lines_in_values=True, default_section='DEFAULT',
interpolation=<unset>, converters=<unset>):
Create the parser. When `defaults' is given, it is initialized into the
dictionary of intrinsic defaults. The keys must be strings, the values
must be appropriate for %()s string interpolation.
When `dict_type' is given, it will be used to create the dictionary
objects for the list of sections, for the options within a section, and
for the default values.
When `delimiters' is given, it will be used as the set of substrings
that divide keys from values.
When `comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in empty lines. Comments can be
indented.
When `inline_comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in non-empty lines.
When `strict` is True, the parser won't allow for any section or option
duplicates while reading from a single source (file, string or
dictionary). Default is True.
When `empty_lines_in_values' is False (default: True), each empty line
marks the end of an option. Otherwise, internal empty lines of
a multiline option are kept as part of the value.
When `allow_no_value' is True (default: False), options without
values are accepted; the value presented for these is None.
When `default_section' is given, the name of the special section is
named accordingly. By default it is called ``"DEFAULT"`` but this can
be customized to point to any other valid section name. Its current
value can be retrieved using the ``parser_instance.default_section``
attribute and may be modified at runtime.
When `interpolation` is given, it should be an Interpolation subclass
instance. It will be used as the handler for option value
pre-processing when using getters. RawConfigParser objects don't do
any sort of interpolation, whereas ConfigParser uses an instance of
BasicInterpolation. The library also provides a ``zc.buildout``
inspired ExtendedInterpolation implementation.
When `converters` is given, it should be a dictionary where each key
represents the name of a type converter and each value is a callable
implementing the conversion from string to the desired datatype. Every
converter gets its corresponding get*() method on the parser object and
section proxies.
sections()
Return all the configuration section names, sans DEFAULT.
has_section(section)
Return whether the given section exists.
has_option(section, option)
Return whether the given option exists in the given section.
options(section)
Return list of configuration options for the named section.
read(filenames, encoding=None)
Read and parse the list of named configuration files, given by
name. A single filename is also allowed. Non-existing files
are ignored. Return list of successfully read files.
read_file(f, filename=None)
Read and parse one configuration file, given as a file object.
The filename defaults to f.name; it is only used in error
messages (if f has no `name' attribute, the string `<???>' is used).
read_string(string)
Read configuration from a given string.
read_dict(dictionary)
Read configuration from a dictionary. Keys are section names,
values are dictionaries with keys and values that should be present
in the section. If the used dictionary type preserves order, sections
and their keys will be added in order. Values are automatically
converted to strings.
get(section, option, raw=False, vars=None, fallback=_UNSET)
Return a string value for the named option. All % interpolations are
expanded in the return values, based on the defaults passed into the
constructor and the DEFAULT section. Additional substitutions may be
provided using the `vars' argument, which must be a dictionary whose
contents override any pre-existing defaults. If `option' is a key in
`vars', the value from `vars' is used.
getint(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to an integer.
getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a float.
getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a boolean (currently case
insensitively defined as 0, false, no, off for False, and 1, true,
yes, on for True). Returns False or True.
items(section=_UNSET, raw=False, vars=None)
If section is given, return a list of tuples with (name, value) for
each option in the section. Otherwise, return a list of tuples with
(section_name, section_proxy) for each section, including DEFAULTSECT.
remove_section(section)
Remove the given section and all its options.
remove_option(section, option)
Remove the given option from the given section.
set(section, option, value)
Set the given option.
write(fp, space_around_delimiters=True)
Write the configuration state in .ini format. If
`space_around_delimiters' is True (the default), delimiters
between keys and values are surrounded by spaces.
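A minimal usage sketch (not part of the reference above; the section and
option names are invented):
    import configparser
    parser = configparser.ConfigParser()
    parser.read_dict({'server': {'host': 'localhost', 'port': '8080'}})
    host = parser.get('server', 'host')      # 'localhost'
    port = parser.getint('server', 'port')   # 8080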
""" |
{'seed': 0, 'showLeaves': True, 'armLevels': 0, 'leafDist': '6', 'baseSize': 0.3499999940395355, 'loopFrames': 0, 'af3': 4.0, 'previewArm': False, 'leafangle': -45.0, 'useParentAngle': True, 'handleType': '0', 'branches': (0, 60, 30, 10), 'autoTaper': True, 'splitAngle': (12.0, 18.0, 16.0, 0.0), 'baseSize_s': 0.800000011920929, 'closeTip': False, 'af2': 1.0, 'prune': False, 'scale0': 1.0, 'rMode': 'rotate', 'useOldDownAngle': False, 'scaleV0': 0.10000000149011612, 'splitBias': 0.0, 'resU': 2, 'curveBack': (0.0, -5.0, 0.0, 0.0), 'scale': 12.0, 'shape': '8', 'leafDownAngle': 45.0, 'af1': 1.0, 'ratio': 0.019999999552965164, 'horzLeaves': True, 'leafRotate': 137.5, 'minRadius': 0.0020000000949949026, 'bevelRes': 2, 'splitByLen': True, 'rootFlare': 1.149999976158142, 'makeMesh': False, 'downAngleV': (0.0, 25.0, 30.0, 10.0), 'levels': 3, 'scaleV': 2.0, 'armAnim': False, 'lengthV': (0.05000000074505806, 0.20000000298023224, 0.3499999940395355, 0.0), 'pruneWidth': 0.3100000023841858, 'gustF': 0.07500000298023224, 'taper': (1.0, 1.0, 1.0, 1.0), 'splitAngleV': (2.0, 2.0, 0.0, 0.0), 'prunePowerLow': 0.0010000000474974513, 'leafScaleT': 0.20000000298023224, 'leafScaleX': 0.5, 'leafRotateV': 0.0, 'ratioPower': 1.399999976158142, 'segSplits': (0.3499999940395355, 0.3499999940395355, 0.3499999940395355, 0.0), 'downAngle': (90.0, 60.0, 50.0, 45.0), 'rotateV': (0.0, 0.0, 0.0, 0.0), 'gust': 1.0, 'attractUp': (0.0, -1.0, -0.6499999761581421, 0.0), 'leafScaleV': 0.25, 'frameRate': 1.0, 'curveV': (100.0, 80.0, 80.0, 0.0), 'boneStep': (1, 1, 1, 1), 'customShape': (0.699999988079071, 1.0, 0.30000001192092896, 0.5900000333786011), 'pruneBase': 0.30000001192092896, 'leafAnim': False, 'curveRes': (10, 8, 3, 1), 'nrings': 0, 'bevel': True, 'taperCrown': 0.0, 'baseSplits': 2, 'leafShape': 'hex', 'splitHeight': 0.550000011920929, 'wind': 1.0, 'curve': (0.0, -30.0, -25.0, 0.0), 'rotate': (137.5, 137.5, 137.5, 137.5), 'length': (1.0, 0.33000001311302185, 0.375, 0.44999998807907104), 'leafScale': 0.20000000298023224, 'attractOut': (0.0, 0.20000000298023224, 0.25, 0.0), 'prunePowerHigh': 0.10000000149011612, 'branchDist': 1.5, 'useArm': False, 'pruneRatio': 1.0, 'shapeS': '7', 'leafDownAngleV': 10.0, 'pruneWidthPeak': 0.5, 'radiusTweak': (1.0, 1.0, 1.0, 1.0), 'leaves': 16} |
"""
The react module provides functionality for Reactive Programming (RP) and
Functional Reactive Programming (FRP).
It is a bit difficult to explain what FRP really is. This is because
every implementation has its own take on it, and because it requires a
bit of a paradigm shift compared to classic event-driven programming.
FRP does not have to be difficult and we think our implementation of
``flexx.react`` is relatively easy to use. This brief guide takes you
through some of the FRP aspects using code examples.
What is FRP
-----------
(Don't worry if the next two paragraphs sound complicated;
things should start to make sense when we explain things using code.)
*Where event-driven programming is about reacting to things that happen,
RP is about staying up to date with changing signals.*
In RP the different components in an application communicate via streams
of data. In other words, components keep track of (and react to) the
*signal values* of other components. All signals (except source/input
signals) have one or more upstream signals, and can combine and/or
modify these to produce a new signal value. The value of each signal
is *cached*, so that the operations applied to the signal values only
have to be performed when any upstream signal has changed. When a signal
changes its value, it will *notify* its downstream signals, so that
everything stays up-to-date.
In ``flexx.react`` signals are addressed using a string. This may seem
unusual at first, but it allows easy binding for signals on classes,
allows signal loops, and has other advantages that we'll discuss when
we talk about dynamism.
Signals
-------
A signal can be created by decorating a function. In RP-speak, the
function is "lifted" to a signal:
.. code-block:: py
# The function greet() is used to react to signal "name"
@react.connect('name')
def greet(n):
    print('hello %s!' % n)
The example above looks quite similar to how some event-driven applications
allow binding callbacks to events. There are, however, a few differences:
a) The greet function has now become a signal object, which has an output
of its own (although the output is None in this case, because the
function does not return a value, more on that below); b) The function
(which we'd call the "callback" in an event driven system) does not
accept an event object, but a value that corresponds to the upstream
signal value.
One other advantage of an RP system is that signals can *connect to
multiple upstream signals*:
.. code-block:: py
@react.connect('first_name', 'last_name')
def greet(first, last):
    print('hello %s %s!' % (first, last))
This is a feature that saves a lot of overhead. For any "callback" that
you define, you specify *exactly* what input signals there are, and it will
always be up to date. Doing that in an event-driven system quickly results
in a spaghetti of callbacks and boilerplate to keep track of state.
The function of a signal gets called directly when any of the
upstream signals (or the upstream-upstream signals) change. The return value
of the function represents the output signal value, which can also be None.
When the return value is ``undefined`` (from ``react.undefined`` or
``pyscript.undefined``), the value is ignored and the signal maintains
its current value.
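For example (a minimal sketch; the ``number`` signal is assumed to
exist elsewhere):
.. code-block:: py
    @react.connect('number')
    def positive_number(v):
        if v < 0:
            return react.undefined  # ignored; the signal keeps its value
        return v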
Source and input signals
------------------------
Signals must start somewhere. The *source signal* has a ``_set()`` method
that the programmer can use to set the value of the signal:
.. code-block:: py
@react.source
def name(n):
    return n
The function for this source signal is very simple. You usually want
to do some input checking and/or normalization here. Especially if the input
comes from the user, as is the case with the input signal.
The *input signal* is a source signal that can be called with an argument
to set its value:
.. code-block:: py
@react.input
def name(n='john'):
    if not isinstance(n, str):
        raise ValueError('Name must be a string')
    return n.capitalize()
# And later ...
name('jane')
You can also see how the default value of the function argument can be
used to specify the initial signal value.
Source and input signals generally do not have upstream signals, but
they can have them.
A complete example
------------------
.. code-block:: py
@react.input
def first_name(s='john'):
    return str(s)

@react.input
def last_name(s='doe'):
    return str(s)

@react.connect('first_name', 'last_name')
def full_name(first, last):
    return '%s %s' % (first, last)

@react.connect('full_name')
def greet(name):
    print('hello %s!' % name)
Lazy signals
------------
In contrast to normal signals, a *lazy signal* does not update immediately
when the upstream signals changes. It is updated automatically (lazily)
whenever its value is queried. Note that this has little effect when
there is a normal signal downstream.
Lazy signals can be convenient in situations where values change rapidly,
while the current value is only needed sparingly. To create one, use the
``lazy()`` decorator:
.. code-block:: py
@react.lazy('first_name', 'last_name')
def full_name(first, last):
    return '%s %s' % (first, last)
Caching
-------
.. code-block:: py
@react.input
def data_select(id):
    return str(id)

@react.input
def data_clean(clean):
    return bool(clean)

@react.connect('data_select')
def data(id):
    open_connection(id)
    return get_data_from_the_web()  # this may take a while

@react.connect('data', 'data_clean')
def show_data(data, clean):
    if clean:
        data = clean_func(data)
    plotter.show(data)
This hypothetical example shows how caching helps keep apps efficient.
The ``data`` signal will only update when the ``data_select`` changes.
When ``data_clean`` changes, the ``show_data`` signal updates, but
it will use the cached value of the data.
The HasSignals class
--------------------
It is often convenient to create classes that have signals. To do so,
inherit from the ``HasSignals`` class:
.. code-block:: py
class Person(react.HasSignals):

    def __init__(self, father):
        assert isinstance(father, Person)
        self.father = father
        react.HasSignals.__init__(self)

    @react.input
    def first_name(s):
        return s

    @react.connect('father.last_name')
    def last_name(s):
        return s

    @react.connect('first_name', 'last_name')
    def greet(first, last):
        print('hello %s %s!' % (first, last))
The above example shows how you can directly refer to signals on the
object using their name, and even use dot notation to address the signal
of an attribute of the object.
It also shows that the signal functions do not have a ``self`` argument.
They do not have to, but they can if they need access to the instance.
Dynamism
--------
With dynamism, you can refer to signals of signals, and have the signal
connections be made automatically. Let's modify the last example a bit:
.. code-block:: py
class Person(react.HasSignals):

    def __init__(self, father):
        self.father(father)
        react.HasSignals.__init__(self)

    @react.input
    def father(f):
        assert isinstance(f, Person)
        return f

    @react.connect('father.last_name')
    def last_name(s):
        return s
...
In this case, the last name of the father will change when either the father
changes, or the father changes its name. Dynamism also supports star notation:
.. code-block:: py
class Person(react.HasSignals):

    @react.input
    def children(cc):
        assert isinstance(cc, tuple)
        assert all([isinstance(c, Person) for c in cc])
        return cc

    @react.connect('children.*')
    def child_names(*names):
        return ', '.join(names)
Signal history
--------------
The signal object provides a bit more information than only its value.
The most notable is the value of the signal before the last change.
.. code-block:: py
class Person(react.HasSignals):

    @react.connect('first_name')
    def react_to_name_change(self, new_name):
        old_name = self.first_name.last_value
        new_name = self.first_name.value  # == new_name
The signal value also holds information on value update times, but this
is currently private. We'll have to see if this is reliable and
convenient enough to make it public.
Functional RP
-------------
The "F" in FRP stands for functional. Currently, there is limited
support for that, for example:
.. code-block:: py
filter = lambda x: x > 0

@react.connect(react.filter(filter, 'number'))
def show_positive_numbers(v):
    print(v)
This functionality is to be extended in the future.
Some things just are events
---------------------------
Many things can be described as changing signal values. Even
"left_mouse_down" works pretty well. However, some things really *are*
events, like key presses and timers. How to handle these is still
something we'd need to work out ...
""" |
"""
TestCmd.py: a testing framework for commands and scripts.
The TestCmd module provides a framework for portable automated testing
of executable commands and scripts (in any language, not just Python),
especially commands and scripts that require file system interaction.
In addition to running tests and evaluating conditions, the TestCmd
module manages and cleans up one or more temporary workspace
directories, and provides methods for creating files and directories in
those workspace directories from in-line data (here-documents), allowing
tests to be completely self-contained.
A TestCmd environment object is created via the usual invocation:
import TestCmd
test = TestCmd.TestCmd()
There are a bunch of keyword arguments available at instantiation:
test = TestCmd.TestCmd(description = 'string',
                       program = 'program_or_script_to_test',
                       interpreter = 'script_interpreter',
                       workdir = 'prefix',
                       subdir = 'subdir',
                       verbose = Boolean,
                       match = default_match_function,
                       diff = default_diff_function,
                       combine = Boolean)
There are a bunch of methods that let you do different things:
test.verbose_set(1)
test.description_set('string')
test.program_set('program_or_script_to_test')
test.interpreter_set('script_interpreter')
test.interpreter_set(['script_interpreter', 'arg'])
test.workdir_set('prefix')
test.workdir_set('')
test.workpath('file')
test.workpath('subdir', 'file')
test.subdir('subdir', ...)
test.rmdir('subdir', ...)
test.write('file', "contents\n")
test.write(['subdir', 'file'], "contents\n")
test.read('file')
test.read(['subdir', 'file'])
test.read('file', mode)
test.read(['subdir', 'file'], mode)
test.writable('dir', 1)
test.writable('dir', None)
test.preserve(condition, ...)
test.cleanup(condition)
test.command_args(program = 'program_or_script_to_run',
                  interpreter = 'script_interpreter',
                  arguments = 'arguments to pass to program')
test.run(program = 'program_or_script_to_run',
         interpreter = 'script_interpreter',
         arguments = 'arguments to pass to program',
         chdir = 'directory_to_chdir_to',
         stdin = 'input to feed to the program\n',
         universal_newlines = True)
p = test.start(program = 'program_or_script_to_run',
               interpreter = 'script_interpreter',
               arguments = 'arguments to pass to program',
               universal_newlines = None)
test.finish(self, p)
test.pass_test()
test.pass_test(condition)
test.pass_test(condition, function)
test.fail_test()
test.fail_test(condition)
test.fail_test(condition, function)
test.fail_test(condition, function, skip)
test.no_result()
test.no_result(condition)
test.no_result(condition, function)
test.no_result(condition, function, skip)
test.stdout()
test.stdout(run)
test.stderr()
test.stderr(run)
test.symlink(target, link)
test.banner(string)
test.banner(string, width)
test.diff(actual, expected)
test.match(actual, expected)
test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n")
test.match_exact(["actual 1\n", "actual 2\n"],
["expected 1\n", "expected 2\n"])
test.match_re("actual 1\nactual 2\n", regex_string)
test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes)
test.match_re_dotall("actual 1\nactual 2\n", regex_string)
test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes)
test.tempdir()
test.tempdir('temporary-directory')
test.sleep()
test.sleep(seconds)
test.where_is('foo')
test.where_is('foo', 'PATH1:PATH2')
test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
test.unlink('file')
test.unlink('subdir', 'file')
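A minimal end-to-end sketch (assuming an ``echo`` program on the PATH;
this is illustrative, not part of the listing above):
    import TestCmd
    test = TestCmd.TestCmd(program = 'echo', workdir = '')
    test.run(arguments = 'hello')
    test.fail_test(test.stdout() != 'hello\n')
    test.pass_test()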
The TestCmd module provides pass_test(), fail_test(), and no_result()
unbound functions that report test results for use with the Aegis change
management system. These methods terminate the test immediately,
reporting PASSED, FAILED, or NO RESULT respectively, and exiting with
status 0 (success), 1 or 2 respectively. This allows for a distinction
between an actual failed test and a test that could not be properly
evaluated because of an external condition (such as a full file system
or incorrect permissions).
import TestCmd
TestCmd.pass_test()
TestCmd.pass_test(condition)
TestCmd.pass_test(condition, function)
TestCmd.fail_test()
TestCmd.fail_test(condition)
TestCmd.fail_test(condition, function)
TestCmd.fail_test(condition, function, skip)
TestCmd.no_result()
TestCmd.no_result(condition)
TestCmd.no_result(condition, function)
TestCmd.no_result(condition, function, skip)
The TestCmd module also provides unbound functions that handle matching
in the same way as the match_*() methods described above.
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_exact)
test = TestCmd.TestCmd(match = TestCmd.match_re)
test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)
The TestCmd module provides unbound functions that can be used for the
"diff" argument to TestCmd.TestCmd instantiation:
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_re,
                       diff = TestCmd.diff_re)
test = TestCmd.TestCmd(diff = TestCmd.simple_diff)
The "diff" argument can also be used with standard difflib functions:
import difflib
test = TestCmd.TestCmd(diff = difflib.context_diff)
test = TestCmd.TestCmd(diff = difflib.unified_diff)
Lastly, the where_is() method also exists in an unbound function
version.
import TestCmd
TestCmd.where_is('foo')
TestCmd.where_is('foo', 'PATH1:PATH2')
TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
""" |