#!/usr/bin/env python
# (c) 2013, NAME <EMAIL>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
#
# Author: NAME <EMAIL>
#
# Description:
# This module queries local or remote Docker daemons and generates
# inventory information.
#
# This plugin does not support targeting of specific hosts using the --host
# flag. Instead, it queries the Docker API for each container, running
# or not, and returns this data all at once.
#
# The plugin returns the following custom attributes on Docker containers:
# docker_args
# docker_config
# docker_created
# docker_driver
# docker_exec_driver
# docker_host_config
# docker_hostname_path
# docker_hosts_path
# docker_id
# docker_image
# docker_name
# docker_network_settings
# docker_path
# docker_resolv_conf_path
# docker_state
# docker_volumes
# docker_volumes_rw
#
# Requirements:
# The docker-py module: https://github.com/dotcloud/docker-py
#
# Notes:
# A config file can be used to configure this inventory module, and there
# are several environment variables that can be set to modify the behavior
# of the plugin at runtime:
# DOCKER_CONFIG_FILE
# DOCKER_HOST
# DOCKER_VERSION
# DOCKER_TIMEOUT
# DOCKER_PRIVATE_SSH_PORT
# DOCKER_DEFAULT_IP
#
# Environment Variables:
# environment variable: DOCKER_CONFIG_FILE
# description:
# - A path to a Docker inventory hosts/defaults file in YAML format
# - A sample file, 'docker.yml', is provided alongside this
# inventory script
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_HOST
# description:
# - The socket on which to connect to a Docker daemon API
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_VERSION
# description:
# - Version of the Docker API to use
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_TIMEOUT
# description:
# - Timeout in seconds for connections to Docker daemon API
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_PRIVATE_SSH_PORT
# description:
# - The private port (container port) on which SSH is listening
# for connections
# default: 22
# required: false
# environment variable: DOCKER_DEFAULT_IP
# description:
# - This environment variable overrides the container SSH connection
# IP address (aka, 'ansible_ssh_host')
#
# This option allows one to override the ansible_ssh_host whenever
# Docker has exercised its default behavior of binding private ports
# to all interfaces of the Docker host. This behavior, when dealing
# with remote Docker hosts, does not allow Ansible to determine
# a proper host IP address on which to connect via SSH to containers.
# By default, this inventory module assumes all IP_ADDRESS-exposed
# ports to be bound to localhost:<port>. To override this
# behavior, for example, to bind a container's SSH port to the public
# interface of its host, one must manually set this IP.
#
# It is preferable to launch Docker containers with ports exposed on
# publicly accessible IP addresses, particularly if the containers are
# to be targeted by Ansible for remote configuration and are not
# accessible via localhost SSH connections.
#
# Docker containers can be explicitly exposed on IP addresses by
# a) starting the daemon with the --ip argument
# b) running containers with the -p/--publish ip::containerPort
# argument
# default: IP_ADDRESS if port exposed on IP_ADDRESS by Docker
# required: false
#
# Examples:
# Use the config file:
# DOCKER_CONFIG_FILE=./docker.yml docker.py --list
#
# Connect to docker instance on localhost port 4243
# DOCKER_HOST=tcp://localhost:4243 docker.py --list
#
# Any container's ssh port exposed on IP_ADDRESS will be mapped to
# another IP address (where Ansible will attempt to connect via SSH)
# DOCKER_DEFAULT_IP=IP_ADDRESS docker.py --list
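#
# For reference, a rough sketch (not this script's actual code) of how such
# inventory data can be gathered with docker-py; the connection values shown
# are illustrative:
#
#   from docker import Client
#   client = Client(base_url='tcp://localhost:4243')
#   for container in client.containers(all=True):
#       details = client.inspect_container(container['Id'])
#       # details['Config'], details['NetworkSettings'], etc. provide the
#       # docker_* variables listed above.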
"""
=====================================
Structured Arrays (aka Record Arrays)
=====================================
Introduction
============
Numpy provides powerful capabilities to create arrays of structs or records.
These arrays permit one to manipulate the data by the structs or by fields of
the struct. A simple example will show what is meant: ::
>>> x = np.zeros((2,),dtype=('i4,f4,a10'))
>>> x[:] = [(1,2.,'Hello'),(2,3.,"World")]
>>> x
array([(1, 2.0, 'Hello'), (2, 3.0, 'World')],
dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
Here we have created a one-dimensional array of length 2. Each element of
this array is a record that contains three items, a 32-bit integer, a 32-bit
float, and a string of length 10 or less. If we index this array at the second
position we get the second record: ::
>>> x[1]
(2,3.,"World")
Conveniently, one can access any field of the array by indexing using the
string that names that field. In this case the fields have received the
default names 'f0', 'f1' and 'f2'.
>>> y = x['f1']
>>> y
array([ 2., 3.], dtype=float32)
>>> y[:] = 2*y
>>> y
array([ 4., 6.], dtype=float32)
>>> x
array([(1, 4.0, 'Hello'), (2, 6.0, 'World')],
dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
In these examples, y is a simple float array consisting of the 2nd field
in the record. But, rather than being a copy of the data in the structured
array, it is a view, i.e., it shares exactly the same memory locations.
Thus, when we updated this array by doubling its values, the structured
array shows the corresponding values as doubled as well. Likewise, if one
changes the record, the field view also changes: ::
>>> x[1] = (-1,-1.,"Master")
>>> x
array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')],
dtype=[('f0', '>i4'), ('f1', '>f4'), ('f2', '|S10')])
>>> y
array([ 4., -1.], dtype=float32)
Defining Structured Arrays
==========================
One defines a structured array through the dtype object. There are
**several** alternative ways to define the fields of a record. Some of
these variants provide backward compatibility with Numeric, numarray, or
another module, and should not be used except for such purposes. These
will be so noted. One specifies record structure in
one of four alternative ways, using an argument (as supplied to a dtype
function keyword or a dtype object constructor itself). This
argument must be one of the following: 1) string, 2) tuple, 3) list, or
4) dictionary. Each of these is briefly described below.
1) String argument (as used in the above examples).
In this case, the constructor expects a comma-separated list of type
specifiers, optionally with extra shape information.
The type specifiers can take 4 different forms: ::
a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f4, f8, c8, c16, a<n>
(representing bytes, ints, unsigned ints, floats, complex and
fixed length strings of specified byte lengths)
b) int8,...,uint8,...,float32, float64, complex64, complex128
(this time with bit sizes)
c) older Numeric/numarray type specifications (e.g. Float32).
Don't use these in new code!
d) Single character type specifiers (e.g. H for unsigned short ints).
Avoid using these unless you must. Details can be found in the
Numpy book
These different styles can be mixed within the same string (but why would you
want to do that?). Furthermore, each type specifier can be prefixed
with a repetition number, or a shape. In these cases an array
element is created, i.e., an array within a record. That array
is still referred to as a single field. An example: ::
>>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64')
>>> x
array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])],
dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))])
Using strings to define the record structure precludes naming the
fields in the original definition. The names can, however, be changed
later, as shown below.
2) Tuple argument: The only relevant tuple case that applies to record
structures is when a structure is mapped to an existing data type. This
is done by pairing, in a tuple, the existing data type with a matching
dtype definition (using any of the variants described here). As an
example (using a list-based definition; see 3) below for further
details): ::
>>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')]))
>>> x
array([0, 0, 0])
>>> x['r']
array([0, 0, 0], dtype=uint8)
In this case, an array is produced that looks and acts like a simple int32 array,
but also has definitions for fields that use only one byte of the int32 (a bit
like Fortran equivalencing).
3) List argument: In this case the record structure is defined with a list of
tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field
('' is permitted), 2) the type of the field, and 3) the shape (optional).
For example:
>>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
>>> x
array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])],
dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))])
4) Dictionary argument: two different forms are permitted. The first consists
of a dictionary with two required keys ('names' and 'formats'), each having an
equal sized list of values. The format list contains any type/shape specifier
allowed in other contexts. The names must be strings. There are two optional
keys, 'offsets' and 'titles', each of which must be a list of the same
length as the required two: 'offsets' contains integer byte offsets for
each field, and 'titles' contains metadata objects for each field (these
do not have to be strings); the value None is permitted. As an example: ::
>>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[('col1', '>i4'), ('col2', '>f4')])
The other dictionary form permitted is a dictionary of name keys with tuple
values specifying type, offset, and an optional title.
>>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')])
Accessing and modifying field names
===================================
The field names are an attribute of the dtype object defining the record structure.
For the last example: ::
>>> x.dtype.names
('col1', 'col2')
>>> x.dtype.names = ('x', 'y')
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')])
>>> x.dtype.names = ('x', 'y', 'z') # wrong number of names
<type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2
Accessing field titles
====================================
The field titles provide a standard place to put associated info for fields.
They do not have to be strings.
>>> x.dtype.fields['x'][2]
'title 1'
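The complete fields dictionary entry also records the field's dtype and byte
offset (a small illustration continuing the example above): ::
>>> x.dtype.fields['x']
(dtype('int8'), 0, 'title 1')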
""" |
# -*- coding: utf-8 -*-
#
# hill_tononi_Vp.py
#
# This file is part of NEST.
#
# Copyright (C) 2004 The NEST Initiative
#
# NEST is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# NEST is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with NEST. If not, see <http://www.gnu.org/licenses/>.
# ! ===========================================
# ! NEST Topology Module: A Case-Based Tutorial
# ! ===========================================
# !
# ! :Author: Hans Ekkehard Plesser
# ! :Institution: Norwegian University of Life Sciences
# ! :Version: 0.4
# ! :Date: 21 November 2012
# ! :Copyright: The NEST Initiative (2004)
# ! :License: Creative Commons Attribution License
# !
# ! **NOTE:** The network generated by this script generates dynamics
# ! in which the activity of the entire system, especially Rp and Vp,
# ! oscillates at approximately 5 Hz. This is different from the full
# ! model. Deviations are due to the different model type and the
# ! elimination of a number of connections, with no changes to the
# ! weights.
# !
# ! Introduction
# ! ============
# !
# ! This tutorial shows you how to implement a simplified version of the
# ! Hill-Tononi model of the early visual pathway using the NEST Topology
# ! module. The model is described in the paper
# !
# ! NAME and G. Tononi.
# ! Modeling Sleep and Wakefulness in the Thalamocortical System.
# ! J Neurophysiology **93**:1671-1698 (2005).
# ! Freely available via `doi 10.1152/jn.00915.2004
# ! <http://dx.doi.org/10.1152/jn.00915.2004>`_.
# !
# ! We simplify the model somewhat both to keep this tutorial a bit
# ! shorter, and because some details of the Hill-Tononi model are not
# ! currently supported by NEST. Simplifications include:
# !
# ! 1. We use the ``iaf_cond_alpha`` neuron model, which is
# ! simpler than the Hill-Tononi model.
# !
# ! #. As the ``iaf_cond_alpha`` neuron model only supports two
# ! synapses (labeled "ex" and "in"), we only include AMPA and
# ! GABA_A synapses.
# !
# ! #. We ignore the secondary pathway (Ts, Rs, Vs), since it adds just
# ! more of the same from a technical point of view.
# !
# ! #. Synaptic delays follow a Gaussian distribution in the HT
# ! model. This actually implies a Gaussian distribution clipped at
# ! some small, non-zero delay, since delays must be
# ! positive. Currently, there is a bug in the Topology module when
# ! using clipped Gaussian distributions. We therefore draw delays from a
# ! uniform distribution.
# !
# ! #. Some further adaptations are given at the appropriate locations in
# ! the script.
# !
# ! This tutorial is divided into the following sections:
# !
# ! Philosophy_
# ! Discusses the philosophy applied to model implementation in this
# ! tutorial
# !
# ! Preparations_
# ! Necessary steps to use NEST and the Topology Module
# !
# ! `Configurable Parameters`_
# ! Define adjustable network parameters
# !
# ! `Neuron Models`_
# ! Define the neuron models needed by the network model
# !
# ! Populations_
# ! Create Populations
# !
# ! `Synapse models`_
# ! Define the synapse models used in the network model
# !
# ! Connections_
# ! Create Connections
# !
# ! `Example simulation`_
# ! Perform a small simulation for illustration. This
# ! section also discusses the setup for recording.
# ! Philosophy
# ! ==========
# ! A network model has two essential components: *populations* and
# ! *projections*. We first use NEST's ``CopyModel()`` mechanism to
# ! create specific models for all populations and subpopulations in
# ! the network, and then create the populations using the Topology
# ! module's ``CreateLayer()`` function.
# !
# ! We use a two-stage process to create the connections, mainly
# ! because the same configurations are required for a number of
# ! projections: we first define dictionaries specifying the
# ! connections, then apply these dictionaries later.
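# !
# ! As a minimal sketch of this two-stage pattern (assuming ``nest`` and
# ! ``nest.topology`` have been imported as ``nest`` and ``topo``; the model
# ! name and all parameter values below are purely illustrative, not this
# ! tutorial's actual settings)::
# !
# !   nest.CopyModel('iaf_cond_alpha', 'CtxExNeuron', {'V_th': -50.0})
# !   layer = topo.CreateLayer({'rows': 40, 'columns': 40,
# !                             'extent': [8.0, 8.0],
# !                             'elements': 'CtxExNeuron'})
# !   topo.ConnectLayers(layer, layer,
# !                      {'connection_type': 'divergent',
# !                       'mask': {'circular': {'radius': 2.0}},
# !                       'weights': 1.0, 'delays': 1.5})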
# !
# ! The way in which we declare the network model here is an
# ! example. You should not consider it the last word: we expect to see
# ! a significant development in strategies and tools for network
# ! descriptions in the future. The following contributions to CNS\*09
# ! seem particularly interesting
# !
# ! - NAME & NAME Declarative model description and
# ! code generation for hybrid individual- and population-based
# ! simulations of the early visual system (P57);
# ! - NAMEok, NAME & NAME Describing
# ! and exchanging models of neurons and neuronal networks with
# ! NeuroML (F1);
# !
# ! as well as the following paper, which will appear in PLoS
# ! Computational Biology shortly:
# !
# ! - NAME NAME & Hans Ekkehard Plesser.
# ! Towards reproducible descriptions of neuronal network models.
# ! Preparations
# ! ============
# ! Please make sure that your ``PYTHONPATH`` is set correctly, so
# ! that Python can find the NEST Python module.
# ! **Note:** By default, the script does not show any graphics.
# ! Set ``SHOW_FIGURES`` to ``True`` to activate graphics.
# ! This example uses the function GetLeaves, which is deprecated. A
# ! deprecation warning is therefore issued. For details about deprecated
# ! functions, see the documentation.
###############################################################################
# Tested so far:
#
# SamrConnect5
# SamrConnect4
# SamrConnect2
# SamrConnect
# SamrOpenDomain
# SamrOpenGroup
# SamrOpenAlias
# SamrOpenUser
# SamrEnumerateDomainsInSamServer
# SamrEnumerateGroupsInDomain
# SamrEnumerateAliasesInDomain
# SamrEnumerateUsersInDomain
# SamrLookupDomainInSamServer
# SamrLookupNamesInDomain
# SamrLookupIdsInDomain
# SamrGetGroupsForUser
# SamrQueryDisplayInformation3
# SamrQueryDisplayInformation2
# SamrQueryDisplayInformation
# SamrGetDisplayEnumerationIndex2
# SamrGetDisplayEnumerationIndex
# SamrCreateGroupInDomain
# SamrCreateAliasInDomain
# SamrCreateUser2InDomain
# SamrCreateUserInDomain
# SamrQueryInformationDomain2
# SamrQueryInformationDomain
# SamrQueryInformationGroup
# SamrQueryInformationAlias
# SamrQueryInformationUser2
# SamrQueryInformationUser
# SamrDeleteUser
# SamrDeleteAlias
# SamrDeleteGroup
# SamrAddMemberToGroup
# SamrRemoveMemberFromGroup
# SamrGetMembersInGroup
# SamrGetMembersInAlias
# SamrAddMemberToAlias
# SamrRemoveMemberFromAlias
# SamrAddMultipleMembersToAlias
# SamrRemoveMultipleMembersFromAlias
# SamrRemoveMemberFromForeignDomain
# SamrGetAliasMembership
# SamrCloseHandle
# SamrSetMemberAttributesOfGroup
# SamrGetUserDomainPasswordInformation
# SamrGetDomainPasswordInformation
# SamrRidToSid
# SamrSetDSRMPassword
# SamrValidatePassword
# SamrQuerySecurityObject
# SamrSetSecurityObject
# SamrSetInformationDomain
# SamrSetInformationGroup
# SamrSetInformationAlias
# SamrSetInformationUser2
# SamrChangePasswordUser
# SamrOemChangePasswordUser2
# SamrUnicodeChangePasswordUser2
# hSamrConnect5
# hSamrConnect4
# hSamrConnect2
# hSamrConnect
# hSamrOpenDomain
# hSamrOpenGroup
# hSamrOpenAlias
# hSamrOpenUser
# hSamrEnumerateDomainsInSamServer
# hSamrEnumerateGroupsInDomain
# hSamrEnumerateAliasesInDomain
# hSamrEnumerateUsersInDomain
# hSamrQueryDisplayInformation3
# hSamrQueryDisplayInformation2
# hSamrQueryDisplayInformation
# hSamrGetDisplayEnumerationIndex2
# hSamrGetDisplayEnumerationIndex
# hSamrCreateGroupInDomain
# hSamrCreateAliasInDomain
# hSamrCreateUser2InDomain
# hSamrCreateUserInDomain
# hSamrQueryInformationDomain2
# hSamrQueryInformationDomain
# hSamrQueryInformationGroup
# hSamrQueryInformationAlias
# hSamrQueryInformationUser2
# hSamrSetInformationDomain
# hSamrSetInformationGroup
# hSamrSetInformationAlias
# hSamrSetInformationUser2
# hSamrDeleteGroup
# hSamrDeleteAlias
# hSamrDeleteUser
# hSamrAddMemberToGroup
# hSamrRemoveMemberFromGroup
# hSamrGetMembersInGroup
# hSamrAddMemberToAlias
# hSamrRemoveMemberFromAlias
# hSamrGetMembersInAlias
# hSamrRemoveMemberFromForeignDomain
# hSamrAddMultipleMembersToAlias
# hSamrRemoveMultipleMembersFromAlias
# hSamrGetGroupsForUser
# hSamrGetAliasMembership
# hSamrChangePasswordUser
# hSamrUnicodeChangePasswordUser2
# hSamrLookupDomainInSamServer
# hSamrSetSecurityObject
# hSamrQuerySecurityObject
# hSamrCloseHandle
# hSamrGetUserDomainPasswordInformation
# hSamrGetDomainPasswordInformation
# hSamrRidToSid
# hSamrValidatePassword
# hSamrLookupNamesInDomain
# hSamrLookupIdsInDomain
#
# ToDo:
#
# Should not dump errors when run against a Windows 7 target
################################################################################
"""
============
Array basics
============
Array types and conversions between types
=========================================
NumPy supports a much greater variety of numerical types than Python does.
This section shows which are available, and how to modify an array's data-type.
The primitive types supported are tied closely to those in C:
.. list-table::
:header-rows: 1
* - Numpy type
- C type
- Description
* - `np.bool`
- ``bool``
- Boolean (True or False) stored as a byte
* - `np.byte`
- ``signed char``
- Platform-defined
* - `np.ubyte`
- ``unsigned char``
- Platform-defined
* - `np.short`
- ``short``
- Platform-defined
* - `np.ushort`
- ``unsigned short``
- Platform-defined
* - `np.intc`
- ``int``
- Platform-defined
* - `np.uintc`
- ``unsigned int``
- Platform-defined
* - `np.int_`
- ``long``
- Platform-defined
* - `np.uint`
- ``unsigned long``
- Platform-defined
* - `np.longlong`
- ``long long``
- Platform-defined
* - `np.ulonglong`
- ``unsigned long long``
- Platform-defined
* - `np.half` / `np.float16`
-
- Half precision float:
sign bit, 5 bits exponent, 10 bits mantissa
* - `np.single`
- ``float``
- Platform-defined single precision float:
typically sign bit, 8 bits exponent, 23 bits mantissa
* - `np.double`
- ``double``
- Platform-defined double precision float:
typically sign bit, 11 bits exponent, 52 bits mantissa.
* - `np.longdouble`
- ``long double``
- Platform-defined extended-precision float
* - `np.csingle`
- ``float complex``
- Complex number, represented by two single-precision floats (real and imaginary components)
* - `np.cdouble`
- ``double complex``
- Complex number, represented by two double-precision floats (real and imaginary components).
* - `np.clongdouble`
- ``long double complex``
- Complex number, represented by two extended-precision floats (real and imaginary components).
Since many of these have platform-dependent definitions, a set of fixed-size
aliases is provided:
.. list-table::
:header-rows: 1
* - Numpy type
- C type
- Description
* - `np.int8`
- ``int8_t``
- Byte (-128 to 127)
* - `np.int16`
- ``int16_t``
- Integer (-32768 to 32767)
* - `np.int32`
- ``int32_t``
- Integer (-2147483648 to 2147483647)
* - `np.int64`
- ``int64_t``
- Integer (-9223372036854775808 to 9223372036854775807)
* - `np.uint8`
- ``uint8_t``
- Unsigned integer (0 to 255)
* - `np.uint16`
- ``uint16_t``
- Unsigned integer (0 to 65535)
* - `np.uint32`
- ``uint32_t``
- Unsigned integer (0 to 4294967295)
* - `np.uint64`
- ``uint64_t``
- Unsigned integer (0 to 18446744073709551615)
* - `np.intp`
- ``intptr_t``
- Integer used for indexing, typically the same as ``ssize_t``
* - `np.uintp`
- ``uintptr_t``
- Integer large enough to hold a pointer
* - `np.float32`
- ``float``
-
* - `np.float64` / `np.float_`
- ``double``
- Note that this matches the precision of the builtin python `float`.
* - `np.complex64`
- ``float complex``
- Complex number, represented by two 32-bit floats (real and imaginary components)
* - `np.complex128` / `np.complex_`
- ``double complex``
- Note that this matches the precision of the builtin python `complex`.
NumPy numerical types are instances of ``dtype`` (data-type) objects, each
having unique characteristics. Once you have imported NumPy using
::
>>> import numpy as np
the dtypes are available as ``np.bool_``, ``np.float32``, etc.
Advanced types, not listed in the table above, are explored in
section :ref:`structured_arrays`.
There are 5 basic numerical types representing booleans (bool), integers (int),
unsigned integers (uint), floating point (float) and complex. Those with numbers
in their name indicate the bitsize of the type (i.e. how many bits are needed
to represent a single value in memory). Some types, such as ``int`` and
``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit
vs. 64-bit machines). This should be taken into account when interfacing
with low-level code (such as C or Fortran) where the raw memory is addressed.
Data-types can be used as functions to convert python numbers to array scalars
(see the array scalar section for an explanation), python sequences of numbers
to arrays of that type, or as arguments to the dtype keyword that many numpy
functions or methods accept. Some examples::
>>> import numpy as np
>>> x = np.float32(1.0)
>>> x
1.0
>>> y = np.int_([1,2,4])
>>> y
array([1, 2, 4])
>>> z = np.arange(3, dtype=np.uint8)
>>> z
array([0, 1, 2], dtype=uint8)
Array types can also be referred to by character codes, mostly to retain
backward compatibility with older packages such as Numeric. Some
documentation may still refer to these, for example::
>>> np.array([1, 2, 3], dtype='f')
array([ 1., 2., 3.], dtype=float32)
We recommend using dtype objects instead.
To convert the type of an array, use the .astype() method (preferred) or
the type itself as a function. For example: ::
>>> z.astype(float) #doctest: +NORMALIZE_WHITESPACE
array([ 0., 1., 2.])
>>> np.int8(z)
array([0, 1, 2], dtype=int8)
Note that, above, we use the *Python* float object as a dtype. NumPy knows
that ``int`` refers to ``np.int_``, ``bool`` means ``np.bool_``,
that ``float`` is ``np.float_`` and ``complex`` is ``np.complex_``.
The other data-types do not have Python equivalents.
To determine the type of an array, look at the dtype attribute::
>>> z.dtype
dtype('uint8')
dtype objects also contain information about the type, such as its bit-width
and its byte-order. The data type can also be used indirectly to query
properties of the type, such as whether it is an integer::
>>> d = np.dtype(int)
>>> d
dtype('int32')
>>> np.issubdtype(d, np.integer)
True
>>> np.issubdtype(d, np.floating)
False
Array Scalars
=============
NumPy generally returns elements of arrays as array scalars (a scalar
with an associated dtype). Array scalars differ from Python scalars, but
for the most part they can be used interchangeably (the primary
exception is for versions of Python older than v2.x, where integer array
scalars cannot act as indices for lists and tuples). There are some
exceptions, such as when code requires very specific attributes of a scalar
or when it checks specifically whether a value is a Python scalar. Generally,
problems are easily fixed by explicitly converting array scalars
to Python scalars, using the corresponding Python type function
(e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``).
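For instance, a small illustration of such a conversion: ::
>>> s = np.float32(4.0)
>>> float(s)            # convert the array scalar to a plain Python float
4.0
>>> isinstance(s, np.generic)   # s itself is an array scalar
True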
The primary advantage of using array scalars is that
they preserve the array type (Python may not have a matching scalar type
available, e.g. ``int16``). Therefore, the use of array scalars ensures
identical behaviour between arrays and scalars, irrespective of whether the
value is inside an array or not. NumPy scalars also have many of the same
methods arrays do.
Overflow Errors
===============
The fixed size of NumPy numeric types may cause overflow errors when a value
requires more memory than available in the data type. For example,
`numpy.power` evaluates ``100 ** 8`` correctly for 64-bit integers,
but gives 1874919424 (incorrect) for a 32-bit integer.
>>> np.power(100, 8, dtype=np.int64)
10000000000000000
>>> np.power(100, 8, dtype=np.int32)
1874919424
The behaviour of NumPy and Python integer types differs significantly for
integer overflows and may confuse users expecting NumPy integers to behave
similar to Python's ``int``. Unlike NumPy, the size of Python's ``int`` is
flexible. This means Python integers may expand to accommodate any integer and
will not overflow.
NumPy provides `numpy.iinfo` and `numpy.finfo` to verify the
minimum or maximum values of NumPy integer and floating point values
respectively ::
>>> np.iinfo(np.int) # Bounds of the default integer on this system.
iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
>>> np.iinfo(np.int32) # Bounds of a 32-bit integer
iinfo(min=-2147483648, max=2147483647, dtype=int32)
>>> np.iinfo(np.int64) # Bounds of a 64-bit integer
iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
If 64-bit integers are still too small the result may be cast to a
floating point number. Floating point numbers offer a larger, but inexact,
range of possible values.
>>> np.power(100, 100, dtype=np.int64) # Incorrect even with 64-bit int
0
>>> np.power(100, 100, dtype=np.float64)
1e+200
Extended Precision
==================
Python's floating-point numbers are usually 64-bit floating-point numbers,
nearly equivalent to ``np.float64``. In some unusual situations it may be
useful to use floating-point numbers with more precision. Whether this
is possible in numpy depends on the hardware and on the development
environment: specifically, x86 machines provide hardware floating-point
with 80-bit precision, and while most C compilers provide this as their
``long double`` type, MSVC (standard for Windows builds) makes
``long double`` identical to ``double`` (64 bits). NumPy makes the
compiler's ``long double`` available as ``np.longdouble`` (and
``np.clongdouble`` for the complex numbers). You can find out what your
numpy provides with ``np.finfo(np.longdouble)``.
NumPy does not provide a dtype with more precision than C's
``long double``; in particular, the 128-bit IEEE quad precision
data type (FORTRAN's ``REAL*16``) is not available.
For efficient memory alignment, ``np.longdouble`` is usually stored
padded with zero bits, either to 96 or 128 bits. Which is more efficient
depends on hardware and development environment; typically on 32-bit
systems they are padded to 96 bits, while on 64-bit systems they are
typically padded to 128 bits. ``np.longdouble`` is padded to the system
default; ``np.float96`` and ``np.float128`` are provided for users who
want specific padding. In spite of the names, ``np.float96`` and
``np.float128`` provide only as much precision as ``np.longdouble``,
that is, 80 bits on most x86 machines and 64 bits in standard
Windows builds.
Be warned that even if ``np.longdouble`` offers more precision than
python ``float``, it is easy to lose that extra precision, since
python often forces values to pass through ``float``. For example,
the ``%`` formatting operator requires its arguments to be converted
to standard python types, and it is therefore impossible to preserve
extended precision even if many decimal places are requested. It can
be useful to test your code with the value
``1 + np.finfo(np.longdouble).eps``.
""" |
"""
===============
Array Internals
===============
Internal organization of numpy arrays
=====================================
It helps to understand a bit about how numpy arrays are handled under the covers. This section will not go into great detail. Those wishing to understand the full details are referred to Travis Oliphant's book "Guide to NumPy".
Numpy arrays consist of two major components, the raw array data (from now on,
referred to as the data buffer), and the information about the raw array data.
The data buffer is typically what people think of as arrays in C or Fortran,
a contiguous (and fixed) block of memory containing fixed sized data items.
Numpy also contains a significant set of data that describes how to interpret
the data in the data buffer. This extra information contains (among other things):
1) The basic data element's size in bytes
2) The start of the data within the data buffer (an offset relative to the
beginning of the data buffer).
3) The number of dimensions and the size of each dimension
4) The separation between elements for each dimension (the 'stride'). This
does not have to be a multiple of the element size
5) The byte order of the data (which may not be the native byte order)
6) Whether the buffer is read-only
7) Information (via the dtype object) about the interpretation of the basic
data element. The basic data element may be as simple as a int or a float,
or it may be a compound object (e.g., struct-like), a fixed character field,
or Python object pointers.
8) Whether the array is to be interpreted as C-order or Fortran-order.
This arrangement allows for very flexible use of arrays. One thing that it allows
is simple changes of the metadata to change the interpretation of the array buffer.
Changing the byteorder of the array is a simple change involving no rearrangement
of the data. The shape of the array can be changed very easily without changing
anything in the data buffer or any data copying at all.
Among other things that this makes possible, one can create a new array metadata
object that uses the same data buffer to produce a new view of that data buffer
with a different interpretation (e.g., different shape, offset, byte order,
strides, etc.) but sharing the same data bytes. Many operations in numpy, such
as slicing, do just this. Other operations, such as transpose, don't move data
elements around in the array, but rather change the information about the shape
and strides so that the indexing of the array changes, but the data in the buffer doesn't move.
Typically these new arrangements of array metadata that share the same data
buffer are new 'views' into the data buffer. There is a different ndarray
object, but it uses the same data buffer. This is why it is necessary to force
copies through use of the .copy() method if one really wants to make a new and
independent copy of the data buffer.
New views into arrays mean that the object reference counts for the data buffer
increase. Simply doing away with the original array object will not remove the
data buffer if other views of it still exist.
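A small illustration of the difference between a view and a copy: ::
>>> a = np.arange(6)
>>> v = a.reshape(2, 3)     # a view: new metadata, same data buffer
>>> v[0, 0] = 99
>>> a[0]                    # the change is visible through the original array
99
>>> c = a.copy()            # an independent copy of the data buffer
>>> c[0] = 0
>>> a[0]                    # the original is unaffected
99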
Multidimensional Array Indexing Order Issues
============================================
What is the right way to index
multi-dimensional arrays? Before you jump to conclusions about the one and
true way to index multi-dimensional arrays, it pays to understand why this is
a confusing issue. This section will try to explain in detail how numpy
indexing works and why we adopt the convention we do for images, and when it
may be appropriate to adopt other conventions.
The first thing to understand is
that there are two conflicting conventions for indexing 2-dimensional arrays.
Matrix notation uses the first index to indicate which row is being selected and
the second index to indicate which column is selected. This is opposite the
geometrically oriented convention for images where people generally think the
first index represents x position (i.e., column) and the second represents y
position (i.e., row). This alone is the source of much confusion;
matrix-oriented users and image-oriented users expect two different things with
regard to indexing.
The second issue to understand is how indices correspond
to the order the array is stored in memory. In Fortran the first index is the
most rapidly varying index when moving through the elements of a two
dimensional array as it is stored in memory. If you adopt the matrix
convention for indexing, then this means the matrix is stored one column at a
time (since the first index moves to the next row as it changes). Thus Fortran
is considered a Column-major language. C has just the opposite convention. In
C, the last index changes most rapidly as one moves through the array as
stored in memory. Thus C is a Row-major language. The matrix is stored by
rows. Note that in both cases it presumes that the matrix convention for
indexing is being used, i.e., for both Fortran and C, the first index is the
row. Note this convention implies that the indexing convention is invariant
and that the data order changes to keep that so.
But that's not the only way
to look at it. Suppose one has large two-dimensional arrays (images or
matrices) stored in data files. Suppose the data are stored by rows rather than
by columns. If we are to preserve our index convention (whether matrix or
image) that means that depending on the language we use, we may be forced to
reorder the data if it is read into memory to preserve our indexing
convention. For example if we read row-ordered data into memory without
reordering, it will match the matrix indexing convention for C, but not for
Fortran. Conversely, it will match the image indexing convention for Fortran,
but not for C. For C, if one is using data stored in row order, and one wants
to preserve the image index convention, the data must be reordered when
reading into memory.
In the end, which you do for Fortran or C depends on
which is more important, not reordering data or preserving the indexing
convention. For large images, reordering data is potentially expensive, and
often the indexing convention is inverted to avoid that.
The situation with
numpy makes this issue yet more complicated. The internal machinery of numpy
arrays is flexible enough to accept any ordering of indices. One can simply
reorder indices by manipulating the internal stride information for arrays
without reordering the data at all. Numpy will know how to map the new index
order to the data without moving the data.
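A small illustration (using explicit 32-bit integers so that the strides shown
do not depend on the platform's default integer size): ::
>>> c = np.arange(6, dtype=np.int32).reshape(2, 3)   # C (row-major) order
>>> c.strides
(12, 4)
>>> f = np.asfortranarray(c)    # same values, column-major order, new strides
>>> f.strides
(4, 8)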
So if this is true, why not choose
the index order that matches what you most expect? In particular, why not define
row-ordered images to use the image convention? (This is sometimes referred
to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
order options for array ordering in numpy.) The drawback of doing this is
potential performance penalties. It's common to access the data sequentially,
either implicitly in array operations or explicitly by looping over rows of an
image. When that is done, then the data will be accessed in non-optimal order.
As the first index is incremented, what is actually happening is that elements
spaced far apart in memory are being sequentially accessed, with usually poor
memory access speeds. For example, consider a two-dimensional image 'im' defined
so that im[0, 10] represents the value at x=0, y=10. To be consistent with usual
Python behavior, im[0] would then represent a column at x=0. Yet that data
would be spread over the whole array since the data are stored in row order.
Despite the flexibility of numpy's indexing, it can't really paper over the fact
that basic operations are rendered inefficient because of data order, or that
getting contiguous subarrays is still awkward (e.g., im[:,0] for the first row,
vs im[0]). Thus one can't use an idiom such as 'for row in im'; 'for col in im'
does work, but doesn't yield contiguous column data.
As it turns out, numpy is
smart enough when dealing with ufuncs to determine which index is the most
rapidly varying one in memory and uses that for the innermost loop. Thus for
ufuncs there is no large intrinsic advantage to either approach in most cases.
On the other hand, use of .flat with a FORTRAN-ordered array will lead to
non-optimal memory access as adjacent elements in the flattened array (iterator,
actually) are not contiguous in memory.
Indeed, the fact is that Python
indexing on lists and other sequences naturally leads to an outside-to-inside
ordering (the first index gets the largest grouping, the next the next largest,
and the last gets the smallest element). Since image data are normally stored
by rows, this corresponds to position within rows being the last item indexed.
If you do want to use Fortran ordering, realize that
there are two approaches to consider: 1) accept that the first index is just
not the most rapidly changing in memory and have all your I/O routines reorder
your data when going from memory to disk or vice versa, or 2) use numpy's
mechanism for mapping the first index to the most rapidly varying data. We
recommend the former if possible. The disadvantage of the latter is that many
of numpy's functions will yield arrays without Fortran ordering unless you are
careful to use the 'order' keyword; doing this would be highly inconvenient.
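For instance, Fortran ordering can be requested explicitly with the 'order'
keyword: ::
>>> np.zeros((2, 3)).flags['F_CONTIGUOUS']            # default is C order
False
>>> np.zeros((2, 3), order='F').flags['F_CONTIGUOUS']
True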
Otherwise we recommend simply learning to reverse the usual order of indices
when accessing elements of an array. Granted, it goes against the grain, but
it is more in line with Python semantics and the natural order of the data.
""" |
# # ===============================================================================
# # Copyright 2014 NAME #
# # Licensed under the Apache License, Version 2.0 (the "License");
# # you may not use this file except in compliance with the License.
# # You may obtain a copy of the License at
# #
# # http://www.apache.org/licenses/LICENSE-2.0
# #
# # Unless required by applicable law or agreed to in writing, software
# # distributed under the License is distributed on an "AS IS" BASIS,
# # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# # See the License for the specific language governing permissions and
# # limitations under the License.
# # ===============================================================================
# from pychron.core.ui import set_toolkit
# set_toolkit('qt4')
#
#
# # ============= enthought library imports =======================
# import os
# from traits.api import HasTraits, Instance
#
# # ============= standard library imports ========================
# # ============= local library imports ==========================
# import yaml
# from pychron.experiment.utilities.identifier import strip_runid
# from pychron.database.offline_bridge import DatabaseBridge
# from pychron.database.isotope_database_manager import IsotopeDatabaseManager
# from pychron.processing.export.export_manager import ExportManager
#
# # from pychron.processing.export.exporter import MassSpecExporter
# from pychron.paths import paths
#
# paths.build('_dev')
# from pychron.core.helpers.logger_setup import logging_setup
# logging_setup('foo')
#
# class DataSet(HasTraits):
# pass
#
#
# class PychronDataSet(DataSet):
# manager = Instance(IsotopeDatabaseManager)
#
# def _manager_default(self):
# return IsotopeDatabaseManager(connect=False, bind=False)
#
# def connect(self, d):
# connection = d.get('connection')
# db=self.manager.db
# db.name = connection.get('name', 'pychron_dataset')
# db.host = connection.get('host', 'localhost')
# db.username = connection.get('username', 'root')
# db.password = connection.get('password', 'Argon')
# db.kind = connection.get('kind', 'mysql')
#
# db.connect(test=False, force=True)
#
# class MassSpecDataSet(DataSet):
# manager = Instance(ExportManager)
#
# def _manager_default(self):
# m = ExportManager()
# return m
#
# def connect(self, d):
# connection = d.get('connection')
#
# db=self.manager.exporter.importer.db
# db.name=connection.get('name', 'massspec_dataset')
# db.host = connection.get('host', 'localhost')
# db.username = connection.get('username', 'root')
# db.password = connection.get('password', 'Argon')
# db.connect(test=False, force=True)
#
# def get_session(self):
# return self.manager.exporter.importer.db.get_session()
#
#
# class DataSetGenerator(HasTraits):
# # dest=Instance(IsotopeDatabaseManager)
#
# """
# refactor export task. add a exportmanager and then use it here
# """
# source = Instance(IsotopeDatabaseManager)
#
# def _source_default(self):
# r = IsotopeDatabaseManager(connect=False, bind=False)
# r.db.trait_set(kind='mysql',
# host='localhost',
# username='root',
# password='Argon',
# name='pychrondata_dev')
# r.db.connect()
# return r
#
# def generate_from_file(self):
# p = os.path.join(paths.data_dir, 'dataset.yaml')
#
# with open(p, 'r') as rfile:
# yd = yaml.load(rfile)
# print yd
#
# pdataset = yd.get('pychron')
# if pdataset:
# self._generate_pychron_dataset(yd, pdataset)
#
# mdataset = yd.get('massspec')
# if mdataset:
# self._generate_massspec_dataset(yd,mdataset)
#
# def _generate_massspec_dataset(self, yd, d):
# dest=MassSpecDataSet()
# dest.manager.manager=self.source
# dest.connect(d)
# if d.get('build_db'):
# sess=dest.get_session()
# self._make_blank_massspec_database(sess)
# return
#
# db=self.source.db
# with db.session_ctx():
# rids = self._assemble_runids(yd)
# ans = [db.get_unique_analysis(*r) for r in rids]
# ans=self.source.make_analyses(ans, unpack=True)
# dest.manager.export(ans)
#
# def _generate_pychron_dataset(self, yd, d):
# dest = PychronDataSet()
#
# dest.connect(d)
# db = dest.manager.db
# with db.session_ctx() as sess:
# if d.get('build_db'):
# self._make_blank_pychron_database(sess)
# return
# rids = self._assemble_runids(yd)
# self._transfer_pychron_analyses(rids, dest)
#
#
# def _transfer_pychron_analyses(self, rids, dest):
# bridge = DatabaseBridge(source=self.source.db,
# dest=dest.manager.db)
#
# db = self.source.db
# with db.session_ctx() as sess:
# ans = [db.get_unique_analysis(*r) for r in rids]
# bridge.add_analyses(ans)
#
# def _assemble_runids(self, d):
# rids = d['runids']
# return [strip_runid(r) for r in rids]
#
# def _make_blank_pychron_database(self, sess):
# p = os.path.join(os.path.dirname(__file__), 'pychron_dataset.sql')
# with open(p, 'r') as rfile:
# sql = rfile.read()
# sess.execute(sql)
#
#             sess.execute("INSERT INTO alembic_version (version_num) VALUES ('123456')")
#
# def _make_blank_massspec_database(self, sess):
# p = os.path.join(os.path.dirname(__file__), 'massspec_dataset.sql')
# with open(p, 'r') as rfile:
# sql = rfile.read()
# sess.execute(sql)
#
#
# if __name__ == '__main__':
# g = DataSetGenerator()
# g.generate_from_file()
# g.configure_traits()
# # ============= EOF =============================================
#
"""
TestCmd.py: a testing framework for commands and scripts.
The TestCmd module provides a framework for portable automated testing
of executable commands and scripts (in any language, not just Python),
especially commands and scripts that require file system interaction.
In addition to running tests and evaluating conditions, the TestCmd
module manages and cleans up one or more temporary workspace
directories, and provides methods for creating files and directories in
those workspace directories from in-line data (here-documents), allowing
tests to be completely self-contained.
A TestCmd environment object is created via the usual invocation:
import TestCmd
test = TestCmd.TestCmd()
There are a bunch of keyword arguments available at instantiation:
test = TestCmd.TestCmd(description = 'string',
program = 'program_or_script_to_test',
interpreter = 'script_interpreter',
workdir = 'prefix',
subdir = 'subdir',
verbose = Boolean,
match = default_match_function,
diff = default_diff_function,
combine = Boolean)
There are a bunch of methods that let you do different things:
test.verbose_set(1)
test.description_set('string')
test.program_set('program_or_script_to_test')
test.interpreter_set('script_interpreter')
test.interpreter_set(['script_interpreter', 'arg'])
test.workdir_set('prefix')
test.workdir_set('')
test.workpath('file')
test.workpath('subdir', 'file')
test.subdir('subdir', ...)
test.rmdir('subdir', ...)
test.write('file', "contents\n")
test.write(['subdir', 'file'], "contents\n")
test.read('file')
test.read(['subdir', 'file'])
test.read('file', mode)
test.read(['subdir', 'file'], mode)
test.writable('dir', 1)
test.writable('dir', None)
test.preserve(condition, ...)
test.cleanup(condition)
test.command_args(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program')
test.run(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program',
chdir = 'directory_to_chdir_to',
stdin = 'input to feed to the program\n',
universal_newlines = True)
p = test.start(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program',
universal_newlines = None)
test.finish(self, p)
test.pass_test()
test.pass_test(condition)
test.pass_test(condition, function)
test.fail_test()
test.fail_test(condition)
test.fail_test(condition, function)
test.fail_test(condition, function, skip)
test.no_result()
test.no_result(condition)
test.no_result(condition, function)
test.no_result(condition, function, skip)
test.stdout()
test.stdout(run)
test.stderr()
test.stderr(run)
test.symlink(target, link)
test.banner(string)
test.banner(string, width)
test.diff(actual, expected)
test.match(actual, expected)
test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n")
test.match_exact(["actual 1\n", "actual 2\n"],
["expected 1\n", "expected 2\n"])
test.match_re("actual 1\nactual 2\n", regex_string)
test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes)
test.match_re_dotall("actual 1\nactual 2\n", regex_string)
test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes)
test.tempdir()
test.tempdir('temporary-directory')
test.sleep()
test.sleep(seconds)
test.where_is('foo')
test.where_is('foo', 'PATH1:PATH2')
test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
test.unlink('file')
test.unlink('subdir', 'file')
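Putting a few of these together, a minimal self-contained test might look like
this (a sketch only; 'echo' is simply an illustrative program to test, and the
expected output assumes a POSIX system):
    import TestCmd
    test = TestCmd.TestCmd(program = 'echo', workdir = '')
    test.run(arguments = 'hello')
    if test.stdout() != 'hello\n':
        test.fail_test()
    test.pass_test()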
The TestCmd module provides pass_test(), fail_test(), and no_result()
unbound functions that report test results for use with the Aegis change
management system. These methods terminate the test immediately,
reporting PASSED, FAILED, or NO RESULT respectively, and exiting with
status 0 (success), 1 or 2 respectively. This allows for a distinction
between an actual failed test and a test that could not be properly
evaluated because of an external condition (such as a full file system
or incorrect permissions).
import TestCmd
TestCmd.pass_test()
TestCmd.pass_test(condition)
TestCmd.pass_test(condition, function)
TestCmd.fail_test()
TestCmd.fail_test(condition)
TestCmd.fail_test(condition, function)
TestCmd.fail_test(condition, function, skip)
TestCmd.no_result()
TestCmd.no_result(condition)
TestCmd.no_result(condition, function)
TestCmd.no_result(condition, function, skip)
The TestCmd module also provides unbound functions that handle matching
in the same way as the match_*() methods described above.
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_exact)
test = TestCmd.TestCmd(match = TestCmd.match_re)
test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)
The TestCmd module provides unbound functions that can be used for the
"diff" argument to TestCmd.TestCmd instantiation:
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_re,
diff = TestCmd.diff_re)
test = TestCmd.TestCmd(diff = TestCmd.simple_diff)
The "diff" argument can also be used with standard difflib functions:
import difflib
test = TestCmd.TestCmd(diff = difflib.context_diff)
test = TestCmd.TestCmd(diff = difflib.unified_diff)
Lastly, the where_is() method also exists in an unbound function
version.
import TestCmd
TestCmd.where_is('foo')
TestCmd.where_is('foo', 'PATH1:PATH2')
TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
""" |
#!/usr/bin/env python
# coding=utf-8
# eggbot_hatch.py
#
# Generate hatch fills for the closed paths (polygons) in the currently
# selected document elements. If no elements are selected, then all the
# polygons throughout the document are hatched. The fill rule is an odd/even
# rule: odd numbered intersections (1, 3, 5, etc.) are a hatch line entering
# a polygon while even numbered intersections (2, 4, 6, etc.) are the same
# hatch line exiting the polygon.
#
# This extension first decomposes the selected <path>, <rect>, <line>,
# <polyline>, <polygon>, <circle>, and <ellipse> elements into individual
# moveto and lineto coordinates using the same procedure that eggbot.py uses
# for plotting. These coordinates are then used to build vertex lists.
# Only the vertex lists corresponding to polygons (closed paths) are
# kept. Note that a single graphical element may be composed of several
# subpaths, each subpath potentially a polygon.
#
# Once the lists of all the vertices are built, potential hatch lines are
# "projected" through the bounding box containing all of the vertices.
# For each potential hatch line, all intersections with all the polygon
# edges are determined. These intersections are stored as decimal fractions
# indicating where along the length of the hatch line the intersection
# occurs. These values will always be in the range [0, 1]. A value of 0
# indicates that the intersection is at the start of the hatch line, a value
# of 0.5 midway, and a value of 1 at the end of the hatch line.
#
# For a given hatch line, all the fractional values are sorted and any
# duplicates removed. Duplicates occur, for instance, when the hatch
# line passes through a polygon vertex and thus intersects two edge
# segments of the polygon: the end of one edge and the start of
# another.
#
# Once sorted and duplicates removed, an odd/even rule is applied to
# determine which segments of the potential hatch line are within
# polygons. These segments found to be within polygons are then saved
# and become the hatch fill lines which will be drawn.
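#
# For illustration only, a minimal sketch (not this extension's actual code)
# of that odd/even pairing step, given the sorted, de-duplicated intersection
# fractions for one hatch line:
#
#   def inside_segments(fractions):
#       # consecutive pairs (1st-2nd, 3rd-4th, ...) lie inside the polygons
#       return [(fractions[i], fractions[i + 1])
#               for i in range(0, len(fractions) - 1, 2)]
#
#   # e.g. [0.1, 0.4, 0.6, 0.9] -> [(0.1, 0.4), (0.6, 0.9)]
#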
#
# With each saved hatch fill line, information about which SVG graphical
# element it is within is saved. This way, the hatch fill lines can
# later be grouped with the element they are associated with. This makes
# it possible to manipulate the two -- graphical element and hatch lines --
# as a single object within Inkscape.
#
# Note: we also save the transformation matrix for each graphical element.
# That way, when we group the hatch fills with the element they are
# filling, we can invert the transformation. That is, in order to compute
# the hatch fills, we first have to apply ALL applicable transforms to
# all the graphical elements. We need to do that so that we know where in
# the drawing each of the graphical elements is relative to the others.
# However, this means that the hatch lines have been computed in a setting
# where no further transforms are needed. If we then put these hatch lines
# into the same groups as the elements being hatched in the ORIGINAL
# drawing, then the hatch lines will have transforms applied again. So,
# once we compute the hatch lines, we need to invert the transforms of
# the group they will be placed in and apply this inverse transform to the
# hatch lines. Hence the need to save the transform matrix for every
# graphical element.
# Written by NAME for the Eggbot Project
# dan dot newman at mtbaldy dot us
# Updated by NAME 6/14/2012
# Added tolerance parameter
# Update by NAME, 6/20/2012
# Add min span/gap width
# Updated by NAME 1/8/2016
# Added live preview and correct issue with nonzero min gap
# https://github.com/evil-mad/EggBot/issues/32
# Updated by NAME 1/11/2016 thru 3/15/2016
# shel at shel dot net
# Added feature: Option to inset the hatch segments from boundaries
# Added feature: Option to join hatch segments that are "nearby", to minimize pen lifts
# The joins are made using cubic Bezier segments.
# https://github.com/evil-mad/EggBot/issues/36
# Updated by NAME 12/6/2017
# Modified hatch fill to create hatches as a relevant object it found on the SVG tree
# This prevents extremely complex plots from generating glitches
# Modifications are limited to recursivelyTraverseSvg and effect methods
# Current software version:
# (v2.3.2, March 27, 2020)
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""
========================
Broadcasting over arrays
========================
The term broadcasting describes how numpy treats arrays with different
shapes during arithmetic operations. Subject to certain constraints,
the smaller array is "broadcast" across the larger array so that they
have compatible shapes. Broadcasting provides a means of vectorizing
array operations so that looping occurs in C instead of Python. It does
this without making needless copies of data and usually leads to
efficient algorithm implementations. There are, however, cases where
broadcasting is a bad idea because it leads to inefficient use of memory
that slows computation.
NumPy operations are usually done on pairs of arrays on an
element-by-element basis. In the simplest case, the two arrays must
have exactly the same shape, as in the following example:
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = np.array([2.0, 2.0, 2.0])
>>> a * b
array([ 2., 4., 6.])
NumPy's broadcasting rule relaxes this constraint when the arrays'
shapes meet certain constraints. The simplest broadcasting example occurs
when an array and a scalar value are combined in an operation:
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = 2.0
>>> a * b
array([ 2., 4., 6.])
The result is equivalent to the previous example where ``b`` was an array.
We can think of the scalar ``b`` being *stretched* during the arithmetic
operation into an array with the same shape as ``a``. The new elements in
``b`` are simply copies of the original scalar. The stretching analogy is
only conceptual. NumPy is smart enough to use the original scalar value
without actually making copies, so that broadcasting operations are as
memory and computationally efficient as possible.
The code in the second example is more efficient than that in the first
because broadcasting moves less memory around during the multiplication
(``b`` is a scalar rather than an array).
General Broadcasting Rules
==========================
When operating on two arrays, NumPy compares their shapes element-wise.
It starts with the trailing dimensions, and works its way forward. Two
dimensions are compatible when
1) they are equal, or
2) one of them is 1
If these conditions are not met, a
``ValueError: frames are not aligned`` exception is thrown, indicating that
the arrays have incompatible shapes. The size of the resulting array
is the maximum size along each dimension of the input arrays.
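As an illustrative aside (not part of the original rules), compatibility can
be checked without doing any arithmetic by building a ``np.broadcast`` object
from the operands; incompatible shapes raise the same error the arithmetic would:
>>> np.broadcast(np.empty((5, 4)), np.empty(4)).shape
(5, 4)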
Arrays do not need to have the same *number* of dimensions. For example,
if you have a ``256x256x3`` array of RGB values, and you want to scale
each color in the image by a different value, you can multiply the image
by a one-dimensional array with 3 values. Lining up the sizes of the
trailing axes of these arrays according to the broadcast rules, shows that
they are compatible::
Image (3d array): 256 x 256 x 3
Scale (1d array): 3
Result (3d array): 256 x 256 x 3
When either of the dimensions compared is one, the larger of the two is
used. In other words, the smaller of two axes is stretched or "copied"
to match the other.
In the following example, both the ``A`` and ``B`` arrays have axes with
length one that are expanded to a larger size during the broadcast
operation::
A (4d array): 8 x 1 x 6 x 1
B (3d array): 7 x 1 x 5
Result (4d array): 8 x 7 x 6 x 5
Here are some more examples::
A (2d array): 5 x 4
B (1d array): 1
Result (2d array): 5 x 4
A (2d array): 5 x 4
B (1d array): 4
Result (2d array): 5 x 4
A (3d array): 15 x 3 x 5
B (3d array): 15 x 1 x 5
Result (3d array): 15 x 3 x 5
A (3d array): 15 x 3 x 5
B (2d array): 3 x 5
Result (3d array): 15 x 3 x 5
A (3d array): 15 x 3 x 5
B (2d array): 3 x 1
Result (3d array): 15 x 3 x 5
Here are examples of shapes that do not broadcast::
A (1d array): 3
B (1d array): 4 # trailing dimensions do not match
A (2d array): 2 x 1
B (3d array): 8 x 4 x 3 # second from last dimensions mismatched
An example of broadcasting in practice::
>>> x = np.arange(4)
>>> xx = x.reshape(4,1)
>>> y = np.ones(5)
>>> z = np.ones((3,4))
>>> x.shape
(4,)
>>> y.shape
(5,)
>>> x + y
<type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape
>>> xx.shape
(4, 1)
>>> y.shape
(5,)
>>> (xx + y).shape
(4, 5)
>>> xx + y
array([[ 1., 1., 1., 1., 1.],
[ 2., 2., 2., 2., 2.],
[ 3., 3., 3., 3., 3.],
[ 4., 4., 4., 4., 4.]])
>>> x.shape
(4,)
>>> z.shape
(3, 4)
>>> (x + z).shape
(3, 4)
>>> x + z
array([[ 1., 2., 3., 4.],
[ 1., 2., 3., 4.],
[ 1., 2., 3., 4.]])
Broadcasting provides a convenient way of taking the outer product (or
any other outer operation) of two arrays. The following example shows an
outer addition operation of two 1-d arrays::
>>> a = np.array([0.0, 10.0, 20.0, 30.0])
>>> b = np.array([1.0, 2.0, 3.0])
>>> a[:, np.newaxis] + b
array([[ 1., 2., 3.],
[ 11., 12., 13.],
[ 21., 22., 23.],
[ 31., 32., 33.]])
Here the ``newaxis`` index operator inserts a new axis into ``a``,
making it a two-dimensional ``4x1`` array. Combining the ``4x1`` array
with ``b``, which has shape ``(3,)``, yields a ``4x3`` array.
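As a side note (an addition, not in the original text), every binary ufunc
also provides an ``outer`` method that computes the same result directly:
>>> np.add.outer(a, b)
array([[ 1., 2., 3.],
[ 11., 12., 13.],
[ 21., 22., 23.],
[ 31., 32., 33.]])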
See `this article <http://www.scipy.org/EricsBroadcastingDoc>`_
for illustrations of broadcasting concepts.
""" |
"""
imsize map_coordinates fourier_shift
50 0.016211 0.00944495
84 0.0397182 0.0161059
118 0.077543 0.0443089
153 0.132948 0.058187
187 0.191808 0.0953341
221 0.276543 0.12069
255 0.357552 0.182863
289 0.464547 0.26451
324 0.622776 0.270612
358 0.759015 0.713239
392 0.943339 0.441262
426 1.12885 0.976379
461 1.58367 1.26116
495 1.62482 0.824757
529 1.83506 1.19455
563 3.21001 2.82487
597 2.64892 2.23473
632 2.74313 2.21019
666 3.07002 2.49054
700 3.50138 1.59507
Fourier outperforms map_coordinates slightly. It wraps, though, while
map_coordinates in general does not.
With skimage:
imsize map_coordinates fourier_shift skimage
50 0.0154819 0.00862598 0.0100191
84 0.0373471 0.0164428 0.0299141
118 0.0771091 0.0555351 0.047652
153 0.128651 0.0582621 0.108211
187 0.275812 0.129408 0.17893
221 0.426893 0.177555 0.281367
255 0.571022 0.26866 0.354988
289 0.75541 0.412766 0.415558
324 1.02605 0.402632 0.617405
358 1.14151 0.975867 0.512207
392 1.51085 0.549434 0.904133
426 1.72907 1.28387 0.948763
461 2.03424 1.79091 1.09984
495 2.23595 0.976755 1.49104
529 2.59915 1.95115 1.47774
563 3.34082 3.03312 1.76769
597 3.43117 2.84357 2.67582
632 4.06516 4.19464 2.22102
666 6.22056 3.65876 2.39756
700 5.06125 2.00939 2.73733
Fourier's all over the place, probably because FFT speed depends strongly on
the primeness of the image size. Comparable to skimage for some cases, though.
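For reference, a minimal sketch of how timings like the above might be
collected (the shift amount, interpolation order, and use of wall-clock time
are assumptions, not a record of the actual benchmark):
    import time
    import numpy as np
    from scipy.ndimage import map_coordinates, fourier_shift
    def time_shift(imsize, shift=(1.5, -2.25)):
        img = np.random.rand(imsize, imsize)
        yy, xx = np.mgrid[0:imsize, 0:imsize]
        t0 = time.time()
        # interpolate onto a grid displaced by `shift`
        map_coordinates(img, [yy + shift[0], xx + shift[1]], order=3)
        t_map = time.time() - t0
        t0 = time.time()
        # apply the same shift as a phase ramp in the Fourier domain
        np.fft.ifftn(fourier_shift(np.fft.fftn(img), shift)).real
        t_fft = time.time() - t0
        return t_map, t_fft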
""" |
"""
Basic functions used by several sub-packages and
useful to have in the main name-space.
Type Handling
-------------
================ ===================
iscomplexobj Test for complex object, scalar result
isrealobj Test for real object, scalar result
iscomplex Test for complex elements, array result
isreal Test for real elements, array result
imag Imaginary part
real Real part
real_if_close Turns complex number with tiny imaginary part to real
isneginf Tests for negative infinity, array result
isposinf Tests for positive infinity, array result
isnan Tests for nans, array result
isinf Tests for infinity, array result
isfinite Tests for finite numbers, array result
isscalar True if argument is a scalar
nan_to_num Replaces NaN's with 0 and infinities with large numbers
cast Dictionary of functions to force cast to each type
common_type Determine the minimum common type code for a group
of arrays
mintypecode Return minimal allowed common typecode.
================ ===================
Index Tricks
------------
================ ===================
mgrid Method which allows easy construction of N-d
'mesh-grids'
``r_`` Append and construct arrays: turns slice objects into
ranges and concatenates them, for 2d arrays appends rows.
index_exp Konrad Hinsen's index_expression class instance which
can be useful for building complicated slicing syntax.
================ ===================
Useful Functions
----------------
================ ===================
select Extension of where to multiple conditions and choices
extract Extract 1d array from flattened array according to mask
insert Insert 1d array of values into Nd array according to mask
linspace Evenly spaced samples in linear space
logspace Evenly spaced samples in logarithmic space
fix Round x to nearest integer towards zero
mod Modulo mod(x,y) = x % y except keeps sign of y
amax Array maximum along axis
amin Array minimum along axis
ptp Array max-min along axis
cumsum Cumulative sum along axis
prod Product of elements along axis
cumprod Cumulative product along axis
diff Discrete differences along axis
angle Returns angle of complex argument
unwrap Unwrap phase along given axis (1-d algorithm)
sort_complex Sort a complex-array (based on real, then imaginary)
trim_zeros Trim the leading and trailing zeros from 1D array.
vectorize A class that wraps a Python function taking scalar
arguments into a generalized function which can handle
arrays of arguments using the broadcast rules of
numerix Python.
================ ===================
Shape Manipulation
------------------
================ ===================
squeeze Return a with length-one dimensions removed.
atleast_1d Force arrays to be > 1D
atleast_2d Force arrays to be > 2D
atleast_3d Force arrays to be > 3D
vstack Stack arrays vertically (row on row)
hstack Stack arrays horizontally (column on column)
column_stack Stack 1D arrays as columns into 2D array
dstack Stack arrays depthwise (along third dimension)
stack Stack arrays along a new axis
split Divide array into a list of sub-arrays
hsplit Split into columns
vsplit Split into rows
dsplit Split along third dimension
================ ===================
Matrix (2D Array) Manipulations
-------------------------------
================ ===================
fliplr 2D array with columns flipped
flipud 2D array with rows flipped
rot90 Rotate a 2D array a multiple of 90 degrees
eye Return a 2D array with ones down a given diagonal
diag Construct a 2D array from a vector, or return a given
diagonal from a 2D array.
mat Construct a Matrix
bmat Build a Matrix from blocks
================ ===================
Polynomials
-----------
================ ===================
poly1d A one-dimensional polynomial class
poly Return polynomial coefficients from roots
roots Find roots of polynomial given coefficients
polyint Integrate polynomial
polyder Differentiate polynomial
polyadd Add polynomials
polysub Subtract polynomials
polymul Multiply polynomials
polydiv Divide polynomials
polyval Evaluate polynomial at given argument
================ ===================
Iterators
---------
================ ===================
Arrayterator A buffered iterator for big arrays.
================ ===================
Import Tricks
-------------
================ ===================
ppimport Postpone module import until trying to use it
ppimport_attr Postpone module import until trying to use its attribute
ppresolve Import postponed module and return it.
================ ===================
Machine Arithmetics
-------------------
================ ===================
machar_single Single precision floating point arithmetic parameters
machar_double Double precision floating point arithmetic parameters
================ ===================
Threading Tricks
----------------
================ ===================
ParallelExec Execute commands in parallel thread.
================ ===================
1D Array Set Operations
-----------------------
Set operations for 1D numeric arrays based on sort() function.
================ ===================
ediff1d Array difference (auxiliary function).
unique Unique elements of an array.
intersect1d Intersection of 1D arrays with unique elements.
setxor1d Set exclusive-or of 1D arrays with unique elements.
in1d Test whether elements in a 1D array are also present in
another array.
union1d Union of 1D arrays with unique elements.
setdiff1d Set difference of 1D arrays with unique elements.
================ ===================
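For example (an illustrative addition, not part of the original listing)::
    >>> np.intersect1d([1, 3, 4, 3], [3, 1, 2, 1])
    array([1, 3])
    >>> np.union1d([1, 3, 4], [3, 1, 2])
    array([1, 2, 3, 4])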
""" |
#!/usr/bin/env python
#
# @brief YAT: YAml Template text processor
# @date $Date: 2008-02-09 20:04:27 $
# @author NAME <EMAIL>
#
# Copyright (C) 2008 NAME All rights reserved.
#
# $Id: yat.py 775 2008-07-28 16:14:45Z n-ando $
#
#
# Usage:
#------------------------------------------------------------
# import yaml
# import yat
#
# dict = yaml.load(open(filename, "r").read())
# t = yat.Template(template, "\[", "\]")
# result = t.generate(dict)
#------------------------------------------------------------
#
# 1. Simple directive:
# [dictionary_key]
#
# Nested dictionaries can be expressed by dotted expression.
#
# example:
# dict = {"a": "This is a",
# "b": {"1": "This is b.1",
# "2": "This is b.2"}
# }
#
# template:
# [a]
#
# [b.1]
#
# [b.2]
#
# result:
# This is a
# This is b.1
# This is b.2
#
#
# 2. "for" directive:
# [for key in list] statement [endfor]
#
# Iterative evaluation for listed values is performed by "for" statement.
# In iteration at each evaluation, the value of the list is assigned to
# "key". The "key" also can be the nested dictionary directive.
#
# example:
# dict = {"list": [0, 1, 2],
# "listed_dict": [
# {"name": "x", "value": "1.0"},
# {"name": "y", "value": "0.2"},
# {"name": "z", "value": "0.1"}]}
#
# template:
# [for lst in list]
# [lst],
# [endfor]
# [for lst in listed_dict]
# [lst.name]: [lst.value]
#
# [endfor]
#
# result:
# 0, 1, 2,
# x: 1.0
# y: 0.2
# z: 0.1
#
#
# 3. "if-index" directive:
# [for key in val]
# [if-index key is first|even|odd|last|NUMBER] statement1
# [elif-index key is first|even|odd|last|NUMBER] statement2
# [endif][endfor]
#
# "if-index" is used to specify the index of the "for" iteration.
# The "key" string which is defined in the "for" statement is used as index.
# A number or predefined directives such as "first", "even", "odd" and
# "last" can be used to specify the index.
#
# example:
# dict = {"list": [0,1,2,3,4,5,6,7,8,9,10]}
#
# template:
# [for key in list]
# [if-index key is 3] [key] is hoge!!
# [elif-index key is 6] [key] is foo!!
# [elif-index key is 9] [key] is bar!!
# [elif-index key is first] [key] is first
# [elif-index key is last] Omoro-------!!!!
# [elif-index key is odd] [key] is odd number
# [elif-index key is even] [key] is even number
# [endif]
# [endfor]
#
# result:
# 0 is first
# 1 is odd number
# 2 is even number
# 3 is hoge!!
# 4 is even number
# 5 is odd number
# 6 is foo!!
# 7 is odd number
# 8 is even number
# 9 is bar!!
# Omoro-------!!!!
#
#
# 4. "if" directive: [if key is value] text1 [else] text2 [endif]
# If "key" is "value", "text1" appears, otherwise "text2" appears.
#
# example:
# dict = {"key1": "a", "key2": "b"}
#
# template:
# [if key1 is a]
# The key1 is "a".
# [else]
# This key1 is not "a".
# [endif]
#
# result:
# The key1 is "a".
#
#
# 5. "if-any" directive: [if-any key1] text1 [else] text2 [endif]
# If the "key1" exists in the dictionary, "text1" appears, otherwise
# "text2" appears.
#
# example:
# dict = {"key1": "a", "key2": "b"}
#
# template:
# [if-any key1]
# key1 exists.
# [endif][if-any key3]
# key3 exists.
# [else]
# key3 does not exist.
# [endif]
#
# result:
# key1 exists.
# key3 does not exist.
#
#
# 6. bracket and comment:
# [[] is left bracket if begin mark is "["
# [# comment ] is comment if begin/end marks are "[" and "]"
#
# example:
# dict = {}
#
# template:
# [[]bracket]
# [# comment]
#
# result:
# [bracket]
#
|
"""This module tests SyntaxErrors.
Here's an example of the sort of thing that is tested.
>>> def f(x):
... global x
Traceback (most recent call last):
SyntaxError: name 'x' is parameter and global
The tests all raise SyntaxErrors. They were created by checking
each C call that raises SyntaxError. There are several modules that
raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and
symtable.c.
The parser itself outlaws a lot of invalid syntax. None of these
errors are tested here at the moment. We should add some tests; since
there are infinitely many programs with invalid syntax, we would need
to be judicious in selecting some.
The compiler generates a synthetic module name for code executed by
doctest. Since all the code comes from the same module, a suffix like
[1] is appended to the module name. As a consequence, changing the
order of tests in this module means renumbering all the errors after
it. (Maybe we should enable the ellipsis option for these tests.)
In ast.c, syntax errors are raised by calling ast_error().
Errors from set_context():
>>> obj.None = 1
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> None = 1
Traceback (most recent call last):
SyntaxError: can't assign to keyword
It's a syntax error to assign to the empty tuple. Why isn't it an
error to assign to the empty list? It will always raise some error at
runtime.
>>> () = 1
Traceback (most recent call last):
SyntaxError: can't assign to ()
>>> f() = 1
Traceback (most recent call last):
SyntaxError: can't assign to function call
>>> del f()
Traceback (most recent call last):
SyntaxError: can't delete function call
>>> a + 1 = 2
Traceback (most recent call last):
SyntaxError: can't assign to operator
>>> (x for x in x) = 1
Traceback (most recent call last):
SyntaxError: can't assign to generator expression
>>> 1 = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> "abc" = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> b"" = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> `1` = 1
Traceback (most recent call last):
SyntaxError: invalid syntax
If the left-hand side of an assignment is a list or tuple, an illegal
expression inside that container should still cause a syntax error.
This test just checks a couple of cases rather than enumerating all of
them.
>>> (a, "b", c) = (1, 2, 3)
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> [a, b, c + 1] = [1, 2, 3]
Traceback (most recent call last):
SyntaxError: can't assign to operator
>>> a if 1 else b = 1
Traceback (most recent call last):
SyntaxError: can't assign to conditional expression
From compiler_complex_args():
>>> def f(None=1):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_arguments():
>>> def f(x, y=1, z):
... pass
Traceback (most recent call last):
SyntaxError: non-default argument follows default argument
>>> def f(x, None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> def f(*None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> def f(**None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_funcdef():
>>> def None(x):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_call():
>>> def f(it, *varargs):
... return list(it)
>>> L = range(10)
>>> f(x for x in L)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(x for x in L, 1)
Traceback (most recent call last):
SyntaxError: Generator expression must be parenthesized if not sole argument
>>> f((x for x in L), 1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... i244, i245, i246, i247, i248, i249, i250, i251, i252,
... i253, i254, i255)
Traceback (most recent call last):
SyntaxError: more than 255 arguments
The actual error case counts positional arguments, keyword arguments,
and generator expression arguments separately. This test combines the
three.
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... (x for x in i244), i245, i246, i247, i248, i249, i250, i251,
... i252=1, i253=1, i254=1, i255=1)
Traceback (most recent call last):
SyntaxError: more than 255 arguments
>>> f(lambda x: x[0] = 3)
Traceback (most recent call last):
SyntaxError: lambda cannot contain assignment
The grammar accepts any test (basically, any expression) in the
keyword slot of a call site. Test a few different options.
>>> f(x()=2)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
>>> f(a or b=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
>>> f(x.y=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
More set_context():
>>> (x for x in x) += 1
Traceback (most recent call last):
SyntaxError: can't assign to generator expression
>>> None += 1
Traceback (most recent call last):
SyntaxError: can't assign to keyword
>>> f() += 1
Traceback (most recent call last):
SyntaxError: can't assign to function call
Test continue in finally in weird combinations.
continue in for loop under finally should be ok.
>>> def test():
... try:
... pass
... finally:
... for abc in range(10):
... continue
... print(abc)
>>> test()
9
Start simple, a continue in a finally should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
This is essentially a continue in a finally which should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... try:
... continue
... except:
... pass
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... try:
... continue
... finally:
... pass
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try: pass
... finally:
... try:
... pass
... except:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
There is one test for a break that is not in a loop. The compiler
uses a single data structure to keep track of try-finally and loops,
so we need to be sure that a break is actually inside a loop. If it
isn't, there should be a syntax error.
>>> try:
... print(1)
... break
... print(2)
... finally:
... print(3)
Traceback (most recent call last):
...
SyntaxError: 'break' outside loop
This should probably raise a better error than a SystemError (or none at all).
In 2.5 there was a missing exception and an assert was triggered in a debug
build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514
>>> while 1:
... while 2:
... while 3:
... while 4:
... while 5:
... while 6:
... while 8:
... while 9:
... while 10:
... while 11:
... while 12:
... while 13:
... while 14:
... while 15:
... while 16:
... while 17:
... while 18:
... while 19:
... while 20:
... while 21:
... while 22:
... break
Traceback (most recent call last):
...
SystemError: too many statically nested blocks
Misuse of the nonlocal statement can lead to a few unique syntax errors.
>>> def f(x):
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: name 'x' is parameter and nonlocal
>>> def f():
... global x
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: name 'x' is nonlocal and global
>>> def f():
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: no binding for nonlocal 'x' found
From SF bug #1705365
>>> nonlocal x
Traceback (most recent call last):
...
SyntaxError: nonlocal declaration not allowed at module level
TODO(jhylton): Figure out how to test SyntaxWarning with doctest.
## >>> def f(x):
## ... def f():
## ... print(x)
## ... nonlocal x
## Traceback (most recent call last):
## ...
## SyntaxWarning: name 'x' is assigned to before nonlocal declaration
## >>> def f():
## ... x = 1
## ... nonlocal x
## Traceback (most recent call last):
## ...
## SyntaxWarning: name 'x' is assigned to before nonlocal declaration
This tests assignment-context; there was a bug in Python 2.5 where compiling
a complex 'if' (one with 'elif') would fail to notice an invalid suite,
leading to spurious errors.
>>> if 1:
... x() = 1
... elif 1:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... x() = 1
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... x() = 1
... elif 1:
... pass
... else:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... x() = 1
... else:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... pass
... else:
... x() = 1
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
Make sure that the old "raise X, Y[, Z]" form is gone:
>>> raise X, Y
Traceback (most recent call last):
...
SyntaxError: invalid syntax
>>> raise X, Y, Z
Traceback (most recent call last):
...
SyntaxError: invalid syntax
>>> f(a=23, a=234)
Traceback (most recent call last):
...
SyntaxError: keyword argument repeated
>>> del ()
Traceback (most recent call last):
SyntaxError: can't delete ()
>>> {1, 2, 3} = 42
Traceback (most recent call last):
SyntaxError: can't assign to literal
Corner-cases that used to fail to raise the correct error:
>>> def f(*, x=lambda __debug__:0): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(*args:(lambda __debug__:0)): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(**kwargs:(lambda __debug__:0)): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> with (lambda *:0): pass
Traceback (most recent call last):
SyntaxError: named arguments must follow bare *
Corner-cases that used to crash:
>>> def f(**__debug__): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(*xx, __debug__): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
""" |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# adapted from http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
# -thepaul
# This is an implementation of wcwidth() and wcswidth() (defined in
# IEEE Std 1003.1-2001) for Unicode.
#
# http://www.opengroup.org/onlinepubs/007904975/functions/wcwidth.html
# http://www.opengroup.org/onlinepubs/007904975/functions/wcswidth.html
#
# In fixed-width output devices, Latin characters all occupy a single
# "cell" position of equal width, whereas ideographic CJK characters
# occupy two such cells. Interoperability between terminal-line
# applications and (teletype-style) character terminals using the
# UTF-8 encoding requires agreement on which character should advance
# the cursor by how many cell positions. No established formal
# standards exist at present on which Unicode character shall occupy
# how many cell positions on character terminals. These routines are
# a first attempt of defining such behavior based on simple rules
# applied to data provided by the Unicode Consortium.
#
# For some graphical characters, the Unicode standard explicitly
# defines a character-cell width via the definition of the East Asian
# FullWidth (F), Wide (W), Half-width (H), and Narrow (Na) classes.
# In all these cases, there is no ambiguity about which width a
# terminal shall use. For characters in the East Asian Ambiguous (A)
# class, the width choice depends purely on a preference of backward
# compatibility with either historic CJK or Western practice.
# Choosing single-width for these characters is easy to justify as
# the appropriate long-term solution, as the CJK practice of
# displaying these characters as double-width comes from historic
# implementation simplicity (8-bit encoded characters were displayed
# single-width and 16-bit ones double-width, even for Greek,
# Cyrillic, etc.) and not any typographic considerations.
#
# Much less clear is the choice of width for the Not East Asian
# (Neutral) class. Existing practice does not dictate a width for any
# of these characters. It would nevertheless make sense
# typographically to allocate two character cells to characters such
# as for instance EM SPACE or VOLUME INTEGRAL, which cannot be
# represented adequately with a single-width glyph. The following
# routines at present merely assign a single-cell width to all
# neutral characters, in the interest of simplicity. This is not
# entirely satisfactory and should be reconsidered before
# establishing a formal standard in this area. At the moment, the
# decision which Not East Asian (Neutral) characters should be
# represented by double-width glyphs cannot yet be answered by
# applying a simple rule from the Unicode database content. Setting
# up a proper standard for the behavior of UTF-8 character terminals
# will require a careful analysis not only of each Unicode character,
# but also of each presentation form, something the author of these
# routines has avoided doing so far.
#
# http://www.unicode.org/unicode/reports/tr11/
#
# NAME -- 2007-05-26 (Unicode 5.0)
#
# Permission to use, copy, modify, and distribute this software
# for any purpose and without fee is hereby granted. The author
# disclaims all warranties with regard to this software.
#
# Latest C version: http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
# auxiliary function for binary search in interval table
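#
# For reference, such a helper is typically a plain bisection over the
# (first, last) code point intervals in the table; a minimal sketch follows
# (the name and exact shape are assumptions, not necessarily identical to
# the code in the original file):
#
#   def _bisearch(ucs, table):
#       """Return 1 if the code point ucs lies in one of the (first, last)
#       intervals of table, else 0."""
#       lo, hi = 0, len(table) - 1
#       if ucs < table[0][0] or ucs > table[hi][1]:
#           return 0
#       while hi >= lo:
#           mid = (lo + hi) // 2
#           if ucs > table[mid][1]:
#               lo = mid + 1
#           elif ucs < table[mid][0]:
#               hi = mid - 1
#           else:
#               return 1
#       return 0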
|
"""
.. _statsrefmanual:
==========================================
Statistical functions (:mod:`scipy.stats`)
==========================================
.. currentmodule:: scipy.stats
This module contains a large number of probability distributions as
well as a growing library of statistical functions.
Each univariate distribution is an instance of a subclass of `rv_continuous`
(`rv_discrete` for discrete distributions):
.. autosummary::
:toctree: generated/
rv_continuous
rv_discrete
rv_histogram
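As a brief illustration (an added example, not part of the reference
listing), the distribution objects expose the usual ``pdf``/``cdf``/``rvs``
methods, either on the class directly or on a "frozen" instance::
    >>> from scipy import stats
    >>> round(stats.norm.cdf(1.96), 4)
    0.975
    >>> stats.expon(scale=2.0).mean()
    2.0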
Continuous distributions
========================
.. autosummary::
:toctree: generated/
alpha -- Alpha
anglit -- Anglit
arcsine -- Arcsine
argus -- Argus
beta -- Beta
betaprime -- Beta Prime
bradford -- Bradford
burr -- Burr (Type III)
burr12 -- Burr (Type XII)
cauchy -- Cauchy
chi -- Chi
chi2 -- Chi-squared
cosine -- Cosine
crystalball -- Crystalball
dgamma -- Double Gamma
dweibull -- Double Weibull
erlang -- Erlang
expon -- Exponential
exponnorm -- Exponentially Modified Normal
exponweib -- Exponentiated Weibull
exponpow -- Exponential Power
f -- F (Snedecor F)
fatiguelife -- Fatigue Life (Birnbaum-Saunders)
fisk -- Fisk
foldcauchy -- Folded Cauchy
foldnorm -- Folded Normal
genlogistic -- Generalized Logistic
gennorm -- Generalized normal
genpareto -- Generalized Pareto
genexpon -- Generalized Exponential
genextreme -- Generalized Extreme Value
gausshyper -- Gauss Hypergeometric
gamma -- Gamma
gengamma -- Generalized gamma
genhalflogistic -- Generalized Half Logistic
geninvgauss -- Generalized Inverse Gaussian
gilbrat -- Gilbrat
gompertz -- Gompertz (Truncated Gumbel)
gumbel_r -- Right Sided Gumbel, Log-Weibull, Fisher-Tippett, Extreme Value Type I
gumbel_l -- Left Sided Gumbel, etc.
halfcauchy -- Half Cauchy
halflogistic -- Half Logistic
halfnorm -- Half Normal
halfgennorm -- Generalized Half Normal
hypsecant -- Hyperbolic Secant
invgamma -- Inverse Gamma
invgauss -- Inverse Gaussian
invweibull -- Inverse Weibull
johnsonsb -- NAME
johnsonsu -- NAME
kappa4 -- Kappa 4 parameter
kappa3 -- Kappa 3 parameter
ksone -- Distribution of Kolmogorov-Smirnov one-sided test statistic
kstwo -- Distribution of Kolmogorov-Smirnov two-sided test statistic
kstwobign -- Limiting Distribution of scaled Kolmogorov-Smirnov two-sided test statistic.
laplace -- Laplace
levy -- Levy
levy_l -- Left-skewed Levy
levy_stable -- Levy stable
logistic -- Logistic
loggamma -- Log-Gamma
loglaplace -- Log-Laplace (Log Double Exponential)
lognorm -- Log-Normal
loguniform -- Log-Uniform
lomax -- Lomax (Pareto of the second kind)
maxwell -- Maxwell
mielke -- Mielke's Beta-Kappa
moyal -- Moyal
nakagami -- Nakagami
ncx2 -- Non-central chi-squared
ncf -- Non-central F
nct -- Non-central Student's T
norm -- Normal (Gaussian)
norminvgauss -- Normal Inverse Gaussian
pareto -- Pareto
pearson3 -- Pearson type III
powerlaw -- Power-function
powerlognorm -- Power log normal
powernorm -- Power normal
rdist -- R-distribution
rayleigh -- Rayleigh
rice -- Rice
recipinvgauss -- Reciprocal Inverse Gaussian
semicircular -- Semicircular
skewnorm -- Skew normal
t -- Student's T
trapezoid -- Trapezoidal
triang -- Triangular
truncexpon -- Truncated Exponential
truncnorm -- Truncated Normal
tukeylambda -- Tukey-Lambda
uniform -- Uniform
vonmises -- Von-Mises (Circular)
vonmises_line -- Von-Mises (Line)
wald -- Wald
weibull_min -- Minimum Weibull (see Frechet)
weibull_max -- Maximum Weibull (see Frechet)
wrapcauchy -- Wrapped Cauchy
Multivariate distributions
==========================
.. autosummary::
:toctree: generated/
multivariate_normal -- Multivariate normal distribution
matrix_normal -- Matrix normal distribution
dirichlet -- Dirichlet
wishart -- Wishart
invwishart -- Inverse Wishart
multinomial -- Multinomial distribution
special_ortho_group -- SO(N) group
ortho_group -- O(N) group
unitary_group -- U(N) group
random_correlation -- random correlation matrices
multivariate_t -- Multivariate t-distribution
Discrete distributions
======================
.. autosummary::
:toctree: generated/
bernoulli -- Bernoulli
betabinom -- Beta-Binomial
binom -- Binomial
boltzmann -- Boltzmann (Truncated Discrete Exponential)
dlaplace -- Discrete Laplacian
geom -- Geometric
hypergeom -- Hypergeometric
logser -- Logarithmic (Log-Series, Series)
nbinom -- Negative Binomial
nhypergeom -- Negative Hypergeometric
planck -- Planck (Discrete Exponential)
poisson -- Poisson
randint -- Discrete Uniform
skellam -- Skellam
zipf -- Zipf
yulesimon -- Yule-Simon
An overview of statistical functions is given below.
Several of these functions have a similar version in
`scipy.stats.mstats` which work for masked arrays.
Summary statistics
==================
.. autosummary::
:toctree: generated/
describe -- Descriptive statistics
gmean -- Geometric mean
hmean -- Harmonic mean
kurtosis -- Fisher or Pearson kurtosis
mode -- Modal value
moment -- Central moment
skew -- Skewness
kstat -- k-th k-statistic
kstatvar -- Variance of the k-statistic
tmean -- Truncated arithmetic mean
tvar -- Truncated variance
tmin -- Truncated minimum
tmax -- Truncated maximum
tstd -- Truncated standard deviation
tsem -- Truncated standard error of the mean
variation -- Coefficient of variation
find_repeats
trim_mean
gstd -- Geometric Standard Deviation
iqr
sem
bayes_mvs
mvsdist
entropy
median_absolute_deviation
median_abs_deviation
Frequency statistics
====================
.. autosummary::
:toctree: generated/
cumfreq
itemfreq
percentileofscore
scoreatpercentile
relfreq
.. autosummary::
:toctree: generated/
binned_statistic -- Compute a binned statistic for a set of data.
binned_statistic_2d -- Compute a 2-D binned statistic for a set of data.
binned_statistic_dd -- Compute a d-D binned statistic for a set of data.
Correlation functions
=====================
.. autosummary::
:toctree: generated/
f_oneway
pearsonr
spearmanr
pointbiserialr
kendalltau
weightedtau
linregress
siegelslopes
theilslopes
multiscale_graphcorr
Statistical tests
=================
.. autosummary::
:toctree: generated/
ttest_1samp
ttest_ind
ttest_ind_from_stats
ttest_rel
chisquare
cramervonmises
power_divergence
kstest
ks_1samp
ks_2samp
epps_singleton_2samp
mannwhitneyu
tiecorrect
rankdata
ranksums
wilcoxon
kruskal
friedmanchisquare
brunnermunzel
combine_pvalues
jarque_bera
.. autosummary::
:toctree: generated/
ansari
bartlett
levene
shapiro
anderson
anderson_ksamp
binom_test
fligner
median_test
mood
skewtest
kurtosistest
normaltest
Transformations
===============
.. autosummary::
:toctree: generated/
boxcox
boxcox_normmax
boxcox_llf
yeojohnson
yeojohnson_normmax
yeojohnson_llf
obrientransform
sigmaclip
trimboth
trim1
zmap
zscore
Statistical distances
=====================
.. autosummary::
:toctree: generated/
wasserstein_distance
energy_distance
Random variate generation
=========================
.. autosummary::
:toctree: generated/
rvs_ratio_uniforms
Circular statistical functions
==============================
.. autosummary::
:toctree: generated/
circmean
circvar
circstd
Contingency table functions
===========================
.. autosummary::
:toctree: generated/
chi2_contingency
contingency.expected_freq
contingency.margins
fisher_exact
Plot-tests
==========
.. autosummary::
:toctree: generated/
ppcc_max
ppcc_plot
probplot
boxcox_normplot
yeojohnson_normplot
Masked statistics functions
===========================
.. toctree::
stats.mstats
Univariate and multivariate kernel density estimation
=====================================================
.. autosummary::
:toctree: generated/
gaussian_kde
Warnings used in :mod:`scipy.stats`
===================================
.. autosummary::
:toctree: generated/
F_onewayConstantInputWarning
F_onewayBadInputSizesWarning
PearsonRConstantInputWarning
PearsonRNearConstantInputWarning
SpearmanRConstantInputWarning
For many more statistics-related functions, install the software R and the
interface package rpy.
""" |
"""
=================
Structured Arrays
=================
Introduction
============
Numpy provides powerful capabilities to create arrays of structured datatype.
These arrays permit one to manipulate the data by named fields. A simple
example will show what is meant: ::
>>> x = np.array([(1,2.,'Hello'), (2,3.,"World")],
... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')])
>>> x
array([(1, 2.0, 'Hello'), (2, 3.0, 'World')],
dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')])
Here we have created a one-dimensional array of length 2. Each element of
this array is a structure that contains three items, a 32-bit integer, a 32-bit
float, and a string of length 10 or less. If we index this array at the second
position we get the second structure: ::
>>> x[1]
(2,3.,"World")
Conveniently, one can access any field of the array by indexing using the
string that names that field. ::
>>> y = x['bar']
>>> y
array([ 2., 3.], dtype=float32)
>>> y[:] = 2*y
>>> y
array([ 4., 6.], dtype=float32)
>>> x
array([(1, 4.0, 'Hello'), (2, 6.0, 'World')],
dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')])
In these examples, y is a simple float array consisting of the 2nd field
in the structured type. But, rather than being a copy of the data in the structured
array, it is a view, i.e., it shares exactly the same memory locations.
Thus, when we updated this array by doubling its values, the structured
array shows the corresponding values as doubled as well. Likewise, if one
changes the structured array, the field view also changes: ::
>>> x[1] = (-1,-1.,"Master")
>>> x
array([(1, 4.0, 'Hello'), (-1, -1.0, 'Master')],
dtype=[('foo', '>i4'), ('bar', '>f4'), ('baz', '|S10')])
>>> y
array([ 4., -1.], dtype=float32)
Defining Structured Arrays
==========================
One defines a structured array through the dtype object. There are
**several** alternative ways to define the fields of a record. Some of
these variants provide backward compatibility with Numeric, numarray, or
another module, and should not be used except for such purposes. These
will be so noted. One specifies record structure in
one of four alternative ways, using an argument (as supplied to a dtype
function keyword or a dtype object constructor itself). This
argument must be one of the following: 1) string, 2) tuple, 3) list, or
4) dictionary. Each of these is briefly described below.
1) String argument.
In this case, the constructor expects a comma-separated list of type
specifiers, optionally with extra shape information. The fields are
given the default names 'f0', 'f1', 'f2' and so on.
The type specifiers can take 4 different forms: ::
a) b1, i1, i2, i4, i8, u1, u2, u4, u8, f2, f4, f8, c8, c16, a<n>
(representing bytes, ints, unsigned ints, floats, complex and
fixed length strings of specified byte lengths)
b) int8,...,uint8,...,float16, float32, float64, complex64, complex128
(this time with bit sizes)
c) older Numeric/numarray type specifications (e.g. Float32).
Don't use these in new code!
d) Single character type specifiers (e.g H for unsigned short ints).
Avoid using these unless you must. Details can be found in the
Numpy book
These different styles can be mixed within the same string (but why would you
want to do that?). Furthermore, each type specifier can be prefixed
with a repetition number, or a shape. In these cases an array
element is created, i.e., an array within a record. That array
is still referred to as a single field. An example: ::
>>> x = np.zeros(3, dtype='3int8, float32, (2,3)float64')
>>> x
array([([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
([0, 0, 0], 0.0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])],
dtype=[('f0', '|i1', 3), ('f1', '>f4'), ('f2', '>f8', (2, 3))])
Using strings to define the record structure precludes naming the
fields in the original definition. The names can be changed later,
however, as shown below.
2) Tuple argument: The only relevant tuple case that applies to record
structures is when a structure is mapped to an existing data type. This
is done by pairing in a tuple, the existing data type with a matching
dtype definition (using any of the variants being described here). As
an example (using a definition using a list, so see 3) for further
details): ::
>>> x = np.zeros(3, dtype=('i4',[('r','u1'), ('g','u1'), ('b','u1'), ('a','u1')]))
>>> x
array([0, 0, 0])
>>> x['r']
array([0, 0, 0], dtype=uint8)
In this case, an array is produced that looks and acts like a simple int32 array,
but also has definitions for fields that use only one byte of the int32 (a bit
like Fortran equivalencing).
3) List argument: In this case the record structure is defined with a list of
tuples. Each tuple has 2 or 3 elements specifying: 1) The name of the field
('' is permitted), 2) the type of the field, and 3) the shape (optional).
For example::
>>> x = np.zeros(3, dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
>>> x
array([(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]]),
(0.0, 0.0, [[0.0, 0.0], [0.0, 0.0]])],
dtype=[('x', '>f4'), ('y', '>f4'), ('value', '>f4', (2, 2))])
4) Dictionary argument: two different forms are permitted. The first consists
of a dictionary with two required keys ('names' and 'formats'), each having an
equal sized list of values. The format list contains any type/shape specifier
allowed in other contexts. The names must be strings. There are two optional
keys: 'offsets' and 'titles'. Each must be a correspondingly matching list to
the required two where offsets contain integer offsets for each field, and
titles are objects containing metadata for each field (these do not have
to be strings), where the value of None is permitted. As an example: ::
>>> x = np.zeros(3, dtype={'names':['col1', 'col2'], 'formats':['i4','f4']})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[('col1', '>i4'), ('col2', '>f4')])
The other dictionary form permitted is a dictionary of name keys with tuple
values specifying type, offset, and an optional title. ::
>>> x = np.zeros(3, dtype={'col1':('i1',0,'title 1'), 'col2':('f4',1,'title 2')})
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'col1'), '|i1'), (('title 2', 'col2'), '>f4')])
Accessing and modifying field names
===================================
The field names are an attribute of the dtype object defining the structure.
For the last example: ::
>>> x.dtype.names
('col1', 'col2')
>>> x.dtype.names = ('x', 'y')
>>> x
array([(0, 0.0), (0, 0.0), (0, 0.0)],
dtype=[(('title 1', 'x'), '|i1'), (('title 2', 'y'), '>f4')])
>>> x.dtype.names = ('x', 'y', 'z') # wrong number of names
<type 'exceptions.ValueError'>: must replace all names at once with a sequence of length 2
Accessing field titles
====================================
The field titles provide a standard place to put associated info for fields.
They do not have to be strings. ::
>>> x.dtype.fields['x'][2]
'title 1'
Accessing multiple fields at once
====================================
You can access multiple fields at once using a list of field names: ::
>>> x = np.array([(1.5,2.5,(1.0,2.0)),(3.,4.,(4.,5.)),(1.,3.,(2.,6.))],
dtype=[('x','f4'),('y',np.float32),('value','f4',(2,2))])
Notice that `x` is created with a list of tuples. ::
>>> x[['x','y']]
array([(1.5, 2.5), (3.0, 4.0), (1.0, 3.0)],
dtype=[('x', '<f4'), ('y', '<f4')])
>>> x[['x','value']]
array([(1.5, [[1.0, 2.0], [1.0, 2.0]]), (3.0, [[4.0, 5.0], [4.0, 5.0]]),
(1.0, [[2.0, 6.0], [2.0, 6.0]])],
dtype=[('x', '<f4'), ('value', '<f4', (2, 2))])
The fields are returned in the order they are asked for.::
>>> x[['y','x']]
array([(2.5, 1.5), (4.0, 3.0), (3.0, 1.0)],
dtype=[('y', '<f4'), ('x', '<f4')])
Filling structured arrays
=========================
Structured arrays can be filled by field or row by row. ::
>>> arr = np.zeros((5,), dtype=[('var1','f8'),('var2','f8')])
>>> arr['var1'] = np.arange(5)
If you fill it in row by row, it takes a tuple
(but not a list or array!)::
>>> arr[0] = (10,20)
>>> arr
array([(10.0, 20.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)],
dtype=[('var1', '<f8'), ('var2', '<f8')])
Record Arrays
=============
For convenience, numpy provides "record arrays" which allow one to access
fields of structured arrays by attribute rather than by index. Record arrays
are structured arrays wrapped using a subclass of ndarray,
:class:`numpy.recarray`, which allows field access by attribute on the array
object, and record arrays also use a special datatype, :class:`numpy.record`,
which allows field access by attribute on the individual elements of the array.
The simplest way to create a record array is with :func:`numpy.rec.array`: ::
>>> recordarr = np.rec.array([(1,2.,'Hello'),(2,3.,"World")],
... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')])
>>> recordarr.bar
array([ 2., 3.], dtype=float32)
>>> recordarr[1:2]
rec.array([(2, 3.0, 'World')],
dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])
>>> recordarr[1:2].foo
array([2], dtype=int32)
>>> recordarr.foo[1:2]
array([2], dtype=int32)
>>> recordarr[1].baz
'World'
numpy.rec.array can convert a wide variety of arguments into record arrays,
including normal structured arrays: ::
>>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
... dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')])
>>> recordarr = np.rec.array(arr)
The numpy.rec module provides a number of other convenience functions for
creating record arrays, see :ref:`record array creation routines
<routines.array-creation.rec>`.
A record array representation of a structured array can be obtained using the
appropriate :ref:`view`: ::
>>> arr = np.array([(1,2.,'Hello'),(2,3.,"World")],
... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')])
>>> recordarr = arr.view(dtype=dtype((np.record, arr.dtype)),
... type=np.recarray)
For convenience, viewing an ndarray as type `np.recarray` will automatically
convert to `np.record` datatype, so the dtype can be left out of the view: ::
>>> recordarr = arr.view(np.recarray)
>>> recordarr.dtype
dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]))
To get back to a plain ndarray both the dtype and type must be reset. The
following view does so, taking into account the unusual case that the
recordarr was not a structured type: ::
>>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray)
Record array fields accessed by index or by attribute are returned as a record
array if the field has a structured type but as a plain ndarray otherwise. ::
>>> recordarr = np.rec.array([('Hello', (1,2)),("World", (3,4))],
... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])])
>>> type(recordarr.foo)
<type 'numpy.ndarray'>
>>> type(recordarr.bar)
<class 'numpy.core.records.recarray'>
Note that if a field has the same name as an ndarray attribute, the ndarray
attribute takes precedence. Such fields will be inaccessible by attribute but
may still be accessed by index.
""" |
"""
Define a simple format for saving numpy arrays to disk with the full
information about them.
The ``.npy`` format is the standard binary file format in NumPy for
persisting a *single* arbitrary NumPy array on disk. The format stores all
of the shape and dtype information necessary to reconstruct the array
correctly even on another machine with a different architecture.
The format is designed to be as simple as possible while achieving
its limited goals.
The ``.npz`` format is the standard format for persisting *multiple* NumPy
arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy``
files, one for each array.
Capabilities
------------
- Can represent all NumPy arrays including nested record arrays and
object arrays.
- Represents the data in its native binary form.
- Supports Fortran-contiguous arrays directly.
- Stores all of the necessary information to reconstruct the array
including shape and dtype on a machine of a different
architecture. Both little-endian and big-endian arrays are
supported, and a file with little-endian numbers will yield
a little-endian array on any machine reading the file. The
types are described in terms of their actual sizes. For example,
if a machine with a 64-bit C "long int" writes out an array with
"long ints", a reading machine with 32-bit C "long ints" will yield
an array with 64-bit integers.
- Is straightforward to reverse engineer. Datasets often live longer than
the programs that created them. A competent developer should be
able to create a solution in his preferred programming language to
read most ``.npy`` files that he has been given without much
documentation.
- Allows memory-mapping of the data. See `open_memmap`.
- Can be read from a filelike stream object instead of an actual file.
- Stores object arrays, i.e. arrays containing elements that are arbitrary
Python objects. Files with object arrays are not mmapable, but
can be read and written to disk.
Limitations
-----------
- Arbitrary subclasses of numpy.ndarray are not completely preserved.
Subclasses will be accepted for writing, but only the array data will
be written out. A regular numpy.ndarray object will be created
upon reading the file.
.. warning::
Due to limitations in the interpretation of structured dtypes, dtypes
with fields with empty names will have the names replaced by 'f0', 'f1',
etc. Such arrays will not round-trip through the format entirely
accurately. The data is intact; only the field names will differ. We are
working on a fix for this. This fix will not require a change in the
file format. The arrays with such structures can still be saved and
restored, and the correct dtype may be restored by using the
``loadedarray.view(correct_dtype)`` method.
File extensions
---------------
We recommend using the ``.npy`` and ``.npz`` extensions for files saved
in this format. This is by no means a requirement; applications may wish
to use these file formats but use an extension specific to the
application. In the absence of an obvious alternative, however,
we suggest using ``.npy`` and ``.npz``.
Version numbering
-----------------
The version numbering of these formats is independent of NumPy version
numbering. If the format is upgraded, the code in `numpy.io` will still
be able to read and write Version 1.0 files.
Format Version 1.0
------------------
The first 6 bytes are a magic string: exactly ``\\x93NUMPY``.
The next 1 byte is an unsigned byte: the major version number of the file
format, e.g. ``\\x01``.
The next 1 byte is an unsigned byte: the minor version number of the file
format, e.g. ``\\x00``. Note: the version of the file format is not tied
to the version of the numpy package.
The next 2 bytes form a little-endian unsigned short int: the length of
the header data HEADER_LEN.
The next HEADER_LEN bytes form the header data describing the array's
format. It is an ASCII string which contains a Python literal expression
of a dictionary. It is terminated by a newline (``\\n``) and padded with
spaces (``\\x20``) to make the total length of
``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment
purposes.
The dictionary contains three keys:
"descr" : dtype.descr
An object that can be passed as an argument to the `numpy.dtype`
constructor to create the array's dtype.
"fortran_order" : bool
Whether the array data is Fortran-contiguous or not. Since
Fortran-contiguous arrays are a common form of non-C-contiguity,
we allow them to be written directly to disk for efficiency.
"shape" : tuple of int
The shape of the array.
For repeatability and readability, the dictionary keys are sorted in
alphabetic order. This is for convenience only. A writer SHOULD implement
this if possible. A reader MUST NOT depend on this.
Following the header comes the array data. If the dtype contains Python
objects (i.e. ``dtype.hasobject is True``), then the data is a Python
pickle of the array. Otherwise the data is the contiguous (either C-
or Fortran-, depending on ``fortran_order``) bytes of the array.
Consumers can figure out the number of bytes by multiplying the number
of elements given by the shape (noting that ``shape=()`` means there is
1 element) by ``dtype.itemsize``.
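As an illustration only (not part of the specification), a version 1.0
header can be parsed by hand roughly as follows, assuming Python 3 and a
hypothetical file name::
    import ast
    import struct
    with open('example.npy', 'rb') as f:
        magic = f.read(6)                  # b'\\x93NUMPY'
        major, minor = f.read(1)[0], f.read(1)[0]
        header_len = struct.unpack('<H', f.read(2))[0]
        header = ast.literal_eval(f.read(header_len).decode('latin1'))
    # header is now a dict with the 'descr', 'fortran_order' and 'shape' keys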
Notes
-----
The ``.npy`` format, including reasons for creating it and a comparison of
alternatives, is described fully in the "npy-format" NEP.
""" |
"""Generic socket server classes.
This module tries to capture the various aspects of defining a server:
For socket-based servers:
- address family:
- AF_INET{,6}: IP (Internet Protocol) sockets (default)
- AF_UNIX: Unix domain sockets
- others, e.g. AF_DECNET are conceivable (see <socket.h>)
- socket type:
- SOCK_STREAM (reliable stream, e.g. TCP)
- SOCK_DGRAM (datagrams, e.g. UDP)
For request-based servers (including socket-based):
- client address verification before further looking at the request
(This is actually a hook for any processing that needs to look
at the request before anything else, e.g. logging)
- how to handle multiple requests:
- synchronous (one request is handled at a time)
- forking (each request is handled by a new process)
- threading (each request is handled by a new thread)
The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server. This is bad class design, but
it saves some typing. (There's also the issue that a deep class hierarchy
slows down method lookups.)
There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:
+------------+
| BaseServer |
+------------+
|
v
+-----------+ +------------------+
| TCPServer |------->| UnixStreamServer |
+-----------+ +------------------+
|
v
+-----------+ +--------------------+
| UDPServer |------->| UnixDatagramServer |
+-----------+ +--------------------+
Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.
Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes. For
instance, a threading UDP server class is created as follows:
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.
To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method. You can then run
various versions of the service by combining one of the server classes
with your request handler class.
The request handler class must be different for datagram or stream
services. This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
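For instance, a minimal threading echo service might be sketched as
follows (the address, port and class names here are illustrative):

    import SocketServer   # named 'socketserver' in Python 3

    class EchoHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            # Read one line from the client and echo it back.
            line = self.rfile.readline()
            self.wfile.write(line)

    server = SocketServer.ThreadingTCPServer(('localhost', 8000), EchoHandler)
    server.serve_forever()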
Of course, you still have to use your head!
For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child). In this case,
you can use a threading server, but you will probably have to use
locks to prevent two requests that come in nearly simultaneously from
applying conflicting changes to the server state.
On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested. Here a threading or forking
server is appropriate.
In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data. This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.
Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request). This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
and encryption schemes
- Standard framework for select-based multiplexing
XXX Open problems:
- What to do with out-of-band data?
BaseServer:
- split generic "request" functionality out into BaseServer class.
Copyright (C) 2000 NAME <EMAIL>
Example: read entries from a SQL database (requires overriding
get_request() to return a table entry from the database). The
entry is then processed by a RequestHandlerClass.
""" |
"""
========
Glossary
========
.. glossary::
along an axis
Axes are defined for arrays with more than one dimension. A
2-dimensional array has two corresponding axes: the first running
vertically downwards across rows (axis 0), and the second running
horizontally across columns (axis 1).
Many operations can take place along one of these axes. For example,
we can sum each row of an array, in which case we operate along
columns, or axis 1::
>>> x = np.arange(12).reshape((3,4))
>>> x
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> x.sum(axis=1)
array([ 6, 22, 38])
array
A homogeneous container of numerical elements. Each element in the
array occupies a fixed amount of memory (hence homogeneous), and
can be a numerical element of a single type (such as float, int
or complex) or a combination (such as ``(float, int, float)``). Each
array has an associated data-type (or ``dtype``), which describes
the numerical type of its elements::
>>> x = np.array([1, 2, 3], float)
>>> x
array([ 1., 2., 3.])
>>> x.dtype # floating point number, 64 bits of memory per element
dtype('float64')
# More complicated data type: each array element is a combination of
# an integer and a floating point number
>>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)])
array([(1, 2.0), (3, 4.0)],
dtype=[('x', '<i4'), ('y', '<f8')])
Fast element-wise operations, called `ufuncs`_, operate on arrays.
array_like
Any sequence that can be interpreted as an ndarray. This includes
nested lists, tuples, scalars and existing arrays.
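For example, a nested list can be converted with ``np.asarray``::
>>> np.asarray([[1, 2], [3, 4]])
array([[1, 2],
       [3, 4]])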
attribute
A property of an object that can be accessed using ``obj.attribute``,
e.g., ``shape`` is an attribute of an array::
>>> x = np.array([1, 2, 3])
>>> x.shape
(3,)
BLAS
`Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_
broadcast
NumPy can do operations on arrays whose shapes are mismatched::
>>> x = np.array([1, 2])
>>> y = np.array([[3], [4]])
>>> x
array([1, 2])
>>> y
array([[3],
[4]])
>>> x + y
array([[4, 5],
[5, 6]])
See `doc.broadcasting`_ for more information.
C order
See `row-major`
column-major
A way to represent items in a N-dimensional array in the 1-dimensional
computer memory. In column-major order, the leftmost index "varies the
fastest": for example the array::
[[1, 2, 3],
[4, 5, 6]]
is represented in the column-major order as::
[1, 4, 2, 5, 3, 6]
Column-major order is also known as the Fortran order, as the Fortran
programming language uses it.
decorator
An operator that transforms a function. For example, a ``log``
decorator may be defined to print debugging information upon
function execution::
>>> def log(f):
... def new_logging_func(*args, **kwargs):
... print "Logging call with parameters:", args, kwargs
... return f(*args, **kwargs)
...
... return new_logging_func
Now, when we define a function, we can "decorate" it using ``log``::
>>> @log
... def add(a, b):
... return a + b
Calling ``add`` then yields:
>>> add(1, 2)
Logging call with parameters: (1, 2) {}
3
dictionary
Resembling a language dictionary, which provides a mapping between
words and descriptions thereof, a Python dictionary is a mapping
between two objects::
>>> x = {1: 'one', 'two': [1, 2]}
Here, `x` is a dictionary mapping keys to values, in this case
the integer 1 to the string "one", and the string "two" to
the list ``[1, 2]``. The values may be accessed using their
corresponding keys::
>>> x[1]
'one'
>>> x['two']
[1, 2]
Note that dictionaries are not stored in any specific order. Also,
most mutable (see *immutable* below) objects, such as lists, may not
be used as keys.
For more information on dictionaries, read the
`Python tutorial <http://docs.python.org/tut>`_.
Fortran order
See `column-major`
flattened
Collapsed to a one-dimensional array. See `ndarray.flatten`_ for details.
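For example::
>>> np.array([[1, 2], [3, 4]]).flatten()
array([1, 2, 3, 4])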
immutable
An object that cannot be modified after it has been created is called
immutable. Two common examples are strings and tuples.
instance
A class definition gives the blueprint for constructing an object::
>>> class House(object):
... wall_colour = 'white'
Yet, we have to *build* a house before it exists::
>>> h = House() # build a house
Now, ``h`` is called a ``House`` instance. An instance is therefore
a specific realisation of a class.
iterable
A sequence that allows "walking" (iterating) over items, typically
using a loop such as::
>>> x = [1, 2, 3]
>>> [item**2 for item in x]
[1, 4, 9]
It is often used in combination with ``enumerate``::
>>> keys = ['a','b','c']
>>> for n, k in enumerate(keys):
... print "Key %d: %s" % (n, k)
...
Key 0: a
Key 1: b
Key 2: c
list
A Python container that can hold any number of objects or items.
The items do not have to be of the same type, and can even be
lists themselves::
>>> x = [2, 2.0, "two", [2, 2.0]]
The list `x` contains 4 items, each of which can be accessed individually::
>>> x[2] # the string 'two'
'two'
>>> x[3] # a list, containing an integer 2 and a float 2.0
[2, 2.0]
It is also possible to select more than one item at a time,
using *slicing*::
>>> x[0:2] # or, equivalently, x[:2]
[2, 2.0]
In code, arrays are often conveniently expressed as nested lists::
>>> np.array([[1, 2], [3, 4]])
array([[1, 2],
[3, 4]])
For more information, read the section on lists in the `Python
tutorial <http://docs.python.org/tut>`_. For a mapping
type (key-value), see *dictionary*.
mask
A boolean array, used to select only certain elements for an operation::
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
>>> mask = (x > 2)
>>> mask
array([False, False, False, True, True], dtype=bool)
>>> x[mask] = -1
>>> x
array([ 0, 1, 2, -1, -1])
masked array
An array that suppresses values indicated by a mask::
>>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True])
>>> x
masked_array(data = [-- 2.0 --],
mask = [ True False True],
fill_value = 1e+20)
<BLANKLINE>
>>> x + [1, 2, 3]
masked_array(data = [-- 4.0 --],
mask = [ True False True],
fill_value = 1e+20)
<BLANKLINE>
Masked arrays are often used when operating on arrays containing
missing or invalid entries.
matrix
A 2-dimensional ndarray that preserves its two-dimensional nature
throughout operations. It has certain special operations, such as ``*``
(matrix multiplication) and ``**`` (matrix power), defined::
>>> x = np.mat([[1, 2], [3, 4]])
>>> x
matrix([[1, 2],
[3, 4]])
>>> x**2
matrix([[ 7, 10],
[15, 22]])
method
A function associated with an object. For example, each ndarray has a
method called ``repeat``::
>>> x = np.array([1, 2, 3])
>>> x.repeat(2)
array([1, 1, 2, 2, 3, 3])
ndarray
See *array*.
record array
An `ndarray`_ with `structured data type`_ which has been subclassed as
np.recarray and whose dtype is of type np.record, making the
fields of its data type accessible by attribute.
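For example, fields can be accessed as attributes::
>>> r = np.rec.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)])
>>> r.x
array([1, 3])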
reference
If ``a`` is a reference to ``b``, then ``(a is b) == True``. Therefore,
``a`` and ``b`` are different names for the same Python object.
row-major
A way to represent items in a N-dimensional array in the 1-dimensional
computer memory. In row-major order, the rightmost index "varies
the fastest": for example the array::
[[1, 2, 3],
[4, 5, 6]]
is represented in the row-major order as::
[1, 2, 3, 4, 5, 6]
Row-major order is also known as the C order, as the C programming
language uses it. New Numpy arrays are by default in row-major order.
self
Often seen in method signatures, ``self`` refers to the instance
of the associated class. For example:
>>> class Paintbrush(object):
... color = 'blue'
...
... def paint(self):
... print "Painting the city %s!" % self.color
...
>>> p = Paintbrush()
>>> p.color = 'red'
>>> p.paint() # self refers to 'p'
Painting the city red!
slice
Used to select only certain elements from a sequence::
>>> x = range(5)
>>> x
[0, 1, 2, 3, 4]
>>> x[1:3] # slice from 1 to 3 (excluding 3 itself)
[1, 2]
>>> x[1:5:2] # slice from 1 to 5, but skipping every second element
[1, 3]
>>> x[::-1] # slice a sequence in reverse
[4, 3, 2, 1, 0]
Arrays may have more than one dimension, each of which can be
sliced individually::
>>> x = np.array([[1, 2], [3, 4]])
>>> x
array([[1, 2],
[3, 4]])
>>> x[:, 1]
array([2, 4])
structured data type
A data type composed of other data types.
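For example, a data type holding an integer field ``x`` and a
floating point field ``y``::
>>> dt = np.dtype([('x', int), ('y', float)])
>>> dt['y']
dtype('float64')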
tuple
A sequence that may contain a variable number of elements of any
kind. A tuple is immutable, i.e., once constructed it cannot be
changed. Similar to a list, it can be indexed and sliced::
>>> x = (1, 'one', [1, 2])
>>> x
(1, 'one', [1, 2])
>>> x[0]
1
>>> x[:2]
(1, 'one')
A useful concept is "tuple unpacking", which allows variables to
be assigned to the contents of a tuple::
>>> x, y = (1, 2)
>>> x, y = 1, 2
This is often used when a function returns multiple values:
>>> def return_many():
... return 1, 'alpha', None
>>> a, b, c = return_many()
>>> a, b, c
(1, 'alpha', None)
>>> a
1
>>> b
'alpha'
ufunc
Universal function. A fast element-wise array operation. Examples include
``add``, ``sin`` and ``logical_or``.
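For example::
>>> np.add([1, 2], [3, 4])
array([4, 6])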
view
An array that does not own its data, but refers to another array's
data instead. For example, we may create a view that only shows
every second element of another array::
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
>>> y = x[::2]
>>> y
array([0, 2, 4])
>>> x[0] = 3 # changing x changes y as well, since y is a view on x
>>> y
array([3, 2, 4])
wrapper
Python is a high-level (highly abstracted, or English-like) language.
This abstraction comes at a price in execution speed, and sometimes
it becomes necessary to use lower level languages to do fast
computations. A wrapper is code that provides a bridge between
the high- and the low-level languages, allowing, e.g., Python to
execute code written in C or Fortran.
Examples include ctypes, SWIG and Cython (which wraps C and C++)
and f2py (which wraps Fortran).
""" |
# -*- coding: utf-8 -*-
# Spearmint
#
# Academic and Non-Commercial Research Use Software License and Terms
# of Use
#
# Spearmint is a software package to perform Bayesian optimization
# according to specific algorithms (the “Software”). The Software is
# designed to automatically run experiments (thus the code name
# 'spearmint') in a manner that iteratively adjusts a number of
# parameters so as to minimize some objective in as few runs as
# possible.
#
# The Software was developed by NAME NAME and
# NAME at Harvard University, NAME at the
# University of Toronto (“Toronto”), and NAME at the
# Université de Sherbrooke (“Sherbrooke”), which assigned its rights
# in the Software to Socpra Sciences et Génie
# S.E.C. (“Socpra”). Pursuant to an inter-institutional agreement
# between the parties, it is distributed for free academic and
# non-commercial research use by the President and Fellows of Harvard
# College (“Harvard”).
#
# Using the Software indicates your agreement to be bound by the terms
# of this Software Use Agreement (“Agreement”). Absent your agreement
# to the terms below, you (the “End User”) have no rights to hold or
# use the Software whatsoever.
#
# Harvard agrees to grant hereunder the limited non-exclusive license
# to End User for the use of the Software in the performance of End
# User’s internal, non-commercial research and academic use at End
# User’s academic or not-for-profit research institution
# (“Institution”) on the following terms and conditions:
#
# 1. NO REDISTRIBUTION. The Software remains the property of Harvard,
# Toronto and Socpra, and except as set forth in Section 4, End User
# shall not publish, distribute, or otherwise transfer or make
# available the Software to any other party.
#
# 2. NO COMMERCIAL USE. End User shall not use the Software for
# commercial purposes and any such use of the Software is expressly
# prohibited. This includes, but is not limited to, use of the
# Software in fee-for-service arrangements, core facilities or
# laboratories or to provide research services to (or in collaboration
# with) third parties for a fee, and in industry-sponsored
# collaborative research projects where any commercial rights are
# granted to the sponsor. If End User wishes to use the Software for
# commercial purposes or for any other restricted purpose, End User
# must execute a separate license agreement with Harvard.
#
# To request use of the Software for commercial purposes, please
# contact:
#
# Office of Technology Development
# Harvard University
# Smith Campus Center, Suite 727E
# 1350 Massachusetts Avenue
# Cambridge, MA 02138 USA
# Telephone: (617) 495-3067
# Facsimile: (617) 495-9568
# E-mail: EMAIL
#
# 3. OWNERSHIP AND COPYRIGHT NOTICE. Harvard, Toronto and Socpra own
# all intellectual property in the Software. End User shall gain no
# ownership to the Software. End User shall not remove or delete and
# shall retain in the Software, in any modifications to Software and
# in any Derivative Works, the copyright, trademark, or other notices
# pertaining to Software as provided with the Software.
#
# 4. DERIVATIVE WORKS. End User may create and use Derivative Works,
# as such term is defined under U.S. copyright laws, provided that any
# such Derivative Works shall be restricted to non-commercial,
# internal research and academic use at End User’s Institution. End
# User may distribute Derivative Works to other Institutions solely
# for the performance of non-commercial, internal research and
# academic use on terms substantially similar to this License and
# Terms of Use.
#
# 5. FEEDBACK. In order to improve the Software, comments from End
# Users may be useful. End User agrees to provide Harvard with
# feedback on the End User’s use of the Software (e.g., any bugs in
# the Software, the user experience, etc.). Harvard is permitted to
# use such information provided by End User in making changes and
# improvements to the Software without compensation or an accounting
# to End User.
#
# 6. NON ASSERT. End User acknowledges that Harvard, Toronto and/or
# Sherbrooke or Socpra may develop modifications to the Software that
# may be based on the feedback provided by End User under Section 5
# above. Harvard, Toronto and Sherbrooke/Socpra shall not be
# restricted in any way by End User regarding their use of such
# information. End User acknowledges the right of Harvard, Toronto
# and Sherbrooke/Socpra to prepare, publish, display, reproduce,
# transmit and or use modifications to the Software that may be
# substantially similar or functionally equivalent to End User’s
# modifications and/or improvements if any. In the event that End
# User obtains patent protection for any modification or improvement
# to Software, End User agrees not to allege or enjoin infringement of
# End User’s patent against Harvard, Toronto or Sherbrooke or Socpra,
# or any of the researchers, medical or research staff, officers,
# directors and employees of those institutions.
#
# 7. PUBLICATION & ATTRIBUTION. End User has the right to publish,
# present, or share results from the use of the Software. In
# accordance with customary academic practice, End User will
# acknowledge Harvard, Toronto and Sherbrooke/Socpra as the providers
# of the Software and may cite the relevant reference(s) from the
# following list of publications:
#
# Practical Bayesian Optimization of Machine Learning Algorithms
# NAME, NAME and NAME Neural Information Processing Systems, 2012
#
# Multi-Task Bayesian Optimization
# NAME, NAME and NAME Advances in Neural Information Processing Systems, 2013
#
# Input Warping for Bayesian Optimization of Non-stationary Functions
# NAME, NAME, NAME and NAME Preprint, arXiv:1402.0929, http://arxiv.org/abs/1402.0929, 2013
#
# Bayesian Optimization and Semiparametric Models with Applications to
# Assistive Technology NAME, PhD Thesis, University of
# Toronto, 2013
#
# 8. NO WARRANTIES. THE SOFTWARE IS PROVIDED "AS IS." TO THE FULLEST
# EXTENT PERMITTED BY LAW, HARVARD, TORONTO AND SHERBROOKE AND SOCPRA
# HEREBY DISCLAIM ALL WARRANTIES OF ANY KIND (EXPRESS, IMPLIED OR
# OTHERWISE) REGARDING THE SOFTWARE, INCLUDING BUT NOT LIMITED TO ANY
# IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE, OWNERSHIP, AND NON-INFRINGEMENT. HARVARD, TORONTO AND
# SHERBROOKE AND SOCPRA MAKE NO WARRANTY ABOUT THE ACCURACY,
# RELIABILITY, COMPLETENESS, TIMELINESS, SUFFICIENCY OR QUALITY OF THE
# SOFTWARE. HARVARD, TORONTO AND SHERBROOKE AND SOCPRA DO NOT WARRANT
# THAT THE SOFTWARE WILL OPERATE WITHOUT ERROR OR INTERRUPTION.
#
# 9. LIMITATIONS OF LIABILITY AND REMEDIES. USE OF THE SOFTWARE IS AT
# END USER’S OWN RISK. IF END USER IS DISSATISFIED WITH THE SOFTWARE,
# ITS EXCLUSIVE REMEDY IS TO STOP USING IT. IN NO EVENT SHALL
# HARVARD, TORONTO OR SHERBROOKE OR SOCPRA BE LIABLE TO END USER OR
# ITS INSTITUTION, IN CONTRACT, TORT OR OTHERWISE, FOR ANY DIRECT,
# INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR OTHER
# DAMAGES OF ANY KIND WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH
# THE SOFTWARE, EVEN IF HARVARD, TORONTO OR SHERBROOKE OR SOCPRA IS
# NEGLIGENT OR OTHERWISE AT FAULT, AND REGARDLESS OF WHETHER HARVARD,
# TORONTO OR SHERBROOKE OR SOCPRA IS ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGES.
#
# 10. INDEMNIFICATION. To the extent permitted by law, End User shall
# indemnify, defend and hold harmless Harvard, Toronto and Sherbrooke
# and Socpra, their corporate affiliates, current or future directors,
# trustees, officers, faculty, medical and professional staff,
# employees, students and agents and their respective successors,
# heirs and assigns (the "Indemnitees"), against any liability,
# damage, loss or expense (including reasonable attorney's fees and
# expenses of litigation) incurred by or imposed upon the Indemnitees
# or any one of them in connection with any claims, suits, actions,
# demands or judgments arising from End User’s breach of this
# Agreement or its Institution’s use of the Software except to the
# extent caused by the gross negligence or willful misconduct of
# Harvard, Toronto or Sherbrooke or Socpra. This indemnification
# provision shall survive expiration or termination of this Agreement.
#
# 11. GOVERNING LAW. This Agreement shall be construed and governed by
# the laws of the Commonwealth of Massachusetts regardless of
# otherwise applicable choice of law standards.
#
# 12. NON-USE OF NAME. Nothing in this License and Terms of Use shall
# be construed as granting End Users or their Institutions any rights
# or licenses to use any trademarks, service marks or logos associated
# with the Software. You may not use the terms “Harvard” or
# “University of Toronto” or “Université de Sherbrooke” or “Socpra
# Sciences et Génie S.E.C.” (or a substantially similar term) in any
# way that is inconsistent with the permitted uses described
# herein. You agree not to use any name or emblem of Harvard, Toronto
# or Sherbrooke, or any of their subdivisions for any purpose, or to
# falsely suggest any relationship between End User (or its
# Institution) and Harvard, Toronto and/or Sherbrooke, or in any
# manner that would infringe or violate any of their rights.
#
# 13. End User represents and warrants that it has the legal authority
# to enter into this License and Terms of Use on behalf of itself and
# its Institution.
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# adapted from http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
# -thepaul
# This is an implementation of wcwidth() and wcswidth() (defined in
# IEEE Std 1003.1-2001) for Unicode.
#
# http://www.opengroup.org/onlinepubs/007904975/functions/wcwidth.html
# http://www.opengroup.org/onlinepubs/007904975/functions/wcswidth.html
#
# In fixed-width output devices, Latin characters all occupy a single
# "cell" position of equal width, whereas ideographic CJK characters
# occupy two such cells. Interoperability between terminal-line
# applications and (teletype-style) character terminals using the
# UTF-8 encoding requires agreement on which character should advance
# the cursor by how many cell positions. No established formal
# standards exist at present on which Unicode character shall occupy
# how many cell positions on character terminals. These routines are
# a first attempt of defining such behavior based on simple rules
# applied to data provided by the Unicode Consortium.
#
# For some graphical characters, the Unicode standard explicitly
# defines a character-cell width via the definition of the East Asian
# FullWidth (F), Wide (W), Half-width (H), and Narrow (Na) classes.
# In all these cases, there is no ambiguity about which width a
# terminal shall use. For characters in the East Asian Ambiguous (A)
# class, the width choice depends purely on a preference of backward
# compatibility with either historic CJK or Western practice.
# Choosing single-width for these characters is easy to justify as
# the appropriate long-term solution, as the CJK practice of
# displaying these characters as double-width comes from historic
# implementation simplicity (8-bit encoded characters were displayed
# single-width and 16-bit ones double-width, even for Greek,
# Cyrillic, etc.) and not any typographic considerations.
#
# Much less clear is the choice of width for the Not East Asian
# (Neutral) class. Existing practice does not dictate a width for any
# of these characters. It would nevertheless make sense
# typographically to allocate two character cells to characters such
# as for instance EM SPACE or VOLUME INTEGRAL, which cannot be
# represented adequately with a single-width glyph. The following
# routines at present merely assign a single-cell width to all
# neutral characters, in the interest of simplicity. This is not
# entirely satisfactory and should be reconsidered before
# establishing a formal standard in this area. At the moment, the
# decision which Not East Asian (Neutral) characters should be
# represented by double-width glyphs cannot yet be answered by
# applying a simple rule from the Unicode database content. Setting
# up a proper standard for the behavior of UTF-8 character terminals
# will require a careful analysis not only of each Unicode character,
# but also of each presentation form, something the author of these
# routines has so far avoided doing.
#
# http://www.unicode.org/unicode/reports/tr11/
#
# NAME -- 2007-05-26 (Unicode 5.0)
#
# Permission to use, copy, modify, and distribute this software
# for any purpose and without fee is hereby granted. The author
# disclaims all warranties with regard to this software.
#
# Latest C version: http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
# auxiliary function for binary search in interval table
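#
# A minimal sketch of such a lookup (illustrative only; the actual table
# layout and function name in this module may differ), using the bisect
# module on a sorted list of (first, last) code point ranges:
#
#   from bisect import bisect_right
#
#   def in_interval_table(ucs, table):
#       # Find the last interval whose start is <= ucs, then test membership.
#       i = bisect_right(table, (ucs, 0x10FFFF)) - 1
#       return i >= 0 and table[i][0] <= ucs <= table[i][1]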
|
"""
TestCmd.py: a testing framework for commands and scripts.
The TestCmd module provides a framework for portable automated testing
of executable commands and scripts (in any language, not just Python),
especially commands and scripts that require file system interaction.
In addition to running tests and evaluating conditions, the TestCmd
module manages and cleans up one or more temporary workspace
directories, and provides methods for creating files and directories in
those workspace directories from in-line data (here-documents), allowing
tests to be completely self-contained.
A TestCmd environment object is created via the usual invocation:
import TestCmd
test = TestCmd.TestCmd()
There are a bunch of keyword arguments available at instantiation:
test = TestCmd.TestCmd(description = 'string',
program = 'program_or_script_to_test',
interpreter = 'script_interpreter',
workdir = 'prefix',
subdir = 'subdir',
verbose = Boolean,
match = default_match_function,
diff = default_diff_function,
combine = Boolean)
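For example, a minimal self-contained test of a script might look like
this (the program name and expected output are illustrative):
    import TestCmd
    test = TestCmd.TestCmd(program = 'hello.py',
                           interpreter = 'python',
                           workdir = '')
    test.run(arguments = '--name world')
    if test.stdout() != "hello world\n":
        test.fail_test()
    test.pass_test()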
There are a bunch of methods that let you do different things:
test.verbose_set(1)
test.description_set('string')
test.program_set('program_or_script_to_test')
test.interpreter_set('script_interpreter')
test.interpreter_set(['script_interpreter', 'arg'])
test.workdir_set('prefix')
test.workdir_set('')
test.workpath('file')
test.workpath('subdir', 'file')
test.subdir('subdir', ...)
test.rmdir('subdir', ...)
test.write('file', "contents\n")
test.write(['subdir', 'file'], "contents\n")
test.read('file')
test.read(['subdir', 'file'])
test.read('file', mode)
test.read(['subdir', 'file'], mode)
test.writable('dir', 1)
test.writable('dir', None)
test.preserve(condition, ...)
test.cleanup(condition)
test.command_args(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program')
test.run(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program',
chdir = 'directory_to_chdir_to',
stdin = 'input to feed to the program\n',
universal_newlines = True)
p = test.start(program = 'program_or_script_to_run',
interpreter = 'script_interpreter',
arguments = 'arguments to pass to program',
universal_newlines = None)
test.finish(self, p)
test.pass_test()
test.pass_test(condition)
test.pass_test(condition, function)
test.fail_test()
test.fail_test(condition)
test.fail_test(condition, function)
test.fail_test(condition, function, skip)
test.no_result()
test.no_result(condition)
test.no_result(condition, function)
test.no_result(condition, function, skip)
test.stdout()
test.stdout(run)
test.stderr()
test.stderr(run)
test.symlink(target, link)
test.banner(string)
test.banner(string, width)
test.diff(actual, expected)
test.match(actual, expected)
test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n")
test.match_exact(["actual 1\n", "actual 2\n"],
["expected 1\n", "expected 2\n"])
test.match_re("actual 1\nactual 2\n", regex_string)
test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes)
test.match_re_dotall("actual 1\nactual 2\n", regex_string)
test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes)
test.tempdir()
test.tempdir('temporary-directory')
test.sleep()
test.sleep(seconds)
test.where_is('foo')
test.where_is('foo', 'PATH1:PATH2')
test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
test.unlink('file')
test.unlink('subdir', 'file')
The TestCmd module provides pass_test(), fail_test(), and no_result()
unbound functions that report test results for use with the Aegis change
management system. These methods terminate the test immediately,
reporting PASSED, FAILED, or NO RESULT respectively, and exiting with
status 0 (success), 1 or 2 respectively. This allows for a distinction
between an actual failed test and a test that could not be properly
evaluated because of an external condition (such as a full file system
or incorrect permissions).
import TestCmd
TestCmd.pass_test()
TestCmd.pass_test(condition)
TestCmd.pass_test(condition, function)
TestCmd.fail_test()
TestCmd.fail_test(condition)
TestCmd.fail_test(condition, function)
TestCmd.fail_test(condition, function, skip)
TestCmd.no_result()
TestCmd.no_result(condition)
TestCmd.no_result(condition, function)
TestCmd.no_result(condition, function, skip)
The TestCmd module also provides unbound functions that handle matching
in the same way as the match_*() methods described above.
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_exact)
test = TestCmd.TestCmd(match = TestCmd.match_re)
test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)
The TestCmd module provides unbound functions that can be used for the
"diff" argument to TestCmd.TestCmd instantiation:
import TestCmd
test = TestCmd.TestCmd(match = TestCmd.match_re,
diff = TestCmd.diff_re)
test = TestCmd.TestCmd(diff = TestCmd.simple_diff)
The "diff" argument can also be used with standard difflib functions:
import difflib
test = TestCmd.TestCmd(diff = difflib.context_diff)
test = TestCmd.TestCmd(diff = difflib.unified_diff)
Lastly, the where_is() method also exists in an unbound function
version.
import TestCmd
TestCmd.where_is('foo')
TestCmd.where_is('foo', 'PATH1:PATH2')
TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
""" |
#!/usr/bin/env python3
############################################################################
#
# MODULE: i.cutlinesmod
# AUTHOR(S): Moritz NAME, with help of NAME; modified by NAME
# PURPOSE: Create tiles the borders of which do not cut across semantically
# meaningful objects
# COPYRIGHT: (C) 1997-2018 by the GRASS Development Team
#
# This program is free software under the GNU General Public
# License (>=v2). Read the file COPYING that comes with GRASS
# for details.
#############################################################################
#%Module
#% description: Creates semantically meaningful tile borders
#% keyword: imagery
#% keyword: tiling
#%end
#
#%option G_OPT_R_INPUT
#% description: Raster map to use as input for tiling
#% required: yes
#%end
#
#%option G_OPT_V_OUTPUT
#% description: Name of output vector map with cutline polygons
#%end
#
#%option
#% key: number_lines
#% type: integer
#% description: Number of tile border lines in each direction
#% required: yes
#%end
#
#%option
#% key: edge_detection
#% type: string
#% description: Edge detection algorithm to use
#% options: zc,canny
#% answer: zc
#% required: yes
#%end
#
#%option G_OPT_V_INPUTS
#% key: existing_cutlines
#% label: Input vector maps with existing cutlines
#% required: no
#%end
#
#%option
#% key: no_edge_friction
#% type: integer
#% description: Additional friction for non-edge pixels
#% required: yes
#% answer: 5
#%end
#
#%option
#% key: lane_border_multiplier
#% type: integer
#% description: Multiplier for borders of lanes compared to non-edge pixels
#% required: yes
#% answer: 10
#%end
#
#%option
#% key: min_tile_size
#% type: integer
#% description: Minimum size of tiles in map units
#% required: no
#%end
#
#%option
#% key: zc_threshold
#% type: double
#% label: Sensitivity of Gaussian filter (i.zc)
#% answer: 1
#% required: no
#% guisection: Zero-crossing
#%end
#
#%option
#% key: zc_width
#% type: integer
#% label: x-y extent of the Gaussian filter (i.zc)
#% answer: 9
#% required: no
#% guisection: Zero-crossing
#%end
#
#%option
#% key: canny_low_threshold
#% type: double
#% label: Low threshold for edges (i.edge)
#% answer: 3
#% required: no
#% guisection: Canny
#%end
#
#%option
#% key: canny_high_threshold
#% type: double
#% label: High threshold for edges (i.edge)
#% answer: 10
#% required: no
#% guisection: Canny
#%end
#
#%option
#% key: canny_sigma
#% type: double
#% label: Kernel radius (i.edge)
#% answer: 2
#% required: no
#% guisection: Canny
#%end
#
#%option
#% key: tile_width
#% type: integer
#% description: Width of tiles for tiled edge detection (pixels)
#% required: no
#% guisection: Parallel processing
#%end
#
#%option
#% key: tile_height
#% type: integer
#% description: Height of tiles for tiled edge detection (pixels)
#% required: no
#% guisection: Parallel processing
#%end
#
#%option
#% key: overlap
#% type: integer
#% description: Overlap between tiles for tiled edge detection (pixels)
#% required: no
#% answer: 1
#% guisection: Parallel processing
#%end
#
#%option
#% key: processes
#% type: integer
#% description: Number of parallel processes
#% answer: 1
#% required: yes
#% guisection: Parallel processing
#%end
#
#%option
#% key: memory
#% type: integer
#% description: RAM memory available (in MB)
#% answer: 300
#% required: yes
#%end
#
#%rules
#% collective: tile_width, tile_height, overlap
#%end
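#
# A hypothetical invocation from Python (map names are illustrative and
# assume an active GRASS session with this addon installed):
#
#   import grass.script as gs
#
#   gs.run_command('i.cutlinesmod', input='ortho', output='cutlines',
#                  number_lines=10, edge_detection='canny',
#                  processes=4, memory=2000)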
|
"""
Simple config
=============
Although CherryPy uses the :mod:`Python logging module <logging>`, it does so
behind the scenes so that simple logging is simple, but complicated logging
is still possible. "Simple" logging means that you can log to the screen
(i.e. console/stdout) or to a file, and that you can easily have separate
error and access log files.
Here are the simplified logging settings. You use these by adding lines to
your config file or dict. You should set these at either the global level or
per application (see next), but generally not both.
* ``log.screen``: Set this to True to have both "error" and "access" messages
printed to stdout.
* ``log.access_file``: Set this to an absolute filename where you want
"access" messages written.
* ``log.error_file``: Set this to an absolute filename where you want "error"
messages written.
Many events are automatically logged; to log your own application events, call
:func:`cherrypy.log`.
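For example, a global configuration dict using these settings might look
like this (the file paths are illustrative)::
    import cherrypy
    cherrypy.config.update({
        'log.screen': False,
        'log.access_file': '/var/log/myapp/access.log',
        'log.error_file': '/var/log/myapp/error.log',
    })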
Architecture
============
Separate scopes
---------------
CherryPy provides log managers at both the global and application layers.
This means you can have one set of logging rules for your entire site,
and another set of rules specific to each application. The global log
manager is found at :func:`cherrypy.log`, and the log manager for each
application is found at :attr:`app.log<cherrypy._cptree.Application.log>`.
If you're inside a request, the latter is reachable from
``cherrypy.request.app.log``; if you're outside a request, you'll have to obtain
a reference to the ``app``: either the return value of
:func:`tree.mount()<cherrypy._cptree.Tree.mount>` or, if you used
:func:`quickstart()<cherrypy.quickstart>` instead, via ``cherrypy.tree.apps['/']``.
By default, the global logs are named "cherrypy.error" and "cherrypy.access",
and the application logs are named "cherrypy.error.2378745" and
"cherrypy.access.2378745" (the number is the id of the Application object).
This means that the application logs "bubble up" to the site logs, so if your
application has no log handlers, the site-level handlers will still log the
messages.
Errors vs. Access
-----------------
Each log manager handles both "access" messages (one per HTTP request) and
"error" messages (everything else). Note that the "error" log is not just for
errors! The format of access messages is highly formalized, but the error log
isn't--it receives messages from a variety of sources (including full error
tracebacks, if enabled).
Custom Handlers
===============
The simple settings above work by manipulating Python's standard :mod:`logging`
module. So when you need something more complex, the full power of the standard
module is yours to exploit. You can borrow or create custom handlers, formats,
filters, and much more. Here's an example that skips the standard FileHandler
and uses a RotatingFileHandler instead:
::
from logging import DEBUG, handlers

from cherrypy import _cplogging

log = app.log
# Remove the default FileHandlers if present.
log.error_file = ""
log.access_file = ""
maxBytes = getattr(log, "rot_maxBytes", 10000000)
backupCount = getattr(log, "rot_backupCount", 1000)
# Make a new RotatingFileHandler for the error log.
fname = getattr(log, "rot_error_file", "error.log")
h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
h.setLevel(DEBUG)
h.setFormatter(_cplogging.logfmt)
log.error_log.addHandler(h)
# Make a new RotatingFileHandler for the access log.
fname = getattr(log, "rot_access_file", "access.log")
h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
h.setLevel(DEBUG)
h.setFormatter(_cplogging.logfmt)
log.access_log.addHandler(h)
The ``rot_*`` attributes are pulled straight from the application log object.
Since "log.*" config entries simply set attributes on the log object, you can
add custom attributes to your heart's content. Note that these handlers are
used *instead* of the default, simple handlers outlined above (so don't set
the "log.error_file" config entry, for example).
""" |
"""
This module contains the machinery handling assumptions.
All symbolic objects have assumption attributes that can be accessed via
.is_<assumption name> attribute.
Assumptions determine certain properties of symbolic objects and can
have 3 possible values: True, False, None. True is returned if the
object has the property and False is returned if it doesn't or can't
(i.e. doesn't make sense):
>>> from sympy import I
>>> I.is_algebraic
True
>>> I.is_real
False
>>> I.is_prime
False
When the property cannot be determined (or when a method is not
implemented) None will be returned, e.g. a generic symbol, x, may or
may not be positive so a value of None is returned for x.is_positive.
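For example, printing the attribute of a plain symbol shows the
undetermined case:
>>> from sympy import Symbol
>>> x = Symbol('x')
>>> print(x.is_positive)
None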
By default, when no property is specified, all symbolic values are assumed
to lie in the largest set in the given context. For example, a symbol that
is declared integer is also real, complex, etc.
Here follows a list of possible assumption names:
.. glossary::
commutative
object commutes with any other object with
respect to multiplication operation.
complex
object can have only values from the set
of complex numbers.
imaginary
object value is a number that can be written as a real
number multiplied by the imaginary unit ``I``. See
[3]_. Please note, that ``0`` is not considered to be an
imaginary number, see
`issue #7649 <https://github.com/sympy/sympy/issues/7649>`_.
real
object can have only values from the set
of real numbers.
integer
object can have only values from the set
of integers.
odd
even
object can have only values from the set of
odd (even) integers [2]_.
prime
object is a natural number greater than ``1`` that has
no positive divisors other than ``1`` and itself. See [6]_.
composite
object is a positive integer that has at least one positive
divisor other than ``1`` or the number itself. See [4]_.
zero
nonzero
object is zero (not zero).
rational
object can have only values from the set
of rationals.
algebraic
object can have only values from the set
of algebraic numbers [11]_.
transcendental
object can have only values from the set
of transcendental numbers [10]_.
irrational
object value cannot be represented exactly by Rational, see [5]_.
finite
infinite
object absolute value is bounded (its value can be
arbitrarily large). See [7]_, [8]_, [9]_.
negative
nonnegative
object can have only negative (only
nonnegative) values [1]_.
positive
nonpositive
object can have only positive (only
nonpositive) values.
hermitian
antihermitian
object belongs to the field of hermitian
(antihermitian) operators.
Examples
========
>>> from sympy import Symbol
>>> x = Symbol('x', real = True); x
x
>>> x.is_real
True
>>> x.is_complex
True
See Also
========
.. seealso::
:py:class:`sympy.core.numbers.ImaginaryUnit`
:py:class:`sympy.core.numbers.Zero`
:py:class:`sympy.core.numbers.One`
Notes
=====
Assumption values are stored in obj._assumptions dictionary or
are returned by getter methods (with property decorators) or are
attributes of objects/classes.
References
==========
.. [1] http://en.wikipedia.org/wiki/Negative_number
.. [2] http://en.wikipedia.org/wiki/Parity_%28mathematics%29
.. [3] http://en.wikipedia.org/wiki/Imaginary_number
.. [4] http://en.wikipedia.org/wiki/Composite_number
.. [5] http://en.wikipedia.org/wiki/Irrational_number
.. [6] http://en.wikipedia.org/wiki/Prime_number
.. [7] http://en.wikipedia.org/wiki/Finite
.. [8] https://docs.python.org/3/library/math.html#math.isfinite
.. [9] http://docs.scipy.org/doc/numpy/reference/generated/numpy.isfinite.html
.. [10] http://en.wikipedia.org/wiki/Transcendental_number
.. [11] http://en.wikipedia.org/wiki/Algebraic_number
""" |
"""Configuration file parser.
A configuration file consists of sections, led by a "[section]" header,
and followed by "name: value" entries, with continuations and such in
the style of RFC 822.
Intrinsic defaults can be specified by passing them into the
ConfigParser constructor as a dictionary.
class:
ConfigParser -- responsible for parsing a list of
configuration files, and managing the parsed database.
methods:
__init__(defaults=None, dict_type=_default_dict, allow_no_value=False,
delimiters=('=', ':'), comment_prefixes=('#', ';'),
inline_comment_prefixes=None, strict=True,
empty_lines_in_values=True):
Create the parser. When `defaults' is given, it is initialized into the
dictionary of intrinsic defaults. The keys must be strings, the values
must be appropriate for %()s string interpolation.
When `dict_type' is given, it will be used to create the dictionary
objects for the list of sections, for the options within a section, and
for the default values.
When `delimiters' is given, it will be used as the set of substrings
that divide keys from values.
When `comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in empty lines. Comments can be
indented.
When `inline_comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in non-empty lines.
When `strict` is True, the parser won't allow for any section or option
duplicates while reading from a single source (file, string or
dictionary). Default is True.
When `empty_lines_in_values' is False (default: True), each empty line
marks the end of an option. Otherwise, internal empty lines of
a multiline option are kept as part of the value.
When `allow_no_value' is True (default: False), options without
values are accepted; the value presented for these is None.
sections()
Return all the configuration section names, sans DEFAULT.
has_section(section)
Return whether the given section exists.
has_option(section, option)
Return whether the given option exists in the given section.
options(section)
Return list of configuration options for the named section.
read(filenames, encoding=None)
Read and parse the list of named configuration files, given by
name. A single filename is also allowed. Non-existing files
are ignored. Return list of successfully read files.
read_file(f, filename=None)
Read and parse one configuration file, given as a file object.
The filename defaults to f.name; it is only used in error
messages (if f has no `name' attribute, the string `<???>' is used).
read_string(string)
Read configuration from a given string.
read_dict(dictionary)
Read configuration from a dictionary. Keys are section names,
values are dictionaries with keys and values that should be present
in the section. If the used dictionary type preserves order, sections
and their keys will be added in order. Values are automatically
converted to strings.
get(section, option, raw=False, vars=None, fallback=_UNSET)
Return a string value for the named option. All % interpolations are
expanded in the return values, based on the defaults passed into the
constructor and the DEFAULT section. Additional substitutions may be
provided using the `vars' argument, which must be a dictionary whose
contents override any pre-existing defaults. If `option' is a key in
`vars', the value from `vars' is used.
getint(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to an integer.
getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a float.
getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a boolean (currently case
insensitively defined as 0, false, no, off for False, and 1, true,
yes, on for True). Returns False or True.
items(section=_UNSET, raw=False, vars=None)
If section is given, return a list of tuples with (name, value) for
each option in the section. Otherwise, return a list of tuples with
(section_name, section_proxy) for each section, including DEFAULTSECT.
remove_section(section)
Remove the given file section and all its options.
remove_option(section, option)
Remove the given option from the given section.
set(section, option, value)
Set the given option.
write(fp, space_around_delimiters=True)
Write the configuration state in .ini format. If
`space_around_delimiters' is True (the default), delimiters
between keys and values are surrounded by spaces.
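A minimal usage sketch (the file name, section and option names here
are illustrative):
    import configparser
    parser = configparser.ConfigParser()
    parser.read('example.ini')
    if parser.has_section('server'):
        port = parser.getint('server', 'port', fallback=8080)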
""" |
#!/bin/env python
# Script that uses RHN API to clone RHN Errata to Satellite
# or Spacewalk server.
# Copyright (c) 2008--2011 Red Hat, Inc.
#
# Author: NAME (EMAIL)
#
# This script is an extension of the original "rhn-clone-errata.py"
# script written by: NAME (EMAIL)
#
# (THANKS!)
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
# Version Information:
#
# 0.1 - 2009-09-01 - NAME
#
# Initial release. Lots of problems. Oof.
#
# 0.2 - 2009-09-11 - NAME
#
# Updated methodology for handling errata. Breaking up individual
# errata appended with a channel identifier to better automate publishing
# of errata.
#
# Some code reworking. I still suck at python. Removed deprecated "sets"
# module.
#
# 0.3 - 2009-09-17 - NAME
#
# Fixed a rather glaring bug in the logic regarding relevant channel
# for package selection. Ugh.
#
# 0.4 - 2009-10-01 - NAME
#
# Modified how the publish happens. Now it creates the errata and THEN
# calls the separate errata.publish() function. I was having some
# intermittent time-outs doing the two together in the errata.create()
# function.
#
# 0.5 - 2010-03-17 - NAME
#
# Moved servers, users and passwords to a config file of your choice.
# Many config options changed as a result. Options on the command line
# override config file options.
#
# Merged proxy support code from NAME <EMAIL> (THANKS!)
#
# Modified some of the formatting for logfile output.
#
# I continue to suck at Python.
#
# 0.6 - 2010-03-18 - NAME
#
# Corrected a grievous bug in the new Proxy code.
#
# Moved Channel and ChannelSuffix maps to the config file.
#
# 0.7 - 2010-11-10 - NAME
#
# Minor bugfixes a/o cosmetic changes.
#
# 0.8.1 - 2011-06-06 - NAME
#
# Testing out new proxy code for handling authenticated proxies also.
# NOT PRODUCTION CODE
#
# 0.8.2 - 2011-06-06 - NAME
#
# Update to new proxy code.
#
# 0.8.3 - 2011-06-06 - NAME
#
# Add selector for which server connections need proxy. This is crude, will cleanup later.
#
# 0.8.4 - 2011-06-06 - NAME
#
# Add some code to handle transparent proxies.
#
# 0.9.0 - 2011-11-17 - NAME
#
# Included patch from NAME <EMAIL> that gives an option for a
# full sync of all channels listed in the configuration file.
#
# Thanks, NAME!
#
# Additionally, changed the default behaviour of how the script handles errata that are
# missing packages on the system. The script now skips any errata that is missing one
# or more packages on the system. However, I've added an option to allow the script
# to ignore missing packages so that the old behaviour remains.
#
# 0.9.1
#
# Whitespace cleanup and addition of CVE handling.
#
# 0.9.2 - 2012-02-14 - NAME
#
# Rewrite of package searching and handling.
# Fix some problems with CVE handling.
#
|
#
# ElementTree
# $Id: ElementTree.py 3276 2007-09-12 06:52:30Z USERNAME $
#
# light-weight XML support for Python 2.2 and later.
#
# history:
# 2001-10-20 fl created (from various sources)
# 2001-11-01 fl return root from parse method
# 2002-02-16 fl sort attributes in lexical order
# 2002-04-06 fl TreeBuilder refactoring, added PythonDoc markup
# 2002-05-01 fl finished TreeBuilder refactoring
# 2002-07-14 fl added basic namespace support to ElementTree.write
# 2002-07-25 fl added QName attribute support
# 2002-10-20 fl fixed encoding in write
# 2002-11-24 fl changed default encoding to ascii; fixed attribute encoding
# 2002-11-27 fl accept file objects or file names for parse/write
# 2002-12-04 fl moved XMLTreeBuilder back to this module
# 2003-01-11 fl fixed entity encoding glitch for us-ascii
# 2003-02-13 fl added XML literal factory
# 2003-02-21 fl added ProcessingInstruction/PI factory
# 2003-05-11 fl added tostring/fromstring helpers
# 2003-05-26 fl added ElementPath support
# 2003-07-05 fl added makeelement factory method
# 2003-07-28 fl added more well-known namespace prefixes
# 2003-08-15 fl fixed typo in ElementTree.findtext (Thomas NAME)
# 2003-09-04 fl fall back on emulator if ElementPath is not installed
# 2003-10-31 fl markup updates
# 2003-11-15 fl fixed nested namespace bug
# 2004-03-28 fl added XMLID helper
# 2004-06-02 fl added default support to findtext
# 2004-06-08 fl fixed encoding of non-ascii element/attribute names
# 2004-08-23 fl take advantage of post-2.1 expat features
# 2004-09-03 fl made Element class visible; removed factory
# 2005-02-01 fl added iterparse implementation
# 2005-03-02 fl fixed iterparse support for pre-2.2 versions
# 2005-11-12 fl added tostringlist/fromstringlist helpers
# 2006-07-05 fl merged in selected changes from the 1.3 sandbox
# 2006-07-05 fl removed support for 2.1 and earlier
# 2007-06-21 fl added deprecation/future warnings
# 2007-08-25 fl added doctype hook, added parser version attribute etc
# 2007-08-26 fl added new serializer code (better namespace handling, etc)
# 2007-08-27 fl warn for broken /tag searches on tree level
# 2007-09-02 fl added html/text methods to serializer (experimental)
# 2007-09-05 fl added method argument to tostring/tostringlist
# 2007-09-06 fl improved error handling
#
# Copyright (c) 1999-2007 by NAME All rights reserved.
#
# EMAIL
# http://www.pythonware.com
#
# --------------------------------------------------------------------
# The ElementTree toolkit is
#
# Copyright (c) 1999-2007 by NAME
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
|
"""The :py:mod:`~tdda.referencetest` module provides support for unit tests,
allowing them to easily compare test results against saved
"known to be correct" reference results.
This is typically useful for testing software that produces any of the following
types of output:
- a CSV file
- a text file (for example: HTML, JSON, logfiles, graphs, tables, etc)
- a string
- a Pandas DataFrame.
The main features are:
- If the comparison between a string and a file fails,
the actual string is written to a file and a ``diff``
command is suggested for seeing the differences between
the actual output and the expected output.
- There is support for CSV files, allowing fine control over
how the comparison is to be performed. This includes:
- the ability to select which columns to compare (and which
to exclude from the comparison).
- the ability to compare metadata (types of fields) as well
as values.
- the ability to specify the precision (as number of decimal places)
for the comparison of floating-point values.
- clear reporting of where the differences are, if the comparison
fails.
- There is support for ignoring lines within the strings/files
that contain particular patterns or regular expressions.
This is typically useful for filtering out things like
version numbers and timestamps that vary in the output
from run to run, but which do not indicate a problem.
- There is support for re-writing the reference output
with the actual output. This, obviously, should be used
only after careful checking that the new output is correct,
either because the previous output was in fact wrong,
or because the intended behaviour has changed.
- It allows you to group your reference results into different *kinds*.
This means you can keep different kinds of reference result files in
different locations. It also means that you can selectively
choose to only regenerate particular kinds of reference results,
if they need to be updated because they turned out to have been
wrong or if the intended behaviour has changed.
Kinds are strings.
Prerequisites
-------------
- :py:mod:`pandas` optional, required for CSV file support, see http://pandas.pydata.org.
- :py:mod:`pytest` optional, required for tests based on pytest rather than unittest, see http://docs.pytest.org.
These can be installed with::
pip install pandas
pip install pytest
The module provides interfaces for this to be called from unit-tests
based on either the standard Python :py:mod:`unittest` framework,
or on :py:mod:`pytest`.
Simple Examples
---------------
**Simple unittest example:**
For use with :py:mod:`unittest`, the
:py:class:`~tdda.referencetest.referencetest.ReferenceTest` API is provided
through the :py:class:`~tdda.referencetest.referencetestcase.ReferenceTestCase`
class. This is an extension to the standard :py:class:`unittest.TestCase`
class, so that the ``ReferenceTest`` methods can be called directly from
:py:mod:`unittest` tests.
This example shows how to write a test for a function that generates
a CSV file::
from tdda.referencetest import ReferenceTestCase, tag
import my_module

class MyTest(ReferenceTestCase):
    @tag
    def test_my_csv_file(self):
        result = my_module.produce_a_csv_file(self.tmp_dir)
        self.assertCSVFileCorrect(result, 'result.csv')

MyTest.set_default_data_location('testdata')

if __name__ == '__main__':
    ReferenceTestCase.main()
To run the test:
.. code-block:: bash
python mytest.py
The test is tagged with ``@tag``, meaning that it will be included if
you run the tests with the ``--tagged`` option flag to specify that only
tagged tests should be run:
.. code-block:: bash
python mytest.py --tagged
The first time you run the test, it will produce an error unless you
have already created the expected ("reference") results. You can
create the reference results automatically:
.. code-block:: bash
python mytest.py --write-all
Having generated the reference results, you should carefully examine
the files it has produced in the data output location, to check that
they are as expected.
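Comparisons that would otherwise fail on run-varying details, such as timestamps or
version strings, can use the ignore-patterns support described in the feature list
above. A minimal sketch, assuming the ``ignore_patterns`` keyword of the string
comparison assertions and a hypothetical ``produce_log_text`` function in
``my_module``; the method belongs inside the ``MyTest`` class above::

    @tag
    def test_my_log_output(self):
        log_text = my_module.produce_log_text()
        self.assertStringCorrect(log_text, 'result.log',
                                 ignore_patterns=['Generated at'])

Lines in the actual and reference text that match any of the given patterns are
intended to be excluded from the comparison.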
**Simple pytest example:**
For use with :py:mod:`pytest`, the
:py:class:`~tdda.referencetest.referencetest.ReferenceTest` API is provided
through the :py:mod:`~tdda.referencetest.referencepytest` module. This is
a module that can be imported directly from ``pytest`` tests, allowing them
to access :py:class:`~tdda.referencetest.referencetest.ReferenceTest`
methods and properties.
This example shows how to write a test for a function that generates
a CSV file::
from tdda.referencetest import referencepytest, tag
import my_module

@tag
def test_my_csv_function(ref):
    resultfile = my_module.produce_a_csv_file(ref.tmp_dir)
    ref.assertCSVFileCorrect(resultfile, 'result.csv')

referencepytest.set_default_data_location('testdata')
You also need a ``conftest.py`` file, to define the fixtures and defaults::
import pytest
from tdda.referencetest import referencepytest

def pytest_addoption(parser):
    referencepytest.addoption(parser)

def pytest_collection_modifyitems(session, config, items):
    referencepytest.tagged(config, items)

@pytest.fixture(scope='module')
def ref(request):
    return referencepytest.ref(request)

referencepytest.set_default_data_location('testdata')
To run the test:
.. code-block:: bash
pytest
The test is tagged with ``@tag``, meaning that it will be included if
you run the tests with the ``--tagged`` option flag to specify that only
tagged tests should be run:
.. code-block:: bash
pytest --tagged
The first time you run the test, it will produce an error unless you
have already created the expected ("reference") results. You can
create the reference results automatically:
.. code-block:: bash
pytest --write-all -s
Having generated the reference results, you should examine the files it has
produced in the data output location, to check that they are as expected.
""" |
"""automatically manage newlines in repository files
This extension allows you to manage the type of line endings (CRLF or
LF) that are used in the repository and in the local working
directory. That way you can get CRLF line endings on Windows and LF on
Unix/Mac, thereby letting everybody use their OS native line endings.
The extension reads its configuration from a versioned ``.hgeol``
configuration file found in the root of the working copy. The
``.hgeol`` file uses the same syntax as all other Mercurial
configuration files. It uses two sections, ``[patterns]`` and
``[repository]``.
The ``[patterns]`` section specifies how line endings should be
converted between the working copy and the repository. The format is
specified by a file pattern. The first match is used, so put more
specific patterns first. The available line endings are ``LF``,
``CRLF``, and ``BIN``.
Files with the declared format of ``CRLF`` or ``LF`` are always
checked out and stored in the repository in that format and files
declared to be binary (``BIN``) are left unchanged. Additionally,
``native`` is an alias for checking out in the platform's default line
ending: ``LF`` on Unix (including Mac OS X) and ``CRLF`` on
Windows. Note that ``BIN`` (do nothing to line endings) is Mercurial's
default behaviour; it is only needed if you need to override a later,
more general pattern.
The optional ``[repository]`` section specifies the line endings to
use for files stored in the repository. It has a single setting,
``native``, which determines the storage line endings for files
declared as ``native`` in the ``[patterns]`` section. It can be set to
``LF`` or ``CRLF``. The default is ``LF``. For example, this means
that on Windows, files configured as ``native`` (``CRLF`` by default)
will be converted to ``LF`` when stored in the repository. Files
declared as ``LF``, ``CRLF``, or ``BIN`` in the ``[patterns]`` section
are always stored as-is in the repository.
Example versioned ``.hgeol`` file::
[patterns]
**.py = native
**.vcproj = CRLF
**.txt = native
Makefile = LF
**.jpg = BIN
[repository]
native = LF
.. note::
The rules will first apply when files are touched in the working
copy, e.g. by updating to null and back to tip to touch all files.
The extension uses an optional ``[eol]`` section read from both the
normal Mercurial configuration files and the ``.hgeol`` file, with the
latter overriding the former. You can use that section to control the
overall behavior. There are three settings, illustrated in the sketch after this list:
- ``eol.native`` (default ``os.linesep``) can be set to ``LF`` or
``CRLF`` to override the default interpretation of ``native`` for
checkout. This can be used with :hg:`archive` on Unix, say, to
generate an archive where files have line endings for Windows.
- ``eol.only-consistent`` (default True) can be set to False to make
the extension convert files with inconsistent EOLs. Inconsistent
means that there is both ``CRLF`` and ``LF`` present in the file.
Such files are normally not touched under the assumption that they
have mixed EOLs on purpose.
- ``eol.fix-trailing-newline`` (default False) can be set to True to
ensure that converted files end with an EOL character (either ``\\n``
or ``\\r\\n`` as per the configured patterns).
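For example, the three settings could be set together like this (a sketch in which
each setting is overridden from its default)::

  [eol]
  native = CRLF
  only-consistent = False
  fix-trailing-newline = True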
The extension provides ``cleverencode:`` and ``cleverdecode:`` filters
like the deprecated win32text extension does. This means that you can
disable win32text and enable eol and your filters will still work. You
only need to use these filters until you have prepared a ``.hgeol`` file.
The ``win32text.forbid*`` hooks provided by the win32text extension
have been unified into a single hook named ``eol.checkheadshook``. The
hook will lookup the expected line endings from the ``.hgeol`` file,
which means you must migrate to a ``.hgeol`` file first before using
the hook. ``eol.checkheadshook`` only checks heads, intermediate
invalid revisions will be pushed. To forbid them completely, use the
``eol.checkallhook`` hook. These hooks are best used as
``pretxnchangegroup`` hooks.
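As a sketch (assuming the hook functions are reachable as ``hgext.eol.checkheadshook``
and ``hgext.eol.checkallhook``, as in a standard Mercurial installation), a server
could enable the head-checking hook like this::

  [hooks]
  pretxnchangegroup = python:hgext.eol.checkheadshook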
See :hg:`help patterns` for more information about the glob patterns
used.
""" |
"""
===============
Array Internals
===============
Internal organization of numpy arrays
=====================================
It helps to understand a bit about how numpy arrays are handled under the covers in order to understand numpy better. This section will not go into great detail. Those wishing to understand the full details are referred to Travis Oliphant's book "Guide to Numpy".
Numpy arrays consist of two major components, the raw array data (from now on,
referred to as the data buffer), and the information about the raw array data.
The data buffer is typically what people think of as arrays in C or Fortran,
a contiguous (and fixed) block of memory containing fixed sized data items.
Numpy also contains a significant set of data that describes how to interpret
the data in the data buffer. This extra information contains (among other things):
1) The basic data element's size in bytes
2) The start of the data within the data buffer (an offset relative to the
beginning of the data buffer).
3) The number of dimensions and the size of each dimension
4) The separation between elements for each dimension (the 'stride'). This
does not have to be a multiple of the element size
5) The byte order of the data (which may not be the native byte order)
6) Whether the buffer is read-only
7) Information (via the dtype object) about the interpretation of the basic
data element. The basic data element may be as simple as a int or a float,
or it may be a compound object (e.g., struct-like), a fixed character field,
or Python object pointers.
8) Whether the array is to be interpreted as C-order or Fortran-order.
This arrangement allows for very flexible use of arrays. One thing that it allows
is simple changes of the metadata to change the interpretation of the array buffer.
Changing the byteorder of the array is a simple change involving no rearrangement
of the data. The shape of the array can be changed very easily without changing
anything in the data buffer or copying any data at all.
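As a small illustration (a sketch using only standard numpy calls), reshaping or
reinterpreting the byte order creates a new array object over the same buffer:

    import numpy as np

    a = np.arange(6, dtype=np.int32)
    b = a.reshape(2, 3)                    # new shape, same data buffer
    assert b.base is a                     # b is a view onto a's buffer
    s = a.view(a.dtype.newbyteorder())     # reinterpret the byte order; no data copied
    assert s.base is a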
Among other things, this makes it possible to create a new array metadata object
that uses the same data buffer: a new view of that data buffer with a different
interpretation of the buffer (e.g., different shape, offset, byte order, strides,
etc.) but sharing the same data bytes. Many operations in numpy, such as slicing,
do just this. Other operations, such as transpose, don't move data elements
around in the array, but rather change the information about the shape and strides
so that the indexing of the array changes, but the data in the buffer doesn't move.
Typically these new arrangements of the array metadata over the same data buffer are
called new 'views' into the data buffer. There is a different ndarray object, but it
uses the same data buffer. This is why it is necessary to force a copy through
use of the .copy() method if one really wants to make a new and independent
copy of the data buffer.
New views into arrays mean that the object reference counts for the data buffer
increase. Simply doing away with the original array object will not remove the
data buffer if other views of it still exist.
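For example (a sketch), slicing produces a view that keeps the data buffer alive,
while .copy() produces an independent buffer:

    import numpy as np

    a = np.arange(10)
    v = a[::2]                 # a view: same buffer, different strides
    assert v.base is a
    v[0] = 99                  # writing through the view is visible in a
    assert a[0] == 99
    c = a[::2].copy()          # an independent copy of the selected data
    assert c.base is None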
Multidimensional Array Indexing Order Issues
============================================
What is the right way to index
multi-dimensional arrays? Before you jump to conclusions about the one and
true way to index multi-dimensional arrays, it pays to understand why this is
a confusing issue. This section will try to explain in detail how numpy
indexing works and why we adopt the convention we do for images, and when it
may be appropriate to adopt other conventions.
The first thing to understand is
that there are two conflicting conventions for indexing 2-dimensional arrays.
Matrix notation uses the first index to indicate which row is being selected and
the second index to indicate which column is selected. This is opposite the
geometrically oriented convention for images, where people generally think the
first index represents x position (i.e., column) and the second represents y
position (i.e., row). This alone is the source of much confusion;
matrix-oriented users and image-oriented users expect two different things with
regard to indexing.
The second issue to understand is how indices correspond
to the order the array is stored in memory. In Fortran the first index is the
most rapidly varying index when moving through the elements of a two
dimensional array as it is stored in memory. If you adopt the matrix
convention for indexing, then this means the matrix is stored one column at a
time (since the first index moves to the next row as it changes). Thus Fortran
is considered a Column-major language. C has just the opposite convention. In
C, the last index changes most rapidly as one moves through the array as
stored in memory. Thus C is a Row-major language. The matrix is stored by
rows. Note that in both cases it presumes that the matrix convention for
indexing is being used, i.e., for both Fortran and C, the first index is the
row. Note this convention implies that the indexing convention is invariant
and that the data order changes to keep that so.
But that's not the only way
to look at it. Suppose one has large two-dimensional arrays (images or
matrices) stored in data files. Suppose the data are stored by rows rather than
by columns. If we are to preserve our index convention (whether matrix or
image) that means that depending on the language we use, we may be forced to
reorder the data if it is read into memory to preserve our indexing
convention. For example if we read row-ordered data into memory without
reordering, it will match the matrix indexing convention for C, but not for
Fortran. Conversely, it will match the image indexing convention for Fortran,
but not for C. For C, if one is using data stored in row order, and one wants
to preserve the image index convention, the data must be reordered when
reading into memory.
In the end, which you do for Fortran or C depends on
which is more important, not reordering data or preserving the indexing
convention. For large images, reordering data is potentially expensive, and
often the indexing convention is inverted to avoid that.
The situation with
numpy makes this issue yet more complicated. The internal machinery of numpy
arrays is flexible enough to accept any ordering of indices. One can simply
reorder indices by manipulating the internal stride information for arrays
without reordering the data at all. Numpy will know how to map the new index
order to the data without moving the data.
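For example (a sketch), transposing swaps the strides rather than moving any data:

    import numpy as np

    a = np.zeros((2, 3))       # C-ordered: rows are contiguous, 8-byte floats
    print(a.strides)           # (24, 8)
    t = a.T                    # same buffer, strides swapped
    print(t.strides)           # (8, 24)
    assert t.base is a         # no data was copied or moved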
So if this is true, why not choose
the index order that matches what you most expect? In particular, why not define
row-ordered images to use the image convention? (This is sometimes referred
to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
order options for array ordering in numpy.) The drawback of doing this is
potential performance penalties. It's common to access the data sequentially,
either implicitly in array operations or explicitly by looping over rows of an
image. When that is done, then the data will be accessed in non-optimal order.
As the first index is incremented, what is actually happening is that elements
spaced far apart in memory are being sequentially accessed, with usually poor
memory access speeds. For example, consider a two dimensional image 'im' defined so
that im[0, 10] represents the value at x=0, y=10. To be consistent with usual
Python behavior, im[0] would then represent a column at x=0. Yet that data
would be spread over the whole array since the data are stored in row order.
Despite the flexibility of numpy's indexing, it can't really paper over the fact
that basic operations are rendered inefficient because of data order, or that
getting contiguous subarrays is still awkward (e.g., im[:,0] for the first row, vs
im[0]). Thus one can't use an idiom such as 'for row in im'; 'for col in im' does
work, but doesn't yield contiguous column data.
As it turns out, numpy is
smart enough when dealing with ufuncs to determine which index is the most
rapidly varying one in memory and uses that for the innermost loop. Thus for
ufuncs there is no large intrinsic advantage to either approach in most cases.
On the other hand, use of .flat with a FORTRAN ordered array will lead to
non-optimal memory access as adjacent elements in the flattened array (iterator,
actually) are not contiguous in memory.
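A short sketch of that point:

    import numpy as np

    a = np.array([[0, 1, 2], [3, 4, 5]])   # C-ordered storage
    f = np.asfortranarray(a)               # same values, Fortran-ordered storage
    # .flat iterates in C index order for both arrays, so iterating over f.flat
    # touches elements that are not adjacent in f's memory layout.
    assert [int(x) for x in a.flat] == [int(x) for x in f.flat] == [0, 1, 2, 3, 4, 5]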
Indeed, the fact is that Python
indexing on lists and other sequences naturally leads to an outside-to-inside
ordering (the first index gets the largest grouping, the next one the next largest,
and the last gets the smallest element). Since image data are normally stored
by rows, this corresponds to position within rows being the last item indexed.
If you do want to use Fortran ordering, realize that
there are two approaches to consider: 1) accept that the first index is just not
the most rapidly changing in memory and have all your I/O routines reorder
your data when going from memory to disk or vice versa, or 2) use numpy's
mechanism for mapping the first index to the most rapidly varying data. We
recommend the former if possible. The disadvantage of the latter is that many
of numpy's functions will yield arrays without Fortran ordering unless you are
careful to use the 'order' keyword. Doing this would be highly inconvenient.
Otherwise we recommend simply learning to reverse the usual order of indices
when accessing elements of an array. Granted, it goes against the grain, but
it is more in line with Python semantics and the natural order of the data.
""" |
#import warnings
#import numpy as np
#from crf import logsumexp, defaultdict, exp, add, iterview
#
#class CRF2(object):
#
# def risk(self, x, Y):
# """
# Risk (Hamming loss)
# """
#
# warnings.warn('this implementation is incorrect!')
#
# Y = self.label_alphabet.map(Y)
#
# N = x.N; K = self.K; f = x.feature_table
# (g0, g) = self.log_potentials(x)
#
# a = self.forward(g0,g,N,K)
# b = self.backward(g,N,K)
#
# # log-normalizing constant
# logZ = logsumexp(a[N-1,:])
#
# Er = 0.0
# Erf = defaultdict(float)
# Ef = defaultdict(float)
#
# # The first factor needs to be special case'd
# c = exp(g0 + b[0,:] - logZ)
# for y in xrange(K):
# p = c[y]
# #if p < 1e-8: continue
# r = (Y[0] != y)
# Er += p * r
# for k in f[0, None, y]:
# Ef[k] += p
# Erf[k] += p*r
#
# for t in xrange(1,N):
# # vectorized computation of the marginal for this transition factor
# c = exp((add.outer(a[t-1,:], b[t,:]) + g[t-1,:,:] - logZ))
#
# for yp in xrange(K):
# for y in xrange(K):
# p = c[yp, y]
# #if p < 1e-8: continue
# r = (Y[t] != y)
# Er += p * r
# for k in f[t, yp, y]:
# Ef[k] += p
# Erf[k] += p*r
#
# Cov_rf = defaultdict(float)
# for k in Ef:
## if abs(Ef[k] - Erf[k]) > 1e-8:
## print k, Ef[k], Erf[k]
# Cov_rf[k] = Erf[k] - Er*Ef[k]
#
# return Er, Cov_rf
#
# def test_gradient_risk(self, data, subsetsize=10):
#
# def fd(x, i, eps=1e-4):
# """Compute `i`th component of the finite-difference approximation to the
# gradient of log-likelihood at current parameters on example `x`.
#
# """
#
# was = self.W[i] # record value
#
# self.W[i] = was+eps
# b,_ = self.risk(x, x.truth)
#
# self.W[i] = was-eps
# a,_ = self.risk(x, x.truth)
#
# self.W[i] = was # restore original value
#
# return (b - a) / 2 / eps
#
# for x in iterview(data, msg='test grad'):
#
# _, g = self.risk(x, x.truth)
#
# # pick a subset of features to test
# d = np.random.choice(g.keys(), subsetsize, replace=0)
#
# f = {}
# for i in iterview(d, msg='fd approx'): # loop over active features
# f[i] = fd(x, i)
#
# from arsenal.math import compare
# compare([f[k] for k in d],
# [g[k] for k in d], name='test gradient %s' % x,
# scatter=1,
# show_regression=1,
# alphabet=d)
# import pylab as pl
# pl.show()
|
"""
AUI is an Advanced User Interface library that aims to implement "cutting-edge"
interface usability and design features so developers can quickly and easily create
beautiful and usable application interfaces.
Vision and Design Principles
============================
AUI attempts to encapsulate the following aspects of the user interface:
* **Frame Management**: Frame management provides the means to open, move and hide common
controls that are needed to interact with the document, and allow these configurations
to be saved into different perspectives and loaded at a later time.
* **Toolbars**: Toolbars are a specialized subset of the frame management system and should
behave similarly to other docked components. However, they also require additional
functionality, such as "spring-loaded" rebar support, "chevron" buttons and end-user
customizability.
* **Modeless Controls**: Modeless controls expose a tool palette or set of options that
float above the application content while allowing it to be accessed. Usually accessed
by the toolbar, these controls disappear when an option is selected, but may also be
"torn off" the toolbar into a floating frame of their own.
* **Look and Feel**: Look and feel encompasses the way controls are drawn, both when shown
statically as well as when they are being moved. This aspect of user interface design
incorporates "special effects" such as transparent window dragging as well as frame animation.
AUI adheres to the following principles:
- Use native floating frames to obtain a native look and feel for all platforms;
- Use existing wxPython code where possible, such as sizer implementation for frame management;
- Use standard wxPython coding conventions.
Usage
=====
The following example shows a simple implementation that uses :class:`framemanager.AuiManager` to manage
three text controls in a frame window::
import wx
import wx.lib.agw.aui as aui

class MyFrame(wx.Frame):

    def __init__(self, parent, id=-1, title="AUI Test", pos=wx.DefaultPosition,
                 size=(800, 600), style=wx.DEFAULT_FRAME_STYLE):

        wx.Frame.__init__(self, parent, id, title, pos, size, style)

        self._mgr = aui.AuiManager()

        # notify AUI which frame to use
        self._mgr.SetManagedWindow(self)

        # create several text controls
        text1 = wx.TextCtrl(self, -1, "Pane 1 - sample text",
                            wx.DefaultPosition, wx.Size(200, 150),
                            wx.NO_BORDER | wx.TE_MULTILINE)

        text2 = wx.TextCtrl(self, -1, "Pane 2 - sample text",
                            wx.DefaultPosition, wx.Size(200, 150),
                            wx.NO_BORDER | wx.TE_MULTILINE)

        text3 = wx.TextCtrl(self, -1, "Main content window",
                            wx.DefaultPosition, wx.Size(200, 150),
                            wx.NO_BORDER | wx.TE_MULTILINE)

        # add the panes to the manager
        self._mgr.AddPane(text1, aui.AuiPaneInfo().Left().Caption("Pane Number One"))
        self._mgr.AddPane(text2, aui.AuiPaneInfo().Bottom().Caption("Pane Number Two"))
        self._mgr.AddPane(text3, aui.AuiPaneInfo().CenterPane())

        # tell the manager to "commit" all the changes just made
        self._mgr.Update()

        self.Bind(wx.EVT_CLOSE, self.OnClose)

    def OnClose(self, event):
        # deinitialize the frame manager
        self._mgr.UnInit()
        self.Destroy()
        event.Skip()

# our normal wxApp-derived class, as usual
app = wx.App(0)
frame = MyFrame(None)
app.SetTopWindow(frame)
frame.Show()
app.MainLoop()
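Perspectives, mentioned under *Frame Management* above, let the current pane layout
be serialized to a string and restored later. A minimal sketch, assuming the
`SavePerspective` and `LoadPerspective` methods behave as in the C++ wxAuiManager::

    # inside MyFrame.__init__, after self._mgr.Update()
    self._layout = self._mgr.SavePerspective()   # serialize the current layout

    # later, e.g. in a menu handler, to restore the saved layout:
    self._mgr.LoadPerspective(self._layout)
    self._mgr.Update()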
What's New
==========
Current wxAUI Version Tracked: wxWidgets 2.9.4 (SVN HEAD)
The wxPython AUI version fixes the following bugs or implement the following
missing features (the list is not exhaustive):
- Visual Studio 2005 style docking: http://www.kirix.com/forums/viewtopic.php?f=16&t=596
- Dock and Pane Resizing: http://www.kirix.com/forums/viewtopic.php?f=16&t=582
- Patch concerning dock resizing: http://www.kirix.com/forums/viewtopic.php?f=16&t=610
- Patch to effect wxAuiToolBar orientation switch: http://www.kirix.com/forums/viewtopic.php?f=16&t=641
- AUI: Core dump when loading a perspective in wxGTK (MSW OK): http://www.kirix.com/forums/viewtopic.php?f=15&t=627
- wxAuiNotebook reordered AdvanceSelection(): http://www.kirix.com/forums/viewtopic.php?f=16&t=617
- Vertical Toolbar Docking Issue: http://www.kirix.com/forums/viewtopic.php?f=16&t=181
- Patch to show the resize hint on mouse-down in aui: http://trac.wxwidgets.org/ticket/9612
- The Left/Right and Top/Bottom Docks over draw each other: http://trac.wxwidgets.org/ticket/3516
- MinSize() not honoured: http://trac.wxwidgets.org/ticket/3562
- Layout problem with wxAUI: http://trac.wxwidgets.org/ticket/3597
- Resizing children ignores current window size: http://trac.wxwidgets.org/ticket/3908
- Resizing panes under Vista does not repaint background: http://trac.wxwidgets.org/ticket/4325
- Resize sash resizes in response to click: http://trac.wxwidgets.org/ticket/4547
- "Illegal" resizing of the AuiPane? (wxPython): http://trac.wxwidgets.org/ticket/4599
- Floating wxAUIPane Resize Event doesn't update its position: http://trac.wxwidgets.org/ticket/9773
- Don't hide floating panels when we maximize some other panel: http://trac.wxwidgets.org/ticket/4066
- wxAUINotebook incorrect ALLOW_ACTIVE_PANE handling: http://trac.wxwidgets.org/ticket/4361
- Page changing veto doesn't work, (patch supplied): http://trac.wxwidgets.org/ticket/4518
- Show and DoShow are mixed around in wxAuiMDIChildFrame: http://trac.wxwidgets.org/ticket/4567
- wxAuiManager & wxToolBar - ToolBar Of Size Zero: http://trac.wxwidgets.org/ticket/9724
- wxAuiNotebook doesn't behave properly like a container as far as...: http://trac.wxwidgets.org/ticket/9911
- Serious layout bugs in wxAUI: http://trac.wxwidgets.org/ticket/10620
- wAuiDefaultTabArt::Clone() should just use copy contructor: http://trac.wxwidgets.org/ticket/11388
- Drop down button for check tool on wxAuiToolbar: http://trac.wxwidgets.org/ticket/11139
Plus the following features:
- AuiManager:
(a) Implementation of a simple minimize pane system: Clicking on this minimize button causes a new
AuiToolBar to be created and added to the frame manager, (currently the implementation is such
that panes at West will have a toolbar at the right, panes at South will have toolbars at the
bottom etc...) and the pane is hidden in the manager.
Clicking on the restore button on the newly created toolbar will result in the toolbar being
removed and the original pane being restored;
(b) Panes can be docked on top of each other to form `AuiNotebooks`; `AuiNotebooks` tabs can be torn
off to create floating panes;
(c) On Windows XP, use the nice sash drawing provided by XP while dragging the sash;
(d) Possibility to set an icon on docked panes;
(e) Possibility to draw a sash visual grip, for enhanced visualization of sashes;
(f) Implementation of a native docking art (`ModernDockArt`). Windows XP only, **requires** NAME
pywin32 package (winxptheme);
(g) Possibility to set a transparency for floating panes (a la Paint .NET);
(h) Snapping the main frame to the screen in any position specified by horizontal and vertical
alignments;
(i) Snapping floating panes on left/right/top/bottom or any combination of directions, a la Winamp;
(j) "Fly-out" floating panes, i.e. panes which show themselves only when the mouse hover them;
(k) Ability to set custom bitmaps for pane buttons (close, maximize, etc...);
(l) Implementation of the style ``AUI_MGR_ANIMATE_FRAMES``, which fade-out floating panes when
they are closed (all platforms which support frames transparency) and show a moving rectangle
when they are docked and minimized (Windows < Vista and GTK only);
(m) A pane switcher dialog is available to cycle through existing AUI panes;
(n) Some flags which allow to choose the orientation and the position of the minimized panes;
(o) The functions [Get]MinimizeMode() in `AuiPaneInfo` which allow to set/get the flags described above;
(p) Events like ``EVT_AUI_PANE_DOCKING``, ``EVT_AUI_PANE_DOCKED``, ``EVT_AUI_PANE_FLOATING`` and ``EVT_AUI_PANE_FLOATED`` are
available for all panes *except* toolbar panes;
(q) Implementation of the RequestUserAttention method for panes;
(r) Ability to show the caption bar of docked panes on the left instead of on the top (with caption
text rotated by 90 degrees then). This is similar to what `wxDockIt` did. To enable this feature on any
given pane, simply call `CaptionVisible(True, left=True)`;
(s) New Aero-style docking guides: you can enable them by using the `AuiManager` style ``AUI_MGR_AERO_DOCKING_GUIDES``;
(t) A slide-in/slide-out preview of minimized panes can be seen by enabling the `AuiManager` style
``AUI_MGR_PREVIEW_MINIMIZED_PANES`` and by hovering with the mouse on the minimized pane toolbar tool;
(u) New Whidbey-style docking guides: you can enable them by using the `AuiManager` style ``AUI_MGR_WHIDBEY_DOCKING_GUIDES``;
(v) Native or custom-drawn mini frames can be used as floating panes, depending on the ``AUI_MGR_USE_NATIVE_MINIFRAMES`` style;
(w) A "smooth docking effect" can be obtained by using the ``AUI_MGR_SMOOTH_DOCKING`` style (similar to PyQT docking style);
(x) Implementation of "Movable" panes, i.e. a pane that is set as `Movable()` but not `Floatable()` can be dragged and docked
into a new location but will not form a floating window in between.
- AuiNotebook:
(a) Implementation of the style ``AUI_NB_HIDE_ON_SINGLE_TAB``, a la :mod:`lib.agw.flatnotebook`;
(b) Implementation of the style ``AUI_NB_SMART_TABS``, a la :mod:`lib.agw.flatnotebook`;
(c) Implementation of the style ``AUI_NB_USE_IMAGES_DROPDOWN``, which allows to show tab images
on the tab dropdown menu instead of bare check menu items (a la :mod:`lib.agw.flatnotebook`);
(d) 6 different tab arts are available, namely:
(1) Default "glossy" theme (as in :class:`~auibook.AuiNotebook`)
(2) Simple theme (as in :class:`~auibook.AuiNotebook`)
(3) Firefox 2 theme
(4) Visual Studio 2003 theme (VC71)
(5) Visual Studio 2005 theme (VC81)
(6) Google Chrome theme
(e) Enabling/disabling tabs;
(f) Setting the colour of the tab's text;
(g) Implementation of the style ``AUI_NB_CLOSE_ON_TAB_LEFT``, which draws the tab close button on
the left instead of on the right (a la Camino browser);
(h) Ability to save and load perspectives in `AuiNotebook` (experimental);
(i) Possibility to add custom buttons in the `AuiNotebook` tab area;
(j) Implementation of the style ``AUI_NB_TAB_FLOAT``, which allows the floating of single tabs.
Known limitation: when the notebook is more or less full screen, tabs cannot be dragged far
enough outside of the notebook to become floating pages;
(k) Implementation of the style ``AUI_NB_DRAW_DND_TAB`` (on by default), which draws an image
representation of a tab while dragging;
(l) Implementation of the `AuiNotebook` unsplit functionality, which unsplits a split `AuiNotebook`
when double-clicking on a sash;
(m) Possibility to hide all the tabs by calling `HideAllTabs`;
(n) wxPython controls can now be added inside page tabs by calling `AddControlToPage`, and they can be
removed by calling `RemoveControlFromPage`;
(o) Possibility to preview all the pages in a `AuiNotebook` (as thumbnails) by using the `NotebookPreview`
method of `AuiNotebook`;
(p) Tab labels can be edited by calling the `SetRenamable` method on a `AuiNotebook` page;
(q) Support for multi-line tab labels in `AuiNotebook`;
(r) Support for setting minimum and maximum tab widths for fixed width tabs;
(s) Implementation of the style ``AUI_NB_ORDER_BY_ACCESS``, which orders the tabs by last access time
inside the Tab Navigator dialog;
(t) Implementation of the style ``AUI_NB_NO_TAB_FOCUS``, allowing the developer not to draw the tab
focus rectangle on the `AuiNotebook` tabs.
|
- AuiToolBar:
(a) ``AUI_TB_PLAIN_BACKGROUND`` style that allows an easy setup of a plain background for the AUI toolbar,
without the need to override drawing methods. This style contrasts with the default behaviour
of :class:`~auibar.AuiToolBar`, which draws a background gradient; that gradient breaks the window design when
putting the toolbar within a control that has a margin between the borders and the toolbar (example: put
:class:`~auibar.AuiToolBar` within a :class:`StaticBoxSizer` that has a plain background);
(b) `AuiToolBar` allow item alignment: http://trac.wxwidgets.org/ticket/10174;
(c) `AUIToolBar` `DrawButton()` improvement: http://trac.wxwidgets.org/ticket/10303;
(d) `AuiToolBar` automatically assign new id for tools: http://trac.wxwidgets.org/ticket/10173;
(e) `AuiToolBar` Allow right-click on any kind of button: http://trac.wxwidgets.org/ticket/10079;
(f) `AuiToolBar` idle update only when visible: http://trac.wxwidgets.org/ticket/10075;
(g) Ability to create `AuiToolBar` tools with [counter]clockwise rotation. This makes it possible to offer a
variant of the minimizing functionality with a rotated button which keeps the caption of the pane
as its label;
(h) Allow setting the alignment of all tools in a toolbar that is expanded;
(i) Implementation of the ``AUI_MINIMIZE_POS_TOOLBAR`` flag, which allows to minimize a pane inside
an existing toolbar. Limitation: if the minimized icon in the toolbar ends up in the overflowing
items (i.e., a menu is needed to show the icon), this style will not work.
TODOs
=====
- Documentation, documentation and documentation;
- Fix `tabmdi.AuiMDIParentFrame` and friends, they do not work correctly at present;
- Allow specification of `CaptionLeft()` to `AuiPaneInfo` to show the caption bar of docked panes
on the left instead of on the top (with caption text rotated by 90 degrees then). This is
similar to what `wxDockIt` did - DONE;
- Make developer-created `AuiNotebooks` and automatic (framemanager-created) `AuiNotebooks` behave
the same way (undocking of tabs) - DONE, to some extent;
- Find a way to dock panes in already floating panes (`AuiFloatingFrames`), as they already have
their own `AuiManager`;
- Add more gripper styles (see, i.e., PlusDock 4.0);
- Add an "AutoHide" feature to docked panes, similar to fly-out floating panes (see, i.e., PlusDock 4.0);
- Add events for panes when they are about to float or to be docked (something like
``EVT_AUI_PANE_FLOATING/ED`` and ``EVT_AUI_PANE_DOCKING/ED``) - DONE, to some extent;
- Implement the 4-ways splitter behaviour for horizontal and vertical sashes if they intersect;
- Extend `tabart.py` with more aui tab arts;
- Implement ``AUI_NB_LEFT`` and ``AUI_NB_RIGHT`` tab locations in `AuiNotebook`;
- Move `AuiDefaultToolBarArt` into a separate module (as with `tabart.py` and `dockart.py`) and
provide more arts for toolbars (maybe from :mod:`lib.agw.flatmenu`?)
- Support multiple-rows/multiple columns toolbars;
- Integrate as much as possible with :mod:`lib.agw.flatmenu`, from dropdown menus in `AuiNotebook` to
toolbars and menu positioning;
- Possibly handle minimization of panes in a different way (or provide an option to switch to
another way of minimizing panes);
- Clean up/speed up the code, especially time-consuming for-loops;
- Possibly integrate `wxPyRibbon` (still on development), at least on Windows.
License And Version
===================
AUI library is distributed under the wxPython license.
Latest Revision: NAME @ 09 Jan 2014, 23.00 GMT
Version 1.3.
""" |
# -*- encoding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2009 Veritos - NAME - www.veritos.nl
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company like Veritos.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
##############################################################################
#
# This module works in OpenERP 5.0.0 (and probably higher).
# This module does not work in OpenERP version 4 and lower.
#
# Status 1.0 - tested on OpenERP 5.0.3
#
# Version IP_ADDRESS
# account.account.type
# Laid the foundation for all account types.
#
# account.account.template
# Laid the foundation with all required general ledger accounts, which are
# linked via a menu structure to sections 1 through 9.
# The general ledger accounts are linked to the account.account.type.
# These links still need to be checked over carefully.
#
# account.chart.template
# Laid the foundation for linking accounts to receivables, payables,
# bank, purchase and sales journals, and the VAT configuration.
#
# Version IP_ADDRESS
# account.tax.code.template
# Laid the foundation for the VAT configuration (structure).
# Used the VAT return form as a basis. Whether this works remains to be seen.
#
# account.tax.template
# Created the VAT accounts and linked them to the corresponding
# general ledger accounts.
#
# Version IP_ADDRESS
# Cleaned up the code and removed unused components.
# Version IP_ADDRESS
# Changed a_expense from 3000 -> 7000.
# Set record id='btw_code_5b' to a negative value.
# Version IP_ADDRESS
# VAT accounts were given a type indication for purchase or sale.
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Corrected a small error in l10n_nl_wizard.xml that prevented the module from installing completely.
# Version IP_ADDRESS
# Properly defined Account Receivable and Payable.
# Version IP_ADDRESS
# Properly defined all user_type_xxx fields.
# Removed construction- and garage-specific ledgers in order to create a standard module.
# This module can then be used as a basis for creating modules for specific target groups.
# Version IP_ADDRESS
# Corrected account 7010 (it duplicated 7014, which caused the installation to go wrong).
# Version IP_ADDRESS
# Corrected various account types from user_type_asset -> user_type_liability and user_type_equity.
# Version IP_ADDRESS
# Small correction to 'VAT receivable, high rate': the id was the same for both, so 'high' was overwritten by
# 'other'. Clarified the descriptions in the tax codes for the VAT return overview.
# Version IP_ADDRESS
# Adjusted the VAT descriptions so that reports look better. Removed 2a, 5b, and the like, and added some descriptions.
# Version IP_ADDRESS - Switch to English
# Added properties_stock_xxx accounts for correct stock valuation, changed 7000-accounts from type cash to type expense
# Changed naming of 7020 and 7030 to Kostprijs omzet xxxx
|
#!/usr/bin/env python
#
# MAGMA (version 2.1.0) --
# Univ. of Tennessee, Knoxville
# Univ. of California, Berkeley
# Univ. of Colorado, Denver
# @date August 2016
## @file
# @author NAME
#
# Script to run testers with various matrix sizes.
#
# See also the run_summarize.py script, which post-processes the output,
# sorting it into errors (segfaults, etc.), accuracy failures, and known failures.
# run_summarize.py can apply a different (larger) tolerance without re-running
# the tests.
#
# Small sizes are chosen around block sizes (e.g., 30...34 around 32) to
# detect bugs that occur at the block size, and the switch over from
# LAPACK to MAGMA code.
# Tall and wide sizes are chosen to exercise different aspect ratios,
# e.g., nearly square, 2:1, 10:1, 1:2, 1:10.
# The -h or --help option provides a summary of the options.
#
# Non-interactive vs. interactive mode
# ------------------------------------
# When output is redirected to a file, it runs in non-interactive mode, printing a
# short summary to stderr on the console and all other output to the file.
# For example:
#
# ./run_tests.py --lu --precision s --small > lu.txt
# testing_sgesv_gpu -c ok
# testing_sgetrf_gpu -c2 ok
# testing_sgetf2_gpu -c ok
# testing_sgetri_gpu -c ** 45 tests failed
# testing_sgetrf_mgpu -c2 ok
# testing_sgesv -c ok
# testing_sgetrf -c2 ok
#
# ****************************************************************************************************
# summary
# ****************************************************************************************************
# 282 tests in 7 commands passed
# 45 tests failed accuracy test
# 0 errors detected (crashes, CUDA errors, etc.)
# routines with failures:
# testing_sgetri_gpu -c
#
# When output is to console (tty), it runs in interactive mode, pausing after
# each test. At the pause, typing "M" re-makes and re-runs that tester,
# while typing enter goes to the next tester.
# For example (some output suppressed with ... for brevity):
#
# ./run_tests.py --lu --precision s --small
# ****************************************************************************************************
# ./testing_sgesv_gpu -c --range 1:20:1 ...
# ****************************************************************************************************
# N NRHS CPU Gflop/s (sec) GPU Gflop/s (sec) ||B - AX|| / N*||A||*||X||
# ================================================================================
# 1 1 --- ( --- ) 0.00 ( 0.00) 9.26e-08 ok
# 2 1 --- ( --- ) 0.00 ( 0.00) 1.32e-08 ok
# 3 1 --- ( --- ) 0.00 ( 0.00) 8.99e-09 ok
# ...
# ok
# [enter to continue; M to make and re-run]
#
# ****************************************************************************************************
# ./testing_sgetri_gpu -c --range 1:20:1 ...
# ****************************************************************************************************
# % MAGMA 1.4.0 svn compiled for CUDA capability >= 3.0
# % CUDA runtime 6000, driver 6000. MAGMA not compiled with OpenMP.
# % device 0: GeForce GT 750M, 925.5 MHz clock, 2047.6 MB memory, capability 3.0
# Usage: ./testing_sgetri_gpu [options] [-h|--help]
#
# N CPU Gflop/s (sec) GPU Gflop/s (sec) ||R||_F / (N*||A||_F)
# =================================================================
# 1 0.00 ( 0.00) 0.00 ( 0.00) 6.87e+01 failed
# 2 0.00 ( 0.00) 0.00 ( 0.00) 2.41e+00 failed
# 3 0.01 ( 0.00) 0.00 ( 0.00) 1.12e+00 failed
# ...
# ** 45 tests failed
# [enter to continue; M to make and re-run]
#
# ...
#
# ****************************************************************************************************
# summary
# ****************************************************************************************************
# 282 tests in 7 commands passed
# 45 tests failed accuracy test
# 0 errors detected (crashes, CUDA errors, etc.)
# routines with failures:
# testing_sgetri_gpu -c
#
#
# What tests are run
# ------------------
# The --blas, --aux, --chol, --hesv, --lu, --qr, --syev, --sygv, --geev, --svd,
# --batched options run particular sets of tests. By default, all tests are run,
# except batched because we don't want to run batched with, say, N=1000.
# --mgpu runs only multi-GPU tests from the above sets.
# These may be negated with --no-blas, --no-aux, etc.
#
# The --start option skips all testers before the given one, then continues
# with testers from there. This is helpful to restart a non-interactive set
# of tests. For example:
#
# ./run_tests.py --start testing_spotrf > output.log
#
# If specific testers are named on the command line, only those are run.
# For example:
#
# ./run_tests.py testing_spotrf testing_sgetrf
#
# The -p/--precision option controls what precisions are tested, the default
# being "sdcz" for all four precisions. For example, to run single and double:
#
# ./run_tests.py -p sd
#
# The -s/--small, -m/--medium, -l/--large options control what sizes are tested,
# the default being all three sets.
# -s/--small does small tests, N < 300.
# -m/--medium does medium tests, N < 1000.
# -l/--large does large tests, N > 1000.
# For example, running small and medium tests:
#
# ./run_tests.py -s -m
#
# Specific tests can be chosen using --itype, --version, -U/--upper, -L/--lower,
# -J/--jobz, -D/--diag, and --fraction. For instance:
#
# ./run_tests.py testing_ssygvdx_2stage -L -JN --itype 1 -s --no-mgpu
#
#
# What is checked
# ------------------
# The --memcheck option runs cuda-memcheck. This is very helpful for finding
# memory bugs (reading & writing outside allocated memory). It is, however, slow.
#
# The --tol option sets the tolerance to verify accuracy. This is 30 by default,
# which may be too tight for some testers. Setting it somewhat higher
# (e.g., 50 or 100) filters out spurious accuracy failures. Also see the
# run_summarize.py script, which parses the testers output and can filter out
# tests using a higher tolerance after the fact, without re-running them.
#
# Run with default tolerance tol=30.
#
# ./run_tests.py -s -m testing_sgemv > run-gemv.txt
# testing_sgemv -c ** 7 tests failed
# testing_sgemv -T -c ok
# testing_sgemv -C -c ok
#
# ****************************************************************************************************
# summary
# ****************************************************************************************************
# 302 tests in 3 commands passed
# 7 tests failed accuracy test
# 0 errors detected (crashes, CUDA errors, etc.)
# routines with failures:
# testing_sgemv -c
#
# Post-process with tolerance tol2=100. Numbers in {braces} are ratio = error/epsilon, which should be < tol.
# Here, the ratio is just slightly larger {31.2 to 37.4} than the default tol=30.
#
# ./run_summarize.py --tol2 100 run-gemv.txt
# single epsilon 5.96e-08, tol2 100, tol2*eps 5.96e-06, 30*eps 1.79e-06, 100*eps 5.96e-06, 1000*eps 5.96e-05
# double epsilon 1.11e-16, tol2 100, tol2*eps 1.11e-14, 30*eps 3.33e-15, 100*eps 1.11e-14, 1000*eps 1.11e-13
# ########################################################################################################################
# okay tests: 3 commands, 302 tests
#
#
# ########################################################################################################################
# errors (segfault, etc.): 0 commands, 0 tests
#
#
# ########################################################################################################################
# failed tests (error > tol2*eps): 0 commands, 0 tests
#
#
# ########################################################################################################################
# suspicious tests (tol2*eps > error > tol*eps): 1 commands, 7 tests
# ./testing_sgemv
# 63 10000 0.19 ( 6.73) 1.65 ( 0.76) 8.58 ( 0.15) 1.86e-06 { 31.2} 1.11e-06 { 18.6} suspect
# 64 10000 0.19 ( 6.73) 1.68 ( 0.76) 14.36 ( 0.09) 2.17e-06 { 36.4} 1.14e-06 { 19.1} suspect
# 65 10000 0.19 ( 6.72) 1.43 ( 0.91) 8.73 ( 0.15) 2.23e-06 { 37.4} 1.09e-06 { 18.3} suspect
# 31 10000 0.09 ( 6.70) 1.25 ( 0.49) 6.33 ( 0.10) 1.93e-06 { 32.4} 8.65e-07 { 14.5} suspect
# 32 10000 0.10 ( 6.68) 1.35 ( 0.47) 11.00 ( 0.06) 2.15e-06 { 36.1} 9.14e-07 { 15.3} suspect
# 33 10000 0.10 ( 6.72) 1.24 ( 0.53) 9.85 ( 0.07) 2.19e-06 { 36.7} 1.07e-06 { 18.0} suspect
# 10 10000 0.03 ( 6.58) 0.52 ( 0.39) 5.71 ( 0.04) 2.23e-06 { 37.4} 1.11e-06 { 18.6} suspect
#
#
#
# ########################################################################################################################
# known failures: 0 commands, 0 tests
#
#
# ########################################################################################################################
# ignored errors (e.g., malloc failed): 0 commands, 0 tests
#
#
# ########################################################################################################################
# other (lines that did not get matched): 0 commands, 0 tests
#
# The --dev option sets which GPU device to use.
#
# By default, a wide range of sizes and shapes (square, tall, wide) are tested,
# as applicable. The -N and --range options override these.
#
# For multi-GPU codes, --ngpu specifies the number of GPUs, default 2. Most
# testers accept --ngpu -1 to test the multi-GPU code on a single GPU.
# (Using --ngpu 1 will usually invoke the single-GPU code.)
|
""" enhpath.py - An object-oriented approach to file/directory operations.
Author: NAME <EMAIL>.
URL coming soon.
Derived from Jason Orendorff's path.py 2.0.4 (JOP) available at
http://www.jorendorff.com/articles/python/path.
Whereas JOP maintains strict API compatibility with its parent functions,
enhpath ("enhanced path") stresses convenience and conciseness in the caller's
code. It does this by combining related methods, encapsulating multistep
operations, and occasional magic behaviors. Enhpath requires Python 2.3 (JOP:
Python 2.2). Paths are subclasses of unicode, so all strings methods are
available too. Redundant methods like .basename() are moved to subclass
path_compat. Subclassable so you can add local methods. (JOP: not
subclassable because methods that create new paths call path() directly rather
than self.__class__().)
Constructors and class methods:
path('path/name') path object
path('') Used to generate subpaths relative to current
directory: path('').joinpath('a') => path('a')
path() Same as path('')
path.cwd() Same as path(os.getcwd())
(JOP: path.getcwd() is static method)
path.popdir(N=1) Pop Nth previous directory off path.pushed_dirs, chdir to
it, and log a debug message. IndexError if we fall off
the end of the list. See .chdir(). (JOP: no equiv.)
path.tempfile(suffix='', prefix=tempfile.template, dir=None, text=False)
Create a temporary file using tempfile.mkstemp, open it,
and return a tuple of (path, file object). The file
will not be automatically deleted.
'suffix': use this suffix; e.g., ".txt".
'prefix': use this prefix.
'dir': create in this directory (default system temp).
'text' (boolean): open in text mode.
(JOP: no equiv.)
path.tempdir(suffix='', prefix=tempfile.template, dir=None)
Create a temporary directory using tempfile.mkdtemp and
return its path. The directory will not be
automatically deleted. (JOP: no equivalent.)
path.tempfileobject(mode='w+b', bufsize=-1, suffix='',
prefix=tempfile.template, dir=None)
Return a file object pointing to an anonymous temporary
file. The file will automatically be destroyed when the
file object is closed or garbage collected. The file
will not be visible in the filesystem if the OS permits.
(Unix does.) This is a static method since it neither
creates nor uses a path object. The only reason it's in
this class is to put all the tempfile-creating methods
together. (JOP: no equiv.)
Chdir warnings: changing the current working directory via path.popdir(),
.chdir(), or os.chdir() does not adjust existing relative path objects, so
if they're relative to the old current directory they're now invalid. Changing
the directory is global to the runtime, so it's visible in all threads and
calling functions.
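For illustration only, a sketch of the directory-changing helpers (assuming, as the
path.popdir() entry above implies, that .chdir() pushes the previous directory onto
path.pushed_dirs):

    from enhpath import path      # assumed import; adjust to your installation

    start = path.cwd()            # remember where we are
    path('/tmp').chdir()          # change directory; the previous cwd is pushed
    # ... work with paths relative to /tmp ...
    path.popdir()                 # chdir back to 'start' and log a debug message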
Class attributes:
path.repr_as_str True to make path('a').__repr__() return 'a'. False
(default) to make it return 'path("a")'. Useful when
you have to dump lists of paths or dicts containing
paths for debugging. Changing this is visible in all
threads. (JOP: no equivalent.)
Instance attributes:
.parent Parent directory as path. Compare .ancestor().
path('a/b').parent => path('a').
path('b').parent => path('').
.name Filename portion as string.
path('a/filename.txt').name => 'filename.txt'.
.base Filename without extension. Compare .stripext().
path('a/filename.txt').base => 'filename'.
path('a/archive.tar.gz').base => 'archive.tar'.
(JOP: called .namebase).
.ext Extension only.
path('a/filename.txt').ext => '.txt'.
path('a/archive.tar.gz').ext => '.gz'.
Interaction with Python operators:
+ Simple concatenation.
path('a') + 'b' => path('ab').
'a' + path('b') => path('ab').
/ Same as .joinpath().
path('a') / 'b' => path('a/b').
path('a') / 'b' / 'c' => path('a/b/c').
Normalization methods:
.abspath() Convert to absolute path. Implies normpath on most platforms.
path('python2.4').abspath() => path('/usr/lib/python2.4').
.isabs() Is the path absolute?
.normcase() Does nothing on Unix. On case-insensitive filesystems, converts
to lowercase. On Windows, converts slashes to backslashes.
.normpath() Clean up .., ., redundant //, etc. On Windows, convert slashes
to backslashes. Python docs warn "this may change the meaning
of a path if it contains symbolic links!"
path('a/../b/./c//d').normpath() => path('b/c/d')
.realpath() Resolve symbolic links in path.
path('/home/joe').realpath() => path('/mnt/data/home/joe')
if /home is a symlink to /mnt/data/home.
.expand() Call expanduser, expandvars and normpath. This is commonly
everything you need to clean up a filename from a
configuration file.
.expanduser() Convert ~user to the user's home directory.
path('~joe/Mail').expanduser() => path('/home/joe/Mail')
path('~/.vimrc').expanduser() => path('/home/joe/.vimrc')
.expandvars() Resolve $ENVIRONMENT_VARIABLE references.
path('$HOME/Mail').expandvars() => path('/home/joe/Mail')
.relpath() Convert to relative path from current directory.
path('/home/joe/Mail') => path('Mail') if CWD is /home/joe.
.relpathto(dest) Return a relative path from self to dest. If there is
no relative path (e.g., they reside on different drives on
Windows), same as dest.abspath(). Dest may be a path or a
string.
.relpathfrom(ancestor) Chop off the front part of self that matches
ancestor.
path('/home/joe/Mail').relpathfrom('/home/joe') =>
path('Mail')
ValueError if self does not start with ancestor.
Deriving related paths:
.splitpath() Return a list of all directory/filename components. The
first item will be a path, either os.curdir, os.pardir, empty,
or the root directory of this path (for example, '/' or
'C:\\'). The other items will be strings.
path('/usr/local/bin') => [path('/'), 'usr', 'local', 'bin']
path('a/b/c.txt') => [path(''), 'a', 'b', 'c.txt']
(JOP: This is what .splitall() does. JOP's .splitpath()
returns (p.parent, p.name).)
(Note: not called .split() to avoid masking the string method
of that name.)
.splitext() Same as (p.stripext(), p.ext).
.stripext() Chop one extension off the path.
path('a/filename.txt').stripext() => path('a/filename')
.joinpath(*components) Join components with directory separator as necessary.
path('a').joinpath('b', 'c') => path('a/b/c')
path('a/').joinpath('b') => path('a/b')
Calling .splitpath() and .joinpath() produces the original
path.
(Note: not called .join() to avoid masking the string method
of that name.)
.ancestor(N) Chop N components off end, same as invoking .parent N times.
path('a/b/c').ancestor(2) => path('a')
(JOP: no equivalent method.)
.joinancestor(N, *components)
Combination of .ancestor() and .joinpath().
(JOP: no equivalent method.)
.redeploy(old_ancestor, new_ancestor)
Replace the old_ancestor part of self with new_ancestor.
Both may be paths or strings. old_ancestor *must* be an
ancestor of self; this is checked via absolute paths even
if the specified paths are relative. (Not implemented:
verifying it would be useful for things like Cheetah's --idir
and --odir options.) (JOP: no equivalent method.)
Listing directories:
Common arguments:
pattern, a glob pattern like "*.py". Limits the result to
matching filenames.
symlinks, False to exclude symbolic links from result.
Useful if you want to treat them separately. (JOP: no
equivalent argument.)
.listdir(pattern=None, symlinks=True, names_only=False)
List directory.
path('/').listdir() => [path('/bin'), path('/boot'), ...]
path('/').listdir(names_only=True) => ['bin', 'boot', ...]
If names_only is true, symlinks is false and pattern is None,
this is the same as os.listdir() and no path objects are
created. But if symlinks is true and there is a pattern,
it must create path objects to determine the return values,
and then turn them back to strings.
(JOP: No names_only argument.)
.dirs(pattern=None, symlinks=True)
List only the subdirectories in directory. Not recursive.
path('/usr/lib/python2.3').dirs() =>
[path('/usr/lib/python2.3/site-packages'), ...]
.files(pattern=None, symlinks=True)
List only the regular files in directory. Not recursive.
Does not list special files (anything for which
os.path.isfile() returns false).
path('/usr/lib/python2.3').dirs() =>
[path('/usr/lib/python2.3/BaseHTTPServer.py'), ...]
.symlinks(pattern=None)
List only the symbolic links in directory. Not recursive.
path('/').symlinks() => [path('/home')] if it's a symlink.
(JOP: no equivalent method.)
.walk(pattern=None, symlinks=True)
Iterate over files and subdirs recursively. The search is
depth-first. Each directory is returned just before its
children. Returns an iteration, not a list.
.walkdirs(pattern=None, symlinks=True)
Same as .walk() but yield only directories.
.walkfiles(pattern=None, symlinks=True)
Same as .walk() but yield only regular files. Excludes
special files (anything for which os.path.isfile() returns
false).
.walksymlinks(pattern=None)
Same as .walk() but yield only symbolic links.
(JOP: no equivalent method.)
.findpaths(args=None, ls=False, **kw)
Run the Unix 'find' command and return an iteration of paths.
The argument signature matches find's most popular arguments.
Due to Python's handling of keyword arguments, there are
some limitations:
- You can't specify the same argument multiple times.
- The argument order may be rearranged.
- You can't do an 'or' expression or a 'brace' expression.
- Not all 'find' operations are implemented.
Special syntaxes:
- mtime=(N, N)
Converted to two -mtime options, used to specify a range.
Normally the first arg is negative and the second
positive. Same for atime, ctime, mmin, amin, cmin.
- name=[pattern1, pattern2, ...]
Converted to '-lbrace -name pattern1 -o ... -rbrace'.
Value may be list or tuple.
There are also some other arguments:
- args, list or string, appended to the shell command line.
Useful to do things the keyword args can't. Note that
if value is a string, it is split on whitespace.
- ls, boolean, true to yield one-line strings describing
the files, same as find's '-ls' option. Does not yield
paths.
- pretend, boolean, don't run the command, just return it
as a string. Useful only for debugging. We try to
handle quoting intelligently but there's no guarantee
we'll produce a valid or correct command line. If your
argument values have quotes, spaces, or newlines, use
pretend=True and verify the command line is correct,
otherwise you may have unexpected problems. If 'pretend'
is False (default), the subcommand is logged to the
'enhpath' logger, level debug. See Python's 'logging'
module for details.
Examples:
.find(name='*.py')
.find(type='d', ls=True)
.find(mtime=-1, type='f')
(JOP: no equivalent method.)
WARNING: Normally we bypass the shell to avoid quoting
problems. However, if 'args' is a string or we're running on
Python 2.3, we can't avoid the shell. Argument values
containing spaces, quotes, or newlines may be misinterpreted
by the shell. This can lead to a syntax error or to an
incorrect search. When in doubt, use the 'pretend' argument
to verify the command line is correct.
path('').find(...) yields paths relative to the current
directory. In this case, 'find' on posix returns paths
prefixed with "./", so we chop off this prefix. We don't
call .normpath() because of its fragility with symbolic
links. On other platforms we don't clean up the paths
because we don't know how.
.findpaths_pretend(args=None, ls=False, **kw)
Same as .find(...) above but don't actually do the find;
instead, return the external command line as a list of
strings. Useful for debugging.
.fnmatch(pattern) Return True if self.name matches the pattern.
.glob(pattern) Return a list of paths that match the pattern.
path('a').glob('*.py') => Same as path('a').listdir('*.py')
path('a').glob('*/bin/*') => List of files all users have
in their bin directories.
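As an illustration of the listing and searching methods above, a minimal
sketch (the directory names and patterns are hypothetical):
    for script in path('/etc/init.d').files('*.sh'):
        print(script)
    for pyfile in path('src').walkfiles('*.py'):
        print(pyfile)
    for stale in path('var').findpaths(type='f', mtime=+30):
        print(stale)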
# Reading/writing files
.open(mode='r') Open file and return a file object.
.file(mode='r') Same.
.bytes(mode='r') Read the file in binary mode and return content as string.
.write_bytes(bytes, append=False)
Write 'bytes' (string) to the file. Overwrites the file
unless 'append' is true.
.text(encoding=None, errors='strict')
Read the file in text mode and return content as string.
'encoding' is a Unicode encoding/character set. If
present, the content is returned as a unicode object;
otherwise it's returned as an 8-bit string.
'errors' is an argument for str.decode().
.write_text(text, encoding=None, errors='strict', linesep=os.linesep,
append=False)
Write 'text' (string) to the file in text mode. Overwrites
the file unless 'append' (keyword arg) is true.
'encoding' (string) is the unicode encoding. Ignored if
text is string type rather than unicode type.
'errors' is an argument for unicode.encode().
'linesep' (keyword arg) the chars to write for newline.
None means don't convert newlines. The default is your
platform's preferred convention.
.lines(encoding=None, errors='strict', retain=True)
Read the file in text mode and return the lines as a list.
'encoding' and 'errors' are same as for .text().
'retain' (boolean) If true, convert all newline types to
'\n'. If false, chop off newlines.
The open mode is 'U'.
To iterate over the lines, use the filehandle returned by
.open() as an iterator.
.writelines(lines, encoding=None, errors='strict', linesep=os.linesep,
append=False)
Write the lines (list) to the file.
The other args are the same as for .write_text().
When appending, use the same Unicode encoding the original
text was written in, otherwise the reader will be very
confused.
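A small read/write sketch using the methods above (the file name and
contents are illustrative):
    cfg = path('settings.txt')
    cfg.write_text('debug = true\n')
    cfg.write_text('verbose = false\n', append=True)
    for line in cfg.lines(retain=False):
        print(line)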
Checking file existence/type:
.exists() Does the path exist?
.isdir() Is the path a directory?
.isfile() Is the path a regular file?
.islink() Is the path a symbolic link?
.ismount() Is the path a mount point?
.isspecial() Is the path a special file?
.type() Return the file type using the one-letter codes from the
'find' command:
'f' => regular file (path.FILE)
'd' => directory (path.DIR)
'l' => symbolic link (path.LINK)
'b' => block special file (path.BLOCK)
'c' => character special file (path.CHAR)
'p' => named pipe/FIFO (path.PIPE)
's' => socket (path.SOCKET)
'D' => Door (Solaris) (path.DOOR)
None => unknown
The constants at the right are class attributes if you
prefer to compare to them instead of literal chars. You'll
never get a 'D' in the current implementation since the
'stat' module provides no way to test for a Door.
path.SPECIAL_TYPES is a list of the latter five types.
All the .is*() functions return False if the path doesn't exist. All except
.islink() return the state of the pointed-to file if the path is a symbolic
link. .isfile() returns False for a special file. To test a special file's
type, pass .stat().st_mode to one of the S_*() functions in the 'stat' module;
this is more efficient than .isspecial() when you only care about one type.
Checking permissions and other information:
.stat() Get general file information; see os.stat().
.lstat() Same as .stat() but don't follow a symbolic link.
.statvfs() Get general information about the filesystem; see
os.statvfs().
.samefile(other) Is self and other the same file? Returns True if one is
a symbolic or hard link to the other. 'other' may be a
path or a string.
.pathconf(name) See os.pathconf(); OS-specific info about the path.
.canread() Can we read the file? (JOP: no equivalent.)
.canwrite() Can we write the file? (JOP: no equivalent.)
.canexecute() Can we execute the file? True for a directory means we
can chdir into it or access its contents.
(JOP: no equivalent.)
.access(mode) General permission test; see os.access() for usage.
Modifying file information:
.utime(times) Set file access and modification time.
'time' is either None to set them to the current time, or
a tuple of (atime, mtime) -- integers in tick
format (the same format returned by time.time()).
.getutime() Return a tuple of (atime, mtime). This can be passed
directly to another path's .utime(). (JOP: no equiv.)
.copyutimefrom(other) Make my atime/mtime match the other path's.
'other' may be a path or a string. (JOP: no equiv.)
.copyutimeto(*other) Make the other paths' atime/mtime match mine.
Note that multiple others can be specified, unlike
.copyutimefrom(). (JOP: no equiv.)
.itercopyutimeto(iterpaths) Same as .copyutimeto() but use an iterable to
specify the destination paths. (JOP: no equiv.)
.chmod(mode) Set the path's permissions to 'mode' (octal int).
There are several constants in the 'stat' module you can
use; use the '|' operator to combine them.
.grant(mode) Add 'mode' to the file's current mode. (Uses '|'.)
.revoke(mode) Subtract 'mode' from the file's current mode. (Uses '&'.)
.chown(uid=None, gid=None) Change the path's owner/group.
If uid or gid is a string, look up the corresponding
number via the 'pwd' or 'group' module.
(JOP: both uid and gid must be specified, and both must
be numeric.)
.needsupdate(*other)
True if the path doesn't exist or its mtime is older
than any of the others. If any 'other' is a directory,
only the directory mtime will be compared; this method
does not recurse. A directory's mtime changes when a
file in it is added, removed, or renamed. To do the
equivalent of iteration, see .iterneedsupdate().
(JOP: no equivalent method.)
.iterneedsupdate(iterpaths)
Same as .needsupdate() but use an iterable to
specify the other paths. To do the equivalent of a
recursive compare, call .walkfiles() on the other
directories and concatenate the iterators using
itertools.chain, interspersing any static lists of paths
you wish. (JOP: no equivalent method.)
Moving files and directories:
.move(dest, overwrite=True, prune=False, atomic=False)
Move the file or directory to 'dest'.
Tries os.rename() or os.renames() first, falls back to
shutil.move() if it fails.
If 'overwrite' is false, raise OverwriteError if dest
exists.
Creates any ancestor directories of dest if missing.
If 'prune' is true, delete any empty ancestor
directories of the source after the move.
If 'atomic' is true and .rename*() fails, don't catch the
OSError and don't try shutil.move(). This guarantees that
if the move succeeds, it's an atomic operation. This will
fail if the two paths are on different filesystems, and
may fail if the source is a directory.
(JOP: this combines the functionality of .rename(),
.renames(), and .move().)
.movefile(dest, overwrite=True, prune=False, atomic=False, checkdest=False)
Same as .move() but raise FileTypeError if path is a
directory rather than a file.
'checkdest' (boolean) True to fail if dest is a directory.
(JOP: no equivalent method.)
.movedir(dest, overwrite=True, prune=False, atomic=False, checkdest=False)
Same as .move() but raise FileTypeError if path is a file
rather than a directory.
'checkdest' (boolean) True to fail if dest is a file.
(JOP: no equivalent method.)
Creating directories and (empty) files:
.mkdir(mode=0777, clobber=False)
Create an empty directory.
If 'clobber' is true, call .delete_dammit() first. Otherwise
you'll get OSError if a file exists in its place.
Silently succeed if the directory exists and clobber is false.
Creates any missing ancestor directories.
(JOP: this is equivalent to .makedirs() rather than
.makedir(), except you'll get OSError if a directory or file
exists.)
.touch() Create a file if it doesn't exist. If a file or directory does
exist, set its atime and mtime to the current time -- same as
.utime(None).
Deleting directories and files:
.delete_dammit()
Delete path recursively whatever it is, and don't complain if
it doesn't exist. Convenient but dangerous!
(JOP: combines .rmtree(), .rmdir(), and .remove(), plus unique
features.)
.rmdir(prune=False)
Delete a directory.
Silently succeeds if it doesn't exist. OSError if it's a file
or symbolic link. See .delete_dammit().
If 'prune' is true, also delete any empty ancestor
directories.
(JOP: equivalent to .removedirs() if prune is true, or
.rmdir() if prune is false, except the JOP methods don't have
a 'prune' argument, and they raise OSError if the directory
doesn't exist.)
.remove(prune=False)
Delete a file.
Silently succeeds if it doesn't exist. OSError if it's a
directory.
If 'prune' is true, delete any empty ancestor directories.
(JOP: equivalent to .remove() if prune is false, except JOP
method has no 'prune' arg. Raises OSError if it doesn't
exist.)
.unlink(prune=False) Same as .remove().
Links:
.hardlink(source)
Create a hard link at 'source' pointing to this path.
(JOP: equivalent to .link().)
.symlink(source)
Create a symbolic link at 'source' pointing to this path.
If path is relative, it should be relative to source's
directory, though it need not resolve relative to the
current working directory.
.readlink() Return the path this symbolic link points to.
.readlinkabs() Same as .readlink() but always return an absolute path.
Copying files and directories:
.copy(dest, copy_times=True, copy_mode=False, symlinks=True)
Copy a file or (recursively) a directory.
If 'copy_times' is true (default), copy the atime/mtime too.
If 'copy_mode' is true (not default), copy the permission
bits too.
If 'symlinks' is true (default), create symbolic links in
dest corresponding to those in path (using shutil.copytree,
which does not claim infallibility).
(JOP: combines .copy(), .copy2(), .copytree().)
.copymode(dest)
Copy path's permission bits to dest (but not its content).
.copystat(dest)
Copy path's permission bits and atime/mtime to dest (but not
its content or owner/group). Overlaps with .copyutimeto().
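For example, a brief sketch of moving and copying (the paths are
illustrative; behavior is as described above):
    path('/var/log/myapp').copy('/backup/myapp-logs')   # recursive, keeps atime/mtime
    path('report.txt').movefile('archive/report.txt', prune=False)
    path('archive').copymode('/backup/myapp-logs')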
Modifying the runtime environment:
.chdir(push=False)
Set the current working directory to path.
If 'push' is true, push the old current directory onto
path.pushed_dirs (class attribute) and log a debug message.
Note that pushing is visible to all threads and calling
functions. (JOP: no equiv.)
.chroot() Set this process's root directory to path.
Subclass path_windows(path): # Windows-only operations.
.drive Drive specification.
path('C:\COMMAND.COM') => 'C:' on Windows, '' on Unix.
.splitdrive() Same as (p.drive, path(p.<everything else>))
.splitunc() Same as (p.uncshare, path(p.<everything else>)
.uncshare The UNC mount point, empty for local drives. UNC files are
in \\host\path syntax.
.startfile() Launch path as an autonomous program (Windows only).
Subclass path_compat(path_windows): # Redundant JOP methods.
.namebase() Same as .base.
.basename() Same as .name.
.dirname() Same as .parent.
.getatime() Same as .atime.
.getmtime() Same as .mtime.
.getctime() Same as .ctime.
.getsize() Same as .size.
# JOP has the following TODO, which I suppose applies here too:
# - Bug in write_text(). It doesn't support Universal newline mode.
# - Better error message in listdir() when self isn't a
# directory. (On Windows, the error message really sucks.)
# - Make sure everything has a good docstring.
# - Add methods for regex find and replace.
# - guess_content_type() method?
# - Perhaps support arguments to touch().
# - Could add split() and join() methods that generate warnings.
# - Note: __add__() technically has a bug, I think, where
# it doesn't play nice with other types that implement
# __radd__(). Test this.
""" |
"""
``pretty_midi`` contains utility functions/classes for handling MIDI data,
so that it's in a format from which it is easy to modify and extract
information.
If you end up using ``pretty_midi`` in a published research project, please
cite the following report:
Colin NAME and NAME Analysis, Creation and Manipulation of MIDI Data with pretty_midi
<http://colinraffel.com/publications/ismir2014intuitive.pdf>`_.
In 15th International Conference on Music Information Retrieval Late Breaking
and Demo Papers, 2014.
Example usage for analyzing, manipulating and synthesizing a MIDI file:
.. code-block:: python
import pretty_midi
# Load MIDI file into PrettyMIDI object
midi_data = pretty_midi.PrettyMIDI('example.mid')
# Print an empirical estimate of its global tempo
print(midi_data.estimate_tempo())
# Compute the relative amount of each semitone across the entire song,
# a proxy for key
total_velocity = sum(sum(midi_data.get_chroma()))
print([sum(semitone)/total_velocity for semitone in midi_data.get_chroma()])
# Shift all notes up by 5 semitones
for instrument in midi_data.instruments:
    # Don't want to shift drum notes
    if not instrument.is_drum:
        for note in instrument.notes:
            note.pitch += 5
# Synthesize the resulting MIDI data using sine waves
audio_data = midi_data.synthesize()
Example usage for creating a simple MIDI file:
.. code-block:: python
import pretty_midi
# Create a PrettyMIDI object
cello_c_chord = pretty_midi.PrettyMIDI()
# Create an Instrument instance for a cello instrument
cello_program = pretty_midi.instrument_name_to_program('Cello')
cello = pretty_midi.Instrument(program=cello_program)
# Iterate over note names, which will be converted to note number later
for note_name in ['C5', 'E5', 'G5']:
    # Retrieve the MIDI note number for this note name
    note_number = pretty_midi.note_name_to_number(note_name)
    # Create a Note instance, starting at 0s and ending at .5s
    note = pretty_midi.Note(
        velocity=100, pitch=note_number, start=0, end=.5)
    # Add it to our cello instrument
    cello.notes.append(note)
# Add the cello instrument to the PrettyMIDI object
cello_c_chord.instruments.append(cello)
# Write out the MIDI data
cello_c_chord.write('cello-C-chord.mid')
Further examples can be found in the source tree's `examples directory
<https://github.com/craffel/pretty-midi/tree/master/examples>`_.
``pretty_midi.PrettyMIDI``
==========================
.. autoclass:: PrettyMIDI
:members:
:undoc-members:
``pretty_midi.Instrument``
==========================
.. autoclass:: Instrument
:members:
:undoc-members:
``pretty_midi.Note``
====================
.. autoclass:: Note
:members:
:undoc-members:
``pretty_midi.PitchBend``
=========================
.. autoclass:: PitchBend
:members:
:undoc-members:
``pretty_midi.ControlChange``
=============================
.. autoclass:: ControlChange
:members:
:undoc-members:
``pretty_midi.TimeSignature``
=============================
.. autoclass:: TimeSignature
:members:
:undoc-members:
``pretty_midi.KeySignature``
============================
.. autoclass:: KeySignature
:members:
:undoc-members:
``pretty_midi.Lyric``
=====================
.. autoclass:: Lyric
:members:
:undoc-members:
Utility functions
=================
.. autofunction:: key_number_to_key_name
.. autofunction:: key_name_to_key_number
.. autofunction:: mode_accidentals_to_key_number
.. autofunction:: key_number_to_mode_accidentals
.. autofunction:: qpm_to_bpm
.. autofunction:: note_number_to_hz
.. autofunction:: hz_to_note_number
.. autofunction:: note_name_to_number
.. autofunction:: note_number_to_name
.. autofunction:: note_number_to_drum_name
.. autofunction:: drum_name_to_note_number
.. autofunction:: program_to_instrument_name
.. autofunction:: instrument_name_to_program
.. autofunction:: program_to_instrument_class
.. autofunction:: pitch_bend_to_semitones
.. autofunction:: semitones_to_pitch_bend
""" |
"""Configuration file parser.
A configuration file consists of sections, led by a "[section]" header,
and followed by "name: value" entries, with continuations and such in
the style of RFC 822.
Intrinsic defaults can be specified by passing them into the
ConfigParser constructor as a dictionary.
class:
ConfigParser -- responsible for parsing a list of
configuration files, and managing the parsed database.
methods:
__init__(defaults=None, dict_type=_default_dict, allow_no_value=False,
delimiters=('=', ':'), comment_prefixes=('#', ';'),
inline_comment_prefixes=None, strict=True,
empty_lines_in_values=True):
Create the parser. When `defaults' is given, it is initialized into the
dictionary of intrinsic defaults. The keys must be strings, the values
must be appropriate for %()s string interpolation.
When `dict_type' is given, it will be used to create the dictionary
objects for the list of sections, for the options within a section, and
for the default values.
When `delimiters' is given, it will be used as the set of substrings
that divide keys from values.
When `comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in empty lines. Comments can be
indented.
When `inline_comment_prefixes' is given, it will be used as the set of
substrings that prefix comments in non-empty lines.
When `strict` is True, the parser won't allow for any section or option
duplicates while reading from a single source (file, string or
dictionary). Default is True.
When `empty_lines_in_values' is False (default: True), each empty line
marks the end of an option. Otherwise, internal empty lines of
a multiline option are kept as part of the value.
When `allow_no_value' is True (default: False), options without
values are accepted; the value presented for these is None.
sections()
Return all the configuration section names, sans DEFAULT.
has_section(section)
Return whether the given section exists.
has_option(section, option)
Return whether the given option exists in the given section.
options(section)
Return list of configuration options for the named section.
read(filenames, encoding=None)
Read and parse the list of named configuration files, given by
name. A single filename is also allowed. Non-existing files
are ignored. Return list of successfully read files.
read_file(f, filename=None)
Read and parse one configuration file, given as a file object.
The filename defaults to f.name; it is only used in error
messages (if f has no `name' attribute, the string `<???>' is used).
read_string(string)
Read configuration from a given string.
read_dict(dictionary)
Read configuration from a dictionary. Keys are section names,
values are dictionaries with keys and values that should be present
in the section. If the used dictionary type preserves order, sections
and their keys will be added in order. Values are automatically
converted to strings.
get(section, option, raw=False, vars=None, fallback=_UNSET)
Return a string value for the named option. All % interpolations are
expanded in the return values, based on the defaults passed into the
constructor and the DEFAULT section. Additional substitutions may be
provided using the `vars' argument, which must be a dictionary whose
contents override any pre-existing defaults. If `option' is a key in
`vars', the value from `vars' is used.
getint(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to an integer.
getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a float.
getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
Like get(), but convert value to a boolean (currently case
insensitively defined as 0, false, no, off for False, and 1, true,
yes, on for True). Returns False or True.
items(section=_UNSET, raw=False, vars=None)
If section is given, return a list of tuples with (name, value) for
each option in the section. Otherwise, return a list of tuples with
(section_name, section_proxy) for each section, including DEFAULTSECT.
remove_section(section)
Remove the given file section and all its options.
remove_option(section, option)
Remove the given option from the given section.
set(section, option, value)
Set the given option.
write(fp, space_around_delimiters=True)
Write the configuration state in .ini format. If
`space_around_delimiters' is True (the default), delimiters
between keys and values are surrounded by spaces.
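A minimal usage sketch (assuming the Python 3 module name 'configparser';
the section and option names are illustrative):

    import configparser

    parser = configparser.ConfigParser()
    parser.read_dict({'server': {'host': 'localhost', 'port': '8080'}})
    host = parser.get('server', 'host')
    port = parser.getint('server', 'port')
    with open('example.ini', 'w') as fp:
        parser.write(fp)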
""" |
"""Exception classes for CherryPy.
CherryPy provides (and uses) exceptions for declaring that the HTTP response
should be a status other than the default "200 OK". You can ``raise`` them like
normal Python exceptions. You can also call them and they will raise
themselves; this means you can set an
:class:`HTTPError<cherrypy._cperror.HTTPError>`
or :class:`HTTPRedirect<cherrypy._cperror.HTTPRedirect>` as the
:attr:`request.handler<cherrypy._cprequest.Request.handler>`.
.. _redirectingpost:
Redirecting POST
================
When you GET a resource and are redirected by the server to another Location,
there's generally no problem since GET is both a "safe method" (there should
be no side-effects) and an "idempotent method" (multiple calls are no different
than a single call).
POST, however, is neither safe nor idempotent--if you
charge a credit card, you don't want to be charged twice by a redirect!
For this reason, *none* of the 3xx responses permit a user-agent (browser) to
resubmit a POST on redirection without first confirming the action with the
user:
===== ================================= ===========
300 Multiple Choices Confirm with the user
301 Moved Permanently Confirm with the user
302 Found (Object moved temporarily) Confirm with the user
303 See Other GET the new URI; no confirmation
304 Not modified for conditional GET only;
POST should not raise this error
305 Use Proxy Confirm with the user
307 Temporary Redirect Confirm with the user
308 Permanent Redirect No confirmation
===== ================================= ===========
However, browsers have historically implemented these restrictions poorly;
in particular, many browsers do not force the user to confirm 301, 302
or 307 when redirecting POST. For this reason, CherryPy defaults to 303,
which most user-agents appear to have implemented correctly. Therefore, if
you raise HTTPRedirect for a POST request, the user-agent will most likely
attempt to GET the new URI (without asking for confirmation from the user).
We realize this is confusing for developers, but it's the safest thing we
could do. You are of course free to raise ``HTTPRedirect(uri, status=302)``
or any other 3xx status if you know what you're doing, but given the
environment, we couldn't let any of those be the default.
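For example, a handler that explicitly opts into a 302 instead of the
default 303 might look like this (a minimal sketch; the handler name and
target URI are illustrative)::

    import cherrypy

    class Root:
        @cherrypy.expose
        def old_page(self):
            # Explicitly request a 302 rather than the default 303
            raise cherrypy.HTTPRedirect("/new-page", status=302)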
Custom Error Handling
=====================
.. image:: /refman/cperrors.gif
Anticipated HTTP responses
--------------------------
The 'error_page' config namespace can be used to provide custom HTML output for
expected responses (like 404 Not Found). Supply a filename from which the
output will be read. The contents will be interpolated with the values
%(status)s, %(message)s, %(traceback)s, and %(version)s using plain old Python
`string formatting
<http://docs.python.org/2/library/stdtypes.html#string-formatting-operations>`_.
::
_cp_config = {
    'error_page.404': os.path.join(localDir, "static/index.html")
}
Beginning in version 3.1, you may also provide a function or other callable as
an error_page entry. It will be passed the same status, message, traceback and
version arguments that are interpolated into templates::
def error_page_402(status, message, traceback, version):
    return "Error %s - Well, I'm very sorry but you haven't paid!" % status
cherrypy.config.update({'error_page.402': error_page_402})
Also in 3.1, in addition to the numbered error codes, you may also supply
"error_page.default" to handle all codes which do not have their own error_page
entry.
Unanticipated errors
--------------------
CherryPy also has a generic error handling mechanism: whenever an unanticipated
error occurs in your code, it will call
:func:`Request.error_response<cherrypy._cprequest.Request.error_response>` to
set the response status, headers, and body. By default, this is the same
output as
:class:`HTTPError(500) <cherrypy._cperror.HTTPError>`. If you want to provide
some other behavior, you generally replace "request.error_response".
Here is some sample code that shows how to display a custom error message and
send an e-mail containing the error::
from cherrypy import _cperror
def handle_error():
    cherrypy.response.status = 500
    cherrypy.response.body = [
        "<html><body>Sorry, an error occurred</body></html>"
    ]
    sendMail('EMAIL',
             'Error in your web app',
             _cperror.format_exc())

@cherrypy.config(**{'request.error_response': handle_error})
class Root:
    pass
Note that you have to explicitly set
:attr:`response.body <cherrypy._cprequest.Response.body>`
and not simply return an error message as a result.
""" |
"""Model and Property classes and associated stuff.
A model class represents the structure of entities stored in the
datastore. Applications define model classes to indicate the
structure of their entities, then instantiate those model classes
to create entities.
All model classes must inherit (directly or indirectly) from Model.
Through the magic of metaclasses, straightforward assignments in the
model class definition can be used to declare the model's structure:
class Person(Model):
    name = StringProperty()
    age = IntegerProperty()
We can now create a Person entity and write it to the datastore:
p = Person(name='Arthur NAME age=42)
k = p.put()
The return value from put() is a Key (see the documentation for
ndb/key.py), which can be used to retrieve the same entity later:
p2 = k.get()
p2 == p # Returns True
To update an entity, simply change its attributes and write it back
(note that this doesn't change the key):
p2.name = 'Arthur NAME
p2.put()
We can also delete an entity (by using the key):
k.delete()
The property definitions in the class body tell the system the names
and the types of the fields to be stored in the datastore, whether
they must be indexed, their default value, and more.
Many different Property types exist. Most are indexed by default, the
exceptions indicated in the list below:
- StringProperty: a short text string, limited to 500 bytes
- TextProperty: an unlimited text string; unindexed
- BlobProperty: an unlimited byte string; unindexed
- IntegerProperty: a 64-bit signed integer
- FloatProperty: a double precision floating point number
- BooleanProperty: a bool value
- DateTimeProperty: a datetime object. Note: App Engine always uses
UTC as the timezone
- DateProperty: a date object
- TimeProperty: a time object
- GeoPtProperty: a geographical location, i.e. (latitude, longitude)
- KeyProperty: a datastore Key value, optionally constrained to
referring to a specific kind
- UserProperty: a User object (for backwards compatibility only)
- StructuredProperty: a field that is itself structured like an
entity; see below for more details
- LocalStructuredProperty: like StructuredProperty but the on-disk
representation is an opaque blob; unindexed
- ComputedProperty: a property whose value is computed from other
properties by a user-defined function. The property value is
written to the datastore so that it can be used in queries, but the
value from the datastore is not used when the entity is read back
- GenericProperty: a property whose type is not constrained; mostly
used by the Expando class (see below) but also usable explicitly
- JsonProperty: a property whose value is any object that can be
serialized using JSON; the value written to the datastore is a JSON
representation of that object
- PickleProperty: a property whose value is any object that can be
serialized using Python's pickle protocol; the value written to the
datastore is the pickled representation of that object, using the
highest available pickle protocol
Most Property classes have similar constructor signatures. They
accept several optional keyword arguments:
- name=<string>: the name used to store the property value in the
datastore. Unlike the following options, this may also be given as
a positional argument
- indexed=<bool>: indicates whether the property should be indexed
(allowing queries on this property's value)
- repeated=<bool>: indicates that this property can have multiple
values in the same entity.
- required=<bool>: indicates that this property must be given a value
- default=<value>: a default value if no explicit value is given
- choices=<list of values>: a list or tuple of allowable values
- validator=<function>: a general-purpose validation function. It
will be called with two arguments (prop, value) and should either
return the validated value or raise an exception. It is also
allowed for the function to modify the value, but calling it again
on the modified value should not modify the value further. (For
example: a validator that returns value.strip() or value.lower() is
fine, but one that returns value + '$' is not.)
- verbose_name=<value>: A human readable name for this property. This
human readable name can be used for html form labels.
The repeated and required/default options are mutually exclusive: a
repeated property cannot be required nor can it specify a default
value (the default is always an empty list and an empty list is always
an allowed value), but a required property can have a default.
Some property types have additional arguments. Some property types
do not support all options.
Repeated properties are always represented as Python lists; if there
is only one value, the list has only one element. When a new list is
assigned to a repeated property, all elements of the list are
validated. Since it is also possible to mutate lists in place,
repeated properties are re-validated before they are written to the
datastore.
No validation happens when an entity is read from the datastore;
however property values read that have the wrong type (e.g. a string
value for an IntegerProperty) are ignored.
For non-repeated properties, None is always a possible value, and no
validation is called when the value is set to None. However for
required properties, writing the entity to the datastore requires
the value to be something other than None (and valid).
The StructuredProperty is different from most other properties; it
lets you define a sub-structure for your entities. The substructure
itself is defined using a model class, and the attribute value is an
instance of that model class. However it is not stored in the
datastore as a separate entity; instead, its attribute values are
included in the parent entity using a naming convention (the name of
the structured attribute followed by a dot followed by the name of the
subattribute). For example:
class Address(Model):
    street = StringProperty()
    city = StringProperty()

class Person(Model):
    name = StringProperty()
    address = StructuredProperty(Address)

p = Person(name='Harry NAME
           address=Address(street='4 Privet Drive',
                           city='Little Whinging'))
k = p.put()
This would write a single 'Person' entity with three attributes (as
you could verify using the Datastore Viewer in the Admin Console):
name = 'Harry NAME
address.street = '4 Privet Drive'
address.city = 'Little Whinging'
Structured property types can be nested arbitrarily deep, but in a
hierarchy of nested structured property types, only one level can have
the repeated flag set. It is fine to have multiple structured
properties referencing the same model class.
It is also fine to use the same model class both as a top-level entity
class and as a structured property; however queries for the model
class will only return the top-level entities.
The LocalStructuredProperty works similar to StructuredProperty on the
Python side. For example:
class Address(Model):
    street = StringProperty()
    city = StringProperty()

class Person(Model):
    name = StringProperty()
    address = LocalStructuredProperty(Address)

p = Person(name='Harry NAME
           address=Address(street='4 Privet Drive',
                           city='Little Whinging'))
k = p.put()
However the data written to the datastore is different; it writes a
'Person' entity with a 'name' attribute as before and a single
'address' attribute whose value is a blob which encodes the Address
value (using the standard "protocol buffer" encoding).
Sometimes the set of properties is not known ahead of time. In such
cases you can use the Expando class. This is a Model subclass that
creates properties on the fly, both upon assignment and when loading
an entity from the datastore. For example:
class SuperPerson(Expando):
    name = StringProperty()
    superpower = StringProperty()

razorgirl = SuperPerson(name='Molly NAME
                        superpower='bionic eyes, razorblade hands',
                        rasta_name='Steppin\' Razor',
                        alt_name='Sally NAME
elastigirl = SuperPerson(name='Helen NAME
                         superpower='stretchable body')
elastigirl.max_stretch = 30  # Meters
You can inspect the properties of an expando instance using the
_properties attribute:
>>> print razorgirl._properties.keys()
['rasta_name', 'name', 'superpower', 'alt_name']
>>> print elastigirl._properties
{'max_stretch': GenericProperty('max_stretch'),
'name': StringProperty('name'),
'superpower': StringProperty('superpower')}
Note: this property exists for plain Model instances too; it is just
not as interesting for those.
The Model class offers basic query support. You can create a Query
object by calling the query() class method. Iterating over a Query
object returns the entities matching the query one at a time.
Query objects are fully described in the docstring for query.py, but
there is one handy shortcut that is only available through
Model.query(): positional arguments are interpreted as filter
expressions which are combined through an AND operator. For example:
Person.query(Person.name == 'Harry NAME Person.age >= 11)
is equivalent to:
Person.query().filter(Person.name == 'Harry NAME Person.age >= 11)
Keyword arguments passed to .query() are passed along to the Query()
constructor.
It is possible to query for field values of structured properties. For
example:
qry = Person.query(Person.address.city == 'London')
A number of top-level functions also live in this module:
- transaction() runs a function inside a transaction
- get_multi() reads multiple entities at once
- put_multi() writes multiple entities at once
- delete_multi() deletes multiple entities at once
All these have a corresponding *_async() variant as well.
The *_multi_async() functions return a list of Futures.
And finally these (without async variants):
- in_transaction() tests whether you are currently running in a transaction
- @transactional decorates functions that should be run in a transaction
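A minimal sketch of how these might be used together (assuming the usual
'google.appengine.ext.ndb' import path; the model and amounts are
illustrative):

    from google.appengine.ext import ndb

    class Account(ndb.Model):
        balance = ndb.IntegerProperty(default=0)

    @ndb.transactional(xg=True)  # cross-group: the two accounts may live in different entity groups
    def transfer(src_key, dst_key, amount):
        # Both reads and both writes happen inside a single transaction.
        src, dst = ndb.get_multi([src_key, dst_key])
        src.balance -= amount
        dst.balance += amount
        ndb.put_multi([src, dst])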
There are many other interesting features. For example, Model
subclasses may define pre-call and post-call hooks for most operations
(get, put, delete, allocate_ids), and Property classes may be
subclassed to suit various needs. Documentation for writing a
Property subclass is in the docstring for the Property class.
""" |
"""
This is a procedural interface to the matplotlib object-oriented
plotting library.
The following plotting commands are provided; the majority have
MATLAB |reg| [*]_ analogs and similar arguments.
.. |reg| unicode:: 0xAE
_Plotting commands
acorr - plot the autocorrelation function
annotate - annotate something in the figure
arrow - add an arrow to the axes
axes - Create a new axes
axhline - draw a horizontal line across axes
axvline - draw a vertical line across axes
axhspan - draw a horizontal bar across axes
axvspan - draw a vertical bar across axes
axis - Set or return the current axis limits
autoscale - turn axis autoscaling on or off, and apply it
bar - make a bar chart
barh - a horizontal bar chart
broken_barh - a set of horizontal bars with gaps
box - set the axes frame on/off state
boxplot - make a box and whisker plot
violinplot - make a violin plot
cla - clear current axes
clabel - label a contour plot
clf - clear a figure window
clim - adjust the color limits of the current image
close - close a figure window
colorbar - add a colorbar to the current figure
cohere - make a plot of coherence
contour - make a contour plot
contourf - make a filled contour plot
csd - make a plot of cross spectral density
delaxes - delete an axes from the current figure
draw - Force a redraw of the current figure
errorbar - make an errorbar graph
figlegend - make legend on the figure rather than the axes
figimage - make a figure image
figtext - add text in figure coords
figure - create or change active figure
fill - make filled polygons
findobj - recursively find all objects matching some criteria
gca - return the current axes
gcf - return the current figure
gci - get the current image, or None
getp - get a graphics property
grid - set whether gridding is on
hist - make a histogram
ioff - turn interaction mode off
ion - turn interaction mode on
isinteractive - return True if interaction mode is on
imread - load image file into array
imsave - save array as an image file
imshow - plot image data
legend - make an axes legend
locator_params - adjust parameters used in locating axis ticks
loglog - a log log plot
matshow - display a matrix in a new figure preserving aspect
margins - set margins used in autoscaling
pause - pause for a specified interval
pcolor - make a pseudocolor plot
pcolormesh - make a pseudocolor plot using a quadrilateral mesh
pie - make a pie chart
plot - make a line plot
plot_date - plot dates
plotfile - plot column data from an ASCII tab/space/comma delimited file
pie - pie charts
polar - make a polar plot on a PolarAxes
psd - make a plot of power spectral density
quiver - make a direction field (arrows) plot
rc - control the default params
rgrids - customize the radial grids and labels for polar
savefig - save the current figure
scatter - make a scatter plot
setp - set a graphics property
semilogx - log x axis
semilogy - log y axis
show - show the figures
specgram - a spectrogram plot
spy - plot sparsity pattern using markers or image
stem - make a stem plot
subplot - make one subplot (numrows, numcols, axesnum)
subplots - make a figure with a set of (numrows, numcols) subplots
subplots_adjust - change the params controlling the subplot positions of current figure
subplot_tool - launch the subplot configuration tool
suptitle - add a figure title
table - add a table to the plot
text - add some text at location x,y to the current axes
thetagrids - customize the radial theta grids and labels for polar
tick_params - control the appearance of ticks and tick labels
ticklabel_format - control the format of tick labels
title - add a title to the current axes
tricontour - make a contour plot on a triangular grid
tricontourf - make a filled contour plot on a triangular grid
tripcolor - make a pseudocolor plot on a triangular grid
triplot - plot a triangular grid
xcorr - plot the autocorrelation function of x and y
xlim - set/get the xlimits
ylim - set/get the ylimits
xticks - set/get the xticks
yticks - set/get the yticks
xlabel - add an xlabel to the current axes
ylabel - add a ylabel to the current axes
autumn - set the default colormap to autumn
bone - set the default colormap to bone
cool - set the default colormap to cool
copper - set the default colormap to copper
flag - set the default colormap to flag
gray - set the default colormap to gray
hot - set the default colormap to hot
hsv - set the default colormap to hsv
jet - set the default colormap to jet
pink - set the default colormap to pink
prism - set the default colormap to prism
spring - set the default colormap to spring
summer - set the default colormap to summer
winter - set the default colormap to winter
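As a quick illustration of the procedural style, a minimal plot using a
handful of the commands above (shown via the matplotlib.pyplot import,
which exposes the same interface):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2 * np.pi, 200)
    plt.plot(x, np.sin(x), label='sin(x)')
    plt.xlabel('x')
    plt.ylabel('amplitude')
    plt.title('minimal example')
    plt.legend()
    plt.show()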
_Event handling
connect - register an event handler
disconnect - remove a connected event handler
_Matrix commands
cumprod - the cumulative product along a dimension
cumsum - the cumulative sum along a dimension
detrend - remove the mean or best fit line from an array
diag - the k-th diagonal of matrix
diff - the n-th difference of an array
eig - the eigenvalues and eigen vectors of v
eye - a matrix where the k-th diagonal is ones, else zero
find - return the indices where a condition is nonzero
fliplr - flip the columns of a matrix left/right
flipud - flip the rows of a matrix up/down
linspace - a linear spaced vector of N values from min to max inclusive
logspace - a log spaced vector of N values from min to max inclusive
meshgrid - repeat x and y to make regular matrices
ones - an array of ones
rand - an array from the uniform distribution [0,1]
randn - an array from the normal distribution
rot90 - rotate matrix k*90 degrees counterclockwise
squeeze - squeeze an array removing any dimensions of length 1
tri - a triangular matrix
tril - a lower triangular matrix
triu - an upper triangular matrix
vander - the Vandermonde matrix of vector x
svd - singular value decomposition
zeros - a matrix of zeros
_Probability
normpdf - The Gaussian probability density function
rand - random numbers from the uniform distribution
randn - random numbers from the normal distribution
_Statistics
amax - the maximum along dimension m
amin - the minimum along dimension m
corrcoef - correlation coefficient
cov - covariance matrix
mean - the mean along dimension m
median - the median along dimension m
norm - the norm of vector x
prod - the product along dimension m
ptp - the max-min along dimension m
std - the standard deviation along dimension m
asum - the sum along dimension m
ksdensity - the kernel density estimate
_Time series analysis
bartlett - M-point Bartlett window
blackman - M-point Blackman window
cohere - the coherence using average periodogram
csd - the cross spectral density using average periodogram
fft - the fast Fourier transform of vector x
hamming - M-point Hamming window
hanning - M-point Hanning window
hist - compute the histogram of x
kaiser - M length Kaiser window
psd - the power spectral density using average periodiogram
sinc - the sinc function of array x
_Dates
date2num - convert python datetimes to numeric representation
drange - create an array of numbers for date plots
num2date - convert numeric type (float days since 0001) to datetime
_Other
angle - the angle of a complex array
griddata - interpolate irregularly distributed data to a regular grid
load - Deprecated--please use loadtxt.
loadtxt - load ASCII data into array.
polyfit - fit x, y to an n-th order polynomial
polyval - evaluate an n-th order polynomial
roots - the roots of the polynomial coefficients in p
save - Deprecated--please use savetxt.
savetxt - save an array to an ASCII file.
trapz - trapezoidal integration
__end
.. [*] MATLAB is a registered trademark of The MathWorks, Inc.
""" |
# -*- encoding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
# Copyright (c) 2009 Veritos - NAME - www.veritos.nl
#
# This module works with Odoo 5.0.0 (and probably higher).
# This module does not work with Odoo version 4 and lower.
#
# Status 1.0 - tested on Odoo 5.0.3
#
# Version IP_ADDRESS
# account.account.type
# Laid the groundwork for all account types
#
# account.account.template
# Set up all required general ledger accounts, linked via a menu
# structure to sections 1 through 9.
# The general ledger accounts are linked to the account.account.type.
# These links still need a thorough review.
#
# account.chart.template
# Laid the groundwork for linking accounts to receivables, payables,
# bank, purchase and sales journals, and the VAT configuration.
#
# Version IP_ADDRESS
# account.tax.code.template
# Laid the groundwork for the VAT configuration (structure).
# Used the VAT return form as the basis. Whether this works remains to be seen.
#
# account.tax.template
# Created the VAT accounts and linked them to the relevant
# general ledger accounts.
#
# Version IP_ADDRESS
# Cleaned up the code and removed unused components.
# Version IP_ADDRESS
# Changed a_expense from 3000 -> 7000
# Set record id='btw_code_5b' to a negative value
# Version IP_ADDRESS
# VAT accounts were given a type indication for purchase or sale
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Corrected a small error in l10n_nl_wizard.xml that kept the module from installing completely.
# Version IP_ADDRESS
# Properly defined Account Receivable and Payable.
# Version IP_ADDRESS
# Properly defined all user_type_xxx fields.
# Removed the construction- and garage-specific ledgers to create a standard module.
# This module can then be used as a basis for modules aimed at specific target groups.
# Version IP_ADDRESS
# Corrected account 7010 (it duplicated 7014, which broke the installation)
# Version IP_ADDRESS
# Corrected several account types from user_type_asset -> user_type_liability and user_type_equity
# Version IP_ADDRESS
# Small correction to "VAT receivable, high rate": the id was the same for both entries, so the high-rate entry was overwritten by the "other" entry. Clarified the tax code descriptions for the return overview.
# Version IP_ADDRESS
# Adjusted the VAT descriptions so reports look better. Removed 2a, 5b and the like, and added a few descriptions.
# Version IP_ADDRESS - Switch to English
# Added properties_stock_xxx accounts for correct stock valuation, changed 7000-accounts from type cash to type expense
# Changed naming of 7020 and 7030 to Kostprijs omzet xxxx
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# Copyright (c) 2009 Veritos - NAME - www.veritos.nl
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs.
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company like Veritos.
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
##############################################################################
#
# This module works with OpenERP 5.0.0 (and probably higher).
# This module does not work with OpenERP version 4 and lower.
#
# Status 1.0 - tested on OpenERP 5.0.3
#
# Version IP_ADDRESS
# account.account.type
# Laid the groundwork for all account types
#
# account.account.template
# Set up all required general ledger accounts, linked via a menu
# structure to sections 1 through 9.
# The general ledger accounts are linked to the account.account.type.
# These links still need a thorough review.
#
# account.chart.template
# Laid the groundwork for linking accounts to receivables, payables,
# bank, purchase and sales journals, and the VAT configuration.
#
# Version IP_ADDRESS
# account.tax.code.template
# Laid the groundwork for the VAT configuration (structure).
# Used the VAT return form as the basis. Whether this works remains to be seen.
#
# account.tax.template
# Created the VAT accounts and linked them to the relevant
# general ledger accounts.
#
# Version IP_ADDRESS
# Cleaned up the code and removed unused components.
# Version IP_ADDRESS
# Changed a_expense from 3000 -> 7000
# Set record id='btw_code_5b' to a negative value
# Version IP_ADDRESS
# VAT accounts were given a type indication for purchase or sale
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Cleaned up the module.
# Version IP_ADDRESS
# Corrected a small error in l10n_nl_wizard.xml that kept the module from installing completely.
# Version IP_ADDRESS
# Properly defined Account Receivable and Payable.
# Version IP_ADDRESS
# Properly defined all user_type_xxx fields.
# Removed the construction- and garage-specific ledgers to create a standard module.
# This module can then be used as a basis for modules aimed at specific target groups.
# Version IP_ADDRESS
# Corrected account 7010 (it duplicated 7014, which broke the installation)
# Version IP_ADDRESS
# Corrected several account types from user_type_asset -> user_type_liability and user_type_equity
# Version IP_ADDRESS
# Small correction to "VAT receivable, high rate": the id was the same for both entries, so the high-rate entry was overwritten by the "other" entry. Clarified the tax code descriptions for the return overview.
# Version IP_ADDRESS
# Adjusted the VAT descriptions so reports look better. Removed 2a, 5b and the like, and added a few descriptions.
# Version IP_ADDRESS - Switch to English
# Added properties_stock_xxx accounts for correct stock valuation, changed 7000-accounts from type cash to type expense
# Changed naming of 7020 and 7030 to Kostprijs omzet xxxx
|
"""Generic socket server classes.
This module tries to capture the various aspects of defining a server:
For socket-based servers:
- address family:
- AF_INET{,6}: IP (Internet Protocol) sockets (default)
- AF_UNIX: Unix domain sockets
- others, e.g. AF_DECNET are conceivable (see <socket.h>)
- socket type:
- SOCK_STREAM (reliable stream, e.g. TCP)
- SOCK_DGRAM (datagrams, e.g. UDP)
For request-based servers (including socket-based):
- client address verification before further looking at the request
(This is actually a hook for any processing that needs to look
at the request before anything else, e.g. logging)
- how to handle multiple requests:
- synchronous (one request is handled at a time)
- forking (each request is handled by a new process)
- threading (each request is handled by a new thread)
The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server. This is bad class design, but
saves some typing. (There's also the issue that a deep class hierarchy
slows down method lookups.)
There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:
+------------+
| BaseServer |
+------------+
      |
      v
+-----------+        +------------------+
| TCPServer |------->| UnixStreamServer |
+-----------+        +------------------+
      |
      v
+-----------+        +--------------------+
| UDPServer |------->| UnixDatagramServer |
+-----------+        +--------------------+
Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.
Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes. For
instance, a threading UDP server class is created as follows:
class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.
To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method. You can then run
various versions of the service by combining one of the server classes
with your request handler class.
The request handler class must be different for datagram or stream
services. This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
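For example, a minimal threading echo server might look like this (a
sketch only, assuming the Python 3 module name 'socketserver'; the host,
port and handler are illustrative):

    import socketserver

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # rfile/wfile are file-like objects wrapping the client socket
            for line in self.rfile:
                self.wfile.write(line)

    class ThreadingTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
        pass

    server = ThreadingTCPServer(("localhost", 8000), EchoHandler)
    server.serve_forever()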
Of course, you still have to use your head!
For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child). In this case,
you can use a threading server, but you will probably have to use
locks to prevent two nearly simultaneous requests from applying
conflicting changes to the server state.
On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested. Here a threading or forking
server is appropriate.
In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data. This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.
Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use a selector to
decide which request to work on next (or whether to handle a new
incoming request). This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
and encryption schemes
XXX Open problems:
- What to do with out-of-band data?
BaseServer:
- split generic "request" functionality out into BaseServer class.
Copyright (C) 2000 NAME <EMAIL>
example: read entries from a SQL database (requires overriding
get_request() to return a table entry from the database).
entry is processed by a RequestHandlerClass.
""" |
"""
AUI is an Advanced User Interface library that aims to implement "cutting-edge"
interface usability and design features so developers can quickly and easily create
beautiful and usable application interfaces.
Vision and Design Principles
============================
AUI attempts to encapsulate the following aspects of the user interface:
* **Frame Management**: Frame management provides the means to open, move and hide common
controls that are needed to interact with the document, and allow these configurations
to be saved into different perspectives and loaded at a later time.
* **Toolbars**: Toolbars are a specialized subset of the frame management system and should
behave similarly to other docked components. However, they also require additional
functionality, such as "spring-loaded" rebar support, "chevron" buttons and end-user
customizability.
* **Modeless Controls**: Modeless controls expose a tool palette or set of options that
float above the application content while allowing it to be accessed. Usually accessed
by the toolbar, these controls disappear when an option is selected, but may also be
"torn off" the toolbar into a floating frame of their own.
* **Look and Feel**: Look and feel encompasses the way controls are drawn, both when shown
statically as well as when they are being moved. This aspect of user interface design
incorporates "special effects" such as transparent window dragging as well as frame animation.
AUI adheres to the following principles:
- Use native floating frames to obtain a native look and feel for all platforms;
- Use existing wxPython code where possible, such as sizer implementation for frame management;
- Use standard wxPython coding conventions.
Usage
=====
The following example shows a simple implementation that uses :class:`framemanager.AuiManager` to manage
three text controls in a frame window::
    import wx
    import wx.lib.agw.aui as aui

    class MyFrame(wx.Frame):

        def __init__(self, parent, id=-1, title="AUI Test", pos=wx.DefaultPosition,
                     size=(800, 600), style=wx.DEFAULT_FRAME_STYLE):

            wx.Frame.__init__(self, parent, id, title, pos, size, style)

            self._mgr = aui.AuiManager()

            # notify AUI which frame to use
            self._mgr.SetManagedWindow(self)

            # create several text controls
            text1 = wx.TextCtrl(self, -1, "Pane 1 - sample text",
                                wx.DefaultPosition, wx.Size(200, 150),
                                wx.NO_BORDER | wx.TE_MULTILINE)

            text2 = wx.TextCtrl(self, -1, "Pane 2 - sample text",
                                wx.DefaultPosition, wx.Size(200, 150),
                                wx.NO_BORDER | wx.TE_MULTILINE)

            text3 = wx.TextCtrl(self, -1, "Main content window",
                                wx.DefaultPosition, wx.Size(200, 150),
                                wx.NO_BORDER | wx.TE_MULTILINE)

            # add the panes to the manager
            self._mgr.AddPane(text1, aui.AuiPaneInfo().Left().Caption("Pane Number One"))
            self._mgr.AddPane(text2, aui.AuiPaneInfo().Bottom().Caption("Pane Number Two"))
            self._mgr.AddPane(text3, aui.AuiPaneInfo().CenterPane())

            # tell the manager to "commit" all the changes just made
            self._mgr.Update()

            self.Bind(wx.EVT_CLOSE, self.OnClose)

        def OnClose(self, event):
            # deinitialize the frame manager
            self._mgr.UnInit()
            self.Destroy()
            event.Skip()

    # our normal wxApp-derived class, as usual
    app = wx.App(0)
    frame = MyFrame(None)
    app.SetTopWindow(frame)
    frame.Show()
    app.MainLoop()
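Building on this example, the pane layout built above can also be captured as a
"perspective" string and restored later, which is the save/load behaviour mentioned
under Frame Management (a short, illustrative sketch)::

    # inside MyFrame, after self._mgr.Update()
    perspective = self._mgr.SavePerspective()    # serialize the current layout

    # later, possibly after the user has rearranged the panes
    self._mgr.LoadPerspective(perspective)       # restore the saved layout
    self._mgr.Update()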
What's New
==========
Current wxAUI Version Tracked: wxWidgets 2.9.4 (SVN HEAD)
The wxPython AUI version fixes the following bugs or implements the following
missing features (the list is not exhaustive):
- Visual Studio 2005 style docking: http://www.kirix.com/forums/viewtopic.php?f=16&t=596
- Dock and Pane Resizing: http://www.kirix.com/forums/viewtopic.php?f=16&t=582
- Patch concerning dock resizing: http://www.kirix.com/forums/viewtopic.php?f=16&t=610
- Patch to effect wxAuiToolBar orientation switch: http://www.kirix.com/forums/viewtopic.php?f=16&t=641
- AUI: Core dump when loading a perspective in wxGTK (MSW OK): http://www.kirix.com/forums/viewtopic.php?f=15&t=627
- wxAuiNotebook reordered AdvanceSelection(): http://www.kirix.com/forums/viewtopic.php?f=16&t=617
- Vertical Toolbar Docking Issue: http://www.kirix.com/forums/viewtopic.php?f=16&t=181
- Patch to show the resize hint on mouse-down in aui: http://trac.wxwidgets.org/ticket/9612
- The Left/Right and Top/Bottom Docks over draw each other: http://trac.wxwidgets.org/ticket/3516
- MinSize() not honoured: http://trac.wxwidgets.org/ticket/3562
- Layout problem with wxAUI: http://trac.wxwidgets.org/ticket/3597
- Resizing children ignores current window size: http://trac.wxwidgets.org/ticket/3908
- Resizing panes under Vista does not repaint background: http://trac.wxwidgets.org/ticket/4325
- Resize sash resizes in response to click: http://trac.wxwidgets.org/ticket/4547
- "Illegal" resizing of the AuiPane? (wxPython): http://trac.wxwidgets.org/ticket/4599
- Floating wxAUIPane Resize Event doesn't update its position: http://trac.wxwidgets.org/ticket/9773
- Don't hide floating panels when we maximize some other panel: http://trac.wxwidgets.org/ticket/4066
- wxAUINotebook incorrect ALLOW_ACTIVE_PANE handling: http://trac.wxwidgets.org/ticket/4361
- Page changing veto doesn't work, (patch supplied): http://trac.wxwidgets.org/ticket/4518
- Show and DoShow are mixed around in wxAuiMDIChildFrame: http://trac.wxwidgets.org/ticket/4567
- wxAuiManager & wxToolBar - ToolBar Of Size Zero: http://trac.wxwidgets.org/ticket/9724
- wxAuiNotebook doesn't behave properly like a container as far as...: http://trac.wxwidgets.org/ticket/9911
- Serious layout bugs in wxAUI: http://trac.wxwidgets.org/ticket/10620
- wAuiDefaultTabArt::Clone() should just use copy contructor: http://trac.wxwidgets.org/ticket/11388
- Drop down button for check tool on wxAuiToolbar: http://trac.wxwidgets.org/ticket/11139
Plus the following features:
- AuiManager:
(a) Implementation of a simple minimize pane system: clicking on a pane's minimize button causes a new
AuiToolBar to be created and added to the frame manager (currently the implementation is such
that panes at West will have a toolbar at the right, panes at South will have toolbars at the
bottom etc...) and the pane is hidden in the manager.
Clicking on the restore button on the newly created toolbar will result in the toolbar being
removed and the original pane being restored;
(b) Panes can be docked on top of each other to form `AuiNotebooks`; `AuiNotebooks` tabs can be torn
off to create floating panes;
(c) On Windows XP, use the nice sash drawing provided by XP while dragging the sash;
(d) Possibility to set an icon on docked panes;
(e) Possibility to draw a sash visual grip, for enhanced visualization of sashes;
(f) Implementation of a native docking art (`ModernDockArt`). Windows XP only, **requires** NAME
pywin32 package (winxptheme);
(g) Possibility to set a transparency for floating panes (a la Paint .NET);
(h) Snapping the main frame to the screen in any position specified by horizontal and vertical
alignments;
(i) Snapping floating panes on left/right/top/bottom or any combination of directions, a la Winamp;
(j) "Fly-out" floating panes, i.e. panes which show themselves only when the mouse hover them;
(k) Ability to set custom bitmaps for pane buttons (close, maximize, etc...);
(l) Implementation of the style ``AUI_MGR_ANIMATE_FRAMES``, which fades out floating panes when
they are closed (all platforms which support frames transparency) and show a moving rectangle
when they are docked and minimized (Windows < Vista and GTK only);
(m) A pane switcher dialog is available to cycle through existing AUI panes;
(n) Some flags which allow choosing the orientation and the position of the minimized panes;
(o) The functions [Get]MinimizeMode() in `AuiPaneInfo` which allow setting/getting the flags described above;
(p) Events like ``EVT_AUI_PANE_DOCKING``, ``EVT_AUI_PANE_DOCKED``, ``EVT_AUI_PANE_FLOATING`` and ``EVT_AUI_PANE_FLOATED`` are
available for all panes *except* toolbar panes;
(q) Implementation of the RequestUserAttention method for panes;
(r) Ability to show the caption bar of docked panes on the left instead of on the top (with the
caption text then rotated by 90 degrees). This is similar to what `wxDockIt` did. To enable this feature on any
given pane, simply call `CaptionVisible(True, left=True)`;
(s) New Aero-style docking guides: you can enable them by using the `AuiManager` style ``AUI_MGR_AERO_DOCKING_GUIDES``;
(t) A slide-in/slide-out preview of minimized panes can be seen by enabling the `AuiManager` style
``AUI_MGR_PREVIEW_MINIMIZED_PANES`` and by hovering with the mouse on the minimized pane toolbar tool;
(u) New Whidbey-style docking guides: you can enable them by using the `AuiManager` style ``AUI_MGR_WHIDBEY_DOCKING_GUIDES``;
(v) Native or custom-drawn mini frames can be used as floating panes, depending on the ``AUI_MGR_USE_NATIVE_MINIFRAMES`` style;
(w) A "smooth docking effect" can be obtained by using the ``AUI_MGR_SMOOTH_DOCKING`` style (similar to PyQT docking style);
(x) Implementation of "Movable" panes, i.e. a pane that is set as `Movable()` but not `Floatable()` can be dragged and docked
into a new location but will not form a floating window in between.
- AuiNotebook:
(a) Implementation of the style ``AUI_NB_HIDE_ON_SINGLE_TAB``, a la :mod:`lib.agw.flatnotebook`;
(b) Implementation of the style ``AUI_NB_SMART_TABS``, a la :mod:`lib.agw.flatnotebook`;
(c) Implementation of the style ``AUI_NB_USE_IMAGES_DROPDOWN``, which allows showing tab images
on the tab dropdown menu instead of bare check menu items (a la :mod:`lib.agw.flatnotebook`);
(d) 6 different tab arts are available, namely:
(1) Default "glossy" theme (as in :class:`~auibook.AuiNotebook`)
(2) Simple theme (as in :class:`~auibook.AuiNotebook`)
(3) Firefox 2 theme
(4) Visual Studio 2003 theme (VC71)
(5) Visual Studio 2005 theme (VC81)
(6) Google Chrome theme
(e) Enabling/disabling tabs;
(f) Setting the colour of the tab's text;
(g) Implementation of the style ``AUI_NB_CLOSE_ON_TAB_LEFT``, which draws the tab close button on
the left instead of on the right (a la Camino browser);
(h) Ability to save and load perspectives in `AuiNotebook` (experimental);
(i) Possibility to add custom buttons in the `AuiNotebook` tab area;
(j) Implementation of the style ``AUI_NB_TAB_FLOAT``, which allows the floating of single tabs.
Known limitation: when the notebook is more or less full screen, tabs cannot be dragged far
enough outside of the notebook to become floating pages;
(k) Implementation of the style ``AUI_NB_DRAW_DND_TAB`` (on by default), which draws an image
representation of a tab while dragging;
(l) Implementation of the `AuiNotebook` unsplit functionality, which unsplits a split `AuiNotebook`
when double-clicking on a sash;
(m) Possibility to hide all the tabs by calling `HideAllTabs`;
(n) wxPython controls can now be added inside page tabs by calling `AddControlToPage`, and they can be
removed by calling `RemoveControlFromPage`;
(o) Possibility to preview all the pages in an `AuiNotebook` (as thumbnails) by using the `NotebookPreview`
method of `AuiNotebook`;
(p) Tab labels can be edited by calling the `SetRenamable` method on an `AuiNotebook` page;
(q) Support for multi-line tab labels in `AuiNotebook`;
(r) Support for setting minimum and maximum tab widths for fixed width tabs;
(s) Implementation of the style ``AUI_NB_ORDER_BY_ACCESS``, which orders the tabs by last access time
inside the Tab Navigator dialog;
(t) Implementation of the style ``AUI_NB_NO_TAB_FOCUS``, allowing the developer not to draw the tab
focus rectangle on the `AuiNotebook` tabs.
|
- AuiToolBar:
(a) ``AUI_TB_PLAIN_BACKGROUND`` style that allows easily setting up a plain background for the AUI toolbar,
without the need to override drawing methods. This style contrasts with the default behaviour
of the :class:`~auibar.AuiToolBar`, which draws a background gradient and thus breaks the window design when
the toolbar is put within a control that has a margin between the borders and the toolbar (example: put
:class:`~auibar.AuiToolBar` within a :class:`StaticBoxSizer` that has a plain background);
(b) `AuiToolBar` allow item alignment: http://trac.wxwidgets.org/ticket/10174;
(c) `AUIToolBar` `DrawButton()` improvement: http://trac.wxwidgets.org/ticket/10303;
(d) `AuiToolBar` automatically assign new id for tools: http://trac.wxwidgets.org/ticket/10173;
(e) `AuiToolBar` Allow right-click on any kind of button: http://trac.wxwidgets.org/ticket/10079;
(f) `AuiToolBar` idle update only when visible: http://trac.wxwidgets.org/ticket/10075;
(g) Ability to create `AuiToolBar` tools with [counter]clockwise rotation. This makes it possible to offer a
variant of the minimizing functionality with a rotated button which keeps the caption of the pane
as label;
(h) Allow setting the alignment of all tools in a toolbar that is expanded;
(i) Implementation of the ``AUI_MINIMIZE_POS_TOOLBAR`` flag, which allows minimizing a pane inside
an existing toolbar. Limitation: if the minimized icon in the toolbar ends up in the overflowing
items (i.e., a menu is needed to show the icon), this style will not work.
TODOs
=====
- Documentation, documentation and documentation;
- Fix `tabmdi.AuiMDIParentFrame` and friends, they do not work correctly at present;
- Allow specification of `CaptionLeft()` to `AuiPaneInfo` to show the caption bar of docked panes
on the left instead of on the top (with caption text rotated by 90 degrees then). This is
similar to what `wxDockIt` did - DONE;
- Make developer-created `AuiNotebooks` and automatic (framemanager-created) `AuiNotebooks` behave
the same way (undocking of tabs) - DONE, to some extent;
- Find a way to dock panes in already floating panes (`AuiFloatingFrames`), as they already have
their own `AuiManager`;
- Add more gripper styles (see, i.e., PlusDock 4.0);
- Add an "AutoHide" feature to docked panes, similar to fly-out floating panes (see, i.e., PlusDock 4.0);
- Add events for panes when they are about to float or to be docked (something like
``EVT_AUI_PANE_FLOATING/ED`` and ``EVT_AUI_PANE_DOCKING/ED``) - DONE, to some extent;
- Implement the 4-ways splitter behaviour for horizontal and vertical sashes if they intersect;
- Extend `tabart.py` with more aui tab arts;
- Implement ``AUI_NB_LEFT`` and ``AUI_NB_RIGHT`` tab locations in `AuiNotebook`;
- Move `AuiDefaultToolBarArt` into a separate module (as with `tabart.py` and `dockart.py`) and
provide more arts for toolbars (maybe from :mod:`lib.agw.flatmenu`?)
- Support multiple-row/multiple-column toolbars;
- Integrate as much as possible with :mod:`lib.agw.flatmenu`, from dropdown menus in `AuiNotebook` to
toolbars and menu positioning;
- Possibly handle minimization of panes in a different way (or provide an option to switch to
another way of minimizing panes);
- Clean up/speed up the code, especially time-consuming for-loops;
- Possibly integrate `wxPyRibbon` (still in development), at least on Windows.
License And Version
===================
AUI library is distributed under the wxPython license.
Latest Revision: NAME @ 09 Jan 2014, 23.00 GMT
Version 1.3.
""" |
"""
=============
Miscellaneous
=============
IEEE 754 Floating Point Special Values:
-----------------------------------------------
Special values defined in numpy: nan, inf.
NaNs can be used as a poor-man's mask (if you don't care what the
original value was)
Note: cannot use equality to test NaNs. E.g.: ::
>>> myarr = np.array([1., 0., np.nan, 3.])
>>> np.where(myarr == np.nan)
>>> np.nan == np.nan # is always False! Use special numpy functions instead.
False
>>> myarr[myarr == np.nan] = 0. # doesn't work
>>> myarr
array([ 1., 0., NaN, 3.])
>>> myarr[np.isnan(myarr)] = 0. # use this instead
>>> myarr
array([ 1., 0., 0., 3.])
Other related special value functions: ::
isinf(): True if value is inf
isfinite(): True if not nan or inf
nan_to_num(): Map nan to 0, inf to max float, -inf to min float
The following corresponds to the usual functions except that nans are excluded
from the results: ::
nansum()
nanmax()
nanmin()
nanargmax()
nanargmin()
>>> x = np.arange(10.)
>>> x[3] = np.nan
>>> x.sum()
nan
>>> np.nansum(x)
42.0
How numpy handles numerical exceptions:
------------------------------------------
The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow``
and ``'ignore'`` for ``underflow``. But this can be changed, and it can be
set individually for different kinds of exceptions. The different behaviors
are:
- 'ignore' : Take no action when the exception occurs.
- 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module).
- 'raise' : Raise a `FloatingPointError`.
- 'call' : Call a function specified using the `seterrcall` function.
- 'print' : Print a warning directly to ``stdout``.
- 'log' : Record error in a Log object specified by `seterrcall`.
These behaviors can be set for all kinds of errors or specific ones:
- all : apply to all numeric exceptions
- invalid : when NaNs are generated
- divide : divide by zero (for integers as well!)
- overflow : floating point overflows
- underflow : floating point underflows
Note that integer divide-by-zero is handled by the same machinery.
These behaviors are set on a per-thread basis.
Examples:
------------
::
>>> oldsettings = np.seterr(all='warn')
>>> np.zeros(5,dtype=np.float32)/0.
invalid value encountered in divide
>>> j = np.seterr(under='ignore')
>>> np.array([1.e-100])**10
>>> j = np.seterr(invalid='raise')
>>> np.sqrt(np.array([-1.]))
FloatingPointError: invalid value encountered in sqrt
>>> def errorhandler(errstr, errflag):
...     print("saw stupid error!")
>>> np.seterrcall(errorhandler)
<function errorhandler at 0x...>
>>> j = np.seterr(all='call')
>>> np.zeros(5, dtype=np.int32)/0
FloatingPointError: invalid value encountered in divide
saw stupid error!
>>> j = np.seterr(**oldsettings) # restore previous
... # error-handling settings
Interfacing to C:
-----------------
Only a survey of the choices. Little detail on how each works.
1) Bare metal, wrap your own C-code manually.
- Plusses:
- Efficient
- No dependencies on other tools
- Minuses:
- Lots of learning overhead:
- need to learn basics of Python C API
- need to learn basics of numpy C API
- need to learn how to handle reference counting and love it.
- Reference counting often difficult to get right.
- getting it wrong leads to memory leaks, and worse, segfaults
- API will change for Python 3.0!
2) pyrex
- Plusses:
- avoid learning C API's
- no dealing with reference counting
- can code in pseudo python and generate C code
- can also interface to existing C code
- should shield you from changes to Python C api
- became pretty popular within the Python community
- Minuses:
- Can write code in non-standard form which may become obsolete
- Not as flexible as manual wrapping
- Maintainers not easily adaptable to new features
Thus:
3) cython - fork of pyrex to allow needed features for SAGE
- being considered as the standard scipy/numpy wrapping tool
- fast indexing support for arrays
4) ctypes
- Plusses:
- part of Python standard library
- good for interfacing to existing sharable libraries, particularly
Windows DLLs
- avoids API/reference counting issues
- good numpy support: arrays have all these in their ctypes
attribute: ::
a.ctypes.data a.ctypes.get_strides
a.ctypes.data_as a.ctypes.shape
a.ctypes.get_as_parameter a.ctypes.shape_as
a.ctypes.get_data a.ctypes.strides
a.ctypes.get_shape a.ctypes.strides_as
- Minuses:
- can't use for writing code to be turned into C extensions, only a wrapper
tool.
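A tiny, hedged illustration of the ``ctypes`` attribute listed above (a sketch,
not a recommendation on how to structure C interfacing)::

  >>> import ctypes
  >>> a = np.zeros(3, dtype=np.float64)
  >>> ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))  # C pointer into a's buffer
  >>> ptr[0] = 1.5                                             # writes through to the array
  >>> a[0]
  1.5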
5) SWIG (automatic wrapper generator)
- Plusses:
- around a long time
- multiple scripting language support
- C++ support
- Good for wrapping large (many functions) existing C libraries
- Minuses:
- generates lots of code between Python and the C code
- can cause performance problems that are nearly impossible to optimize
out
- interface files can be hard to write
- doesn't necessarily avoid reference counting issues or needing to know
API's
6) Weave
- Plusses:
- Phenomenal tool
- can turn many numpy expressions into C code
- dynamic compiling and loading of generated C code
- can embed pure C code in Python module and have weave extract, generate
interfaces and compile, etc.
- Minuses:
- Future uncertain--lacks a champion
7) Psyco
- Plusses:
- Turns pure python into efficient machine code through jit-like
optimizations
- very fast when it optimizes well
- Minuses:
- Only on intel (windows?)
- Doesn't do much for numpy?
Interfacing to Fortran:
-----------------------
Fortran: Clear choice is f2py. (Pyfort is an older alternative, but not
supported any longer)
Interfacing to C++:
-------------------
1) CXX
2) Boost.python
3) SWIG
4) Sage has used cython to wrap C++ (not pretty, but it can be done)
5) SIP (used mainly in PyQT)
""" |
"""
========
Glossary
========
.. glossary::
along an axis
Axes are defined for arrays with more than one dimension. A
2-dimensional array has two corresponding axes: the first running
vertically downwards across rows (axis 0), and the second running
horizontally across columns (axis 1).
Many operations can take place along one of these axes. For example,
we can sum each row of an array, in which case we operate along
columns, or axis 1::
>>> x = np.arange(12).reshape((3,4))
>>> x
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> x.sum(axis=1)
array([ 6, 22, 38])
array
A homogeneous container of numerical elements. Each element in the
array occupies a fixed amount of memory (hence homogeneous), and
can be a numerical element of a single type (such as float, int
or complex) or a combination (such as ``(float, int, float)``). Each
array has an associated data-type (or ``dtype``), which describes
the numerical type of its elements::
>>> x = np.array([1, 2, 3], float)
>>> x
array([ 1., 2., 3.])
>>> x.dtype # floating point number, 64 bits of memory per element
dtype('float64')
# More complicated data type: each array element is a combination of
# an integer and a floating point number
>>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)])
array([(1, 2.0), (3, 4.0)],
dtype=[('x', '<i4'), ('y', '<f8')])
Fast element-wise operations, called `ufuncs`_, operate on arrays.
array_like
Any sequence that can be interpreted as an ndarray. This includes
nested lists, tuples, scalars and existing arrays.
attribute
A property of an object that can be accessed using ``obj.attribute``,
e.g., ``shape`` is an attribute of an array::
>>> x = np.array([1, 2, 3])
>>> x.shape
(3,)
BLAS
`Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_
broadcast
NumPy can do operations on arrays whose shapes are mismatched::
>>> x = np.array([1, 2])
>>> y = np.array([[3], [4]])
>>> x
array([1, 2])
>>> y
array([[3],
[4]])
>>> x + y
array([[4, 5],
[5, 6]])
See `doc.broadcasting`_ for more information.
C order
See `row-major`
column-major
A way to represent items in a N-dimensional array in the 1-dimensional
computer memory. In column-major order, the leftmost index "varies the
fastest": for example the array::
[[1, 2, 3],
[4, 5, 6]]
is represented in the column-major order as::
[1, 4, 2, 5, 3, 6]
Column-major order is also known as the Fortran order, as the Fortran
programming language uses it.
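A small illustrative example using standard NumPy calls::
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> a.ravel(order='F')     # read the elements in column-major order
array([1, 4, 2, 5, 3, 6])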
decorator
An operator that transforms a function. For example, a ``log``
decorator may be defined to print debugging information upon
function execution::
>>> def log(f):
...     def new_logging_func(*args, **kwargs):
...         print("Logging call with parameters:", args, kwargs)
...         return f(*args, **kwargs)
...
...     return new_logging_func
Now, when we define a function, we can "decorate" it using ``log``::
>>> @log
... def add(a, b):
...     return a + b
Calling ``add`` then yields:
>>> add(1, 2)
Logging call with parameters: (1, 2) {}
3
dictionary
Resembling a language dictionary, which provides a mapping between
words and descriptions thereof, a Python dictionary is a mapping
between two objects::
>>> x = {1: 'one', 'two': [1, 2]}
Here, `x` is a dictionary mapping keys to values, in this case
the integer 1 to the string "one", and the string "two" to
the list ``[1, 2]``. The values may be accessed using their
corresponding keys::
>>> x[1]
'one'
>>> x['two']
[1, 2]
Note that dictionaries are not stored in any specific order. Also,
most mutable (see *immutable* below) objects, such as lists, may not
be used as keys.
For more information on dictionaries, read the
`Python tutorial <http://docs.python.org/tut>`_.
Fortran order
See `column-major`
flattened
Collapsed to a one-dimensional array. See `ndarray.flatten`_ for details.
immutable
An object that cannot be modified after execution is called
immutable. Two common examples are strings and tuples.
instance
A class definition gives the blueprint for constructing an object::
>>> class House(object):
...     wall_colour = 'white'
Yet, we have to *build* a house before it exists::
>>> h = House() # build a house
Now, ``h`` is called a ``House`` instance. An instance is therefore
a specific realisation of a class.
iterable
A sequence that allows "walking" (iterating) over items, typically
using a loop such as::
>>> x = [1, 2, 3]
>>> [item**2 for item in x]
[1, 4, 9]
It is often used in combination with ``enumerate``::
>>> keys = ['a','b','c']
>>> for n, k in enumerate(keys):
... print("Key %d: %s" % (n, k))
...
Key 0: a
Key 1: b
Key 2: c
list
A Python container that can hold any number of objects or items.
The items do not have to be of the same type, and can even be
lists themselves::
>>> x = [2, 2.0, "two", [2, 2.0]]
The list `x` contains 4 items, each of which can be accessed individually::
>>> x[2] # the string 'two'
'two'
>>> x[3] # a list, containing an integer 2 and a float 2.0
[2, 2.0]
It is also possible to select more than one item at a time,
using *slicing*::
>>> x[0:2] # or, equivalently, x[:2]
[2, 2.0]
In code, arrays are often conveniently expressed as nested lists::
>>> np.array([[1, 2], [3, 4]])
array([[1, 2],
[3, 4]])
For more information, read the section on lists in the `Python
tutorial <http://docs.python.org/tut>`_. For a mapping
type (key-value), see *dictionary*.
mask
A boolean array, used to select only certain elements for an operation::
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
>>> mask = (x > 2)
>>> mask
array([False, False, False, True, True], dtype=bool)
>>> x[mask] = -1
>>> x
array([ 0, 1, 2, -1, -1])
masked array
An array that suppresses values indicated by a mask::
>>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True])
>>> x
masked_array(data = [-- 2.0 --],
mask = [ True False True],
fill_value = 1e+20)
<BLANKLINE>
>>> x + [1, 2, 3]
masked_array(data = [-- 4.0 --],
mask = [ True False True],
fill_value = 1e+20)
<BLANKLINE>
Masked arrays are often used when operating on arrays containing
missing or invalid entries.
matrix
A 2-dimensional ndarray that preserves its two-dimensional nature
throughout operations. It has certain special operations, such as ``*``
(matrix multiplication) and ``**`` (matrix power), defined::
>>> x = np.mat([[1, 2], [3, 4]])
>>> x
matrix([[1, 2],
[3, 4]])
>>> x**2
matrix([[ 7, 10],
[15, 22]])
method
A function associated with an object. For example, each ndarray has a
method called ``repeat``::
>>> x = np.array([1, 2, 3])
>>> x.repeat(2)
array([1, 1, 2, 2, 3, 3])
ndarray
See *array*.
record array
An `ndarray`_ with `structured data type`_ which has been subclassed as
np.recarray and whose dtype is of type np.record, making the
fields of its data type accessible by attribute.
reference
If ``a`` is a reference to ``b``, then ``(a is b) == True``. Therefore,
``a`` and ``b`` are different names for the same Python object.
row-major
A way to represent items in a N-dimensional array in the 1-dimensional
computer memory. In row-major order, the rightmost index "varies
the fastest": for example the array::
[[1, 2, 3],
[4, 5, 6]]
is represented in the row-major order as::
[1, 2, 3, 4, 5, 6]
Row-major order is also known as the C order, as the C programming
language uses it. New Numpy arrays are by default in row-major order.
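Mirroring the column-major entry, a small illustrative example::
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> a.ravel(order='C')     # read the elements in row-major (default) order
array([1, 2, 3, 4, 5, 6])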
self
Often seen in method signatures, ``self`` refers to the instance
of the associated class. For example:
>>> class Paintbrush(object):
...     color = 'blue'
...
...     def paint(self):
...         print("Painting the city %s!" % self.color)
...
>>> p = Paintbrush()
>>> p.color = 'red'
>>> p.paint() # self refers to 'p'
Painting the city red!
slice
Used to select only certain elements from a sequence::
>>> x = list(range(5))
>>> x
[0, 1, 2, 3, 4]
>>> x[1:3] # slice from 1 to 3 (excluding 3 itself)
[1, 2]
>>> x[1:5:2] # slice from 1 to 5, but skipping every second element
[1, 3]
>>> x[::-1] # slice a sequence in reverse
[4, 3, 2, 1, 0]
Arrays may have more than one dimension, each which can be sliced
individually::
>>> x = np.array([[1, 2], [3, 4]])
>>> x
array([[1, 2],
[3, 4]])
>>> x[:, 1]
array([2, 4])
structured data type
A data type composed of other data types, organised into named fields.
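A brief illustrative example (the field names here are chosen for demonstration)::
>>> dt = np.dtype([('name', 'U10'), ('age', np.int64)])
>>> people = np.array([('Alice', 30), ('Bob', 25)], dtype=dt)
>>> people['age']
array([30, 25])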
tuple
A sequence that may contain a variable number of types of any
kind. A tuple is immutable, i.e., once constructed it cannot be
changed. Similar to a list, it can be indexed and sliced::
>>> x = (1, 'one', [1, 2])
>>> x
(1, 'one', [1, 2])
>>> x[0]
1
>>> x[:2]
(1, 'one')
A useful concept is "tuple unpacking", which allows variables to
be assigned to the contents of a tuple::
>>> x, y = (1, 2)
>>> x, y = 1, 2
This is often used when a function returns multiple values:
>>> def return_many():
...     return 1, 'alpha', None
>>> a, b, c = return_many()
>>> a, b, c
(1, 'alpha', None)
>>> a
1
>>> b
'alpha'
ufunc
Universal function. A fast element-wise array operation. Examples include
``add``, ``sin`` and ``logical_or``.
view
An array that does not own its data, but refers to another array's
data instead. For example, we may create a view that only shows
every second element of another array::
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
>>> y = x[::2]
>>> y
array([0, 2, 4])
>>> x[0] = 3 # changing x changes y as well, since y is a view on x
>>> y
array([3, 2, 4])
wrapper
Python is a high-level (highly abstracted, or English-like) language.
This abstraction comes at a price in execution speed, and sometimes
it becomes necessary to use lower level languages to do fast
computations. A wrapper is code that provides a bridge between
high and the low level languages, allowing, e.g., Python to execute
code written in C or Fortran.
Examples include ctypes, SWIG and Cython (which wraps C and C++)
and f2py (which wraps Fortran).
""" |
"""
===============
Array Internals
===============
Internal organization of numpy arrays
=====================================
It helps to understand a bit about how numpy arrays are handled under the covers; this makes many other aspects of numpy easier to understand. This section will not go into great detail. Those wishing to understand the full details are referred to Travis Oliphant's book "Guide to Numpy".
Numpy arrays consist of two major components, the raw array data (from now on,
referred to as the data buffer), and the information about the raw array data.
The data buffer is typically what people think of as arrays in C or Fortran,
a contiguous (and fixed) block of memory containing fixed sized data items.
Numpy also contains a significant set of data that describes how to interpret
the data in the data buffer. This extra information contains (among other things):
1) The basic data element's size in bytes
2) The start of the data within the data buffer (an offset relative to the
beginning of the data buffer).
3) The number of dimensions and the size of each dimension
4) The separation between elements for each dimension (the 'stride'). This
does not have to be a multiple of the element size
5) The byte order of the data (which may not be the native byte order)
6) Whether the buffer is read-only
7) Information (via the dtype object) about the interpretation of the basic
data element. The basic data element may be as simple as a int or a float,
or it may be a compound object (e.g., struct-like), a fixed character field,
or Python object pointers.
8) Whether the array is to be interpreted as C-order or Fortran-order.
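A brief, hedged illustration of some of the metadata just listed, on a C-ordered
float64 array (the numbering in the comments refers to the items above)::

  >>> a = np.arange(12, dtype=np.float64).reshape(3, 4)
  >>> a.itemsize                       # (1) element size in bytes
  8
  >>> a.strides                        # (4) byte separation per dimension
  (32, 8)
  >>> a.dtype, a.flags['WRITEABLE']    # (7) element interpretation, (6) writability
  (dtype('float64'), True)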
This arrangement allows for very flexible use of arrays. One thing that it allows
is simple changes to the metadata to change the interpretation of the array buffer.
Changing the byteorder of the array is a simple change involving no rearrangement
of the data. The shape of the array can be changed very easily without changing
anything in the data buffer or doing any data copying at all.
Among other things that this makes possible, one can create a new array metadata
object that uses the same data buffer
to create a new view of that data buffer that has a different interpretation
of the buffer (e.g., different shape, offset, byte order, strides, etc) but
shares the same data bytes. Many operations in numpy do just this, such as
slicing. Other operations, such as transpose, don't move data elements
around in the array, but rather change the information about the shape and strides so that the indexing of the array changes, but the data in the buffer doesn't move.
Typically these new arrays (new metadata but the same data buffer) are
new 'views' into the data buffer. There is a different ndarray object, but it
uses the same data buffer. This is why it is necessary to force copies through
use of the .copy() method if one really wants to make a new and independent
copy of the data buffer.
New views into arrays mean that the object reference count for the data buffer
increases. Simply doing away with the original array object will not remove the
data buffer if other views of it still exist.
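A small sketch of the view/copy distinction (illustrative)::

  >>> a = np.arange(6)
  >>> v = a[::2]         # a view: new ndarray object, same data buffer
  >>> v[0] = 100
  >>> a                  # the change is visible through the original array
  array([100,   1,   2,   3,   4,   5])
  >>> c = a[::2].copy()  # an independent data buffer
  >>> c[0] = -1
  >>> a[0]
  100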
Multidimensional Array Indexing Order Issues
============================================
What is the right way to index
multi-dimensional arrays? Before you jump to conclusions about the one and
true way to index multi-dimensional arrays, it pays to understand why this is
a confusing issue. This section will try to explain in detail how numpy
indexing works and why we adopt the convention we do for images, and when it
may be appropriate to adopt other conventions.
The first thing to understand is
that there are two conflicting conventions for indexing 2-dimensional arrays.
Matrix notation uses the first index to indicate which row is being selected and
the second index to indicate which column is selected. This is opposite the
geometrically oriented convention for images where people generally think the
first index represents x position (i.e., column) and the second represents y
position (i.e., row). This alone is the source of much confusion;
matrix-oriented users and image-oriented users expect two different things with
regard to indexing.
The second issue to understand is how indices correspond
to the order the array is stored in memory. In Fortran the first index is the
most rapidly varying index when moving through the elements of a two
dimensional array as it is stored in memory. If you adopt the matrix
convention for indexing, then this means the matrix is stored one column at a
time (since the first index moves to the next row as it changes). Thus Fortran
is considered a Column-major language. C has just the opposite convention. In
C, the last index changes most rapidly as one moves through the array as
stored in memory. Thus C is a Row-major language. The matrix is stored by
rows. Note that in both cases it presumes that the matrix convention for
indexing is being used, i.e., for both Fortran and C, the first index is the
row. Note this convention implies that the indexing convention is invariant
and that the data order changes to keep that so.
But that's not the only way
to look at it. Suppose one has large two-dimensional arrays (images or
matrices) stored in data files. Suppose the data are stored by rows rather than
by columns. If we are to preserve our index convention (whether matrix or
image) that means that depending on the language we use, we may be forced to
reorder the data if it is read into memory to preserve our indexing
convention. For example if we read row-ordered data into memory without
reordering, it will match the matrix indexing convention for C, but not for
Fortran. Conversely, it will match the image indexing convention for Fortran,
but not for C. For C, if one is using data stored in row order, and one wants
to preserve the image index convention, the data must be reordered when
reading into memory.
In the end, which you do for Fortran or C depends on
which is more important, not reordering data or preserving the indexing
convention. For large images, reordering data is potentially expensive, and
often the indexing convention is inverted to avoid that.
The situation with
numpy makes this issue yet more complicated. The internal machinery of numpy
arrays is flexible enough to accept any ordering of indices. One can simply
reorder indices by manipulating the internal stride information for arrays
without reordering the data at all. Numpy will know how to map the new index
order to the data without moving the data.
So if this is true, why not choose
the index order that matches what you most expect? In particular, why not define
row-ordered images to use the image convention? (This is sometimes referred
to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
order options for array ordering in numpy.) The drawback of doing this is
potential performance penalties. It's common to access the data sequentially,
either implicitly in array operations or explicitly by looping over rows of an
image. When that is done, then the data will be accessed in non-optimal order.
As the first index is incremented, what is actually happening is that elements
spaced far apart in memory are being sequentially accessed, with usually poor
memory access speeds. For example, consider a two dimensional image 'im' defined so
that im[0, 10] represents the value at x=0, y=10. To be consistent with usual
Python behavior, im[0] would then represent a column at x=0. Yet that data
would be spread over the whole array since the data are stored in row order.
Despite the flexibility of numpy's indexing, it can't really paper over the fact that
basic operations are rendered inefficient because of data order, or that getting
contiguous subarrays is still awkward (e.g., im[:,0] for the first row, vs
im[0]). Thus one can't use an idiom such as 'for row in im'; 'for col in im'
does work, but doesn't yield contiguous column data.
As it turns out, numpy is
smart enough when dealing with ufuncs to determine which index is the most
rapidly varying one in memory and uses that for the innermost loop. Thus for
ufuncs there is no large intrinsic advantage to either approach in most cases.
On the other hand, use of .flat with a FORTRAN-ordered array will lead to
non-optimal memory access as adjacent elements in the flattened array (iterator,
actually) are not contiguous in memory.
Indeed, the fact is that Python
indexing on lists and other sequences naturally leads to an outside-to-inside
ordering (the first index gets the largest grouping, the next the next largest,
and the last gets the smallest element). Since image data are normally stored
by rows, this corresponds to position within rows being the last item indexed.
If you do want to use Fortran ordering, realize that
there are two approaches to consider: 1) accept that the first index is just not
the most rapidly changing in memory and have all your I/O routines reorder
your data when going from memory to disk or vice versa, or 2) use numpy's
mechanism for mapping the first index to the most rapidly varying data. We
recommend the former if possible. The disadvantage of the latter is that many
of numpy's functions will yield arrays without Fortran ordering unless you are
careful to use the 'order' keyword. Doing this would be highly inconvenient.
Otherwise we recommend simply learning to reverse the usual order of indices
when accessing elements of an array. Granted, it goes against the grain, but
it is more in line with Python semantics and the natural order of the data.
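A small, hedged illustration of the 'order' keyword and the underlying strides
discussed above (the exact stride values assume 8-byte integers)::

  >>> a = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int64)   # C (row-major) order
  >>> f = np.asfortranarray(a)                               # same values, Fortran order
  >>> a.strides, f.strides
  ((24, 8), (8, 16))
  >>> a.ravel(order='K')    # memory order of the C-ordered array
  array([1, 2, 3, 4, 5, 6])
  >>> f.ravel(order='K')    # memory order of the Fortran-ordered array
  array([1, 4, 2, 5, 3, 6])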
""" |
"""
Library for working with the tor process.
**Module Overview:**
::
ControllerError - Base exception raised when using the controller.
|- ProtocolError - Malformed socket data.
|- OperationFailed - Tor was unable to successfully complete the operation.
| |- UnsatisfiableRequest - Tor was unable to satisfy a valid request.
| | +- CircuitExtensionFailed - Attempt to make or extend a circuit failed.
| +- InvalidRequest - Invalid request.
| +- InvalidArguments - Invalid request parameters.
+- SocketError - Communication with the socket failed.
+- SocketClosed - Socket has been shut down.
.. data:: Runlevel (enum)
Rating of importance used for event logging.
=========== ===========
Runlevel Description
=========== ===========
**ERR** critical issues that impair tor's ability to function
**WARN** non-critical issues the user should be aware of
**NOTICE** information that may be helpful to the user
**INFO** high level runtime information
**DEBUG** low level runtime information
=========== ===========
.. data:: Signal (enum)
Signals that the tor process will accept.
========================= ===========
Signal Description
========================= ===========
**RELOAD** or **HUP** reloads our torrc
**SHUTDOWN** or **INT** shut down, waiting ShutdownWaitLength first if we're a relay
**DUMP** or **USR1** dumps information about open connections and circuits to our log
**DEBUG** or **USR2** switch our logging to the DEBUG runlevel
**HALT** or **TERM** exit tor immediately
**NEWNYM** switch to new circuits, so new application requests don't share any circuits with old ones (this also clears our DNS cache)
**CLEARDNSCACHE** clears cached DNS results
========================= ===========
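For instance, a common use of these signals is requesting new circuits through a
:class:`~stem.control.Controller` (a brief, illustrative sketch; it assumes tor's
ControlPort is listening on 9051)::

  from stem import Signal
  from stem.control import Controller

  with Controller.from_port(port = 9051) as controller:
    controller.authenticate()
    controller.signal(Signal.NEWNYM)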
.. data:: CircStatus (enum)
Statuses that a circuit can be in. Tor may provide statuses not in this enum.
============ ===========
CircStatus Description
============ ===========
**LAUNCHED** new circuit was created
**BUILT** circuit finished being created and can accept traffic
**EXTENDED** circuit has been extended by a hop
**FAILED** circuit construction failed
**CLOSED** circuit has been closed
============ ===========
.. data:: CircBuildFlag (enum)
Attributes about how a circuit is built. These were introduced in tor version
IP_ADDRESS. Tor may provide flags not in this enum.
================= ===========
CircBuildFlag Description
================= ===========
**ONEHOP_TUNNEL** single hop circuit to fetch directory information
**IS_INTERNAL** circuit that won't be used for client traffic
**NEED_CAPACITY** circuit only includes high capacity relays
**NEED_UPTIME** circuit only includes relays with a high uptime
================= ===========
.. data:: CircPurpose (enum)
Description of what a circuit is intended for. These were introduced in tor
version IP_ADDRESS. Tor may provide purposes not in this enum.
==================== ===========
CircPurpose Description
==================== ===========
**GENERAL** client traffic or fetching directory information
**HS_CLIENT_INTRO** client side introduction point for a hidden service circuit
**HS_CLIENT_REND** client side hidden service rendezvous circuit
**HS_SERVICE_INTRO** server side introduction point for a hidden service circuit
**HS_SERVICE_REND** server side hidden service rendezvous circuit
**TESTING** testing to see if we're reachable, so we can be used as a relay
**CONTROLLER** circuit that was built by a controller
**MEASURE_TIMEOUT** unknown (https://trac.torproject.org/7626)
==================== ===========
.. data:: CircClosureReason (enum)
Reason that a circuit is being closed or failed to be established. Tor may
provide reasons not in this enum.
========================= ===========
CircClosureReason Description
========================= ===========
**NONE** no reason given
**TORPROTOCOL** violation in the tor protocol
**INTERNAL** internal error
**REQUESTED** requested by the client via a TRUNCATE command
**HIBERNATING** relay is presently hibernating
**RESOURCELIMIT** relay is out of memory, sockets, or circuit IDs
**CONNECTFAILED** unable to contact the relay
**OR_IDENTITY** relay had the wrong OR identification
**OR_CONN_CLOSED** connection failed after being established
**FINISHED** circuit has expired (see tor's MaxCircuitDirtiness config option)
**TIMEOUT** circuit construction timed out
**DESTROYED** circuit unexpectedly closed
**NOPATH** not enough relays to make a circuit
**NOSUCHSERVICE** requested hidden service does not exist
**MEASUREMENT_EXPIRED** same as **TIMEOUT** except that it was left open for measurement purposes
========================= ===========
.. data:: CircEvent (enum)
Type of change reflected in a circuit by a CIRC_MINOR event. Tor may provide
event types not in this enum.
===================== ===========
CircEvent Description
===================== ===========
**PURPOSE_CHANGED** circuit purpose or hidden service state has changed
**CANNIBALIZED** circuit connections are being reused for a different circuit
===================== ===========
.. data:: HiddenServiceState (enum)
State that a hidden service circuit can have. These were introduced in tor
version IP_ADDRESS. Tor may provide states not in this enum.
Enumerations fall into four groups based on their prefix...
======= ===========
Prefix Description
======= ===========
HSCI_* client-side introduction-point
HSCR_* client-side rendezvous-point
HSSI_* service-side introduction-point
HSSR_* service-side rendezvous-point
======= ===========
============================= ===========
HiddenServiceState Description
============================= ===========
**HSCI_CONNECTING** connecting to the introductory point
**HSCI_INTRO_SENT** sent INTRODUCE1 and awaiting a reply
**HSCI_DONE** received a reply, circuit is closing
**HSCR_CONNECTING** connecting to the introductory point
**HSCR_ESTABLISHED_IDLE** rendezvous-point established, awaiting an introduction
**HSCR_ESTABLISHED_WAITING** introduction received, awaiting a rend
**HSCR_JOINED** connected to the hidden service
**HSSI_CONNECTING** connecting to the introductory point
**HSSI_ESTABLISHED** established introductory point
**HSSR_CONNECTING** connecting to the introductory point
**HSSR_JOINED** connected to the rendezvous-point
============================= ===========
.. data:: RelayEndReason (enum)
Reasons why the stream is to be closed.
=================== ===========
RelayEndReason Description
=================== ===========
**MISC** none of the following reasons
**RESOLVEFAILED** unable to resolve the hostname
**CONNECTREFUSED** remote host refused the connection
**EXITPOLICY** OR refuses to connect to the destination
**DESTROY** circuit is being shut down
**DONE** connection has been closed
**TIMEOUT** connection timed out
**NOROUTE** routing error while contacting the destination
**HIBERNATING** relay is temporarily hibernating
**INTERNAL** internal error at the relay
**RESOURCELIMIT** relay has insufficient resources to service the request
**CONNRESET** connection was unexpectedly reset
**TORPROTOCOL** violation in the tor protocol
**NOTDIRECTORY** directory information requested from a relay that isn't mirroring it
=================== ===========
.. data:: StreamStatus (enum)
State that a stream going through tor can have. Tor may provide states not in
this enum.
================= ===========
StreamStatus Description
================= ===========
**NEW** request for a new connection
**NEWRESOLVE** request to resolve an address
**REMAP** address is being re-mapped to another
**SENTCONNECT** sent a connect cell along a circuit
**SENTRESOLVE** sent a resolve cell along a circuit
**SUCCEEDED** stream has been established
**FAILED** stream is detached, and won't be re-established
**DETACHED** stream is detached, but might be re-established
**CLOSED** stream has closed
================= ===========
.. data:: StreamClosureReason (enum)
Reason that a stream is being closed or failed to be established. This
includes all values in the :data:`~stem.RelayEndReason` enumeration as
well as the following. Tor may provide reasons not in this enum.
===================== ===========
StreamClosureReason Description
===================== ===========
**END** endpoint has sent a RELAY_END cell
**PRIVATE_ADDR** endpoint was a private address (IP_ADDRESS, IP_ADDRESS etc)
===================== ===========
.. data:: StreamSource (enum)
Cause of a stream being remapped to another address. Tor may provide sources
not in this enum.
============= ===========
StreamSource Description
============= ===========
**CACHE** tor is remapping because of a cached answer
**EXIT** exit relay requested the remap
============= ===========
.. data:: StreamPurpose (enum)
Purpose of the stream. This is only provided with new streams and tor may
provide purposes not in this enum.
================= ===========
StreamPurpose Description
================= ===========
**DIR_FETCH** fetching directory information (descriptors, consensus, etc)
**DIR_UPLOAD** uploading our descriptor to an authority
**DNS_REQUEST** user initiated DNS request
**DIRPORT_TEST** checking that our directory port is reachable externally
**USER** either relaying user traffic or not one of the above categories
================= ===========
.. data:: ORStatus (enum)
State that an OR connection can have. Tor may provide states not in this
enum.
=============== ===========
ORStatus Description
=============== ===========
**NEW** received OR connection, starting server-side handshake
**LAUNCHED** launched outbound OR connection, starting client-side handshake
**CONNECTED** OR connection has been established
**FAILED** attempt to establish OR connection failed
**CLOSED** OR connection has been closed
=============== ===========
.. data:: ORClosureReason (enum)
Reason that an OR connection is being closed or failed to be established. Tor
may provide reasons not in this enum.
=================== ===========
ORClosureReason Description
=================== ===========
**DONE** OR connection shut down cleanly
**CONNECTREFUSED** got a ECONNREFUSED when connecting to the relay
**IDENTITY** identity of the relay wasn't what we expected
**CONNECTRESET** got a ECONNRESET or similar error from relay
**TIMEOUT** got a ETIMEOUT or similar error from relay
**NOROUTE** got a ENOTCONN, ENETUNREACH, ENETDOWN, EHOSTUNREACH, or similar error from relay
**IOERROR** got a different kind of error from relay
**RESOURCELIMIT** relay has insufficient resources to service the request
**MISC** connection refused for another reason
=================== ===========
.. data:: AuthDescriptorAction (enum)
Actions that directory authorities might take with relay descriptors. Tor may
provide reasons not in this enum.
===================== ===========
AuthDescriptorAction Description
===================== ===========
**ACCEPTED** accepting the descriptor as the newest version
**DROPPED** descriptor rejected without notifying the relay
**REJECTED** relay notified that its descriptor has been rejected
===================== ===========
.. data:: StatusType (enum)
Sources for tor status events. Tor may provide types not in this enum.
============= ===========
StatusType Description
============= ===========
**GENERAL** general tor activity, not specifically as a client or relay
**CLIENT** related to our activity as a tor client
**SERVER** related to our activity as a tor relay
============= ===========
.. data:: GuardType (enum)
Use that a guard relay can be put to. Tor may provide types not in this enum.
Enum descriptions are pending...
https://trac.torproject.org/7619
=========== ===========
GuardType Description
=========== ===========
**ENTRY** unknown
=========== ===========
.. data:: GuardStatus (enum)
Status a guard relay can have. Tor may provide types not in this enum.
Enum descriptions are pending...
https://trac.torproject.org/7619
============= ===========
GuardStatus Description
============= ===========
**NEW** unknown
**UP** unknown
**DOWN** unknown
**BAD** unknown
**GOOD** unknown
**DROPPED** unknown
============= ===========
.. data:: TimeoutSetType (enum)
Way in which the timeout value of a circuit is changing. Tor may provide
types not in this enum.
=============== ===========
TimeoutSetType Description
=============== ===========
**COMPUTED** tor has computed a new timeout based on prior circuits
**RESET** timeout reverted to its default
**SUSPENDED** timeout reverted to its default until network connectivity has recovered
**DISCARD** throwing out timeout value from when the network was down
**RESUME** resumed calculations to determine the proper timeout
=============== ===========
""" |
"""
Database with model functions.
To be used with the L{cc.ivs.sigproc.fit.minimizer} function or with the L{evaluate}
function in this module.
>>> p = plt.figure()
>>> x = np.linspace(-10,10,1000)
>>> p = plt.plot(x,evaluate('gauss',x,[5,1.,2.,0.5]),label='gauss')
>>> p = plt.plot(x,evaluate('voigt',x,[20.,1.,1.5,3.,0.5]),label='voigt')
>>> p = plt.plot(x,evaluate('lorentz',x,[5,1.,2.,0.5]),label='lorentz')
>>> leg = plt.legend(loc='best')
>>> leg.get_frame().set_alpha(0.5)
]include figure]]ivs_sigproc_fit_funclib01.png]
>>> p = plt.figure()
>>> x = np.linspace(0,10,1000)[1:]
>>> p = plt.plot(x,evaluate('power_law',x,[2.,3.,1.5,0,0.5]),label='power_law')
>>> p = plt.plot(x,evaluate('power_law',x,[2.,3.,1.5,0,0.5])+evaluate('gauss',x,[1.,5.,0.5,0,0]),label='power_law + gauss')
>>> leg = plt.legend(loc='best')
>>> leg.get_frame().set_alpha(0.5)
]include figure]]ivs_sigproc_fit_funclib02.png]
>>> p = plt.figure()
>>> x = np.linspace(0,10,1000)
>>> p = plt.plot(x,evaluate('sine',x,[1.,2.,0,0]),label='sine')
>>> p = plt.plot(x,evaluate('sine_linfreqshift',x,[1.,0.5,0,0,.5]),label='sine_linfreqshift')
>>> p = plt.plot(x,evaluate('sine_expfreqshift',x,[1.,0.5,0,0,1.2]),label='sine_expfreqshift')
>>> leg = plt.legend(loc='best')
>>> leg.get_frame().set_alpha(0.5)
]include figure]]ivs_sigproc_fit_funclib03.png]
>>> p = plt.figure()
>>> p = plt.plot(x,evaluate('sine',x,[1.,2.,0,0]),label='sine')
>>> p = plt.plot(x,evaluate('sine_orbit',x,[1.,2.,0,0,0.1,10.,0.1]),label='sine_orbit')
>>> leg = plt.legend(loc='best')
>>> leg.get_frame().set_alpha(0.5)
]include figure]]ivs_sigproc_fit_funclib03a.png]
>>> p = plt.figure()
>>> x_single = np.linspace(0,10,1000)
>>> x_double = np.vstack([x_single,x_single])
>>> p = plt.plot(x_single,evaluate('kepler_orbit',x_single,[2.5,0.,0.5,0,3,1.]),label='kepler_orbit (single)')
>>> y_double = evaluate('kepler_orbit',x_double,[2.5,0.,0.5,0,3,2.,-4,2.],type='double')
>>> p = plt.plot(x_double[0],y_double[0],label='kepler_orbit (double 1)')
>>> p = plt.plot(x_double[1],y_double[1],label='kepler_orbit (double 2)')
>>> p = plt.plot(x,evaluate('box_transit',x,[2.,0.4,0.1,0.3,0.5]),label='box_transit')
>>> leg = plt.legend(loc='best')
>>> leg.get_frame().set_alpha(0.5)
]include figure]]ivs_sigproc_fit_funclib04.png]
>>> p = plt.figure()
>>> x = np.linspace(-1,1,1000)
>>> gammas = [-0.25,0.1,0.25,0.5,1,2,4]
>>> y = np.array([evaluate('soft_parabola',x,[1.,0,1.,gamma]) for gamma in gammas])
divide by zero encountered in power
>>> for iy,gamma in zip(y,gammas): p = plt.plot(x,iy,label="soft_parabola $\gamma$={:.2f}".format(gamma))
>>> leg = plt.legend(loc='best')
>>> leg.get_frame().set_alpha(0.5)
]include figure]]ivs_sigproc_fit_funclib05.png]
>>> p = plt.figure()
>>> x = np.logspace(-1,2,1000)
>>> blbo = evaluate('blackbody',x,[10000.,1.],wave_units='micron',flux_units='W/m3')
>>> raje = evaluate('rayleigh_jeans',x,[10000.,1.],wave_units='micron',flux_units='W/m3')
>>> wien = evaluate('wien',x,[10000.,1.],wave_units='micron',flux_units='W/m3')
>>> p = plt.subplot(221)
>>> p = plt.title(r'$\lambda$ vs $F_\lambda$')
>>> p = plt.loglog(x,blbo,label='Black Body')
>>> p = plt.loglog(x,raje,label='Rayleigh-Jeans')
>>> p = plt.loglog(x,wien,label='Wien')
>>> leg = plt.legend(loc='best')
>>> leg.get_frame().set_alpha(0.5)
>>> blbo = evaluate('blackbody',x,[10000.,1.],wave_units='micron',flux_units='Jy')
>>> raje = evaluate('rayleigh_jeans',x,[10000.,1.],wave_units='micron',flux_units='Jy')
>>> wien = evaluate('wien',x,[10000.,1.],wave_units='micron',flux_units='Jy')
>>> p = plt.subplot(223)
>>> p = plt.title(r"$\lambda$ vs $F_\\nu$")
>>> p = plt.loglog(x,blbo,label='Black Body')
>>> p = plt.loglog(x,raje,label='Rayleigh-Jeans')
>>> p = plt.loglog(x,wien,label='Wien')
>>> leg = plt.legend(loc='best')
>>> leg.get_frame().set_alpha(0.5)
>>> x = np.logspace(0.47,3.47,1000)
>>> blbo = evaluate('blackbody',x,[10000.,1.],wave_units='THz',flux_units='Jy')
>>> raje = evaluate('rayleigh_jeans',x,[10000.,1.],wave_units='THz',flux_units='Jy')
>>> wien = evaluate('wien',x,[10000.,1.],wave_units='THz',flux_units='Jy')
>>> p = plt.subplot(224)
>>> p = plt.title(r"$\\nu$ vs $F_\\nu$")
>>> p = plt.loglog(x,blbo,label='Black Body')
>>> p = plt.loglog(x,raje,label='Rayleigh-Jeans')
>>> p = plt.loglog(x,wien,label='Wien')
>>> leg = plt.legend(loc='best')
>>> leg.get_frame().set_alpha(0.5)
>>> blbo = evaluate('blackbody',x,[10000.,1.],wave_units='THz',flux_units='W/m3')
>>> raje = evaluate('rayleigh_jeans',x,[10000.,1.],wave_units='THz',flux_units='W/m3')
>>> wien = evaluate('wien',x,[10000.,1.],wave_units='THz',flux_units='W/m3')
>>> p = plt.subplot(222)
>>> p = plt.title(r"$\\nu$ vs $F_\lambda$")
>>> p = plt.loglog(x,blbo,label='Black Body')
>>> p = plt.loglog(x,raje,label='Rayleigh-Jeans')
>>> p = plt.loglog(x,wien,label='Wien')
>>> leg = plt.legend(loc='best')
>>> leg.get_frame().set_alpha(0.5)
]include figure]]ivs_sigproc_fit_funclib06.png]
""" |
"""
=============
Miscellaneous
=============
IEEE 754 Floating Point Special Values
--------------------------------------
Special values defined in numpy: nan, inf.
NaNs can be used as a poor-man's mask (if you don't care what the
original value was)
Note: cannot use equality to test NaNs. E.g.: ::
>>> myarr = np.array([1., 0., np.nan, 3.])
>>> np.where(myarr == np.nan)
>>> np.nan == np.nan # is always False! Use special numpy functions instead.
False
>>> myarr[myarr == np.nan] = 0. # doesn't work
>>> myarr
array([ 1., 0., NaN, 3.])
>>> myarr[np.isnan(myarr)] = 0. # use this instead
>>> myarr
array([ 1., 0., 0., 3.])
Other related special value functions: ::
isinf(): True if value is inf
isfinite(): True if not nan or inf
nan_to_num(): Map nan to 0, inf to max float, -inf to min float
The following corresponds to the usual functions except that nans are excluded
from the results: ::
nansum()
nanmax()
nanmin()
nanargmax()
nanargmin()
>>> x = np.arange(10.)
>>> x[3] = np.nan
>>> x.sum()
nan
>>> np.nansum(x)
42.0
How numpy handles numerical exceptions
--------------------------------------
The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow``
and ``'ignore'`` for ``underflow``. But this can be changed, and it can be
set individually for different kinds of exceptions. The different behaviors
are:
- 'ignore' : Take no action when the exception occurs.
- 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module).
- 'raise' : Raise a `FloatingPointError`.
- 'call' : Call a function specified using the `seterrcall` function.
- 'print' : Print a warning directly to ``stdout``.
- 'log' : Record error in a Log object specified by `seterrcall`.
These behaviors can be set for all kinds of errors or specific ones:
- all : apply to all numeric exceptions
- invalid : when NaNs are generated
- divide : divide by zero (for integers as well!)
- overflow : floating point overflows
- underflow : floating point underflows
Note that integer divide-by-zero is handled by the same machinery.
These behaviors are set on a per-thread basis.
Examples
--------
::
>>> oldsettings = np.seterr(all='warn')
>>> np.zeros(5,dtype=np.float32)/0.
invalid value encountered in divide
>>> j = np.seterr(under='ignore')
>>> np.array([1.e-100])**10
>>> j = np.seterr(invalid='raise')
>>> np.sqrt(np.array([-1.]))
FloatingPointError: invalid value encountered in sqrt
>>> def errorhandler(errstr, errflag):
... print "saw stupid error!"
>>> np.seterrcall(errorhandler)
<function errorhandler at 0x...>
>>> j = np.seterr(all='call')
>>> np.zeros(5, dtype=np.int32)/0
FloatingPointError: invalid value encountered in divide
saw stupid error!
>>> j = np.seterr(**oldsettings) # restore previous
... # error-handling settings
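The same machinery is also available as a context manager, ``np.errstate``,
which restores the previous settings on exit; a minimal sketch: ::
    with np.errstate(divide='ignore', invalid='ignore'):
        result = np.zeros(5) / 0.   # no warning inside the block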
Interfacing to C
----------------
Only a survey of the choices. Little detail on how each works.
1) Bare metal, wrap your own C-code manually.
- Plusses:
- Efficient
- No dependencies on other tools
- Minuses:
- Lots of learning overhead:
- need to learn basics of Python C API
- need to learn basics of numpy C API
- need to learn how to handle reference counting and love it.
- Reference counting often difficult to get right.
- getting it wrong leads to memory leaks, and worse, segfaults
- API will change for Python 3.0!
2) Cython
- Plusses:
- avoid learning C APIs
- no dealing with reference counting
- can code in pseudo python and generate C code
- can also interface to existing C code
- should shield you from changes to the Python C API
- has become the de-facto standard within the scientific Python community
- fast indexing support for arrays
- Minuses:
- Can write code in non-standard form which may become obsolete
- Not as flexible as manual wrapping
3) ctypes (see the sketch after this list)
- Plusses:
- part of Python standard library
- good for interfacing to existing sharable libraries, particularly
Windows DLLs
- avoids API/reference counting issues
- good numpy support: arrays have all these in their ctypes
attribute: ::
a.ctypes.data a.ctypes.get_strides
a.ctypes.data_as a.ctypes.shape
a.ctypes.get_as_parameter a.ctypes.shape_as
a.ctypes.get_data a.ctypes.strides
a.ctypes.get_shape a.ctypes.strides_as
- Minuses:
- can't use for writing code to be turned into C extensions, only a wrapper
tool.
4) SWIG (automatic wrapper generator)
- Plusses:
- around a long time
- multiple scripting language support
- C++ support
- Good for wrapping large (many functions) existing C libraries
- Minuses:
- generates lots of code between Python and the C code
- can cause performance problems that are nearly impossible to optimize
out
- interface files can be hard to write
- doesn't necessarily avoid reference counting issues or needing to know
APIs
5) scipy.weave
- Plusses:
- can turn many numpy expressions into C code
- dynamic compiling and loading of generated C code
- can embed pure C code in Python module and have weave extract, generate
interfaces and compile, etc.
- Minuses:
- Future very uncertain: it's the only part of Scipy not ported to Python 3
and is effectively deprecated in favor of Cython.
6) Psyco
- Plusses:
- Turns pure python into efficient machine code through jit-like
optimizations
- very fast when it optimizes well
- Minuses:
- Only on intel (windows?)
- Doesn't do much for numpy?
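As an illustration of the ctypes option above, a minimal sketch; the shared
library name and the ``scale_inplace`` function are hypothetical: ::
    import ctypes
    import numpy as np

    # Hypothetical library exposing: void scale_inplace(double *x, size_t n, double f)
    lib = ctypes.CDLL("./libexample.so")
    lib.scale_inplace.argtypes = [ctypes.POINTER(ctypes.c_double),
                                  ctypes.c_size_t, ctypes.c_double]
    lib.scale_inplace.restype = None

    a = np.arange(5, dtype=np.float64)
    lib.scale_inplace(a.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
                      a.size, 2.0)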
Interfacing to Fortran:
-----------------------
The clear choice to wrap Fortran code is
`f2py <http://docs.scipy.org/doc/numpy-dev/f2py/>`_.
Pyfort is an older alternative, but not supported any longer.
Fwrap is a newer project that looked promising but isn't being developed any
longer.
Interfacing to C++:
-------------------
1) Cython
2) CXX
3) Boost.python
4) SWIG
5) SIP (used mainly in PyQT)
""" |
"""
Low-level LAPACK functions
==========================
This module contains low-level functions from the LAPACK library.
.. versionadded:: 0.12.0
.. warning::
These functions do little to no error checking.
It is possible to cause crashes by mis-using them,
so prefer using the higher-level routines in `scipy.linalg`.
Finding functions
=================
.. autosummary::
get_lapack_funcs
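For example, `get_lapack_funcs` picks the routine matching the precision of
its array arguments; a minimal sketch (the return signature shown is that of
the ``gesv`` wrapper): ::
    import numpy as np
    from scipy.linalg.lapack import get_lapack_funcs

    a = np.array([[3., 1.], [1., 2.]])
    b = np.array([1., 0.])
    gesv, = get_lapack_funcs(('gesv',), (a, b))   # dgesv for float64 input
    lu, piv, x, info = gesv(a, b)                 # x solves a @ x = b when info == 0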
All functions
=============
.. autosummary::
:toctree: generated/
sgbsv
dgbsv
cgbsv
zgbsv
sgbtrf
dgbtrf
cgbtrf
zgbtrf
sgbtrs
dgbtrs
cgbtrs
zgbtrs
sgebal
dgebal
cgebal
zgebal
sgees
dgees
cgees
zgees
sgeev
dgeev
cgeev
zgeev
sgeev_lwork
dgeev_lwork
cgeev_lwork
zgeev_lwork
sgegv
dgegv
cgegv
zgegv
sgehrd
dgehrd
cgehrd
zgehrd
sgehrd_lwork
dgehrd_lwork
cgehrd_lwork
zgehrd_lwork
sgelss
dgelss
cgelss
zgelss
sgelss_lwork
dgelss_lwork
cgelss_lwork
zgelss_lwork
sgelsd
dgelsd
cgelsd
zgelsd
sgelsd_lwork
dgelsd_lwork
cgelsd_lwork
zgelsd_lwork
sgelsy
dgelsy
cgelsy
zgelsy
sgelsy_lwork
dgelsy_lwork
cgelsy_lwork
zgelsy_lwork
sgeqp3
dgeqp3
cgeqp3
zgeqp3
sgeqrf
dgeqrf
cgeqrf
zgeqrf
sgerqf
dgerqf
cgerqf
zgerqf
sgesdd
dgesdd
cgesdd
zgesdd
sgesdd_lwork
dgesdd_lwork
cgesdd_lwork
zgesdd_lwork
sgesv
dgesv
cgesv
zgesv
sgetrf
dgetrf
cgetrf
zgetrf
sgetri
dgetri
cgetri
zgetri
sgetri_lwork
dgetri_lwork
cgetri_lwork
zgetri_lwork
sgetrs
dgetrs
cgetrs
zgetrs
sgges
dgges
cgges
zgges
sggev
dggev
cggev
zggev
chbevd
zhbevd
chbevx
zhbevx
cheev
zheev
cheevd
zheevd
cheevr
zheevr
chegv
zhegv
chegvd
zhegvd
chegvx
zhegvx
dlasd4
slasd4
slaswp
dlaswp
claswp
zlaswp
slauum
dlauum
clauum
zlauum
spbsv
dpbsv
cpbsv
zpbsv
spbtrf
dpbtrf
cpbtrf
zpbtrf
spbtrs
dpbtrs
cpbtrs
zpbtrs
sposv
dposv
cposv
zposv
spotrf
dpotrf
cpotrf
zpotrf
spotri
dpotri
cpotri
zpotri
spotrs
dpotrs
cpotrs
zpotrs
strsyl
dtrsyl
ctrsyl
ztrsyl
strtri
dtrtri
ctrtri
ztrtri
strtrs
dtrtrs
ctrtrs
ztrtrs
cunghr
zunghr
cungqr
zungqr
cungrq
zungrq
cunmqr
zunmqr
sgtsv
dgtsv
cgtsv
zgtsv
sptsv
dptsv
cptsv
zptsv
slamch
dlamch
sorghr
dorghr
sorgqr
dorgqr
sorgrq
dorgrq
sormqr
dormqr
ssbev
dsbev
ssbevd
dsbevd
ssbevx
dsbevx
ssyev
dsyev
ssyevd
dsyevd
ssyevr
dsyevr
ssygv
dsygv
ssygvd
dsygvd
ssygvx
dsygvx
slange
dlange
clange
zlange
""" |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# adapted from http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
# -thepaul
# This is an implementation of wcwidth() and wcswidth() (defined in
# IEEE Std 1003.1-2001) for Unicode.
#
# http://www.opengroup.org/onlinepubs/007904975/functions/wcwidth.html
# http://www.opengroup.org/onlinepubs/007904975/functions/wcswidth.html
#
# In fixed-width output devices, Latin characters all occupy a single
# "cell" position of equal width, whereas ideographic CJK characters
# occupy two such cells. Interoperability between terminal-line
# applications and (teletype-style) character terminals using the
# UTF-8 encoding requires agreement on which character should advance
# the cursor by how many cell positions. No established formal
# standards exist at present on which Unicode character shall occupy
# how many cell positions on character terminals. These routines are
# a first attempt of defining such behavior based on simple rules
# applied to data provided by the Unicode Consortium.
#
# For some graphical characters, the Unicode standard explicitly
# defines a character-cell width via the definition of the East Asian
# FullWidth (F), Wide (W), Half-width (H), and Narrow (Na) classes.
# In all these cases, there is no ambiguity about which width a
# terminal shall use. For characters in the East Asian Ambiguous (A)
# class, the width choice depends purely on a preference of backward
# compatibility with either historic CJK or Western practice.
# Choosing single-width for these characters is easy to justify as
# the appropriate long-term solution, as the CJK practice of
# displaying these characters as double-width comes from historic
# implementation simplicity (8-bit encoded characters were displayed
# single-width and 16-bit ones double-width, even for Greek,
# Cyrillic, etc.) and not any typographic considerations.
#
# Much less clear is the choice of width for the Not East Asian
# (Neutral) class. Existing practice does not dictate a width for any
# of these characters. It would nevertheless make sense
# typographically to allocate two character cells to characters such
# as for instance EM SPACE or VOLUME INTEGRAL, which cannot be
# represented adequately with a single-width glyph. The following
# routines at present merely assign a single-cell width to all
# neutral characters, in the interest of simplicity. This is not
# entirely satisfactory and should be reconsidered before
# establishing a formal standard in this area. At the moment, the
# decision which Not East Asian (Neutral) characters should be
# represented by double-width glyphs cannot yet be answered by
# applying a simple rule from the Unicode database content. Setting
# up a proper standard for the behavior of UTF-8 character terminals
# will require a careful analysis not only of each Unicode character,
# but also of each presentation form, something the author of these
# routines has avoided doing so far.
#
# http://www.unicode.org/unicode/reports/tr11/
#
# NAME -- 2007-05-26 (Unicode 5.0)
#
# Permission to use, copy, modify, and distribute this software
# for any purpose and without fee is hereby granted. The author
# disclaims all warranties with regard to this software.
#
# Latest C version: http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
# auxiliary function for binary search in interval table
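# A sketch of that auxiliary function (each table entry is an inclusive
# (first, last) code-point interval; the table is sorted and non-overlapping;
# the actual implementation in this module may differ):

def _bisearch(ucs, table):
    """Return 1 if the code point `ucs` falls inside one of the
    intervals of `table`, else 0."""
    lo, hi = 0, len(table) - 1
    if not table or ucs < table[0][0] or ucs > table[hi][1]:
        return 0
    while hi >= lo:
        mid = (lo + hi) // 2
        if ucs > table[mid][1]:
            lo = mid + 1
        elif ucs < table[mid][0]:
            hi = mid - 1
        else:
            return 1
    return 0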
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
# SKR03
# =====
# This module provides a German chart of accounts based on the SKR03.
# With the current settings the company is not subject to VAT.
# This default is very easy to change; as a rule it requires an initial
# assignment of tax accounts to products and/or general ledger accounts,
# or to partners.
# The output taxes (Umsatzsteuer: full rate, reduced rate and tax-exempt)
# should be stored on the product master data (depending on the applicable
# tax rules). The assignment is made on the 'Finanzbuchhaltung'
# (financial accounting) tab (category: Umsatzsteuer).
# The input taxes (Vorsteuer: full rate, reduced rate and tax-exempt)
# should likewise be stored on the product master data (depending on the
# applicable tax rules). The assignment is made on the 'Finanzbuchhaltung'
# tab (category: Vorsteuer).
# The taxes for imports from and exports to EU countries, as well as for
# purchases from and sales to third countries, should be stored on the
# partner (supplier/customer), depending on the supplier's/customer's
# country of origin. The assignment on the customer takes precedence over
# the assignment on products and overrides it in individual cases.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g. mapping
# 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU'), so that this
# mapping can be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the input-tax base amount (e.g. Vorsteuer Steuermessbetrag, full
# rate 19%).
# The tax amount appears under the category 'Vorsteuern' (e.g. Vorsteuer
# 19%). Multidimensional hierarchies allow different positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the output-tax base amount (e.g. Umsatzsteuer Steuermessbetrag, full
# rate 19%).
# The tax amount appears under the category 'Umsatzsteuer' (e.g.
# Umsatzsteuer 19%). Multidimensional hierarchies allow different
# positions to be aggregated.
# The assigned tax codes can be reviewed on the level of each individual
# invoice (incoming and outgoing) and adjusted there if necessary.
# Credit notes lead to a correction (offsetting entry) of the tax posting,
# in the form of a mirrored posting.
# SKR04
# =====
# This module provides a German chart of accounts based on the SKR04.
# With the current settings the company is not subject to VAT, i.e. by
# default there is no assignment of products and general ledger accounts
# to tax keys.
# This default is very easy to change; as a rule it requires an initial
# assignment of tax keys to products and/or general ledger accounts,
# or to partners.
# The output taxes (Umsatzsteuer: full rate, reduced rate and tax-exempt)
# should be stored on the product master data (depending on the applicable
# tax rules). The assignment is made on the 'Finanzbuchhaltung'
# (financial accounting) tab (category: Umsatzsteuer).
# The input taxes (Vorsteuer: full rate, reduced rate and tax-exempt)
# should likewise be stored on the product master data (depending on the
# applicable tax rules). The assignment is made on the 'Finanzbuchhaltung'
# tab (category: Vorsteuer).
# The taxes for imports from and exports to EU countries, as well as for
# purchases from and sales to third countries, should be stored on the
# partner (supplier/customer), depending on the supplier's/customer's
# country of origin. The assignment on the customer takes precedence over
# the assignment on products and overrides it in individual cases.
#
# To simplify tax reporting and posting for foreign transactions, OpenERP
# allows a general mapping of tax codes and tax accounts (e.g. mapping
# 'Umsatzsteuer 19%' to 'tax-exempt imports from the EU'), so that this
# mapping can be assigned to the foreign partner (customer/supplier).
# Posting a purchase invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the input-tax base amount (e.g. Vorsteuer Steuermessbetrag, full
# rate 19%).
# The tax amount appears under the category 'Vorsteuern' (e.g. Vorsteuer
# 19%). Multidimensional hierarchies allow different positions to be
# aggregated and then output in the form of a report.
#
# Posting a sales invoice has the following effect:
# The tax base (excluding tax) is reported under the respective categories
# for the output-tax base amount (e.g. Umsatzsteuer Steuermessbetrag, full
# rate 19%).
# The tax amount appears under the category 'Umsatzsteuer' (e.g.
# Umsatzsteuer 19%). Multidimensional hierarchies allow different
# positions to be aggregated.
# The assigned tax codes can be reviewed on the level of each individual
# invoice (incoming and outgoing) and adjusted there if necessary.
# Credit notes lead to a correction (offsetting entry) of the tax posting,
# in the form of a mirrored posting.
|
"""This module tests SyntaxErrors.
Here's an example of the sort of thing that is tested.
>>> def f(x):
... global x
Traceback (most recent call last):
SyntaxError: name 'x' is local and global (<doctest test.test_syntax[0]>, line 1)
The tests all raise SyntaxErrors. They were created by checking
each C call that raises SyntaxError. There are several modules that
raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and
symtable.c.
The parser itself outlaws a lot of invalid syntax. None of these
errors are tested here at the moment. We should add some tests; since
there are infinitely many programs with invalid syntax, we would need
to be judicious in selecting some.
The compiler generates a synthetic module name for code executed by
doctest. Since all the code comes from the same module, a suffix like
[1] is appended to the module name. As a consequence, changing the
order of tests in this module means renumbering all the errors after
it. (Maybe we should enable the ellipsis option for these tests.)
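For reference, a sketch of how the ellipsis option could be enabled when
running these doctests (the runner used by the actual test suite may differ): ::
    import doctest, test.test_syntax
    doctest.testmod(test.test_syntax,
                    optionflags=doctest.ELLIPSIS | doctest.IGNORE_EXCEPTION_DETAIL)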
In ast.c, syntax errors are raised by calling ast_error().
Errors from set_context():
>>> obj.None = 1
Traceback (most recent call last):
File "<doctest test.test_syntax[1]>", line 1
SyntaxError: cannot assign to None
>>> None = 1
Traceback (most recent call last):
File "<doctest test.test_syntax[2]>", line 1
SyntaxError: cannot assign to None
It's a syntax error to assign to the empty tuple. Why isn't it an
error to assign to the empty list? It will always raise some error at
runtime.
>>> () = 1
Traceback (most recent call last):
File "<doctest test.test_syntax[3]>", line 1
SyntaxError: can't assign to ()
>>> f() = 1
Traceback (most recent call last):
File "<doctest test.test_syntax[4]>", line 1
SyntaxError: can't assign to function call
>>> del f()
Traceback (most recent call last):
File "<doctest test.test_syntax[5]>", line 1
SyntaxError: can't delete function call
>>> a + 1 = 2
Traceback (most recent call last):
File "<doctest test.test_syntax[6]>", line 1
SyntaxError: can't assign to operator
>>> (x for x in x) = 1
Traceback (most recent call last):
File "<doctest test.test_syntax[7]>", line 1
SyntaxError: can't assign to generator expression
>>> 1 = 1
Traceback (most recent call last):
File "<doctest test.test_syntax[8]>", line 1
SyntaxError: can't assign to literal
>>> "abc" = 1
Traceback (most recent call last):
File "<doctest test.test_syntax[8]>", line 1
SyntaxError: can't assign to literal
>>> `1` = 1
Traceback (most recent call last):
File "<doctest test.test_syntax[10]>", line 1
SyntaxError: can't assign to repr
If the left-hand side of an assignment is a list or tuple, an illegal
expression inside that container should still cause a syntax error.
This test just checks a couple of cases rather than enumerating all of
them.
>>> (a, "b", c) = (1, 2, 3)
Traceback (most recent call last):
File "<doctest test.test_syntax[11]>", line 1
SyntaxError: can't assign to literal
>>> [a, b, c + 1] = [1, 2, 3]
Traceback (most recent call last):
File "<doctest test.test_syntax[12]>", line 1
SyntaxError: can't assign to operator
>>> a if 1 else b = 1
Traceback (most recent call last):
File "<doctest test.test_syntax[13]>", line 1
SyntaxError: can't assign to conditional expression
From compiler_complex_args():
>>> def f(None=1):
... pass
Traceback (most recent call last):
File "<doctest test.test_syntax[14]>", line 1
SyntaxError: cannot assign to None
From ast_for_arguments():
>>> def f(x, y=1, z):
... pass
Traceback (most recent call last):
File "<doctest test.test_syntax[15]>", line 1
SyntaxError: non-default argument follows default argument
>>> def f(x, None):
... pass
Traceback (most recent call last):
File "<doctest test.test_syntax[16]>", line 1
SyntaxError: cannot assign to None
>>> def f(*None):
... pass
Traceback (most recent call last):
File "<doctest test.test_syntax[17]>", line 1
SyntaxError: cannot assign to None
>>> def f(**None):
... pass
Traceback (most recent call last):
File "<doctest test.test_syntax[18]>", line 1
SyntaxError: cannot assign to None
From ast_for_funcdef():
>>> def None(x):
... pass
Traceback (most recent call last):
File "<doctest test.test_syntax[19]>", line 1
SyntaxError: cannot assign to None
From ast_for_call():
>>> def f(it, *varargs):
... return list(it)
>>> L = range(10)
>>> f(x for x in L)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(x for x in L, 1)
Traceback (most recent call last):
File "<doctest test.test_syntax[23]>", line 1
SyntaxError: Generator expression must be parenthesized if not sole argument
>>> f((x for x in L), 1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... i244, i245, i246, i247, i248, i249, i250, i251, i252,
... i253, i254, i255)
Traceback (most recent call last):
File "<doctest test.test_syntax[25]>", line 1
SyntaxError: more than 255 arguments
The actual error check counts positional arguments, keyword arguments,
and generator expression arguments separately. This test combines the
three.
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... (x for x in i244), i245, i246, i247, i248, i249, i250, i251,
... i252=1, i253=1, i254=1, i255=1)
Traceback (most recent call last):
File "<doctest test.test_syntax[26]>", line 1
SyntaxError: more than 255 arguments
>>> f(lambda x: x[0] = 3)
Traceback (most recent call last):
File "<doctest test.test_syntax[27]>", line 1
SyntaxError: lambda cannot contain assignment
The grammar accepts any test (basically, any expression) in the
keyword slot of a call site. Test a few different options.
>>> f(x()=2)
Traceback (most recent call last):
File "<doctest test.test_syntax[28]>", line 1
SyntaxError: keyword can't be an expression
>>> f(a or b=1)
Traceback (most recent call last):
File "<doctest test.test_syntax[29]>", line 1
SyntaxError: keyword can't be an expression
>>> f(x.y=1)
Traceback (most recent call last):
File "<doctest test.test_syntax[30]>", line 1
SyntaxError: keyword can't be an expression
More set_context():
>>> (x for x in x) += 1
Traceback (most recent call last):
File "<doctest test.test_syntax[31]>", line 1
SyntaxError: can't assign to generator expression
>>> None += 1
Traceback (most recent call last):
File "<doctest test.test_syntax[32]>", line 1
SyntaxError: cannot assign to None
>>> f() += 1
Traceback (most recent call last):
File "<doctest test.test_syntax[33]>", line 1
SyntaxError: can't assign to function call
Test continue in finally in weird combinations.
continue in for loop under finally should be ok.
>>> def test():
... try:
... pass
... finally:
... for abc in range(10):
... continue
... print abc
>>> test()
9
Start simple, a continue in a finally should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
File "<doctest test.test_syntax[36]>", line 6
SyntaxError: 'continue' not supported inside 'finally' clause
This is essentially a continue in a finally which should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... try:
... continue
... except:
... pass
Traceback (most recent call last):
...
File "<doctest test.test_syntax[37]>", line 6
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
File "<doctest test.test_syntax[38]>", line 5
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
File "<doctest test.test_syntax[39]>", line 6
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... try:
... continue
... finally:
... pass
Traceback (most recent call last):
...
File "<doctest test.test_syntax[40]>", line 7
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try: pass
... finally:
... try:
... pass
... except:
... continue
Traceback (most recent call last):
...
File "<doctest test.test_syntax[41]>", line 8
SyntaxError: 'continue' not supported inside 'finally' clause
There is one test for a break that is not in a loop. The compiler
uses a single data structure to keep track of try-finally and loops,
so we need to be sure that a break is actually inside a loop. If it
isn't, there should be a syntax error.
>>> try:
... print 1
... break
... print 2
... finally:
... print 3
Traceback (most recent call last):
...
File "<doctest test.test_syntax[42]>", line 3
SyntaxError: 'break' outside loop
This should probably raise a better error than a SystemError (or none at all).
In 2.5 there was a missing exception and an assert was triggered in a debug
build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514
>>> while 1:
... while 2:
... while 3:
... while 4:
... while 5:
... while 6:
... while 8:
... while 9:
... while 10:
... while 11:
... while 12:
... while 13:
... while 14:
... while 15:
... while 16:
... while 17:
... while 18:
... while 19:
... while 20:
... while 21:
... while 22:
... break
Traceback (most recent call last):
...
SystemError: too many statically nested blocks
This tests assignment-context; there was a bug in Python 2.5 where compiling
a complex 'if' (one with 'elif') would fail to notice an invalid suite,
leading to spurious errors.
>>> if 1:
... x() = 1
... elif 1:
... pass
Traceback (most recent call last):
...
File "<doctest test.test_syntax[44]>", line 2
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... x() = 1
Traceback (most recent call last):
...
File "<doctest test.test_syntax[45]>", line 4
SyntaxError: can't assign to function call
>>> if 1:
... x() = 1
... elif 1:
... pass
... else:
... pass
Traceback (most recent call last):
...
File "<doctest test.test_syntax[46]>", line 2
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... x() = 1
... else:
... pass
Traceback (most recent call last):
...
File "<doctest test.test_syntax[47]>", line 4
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... pass
... else:
... x() = 1
Traceback (most recent call last):
...
File "<doctest test.test_syntax[48]>", line 6
SyntaxError: can't assign to function call
>>> f(a=23, a=234)
Traceback (most recent call last):
...
File "<doctest test.test_syntax[49]>", line 1
SyntaxError: keyword argument repeated
>>> del ()
Traceback (most recent call last):
...
File "<doctest test.test_syntax[50]>", line 1
SyntaxError: can't delete ()
>>> {1, 2, 3} = 42
Traceback (most recent call last):
...
File "<doctest test.test_syntax[50]>", line 1
SyntaxError: can't assign to literal
""" |
#
# We should use "initialize_scalar()" for all scalar assignments.
# See the Notes for that method. Search for "np.float64(0".
#
# Copyright (c) 2009-2014, NAME
#
# Sep 2014. New initialize_basin_vars(), using outlets.py.
# Removed obsolete functions.
#
# Jan 2013. Added "initialize_scalar()" method.
#
# Feb 2012. Complete BMI, starting from CSDMS_base.py
# This now takes the place of CSDMS_base.py.
#
# Nov 2011. Cleanup and conversion to BMI function names.
#
# May 2010. initialize_config_vars(), cleanup, etc.
#
# Aug 2009. Created, as CSDMS_base.py.
#
#-----------------------------------------------------------------------
#
# Notes: This file defines a "base class" with a BMI (Basic Model
# Interface) for CSDMS "process" components. These methods
# allow a component to "fit into" a CMI harness (IMPL file)
# that allows it to be used in the CSDMS/CCA framework.
#
# The BMI interface is designed to be completely framework
# independent, so this class contains no methods related to
# CCA framework concepts such as ports.
#
# Some "private" utility methods are defined at the end.
#
# Need to figure out UDUNITS and use in get_var_units().
#
#-----------------------------------------------------------------------
#
# unit_test()
#
# class BMI_component
#
# __init__()
#
# -------------------------------
# BMI methods to get model info
# -------------------------------
# get_status()
# get_attribute()
# set_attribute() (Experimental: not yet BMI)
# ------------------------------
# BMI methods to get grid info
# ------------------------------
# get_grid_shape()
# get_grid_spacing()
# get_grid_lower_left_corner()
# get_grid_attribute() ## NEW ADDITION TO BMI ??
# read_grid_info() ## (Not part of BMI)
#
# ----------------------------------
# BMI methods to get variable info
# ----------------------------------
# get_input_var_names()
# get_output_var_names()
# -------------------------
# get_var_name() # (override)
# get_var_units() # (override)
# get_var_rank()
# get_var_type()
# get_var_state() or "mode()" ############### "static" or "dynamic" ###########
# -------------------------
# get_values() # (9/22/14)
# set_values() # (9/22/14)
# get_values_at_indices()
# set_values_at_indices()
#
# ------------------------------
# BMI methods to get time info
# ------------------------------
# get_time_step()
# get_time_units()
# get_time() ## NEW ADDITION TO BMI ??
# get_start_time()
# get_current_time()
# get_end_time()
#
# --------------------------------------
# BMI methods for fine-grained control
# --------------------------------------
# initialize # (template)
# update() # (template)
# finalize()
# ------------------
# run_model() # (not required for BMI)
# check_finished() # (not part of BMI)
#
# -----------------------------
# More Time-related (not BMI)
# -----------------------------
# initialize_time_vars()
# update_time()
# print_time_and_value()
# get_run_time_string()
# print_run_time()
#
# -------------------------------
# Convenience methods (not BMI)
# -------------------------------
# print_final_report() # (6/30/10)
# print_traceback() # (10/10/10)
# -------------------------
# read_config_file() # (5/17/10, 5/9/11)
# initialize_config_vars() # (5/6/10)
# set_computed_input_vars # (5/6/10) over-ridden by each comp.
# initialize_basin_vars() # (9/19/14) New version that uses outlets.py.
# initialize_basin_vars0()
# -------------------------
# prepend_directory() # (may not work yet)
# check_directories()
# -------------------------
# initialize_scalar() # (2/5/13, for ref passing)
# is_scalar()
# is_vector()
# is_grid()
#
#-----------------------------------------------------------------------
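# For orientation, a minimal driver sketch using the fine-grained control
# methods listed above (the component object, the config file argument and
# the use of check_finished() as the loop condition are illustrative
# assumptions, not this module's exact API):

def run_component(component, cfg_file):
    # Standard BMI-style life cycle: initialize, step until done, finalize.
    component.initialize(cfg_file)
    while not component.check_finished():
        component.update()
    component.finalize()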
|
#-- GAUDI jobOptions generated on Mon Jul 27 18:36:56 2015
#-- Contains event types :
#-- 15104010 - 67 files - 1001573 events - 226.66 GBytes
#-- Extra information about the data processing phases:
#-- Processing Pass Step-124834
#-- StepId : 124834
#-- StepName : Reco14a for MC
#-- ApplicationName : Brunel
#-- ApplicationVersion : v43r2p7
#-- OptionFiles : $APPCONFIGOPTS/Brunel/DataType-2012.py;$APPCONFIGOPTS/Brunel/MC-WithTruth.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py
#-- DDDB : fromPreviousStep
#-- CONDDB : fromPreviousStep
#-- ExtraPackages : AppConfig.v3r164
#-- Visible : Y
#-- Processing Pass Step-124620
#-- StepId : 124620
#-- StepName : Digi13 with G4 dE/dx
#-- ApplicationName : NAME
#-- ApplicationVersion : v26r3
#-- OptionFiles : $APPCONFIGOPTS/NAME/Default.py;$APPCONFIGOPTS/NAME/DataType-2012.py;$APPCONFIGOPTS/NAME/NAME-SiG4EnergyDeposit.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py
#-- DDDB : fromPreviousStep
#-- CONDDB : fromPreviousStep
#-- ExtraPackages : AppConfig.v3r164
#-- Visible : Y
#-- Processing Pass Step-126085
#-- StepId : 126085
#-- StepName : Sim08e - 2012 - MU - Pythia8
#-- ApplicationName : NAME
#-- ApplicationVersion : v45r7
#-- OptionFiles : $APPCONFIGOPTS/NAME/Sim08-Beam4000GeV-mu100-2012-nu2.5.py;$DECFILESROOT/options/@{eventType}.py;$LBPYTHIA8ROOT/options/Pythia8.py;$APPCONFIGOPTS/NAME/G4PL_FTFP_BERT_EmNoCuts.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py
#-- DDDB : dddb-20130929-1
#-- CONDDB : sim-20130522-1-vc-mu100
#-- ExtraPackages : AppConfig.v3r182;DecFiles.v27r17
#-- Visible : Y
#-- Processing Pass Step-124632
#-- StepId : 124632
#-- StepName : TCK-0x409f0045 Flagged for Sim08 2012
#-- ApplicationName : Moore
#-- ApplicationVersion : v14r8p1
#-- OptionFiles : $APPCONFIGOPTS/Moore/MooreSimProductionWithL0Emulation.py;$APPCONFIGOPTS/Conditions/TCK-0x409f0045.py;$APPCONFIGOPTS/Moore/DataType-2012.py;$APPCONFIGOPTS/L0/L0TCK-0x0045.py
#-- DDDB : fromPreviousStep
#-- CONDDB : fromPreviousStep
#-- ExtraPackages : AppConfig.v3r164
#-- Visible : Y
#-- Processing Pass Step-124630
#-- StepId : 124630
#-- StepName : Stripping20-NoPrescalingFlagged for Sim08
#-- ApplicationName : USERNAME
#-- ApplicationVersion : v32r2p1
#-- OptionFiles : $APPCONFIGOPTS/USERNAME/DV-Stripping20-Stripping-MC-NoPrescaling.py;$APPCONFIGOPTS/USERNAME/DataType-2012.py;$APPCONFIGOPTS/USERNAME/InputType-DST.py;$APPCONFIGOPTS/Persistency/Compression-ZLIB-1.py
#-- DDDB : fromPreviousStep
#-- CONDDB : fromPreviousStep
#-- ExtraPackages : AppConfig.v3r164
#-- Visible : Y
|
"""The tests for the MQTT light platform.
Configuration for RGB Version with brightness:
light:
platform: mqtt
name: "Office Light RGB"
state_topic: "office/rgb1/light/status"
command_topic: "office/rgb1/light/switch"
brightness_state_topic: "office/rgb1/brightness/status"
brightness_command_topic: "office/rgb1/brightness/set"
rgb_state_topic: "office/rgb1/rgb/status"
rgb_command_topic: "office/rgb1/rgb/set"
qos: 0
payload_on: "on"
payload_off: "off"
Configuration for XY Version with brightness:
light:
platform: mqtt
name: "Office Light XY"
state_topic: "office/xy1/light/status"
command_topic: "office/xy1/light/switch"
brightness_state_topic: "office/xy1/brightness/status"
brightness_command_topic: "office/xy1/brightness/set"
xy_state_topic: "office/xy1/xy/status"
xy_command_topic: "office/xy1/xy/set"
qos: 0
payload_on: "on"
payload_off: "off"
config without RGB:
light:
platform: mqtt
name: "Office Light"
state_topic: "office/rgb1/light/status"
command_topic: "office/rgb1/light/switch"
brightness_state_topic: "office/rgb1/brightness/status"
brightness_command_topic: "office/rgb1/brightness/set"
qos: 0
payload_on: "on"
payload_off: "off"
config without RGB and brightness:
light:
platform: mqtt
name: "Office Light"
state_topic: "office/rgb1/light/status"
command_topic: "office/rgb1/light/switch"
qos: 0
payload_on: "on"
payload_off: "off"
config for RGB Version with brightness and scale:
light:
platform: mqtt
name: "Office Light RGB"
state_topic: "office/rgb1/light/status"
command_topic: "office/rgb1/light/switch"
brightness_state_topic: "office/rgb1/brightness/status"
brightness_command_topic: "office/rgb1/brightness/set"
brightness_scale: 99
rgb_state_topic: "office/rgb1/rgb/status"
rgb_command_topic: "office/rgb1/rgb/set"
rgb_scale: 99
qos: 0
payload_on: "on"
payload_off: "off"
config with brightness and color temp:
light:
platform: mqtt
name: "Office Light Color Temp"
state_topic: "office/rgb1/light/status"
command_topic: "office/rgb1/light/switch"
brightness_state_topic: "office/rgb1/brightness/status"
brightness_command_topic: "office/rgb1/brightness/set"
brightness_scale: 99
color_temp_state_topic: "office/rgb1/color_temp/status"
color_temp_command_topic: "office/rgb1/color_temp/set"
qos: 0
payload_on: "on"
payload_off: "off"
config with brightness and effect:
light:
platform: mqtt
name: "Office Light Color Temp"
state_topic: "office/rgb1/light/status"
command_topic: "office/rgb1/light/switch"
brightness_state_topic: "office/rgb1/brightness/status"
brightness_command_topic: "office/rgb1/brightness/set"
brightness_scale: 99
effect_state_topic: "office/rgb1/effect/status"
effect_command_topic: "office/rgb1/effect/set"
effect_list:
- rainbow
- colorloop
qos: 0
payload_on: "on"
payload_off: "off"
config for RGB Version with white value and scale:
light:
platform: mqtt
name: "Office Light RGB"
state_topic: "office/rgb1/light/status"
command_topic: "office/rgb1/light/switch"
white_value_state_topic: "office/rgb1/white_value/status"
white_value_command_topic: "office/rgb1/white_value/set"
white_value_scale: 99
rgb_state_topic: "office/rgb1/rgb/status"
rgb_command_topic: "office/rgb1/rgb/set"
rgb_scale: 99
qos: 0
payload_on: "on"
payload_off: "off"
config for RGB Version with RGB command template:
light:
platform: mqtt
name: "Office Light RGB"
state_topic: "office/rgb1/light/status"
command_topic: "office/rgb1/light/switch"
rgb_state_topic: "office/rgb1/rgb/status"
rgb_command_topic: "office/rgb1/rgb/set"
rgb_command_template: "{{ '#%02x%02x%02x' | format(red, green, blue)}}"
qos: 0
payload_on: "on"
payload_off: "off"
Configuration for HS Version with brightness:
light:
platform: mqtt
name: "Office Light HS"
state_topic: "office/hs1/light/status"
command_topic: "office/hs1/light/switch"
brightness_state_topic: "office/hs1/brightness/status"
brightness_command_topic: "office/hs1/brightness/set"
hs_state_topic: "office/hs1/hs/status"
hs_command_topic: "office/hs1/hs/set"
qos: 0
payload_on: "on"
payload_off: "off"
""" |
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), NAME <EMAIL>, 2012-2013
# Copyright (c), NAME <EMAIL>, 2015
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# The match_hostname function and supporting code is under the terms and
# conditions of the Python Software Foundation License. They were taken from
# the Python3 standard library and adapted for use in Python2. See comments in the
# source for which code precisely is under this License. PSF License text
# follows:
#
# PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
# --------------------------------------------
#
# 1. This LICENSE AGREEMENT is between the Python Software Foundation
# ("PSF"), and the Individual or Organization ("Licensee") accessing and
# otherwise using this software ("Python") in source or binary form and
# its associated documentation.
#
# 2. Subject to the terms and conditions of this License Agreement, PSF hereby
# grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
# analyze, test, perform and/or display publicly, prepare derivative works,
# distribute, and otherwise use Python alone or in any derivative version,
# provided, however, that PSF's License Agreement and PSF's notice of copyright,
# i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
# 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are
# retained in Python alone or in any derivative version prepared by Licensee.
#
# 3. In the event Licensee prepares a derivative work that is based on
# or incorporates Python or any part thereof, and wants to make
# the derivative work available to others as provided herein, then
# Licensee hereby agrees to include in any such work a brief summary of
# the changes made to Python.
#
# 4. PSF is making Python available to Licensee on an "AS IS"
# basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
# IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
# DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
# FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
# INFRINGE ANY THIRD PARTY RIGHTS.
#
# 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
# FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
# A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
# OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
#
# 6. This License Agreement will automatically terminate upon a material
# breach of its terms and conditions.
#
# 7. Nothing in this License Agreement shall be deemed to create any
# relationship of agency, partnership, or joint venture between PSF and
# Licensee. This License Agreement does not grant permission to use PSF
# trademarks or trade name in a trademark sense to endorse or promote
# products or services of Licensee, or any third party.
#
# 8. By copying, installing or otherwise using Python, Licensee
# agrees to be bound by the terms and conditions of this License
# Agreement.
|
"""This module tests SyntaxErrors.
Here's an example of the sort of thing that is tested.
>>> def f(x):
... global x
Traceback (most recent call last):
SyntaxError: name 'x' is parameter and global
The tests all raise SyntaxErrors. They were created by checking
each C call that raises SyntaxError. There are several modules that
raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and
symtable.c.
The parser itself outlaws a lot of invalid syntax. None of these
errors are tested here at the moment. We should add some tests; since
there are infinitely many programs with invalid syntax, we would need
to be judicious in selecting some.
The compiler generates a synthetic module name for code executed by
doctest. Since all the code comes from the same module, a suffix like
[1] is appended to the module name. As a consequence, changing the
order of tests in this module means renumbering all the errors after
it. (Maybe we should enable the ellipsis option for these tests.)
In ast.c, syntax errors are raised by calling ast_error().
Errors from set_context():
>>> obj.None = 1
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> None = 1
Traceback (most recent call last):
SyntaxError: assignment to keyword
It's a syntax error to assign to the empty tuple. Why isn't it an
error to assign to the empty list? It will always raise some error at
runtime.
>>> () = 1
Traceback (most recent call last):
SyntaxError: can't assign to ()
>>> f() = 1
Traceback (most recent call last):
SyntaxError: can't assign to function call
>>> del f()
Traceback (most recent call last):
SyntaxError: can't delete function call
>>> a + 1 = 2
Traceback (most recent call last):
SyntaxError: can't assign to operator
>>> (x for x in x) = 1
Traceback (most recent call last):
SyntaxError: can't assign to generator expression
>>> 1 = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> "abc" = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> b"" = 1
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> `1` = 1
Traceback (most recent call last):
SyntaxError: invalid syntax
If the left-hand side of an assignment is a list or tuple, an illegal
expression inside that container should still cause a syntax error.
This test just checks a couple of cases rather than enumerating all of
them.
>>> (a, "b", c) = (1, 2, 3)
Traceback (most recent call last):
SyntaxError: can't assign to literal
>>> [a, b, c + 1] = [1, 2, 3]
Traceback (most recent call last):
SyntaxError: can't assign to operator
>>> a if 1 else b = 1
Traceback (most recent call last):
SyntaxError: can't assign to conditional expression
From compiler_complex_args():
>>> def f(None=1):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_arguments():
>>> def f(x, y=1, z):
... pass
Traceback (most recent call last):
SyntaxError: non-default argument follows default argument
>>> def f(x, None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> def f(*None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
>>> def f(**None):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_funcdef():
>>> def None(x):
... pass
Traceback (most recent call last):
SyntaxError: invalid syntax
From ast_for_call():
>>> def f(it, *varargs):
... return list(it)
>>> L = range(10)
>>> f(x for x in L)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(x for x in L, 1)
Traceback (most recent call last):
SyntaxError: Generator expression must be parenthesized if not sole argument
>>> f((x for x in L), 1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... i244, i245, i246, i247, i248, i249, i250, i251, i252,
... i253, i254, i255)
Traceback (most recent call last):
SyntaxError: more than 255 arguments
The actual error check counts positional arguments, keyword arguments,
and generator expression arguments separately. This test combines the
three.
>>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11,
... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22,
... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33,
... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44,
... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55,
... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66,
... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77,
... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88,
... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99,
... i100, i101, i102, i103, i104, i105, i106, i107, i108,
... i109, i110, i111, i112, i113, i114, i115, i116, i117,
... i118, i119, i120, i121, i122, i123, i124, i125, i126,
... i127, i128, i129, i130, i131, i132, i133, i134, i135,
... i136, i137, i138, i139, i140, i141, i142, i143, i144,
... i145, i146, i147, i148, i149, i150, i151, i152, i153,
... i154, i155, i156, i157, i158, i159, i160, i161, i162,
... i163, i164, i165, i166, i167, i168, i169, i170, i171,
... i172, i173, i174, i175, i176, i177, i178, i179, i180,
... i181, i182, i183, i184, i185, i186, i187, i188, i189,
... i190, i191, i192, i193, i194, i195, i196, i197, i198,
... i199, i200, i201, i202, i203, i204, i205, i206, i207,
... i208, i209, i210, i211, i212, i213, i214, i215, i216,
... i217, i218, i219, i220, i221, i222, i223, i224, i225,
... i226, i227, i228, i229, i230, i231, i232, i233, i234,
... i235, i236, i237, i238, i239, i240, i241, i242, i243,
... (x for x in i244), i245, i246, i247, i248, i249, i250, i251,
... i252=1, i253=1, i254=1, i255=1)
Traceback (most recent call last):
SyntaxError: more than 255 arguments
>>> f(lambda x: x[0] = 3)
Traceback (most recent call last):
SyntaxError: lambda cannot contain assignment
The grammar accepts any test (basically, any expression) in the
keyword slot of a call site. Test a few different options.
>>> f(x()=2)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
>>> f(a or b=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
>>> f(x.y=1)
Traceback (most recent call last):
SyntaxError: keyword can't be an expression
More set_context():
>>> (x for x in x) += 1
Traceback (most recent call last):
SyntaxError: can't assign to generator expression
>>> None += 1
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> f() += 1
Traceback (most recent call last):
SyntaxError: can't assign to function call
Test continue in finally in weird combinations.
continue in for loop under finally should be ok.
>>> def test():
... try:
... pass
... finally:
... for abc in range(10):
... continue
... print(abc)
>>> test()
9
Start simple, a continue in a finally should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
This is essentially a continue in a finally which should not be allowed.
>>> def test():
... for abc in range(10):
... try:
... pass
... finally:
... try:
... continue
... except:
... pass
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try:
... pass
... finally:
... try:
... continue
... finally:
... pass
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
>>> def foo():
... for a in ():
... try: pass
... finally:
... try:
... pass
... except:
... continue
Traceback (most recent call last):
...
SyntaxError: 'continue' not supported inside 'finally' clause
There is one test for a break that is not in a loop. The compiler
uses a single data structure to keep track of try-finally and loops,
so we need to be sure that a break is actually inside a loop. If it
isn't, there should be a syntax error.
>>> try:
... print(1)
... break
... print(2)
... finally:
... print(3)
Traceback (most recent call last):
...
SyntaxError: 'break' outside loop
This should probably raise a better error than a SystemError (or none at all).
In 2.5 there was a missing exception and an assert was triggered in a debug
build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514
>>> while 1:
... while 2:
... while 3:
... while 4:
... while 5:
... while 6:
... while 8:
... while 9:
... while 10:
... while 11:
... while 12:
... while 13:
... while 14:
... while 15:
... while 16:
... while 17:
... while 18:
... while 19:
... while 20:
... while 21:
... while 22:
... break
Traceback (most recent call last):
...
SystemError: too many statically nested blocks
Misuse of the nonlocal statement can lead to a few unique syntax errors.
>>> def f(x):
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: name 'x' is parameter and nonlocal
>>> def f():
... global x
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: name 'x' is nonlocal and global
>>> def f():
... nonlocal x
Traceback (most recent call last):
...
SyntaxError: no binding for nonlocal 'x' found
From SF bug #1705365
>>> nonlocal x
Traceback (most recent call last):
...
SyntaxError: nonlocal declaration not allowed at module level
TODO(jhylton): Figure out how to test SyntaxWarning with doctest.
## >>> def f(x):
## ... def f():
## ... print(x)
## ... nonlocal x
## Traceback (most recent call last):
## ...
## SyntaxWarning: name 'x' is assigned to before nonlocal declaration
## >>> def f():
## ... x = 1
## ... nonlocal x
## Traceback (most recent call last):
## ...
## SyntaxWarning: name 'x' is assigned to before nonlocal declaration
This tests assignment-context; there was a bug in Python 2.5 where compiling
a complex 'if' (one with 'elif') would fail to notice an invalid suite,
leading to spurious errors.
>>> if 1:
... x() = 1
... elif 1:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... x() = 1
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... x() = 1
... elif 1:
... pass
... else:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... x() = 1
... else:
... pass
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
>>> if 1:
... pass
... elif 1:
... pass
... else:
... x() = 1
Traceback (most recent call last):
...
SyntaxError: can't assign to function call
Make sure that the old "raise X, Y[, Z]" form is gone:
>>> raise X, Y
Traceback (most recent call last):
...
SyntaxError: invalid syntax
>>> raise X, Y, Z
Traceback (most recent call last):
...
SyntaxError: invalid syntax
>>> f(a=23, a=234)
Traceback (most recent call last):
...
SyntaxError: keyword argument repeated
>>> del ()
Traceback (most recent call last):
SyntaxError: can't delete ()
>>> {1, 2, 3} = 42
Traceback (most recent call last):
SyntaxError: can't assign to literal
Corner-cases that used to fail to raise the correct error:
>>> def f(*, x=lambda __debug__:0): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(*args:(lambda __debug__:0)): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(**kwargs:(lambda __debug__:0)): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> with (lambda *:0): pass
Traceback (most recent call last):
SyntaxError: named arguments must follow bare *
Corner-cases that used to crash:
>>> def f(**__debug__): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
>>> def f(*xx, __debug__): pass
Traceback (most recent call last):
SyntaxError: assignment to keyword
""" |
"""
=============================
Input/Output (:mod:`sisl.io`)
=============================
.. module:: sisl.io
:noindex:
Available files for reading/writing
sisl handles a large variety of input/output files from a large selection
of DFT software and other post-processing tools.
Since sisl may be used with many other packages, all files are named *siles*
to distinguish them from the files of other packages.
Basic IO methods/classes
========================
.. autosummary::
:toctree:
add_sile - add a file to the list of files that sisl can interact with
get_sile - retrieve a file object via a file name by comparing the extension
SileError - sisl specific error
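In practice a file is opened through `get_sile`, which picks the sile class
from the file extension. A minimal sketch (the file name and the
`read_geometry` call are illustrative; the methods available depend on the
actual sile)::

    import sisl

    # the extension selects the sile class, here a Siesta fdf input
    fdf = sisl.get_sile('RUN.fdf')
    geometry = fdf.read_geometry()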
.. _toc-io-supported:
External code input/output supported
------------------------------------
List the relevant codes that `sisl` can interact with. If there are files you think
are missing, please create an issue `here <issue>`_.
- :ref:`toc-io-generic`
- :ref:`toc-io-bigdft`
- :ref:`toc-io-gulp`
- :ref:`toc-io-openmx`
- :ref:`toc-io-scaleup`
- :ref:`toc-io-siesta`
- :ref:`toc-io-transiesta`
- :ref:`toc-io-tbtrans`
- :ref:`toc-io-vasp`
- :ref:`toc-io-wannier90`
.. _toc-io-generic:
Generic files
=============
Files not specifically related to any code.
.. autosummary::
:toctree:
~table.tableSile - data file in tabular form
~xyz.xyzSile - atomic coordinate file
~pdb.pdbSile - atomic coordinates and MD content
~cube.cubeSile - atomic coordinates *and* 3D grid values
~molden.moldenSile - atomic coordinate file specific for Molden
~xsf.xsfSile - atomic coordinate file specific for XCrySDen
.. _toc-io-bigdft:
BigDFT (:mod:`~sisl.io.bigdft`)
===============================
.. currentmodule:: sisl.io.bigdft
.. autosummary::
:toctree:
asciiSileBigDFT - the input for BigDFT
.. _toc-io-gulp:
GULP (:mod:`~sisl.io.gulp`)
===========================
.. currentmodule:: sisl.io.gulp
.. autosummary::
:toctree:
gotSileGULP - the output from GULP
fcSileGULP - force constant output from GULP
.. _toc-io-openmx:
OpenMX (:mod:`~sisl.io.openmx`)
===============================
.. currentmodule:: sisl.io.openmx
.. autosummary::
:toctree:
omxSileOpenMX - input file
.. _toc-io-scaleup:
ScaleUp (:mod:`~sisl.io.scaleup`)
=================================
.. currentmodule:: sisl.io.scaleup
.. autosummary::
:toctree:
orboccSileScaleUp - orbital information
refSileScaleUp - reference coordinates
rhamSileScaleUp - Hamiltonian file
.. _toc-io-siesta:
Siesta (:mod:`~sisl.io.siesta`)
===============================
.. currentmodule:: sisl.io.siesta
.. autosummary::
:toctree:
fdfSileSiesta - input file
outSileSiesta - output file
xvSileSiesta - xyz and vxyz file
bandsSileSiesta - band structure information
eigSileSiesta - EIG file
pdosSileSiesta - PDOS file
gridSileSiesta - Grid charge information (binary)
gridncSileSiesta - NetCDF grid output files (netcdf)
onlysSileSiesta - Overlap matrix information
dmSileSiesta - density matrix information
hsxSileSiesta - Hamiltonian and overlap matrix information
wfsxSileSiesta - wavefunctions
ncSileSiesta - NetCDF output file
ionxmlSileSiesta - Basis-information from the ion.xml files
ionncSileSiesta - Basis-information from the ion.nc files
orbindxSileSiesta - Basis set information (no geometry information)
faSileSiesta - Forces on atoms
fcSileSiesta - Force constant matrix
kpSileSiesta - k-points from simulation
rkpSileSiesta - k-points to simulation
.. _toc-io-transiesta:
TranSiesta (:mod:`~sisl.io.siesta`)
===================================
.. autosummary::
:toctree:
tshsSileSiesta - TranSiesta Hamiltonian
tsdeSileSiesta - TranSiesta (energy) density matrix
tsgfSileSiesta - TranSiesta surface Green function files
tsvncSileSiesta - TranSiesta specific Hartree potential file
.. _toc-io-tbtrans:
TBtrans (:mod:`~sisl.io.tbtrans`)
=================================
.. currentmodule:: sisl.io.tbtrans
.. autosummary::
:toctree:
tbtncSileTBtrans
deltancSileTBtrans
tbtgfSileTBtrans - TBtrans surface Green function files
tbtsencSileTBtrans
tbtavncSileTBtrans
tbtprojncSileTBtrans
Additionally the PHtrans code also has these files
.. autosummary::
:toctree:
phtncSilePHtrans
phtsencSilePHtrans
phtavncSilePHtrans
phtprojncSilePHtrans
.. _toc-io-vasp:
VASP (:mod:`~sisl.io.vasp`)
===========================
.. currentmodule:: sisl.io.vasp
.. autosummary::
:toctree:
carSileVASP
doscarSileVASP
eigenvalSileVASP
chgSileVASP
locpotSileVASP
.. _toc-io-wannier90:
Wannier90 (:mod:`~sisl.io.wannier90`)
=====================================
.. currentmodule:: sisl.io.wannier90
.. autosummary::
:toctree:
winSileWannier90 - input file
.. #################################
.. Switch back to the sisl.io module
.. #################################
.. currentmodule:: sisl.io
Low level methods/classes
=========================
Classes and methods generally only used internally. If you wish to create
your own `Sile` you should inherit from either `Sile` (ASCII), `SileCDF` (NetCDF)
or `SileBin` (binary), then add it using `add_sile`, which enables
its generic use in all routines.
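A rough sketch of that workflow (the class name and the ``.my`` extension are
made up for illustration; `add_sile` is assumed to take the file ending
followed by the class, as summarized below)::

    from sisl.io import Sile, add_sile

    class mySile(Sile):
        # hypothetical ASCII sile for files ending in .my
        def read_geometry(self):
            # parse the file and return a Geometry (omitted here)
            raise NotImplementedError

    # register the class so get_sile('something.my') can locate it
    add_sile('my', mySile)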
.. autosummary::
:toctree:
get_siles - retrieve all files with specific attributes or methods
get_sile_class - retrieve class via a file name by comparing the extension
BaseSile - the base class for all sisl files
Sile - a base class for ASCII files
SileCDF - a base class for NetCDF files
SileBin - a base class for binary files
.. ###############################################
.. Add all io modules to the toc (to be reachable)
.. ###############################################
.. autosummary::
:toctree:
:hidden:
bigdft
gulp
openmx
scaleup
siesta
tbtrans
vasp
USERNAME |
# rgToolFactoryMultIn.py
# see https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home
#
# copyright NAME (ross stop lazarus at gmail stop com) May 2012
#
# all rights reserved
# Licensed under the LGPL
# suggestions for improvement and bug fixes welcome at https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home
#
# January 2015
# unified all setups by passing the script on the cl rather than via a PIPE - no need for treat_bash_special so removed
#
# in the process of building a complex tool
# added ability to choose one of the current toolshed package_r or package_perl or package_python dependencies and source that package
# add that package to tool_dependencies
# Note that once the generated tool is loaded, it will have that package's env.sh loaded automagically so there is no
# --envshpath in the parameters for the generated tool and it uses the system one which will be first on the adjusted path.
#
# sept 2014 added additional params from
# https://bitbucket.org/mvdbeek/dockertoolfactory/src/d4863bcf7b521532c7e8c61b6333840ba5393f73/DockerToolFactory.py?at=default
# passing them is complex
# and they are restricted to NOT contain commas or double quotes to ensure that they can be safely passed together on
# the toolfactory command line as a comma delimited double quoted string for parsing and passing to the script
# see examples on this tool form
# august 2014
# Allows arbitrary number of input files
# NOTE positional parameters are now passed to script
# and output (may be "None") is *before* arbitrary number of inputs
#
# march 2014
# had to remove dependencies because cross toolshed dependencies are not possible - can't pre-specify a toolshed url for graphicsmagick and ghostscript
# grrrrr - night before a demo
# added dependencies to a tool_dependencies.xml if html page generated so generated tool is properly portable
#
# added ghostscript and graphicsmagick as dependencies
# fixed a weird problem where gs was trying to use the new_files_path from universe (database/tmp) as ./database/tmp
# errors ensued
#
# august 2013
# found a problem with GS if $TMP or $TEMP missing - now inject /tmp and warn
#
# july 2013
# added ability to combine images and individual log files into html output
# just make sure there's a log file foo.log and it will be output
# together with all images named like "foo_*.pdf"
# otherwise old format for html
#
# January 2013
# problem pointed out by NAME added escaping for <>$ - thought I did that ages ago...
#
# August 11 2012
# changed to use shell=False and cl as a sequence
# This is a Galaxy tool factory for simple scripts in python, R or whatever ails ye.
# It also serves as the wrapper for the new tool.
#
# you paste and run your script
# Only works for simple scripts that read one input from the history.
# Optionally can write one new history dataset,
# and optionally collect any number of outputs into links on an autogenerated HTML page.
# DO NOT install on a public or important site - please.
# installed generated tools are fine if the script is safe.
# They just run normally and their user cannot do anything unusually insecure
# but please, practice safe toolshed.
# Read the fucking code before you install any tool
# especially this one
# After you get the script working on some test data, you can
# optionally generate a toolshed compatible gzip file
# containing your script safely wrapped as an ordinary Galaxy script in your local toolshed for
# safe and largely automated installation in a production Galaxy.
# If you opt for an HTML output, you get all the script outputs arranged
# as a single Html history item - all output files are linked, thumbnails for all the pdfs.
# Ugly but really inexpensive.
#
# Patches appreciated please.
#
#
# long route to June 2012 product
# Behold the awesome power of Galaxy and the toolshed with the tool factory to bind them
# derived from an integrated script model
# called rgBaseScriptWrapper.py
# Note to the unwary:
# This tool allows arbitrary scripting on your Galaxy as the Galaxy user
# There is nothing stopping a malicious user doing whatever they choose
# Extremely dangerous!!
# Totally insecure. So, trusted users only
#
# preferred model is a developer using their throw away workstation instance - ie a private site.
# no real risk. The universe_wsgi.ini admin_users string is checked - only admin users are permitted to run this tool.
#
|
"""
:mod:`disco.settings` -- Disco Settings
=======================================
Settings can be specified in a Python file and/or using environment variables.
Settings specified in environment variables override those stored in a file.
The default settings are intended to make it easy to get Disco running on a single node.
:command:`make install` will create a more reasonable settings file for a cluster environment,
and put it in ``/etc/disco/settings.py``
Disco looks in the following places for a settings file:
- The settings file specified using the command line utility
``--settings`` option.
- ``~/.disco``
- ``/etc/disco/settings.py``
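A settings file is an ordinary Python module that assigns the variables listed
below; a minimal sketch (values are illustrative only) could look like::

    # /etc/disco/settings.py -- illustrative values only
    DISCO_MASTER_HOST = "master.example.com"
    DISCO_PORT = 8989
    DISCO_EVENTS = "json"
    DDFS_TAG_REPLICAS = 3
    DDFS_BLOB_REPLICAS = 3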
Possible settings for Disco are as follows:
.. envvar:: DISCO_DATA
Directory to use for writing data.
Default obtained using ``os.path.join(DISCO_ROOT, 'data')``.
.. envvar:: DISCO_DEBUG
Sets the debugging level for Disco.
Default is ``1``.
.. envvar:: DISCO_ERLANG
Command used to launch Erlang on all nodes in the cluster.
Default usually ``erl``, but depends on the OS.
.. envvar:: DISCO_EVENTS
If set, events are logged to `stdout`.
If set to ``json``, events will be written as JSON strings.
If set to ``nocolor``, ANSI color escape sequences will not be used, even if the terminal supports it.
Default is unset (the empty string).
.. envvar:: DISCO_FLAGS
Default is the empty string.
.. envvar:: DISCO_HOME
The directory which Disco runs out of.
If you run Disco out of the source directory,
you shouldn't need to change this.
If you use ``make install`` to install Disco,
it will be set properly for you in ``/etc/disco/settings.py``.
.. envvar:: DISCO_HTTPD
Command used to launch `lighttpd`.
Default is ``lighttpd``.
.. envvar:: DISCO_MASTER_HOME
Directory containing the Disco ``master`` directory.
Default is obtained using ``os.path.join(DISCO_HOME, 'master')``.
.. envvar:: DISCO_MASTER_HOST
The hostname of the master.
Default obtained using ``socket.gethostname()``.
.. envvar:: DISCO_MASTER_ROOT
Directory to use for writing master data.
Default obtained using ``os.path.join(DISCO_DATA, '_%s' % DISCO_NAME)``.
.. envvar:: DISCO_MASTER_CONFIG
Directory to use for writing cluster configuration.
Default obtained using ``os.path.join(DISCO_ROOT, '%s.config' % DISCO_NAME)``.
.. envvar:: DISCO_NAME
A unique name for the Disco cluster.
Default obtained using ``'disco_%s' % DISCO_PORT``.
.. envvar:: DISCO_LOG_DIR
Directory where log-files are created.
The same path is used for all nodes in the cluster.
Default is obtained using ``os.path.join(DISCO_ROOT, 'log')``.
.. envvar:: DISCO_PID_DIR
Directory where pid-files are created.
The same path is used for all nodes in the cluster.
Default is obtained using ``os.path.join(DISCO_ROOT, 'run')``.
.. envvar:: DISCO_PORT
The port the workers use for `HTTP` communication.
Default is ``8989``.
.. envvar:: DISCO_ROOT
Root directory for Disco-written data and metadata.
Default is obtained using ``os.path.join(DISCO_HOME, 'root')``.
.. envvar:: DISCO_ROTATE_LOG
Whether to rotate the master log on startup.
Default is ``False``.
.. envvar:: DISCO_USER
The user Disco should run as.
Default obtained using ``os.getenv('LOGNAME')``.
.. envvar:: DISCO_JOB_OWNER
User name shown on the job status page for the user who
submitted the job.
Default is the login name @ host.
.. envvar:: DISCO_WWW_ROOT
Directory that is the document root for the master `HTTP` server.
Default obtained using ``os.path.join(DISCO_MASTER_HOME, 'www')``.
.. envvar:: DISCO_GC_AFTER
How long to wait before garbage collecting job-generated intermediate and result data.
Only results explicitly saved to DDFS won't be garbage collected.
Default is ``100 * 365 * 24 * 60 * 60`` (100 years). (Note that this setting does not affect data in DDFS.)
.. envvar:: DISCO_PROFILE
Whether Disco should start profiling applications and send profiling data to
a graphite server.
.. envvar:: GRAPHITE_HOST
If DISCO_PROFILE is set, then some performance data from Disco
will be sent to the graphite host. The default is localhost.
We are assuming that the listening port is the default graphite
port.
.. envvar:: SYSTEMD_ENABLED
This adds -noshell to the erlang process. It provides compatibility for running
disco using a non-forking process type in the service definition.
.. envvar:: DATA_GC_INTERVAL
How long to wait before garbage collecting purged job data.
Default is ``12`` (hours).
.. envvar:: DISCO_WORKER_MAX_MEM
How much memory can be used by worker in total. Worker calls `resource.setrlimit(RLIMIT_AS, limit) <http://docs.python.org/library/resource.html#resource.setrlimit>`_ to set the limit when it starts. Can be either a percentage of total available memory or an exact number of bytes. Note that ``setrlimit`` behaves differently on Linux and Mac OS X, see *man setrlimit* for more information. Default is ``80%`` i.e. 80% of the total available memory.
Settings to control the proxying behavior:
.. envvar:: DISCO_PROXY_ENABLED
If set, enable proxying through the master. This is a master-side setting (set in ``master:/etc/disco/settings.py``).
Default is ``''``.
.. envvar:: DISCO_PROXY
The address of the proxy to use on the client side. This is in the format ``http://<proxy-host>:<proxy-port>``, where ``<proxy-port>`` normally matches the value of ``DISCO_PROXY_PORT`` set on the master.
Default is ``''``.
.. envvar:: DISCO_PROXY_PORT
The port the master proxy should run on. This is master-side setting (set in ``master:/etc/disco/settings.py``).
Default is ``8999``.
Settings to control the scheduler behavior:
.. envvar:: DISCO_SCHEDULER
The type of scheduler that disco should use.
The only options are `fair` and `fifo`.
Default is ``fair``.
.. envvar:: DISCO_SCHEDULER_ALPHA
Parameter controlling how much the ``fair`` scheduler punishes long-running jobs vs. short ones.
Default is .001 and should usually not need to be changed.
Settings used by the testing environment:
.. envvar:: DISCO_TEST_DISCODB
Whether or not to run :mod:`discodb` tests.
Default is ``''``.
.. envvar:: DISCO_TEST_HOST
The hostname that the test data server should bind on.
Default is ``DISCO_MASTER_HOST``.
.. envvar:: DISCO_TEST_PORT
The port that the test data server should bind to.
Default is ``9444``.
Settings used by DDFS:
.. envvar:: DDFS_ROOT
.. deprecated:: 0.4
Use :envvar:`DDFS_DATA` instead.
Only provided as a default for backwards compatibility.
Default is obtained using ``os.path.join(DISCO_ROOT, 'ddfs')``.
.. envvar:: DDFS_DATA
The root data directory for DDFS.
Default is obtained using ``DDFS_ROOT``.
.. envvar:: DDFS_PUT_PORT
The port to use for writing to DDFS nodes.
Must be open to the Disco client unless proxying is used.
Default is ``8990``.
.. envvar:: DDFS_PUT_MAX
The maximum default number of retries for a `PUT` operation.
Default is ``3``.
.. envvar:: DDFS_GET_MAX
The maximum default number of retries for a `GET` operation.
Default is ``3``.
.. envvar:: DDFS_READ_TOKEN
The default read authorization token to use.
Default is ``None``.
.. envvar:: DDFS_WRITE_TOKEN
The default write authorization token to use.
Default is ``None``.
.. envvar:: DDFS_GC_INITIAL_WAIT
The amount of time to wait after startup before running GC (in minutes).
Default is ``''``, which triggers an internal default of 5 minutes.
.. envvar:: DDFS_GC_BALANCE_THRESHOLD
The distance a node's disk utilization can be from the average
disk utilization of the cluster before the node is considered
to be over-utilized or under-utilized. Default is ``0.1``.
.. envvar:: DDFS_PARANOID_DELETE
Instead of deleting unneeded files, DDFS garbage collector prefixes obsolete files with ``!trash.``, so they can be safely verified/deleted by an external process. For instance, the following command can be used to finally delete the files (assuming that ``DDFS_DATA = "/srv/disco/ddfs"``)::
find /srv/disco/ddfs/ -perm 600 -iname '!trash*' -exec rm {} \;
Default is ``''``.
The following settings are used by DDFS to determine the number of replicas for data/metadata to keep
(it is not recommended to use the provided defaults in a multinode cluster):
.. envvar:: DDFS_TAG_MIN_REPLICAS
The minimum number of replicas for a tag operation to succeed.
Default is ``1``.
.. envvar:: DDFS_TAG_REPLICAS
The number of replicas of tags that DDFS should aspire to keep.
Default is ``1``.
.. envvar:: DDFS_BLOB_REPLICAS
The number of replicas of blobs that DDFS should aspire to keep.
Default is ``1``.
.. envvar:: DDFS_SPACE_AWARE
Whether DDFS should take the amount of free space in the nodes
into account when choosing the nodes to write to. Default is
``''``.
.. envvar:: DDFS_ABSOLUTE_SPACE
Only effective in the space-aware mode.
If set, the nodes with the higher absolute free space will be
given precedence for hosting replicas. If unset, the nodes with
the highest ratio of the free space to the total space will be
given precedence for hosting the replicas.
""" |
"""
This file contains the core methods for the Batch-command- and
Batch-code-processors respectively. In short, these are two different
ways to build a game world using a normal text-editor without having
to do so 'on the fly' in-game. They also serve as an automatic backup
so you can quickly recreate a world even after a server reset. The
functions in this module are meant to form the backbone of a system
called and accessed through game commands.
The Batch-command processor is the simplest. It simply runs a list of
in-game commands in sequence by reading them from a text file. The
advantage of this is that the builder only needs to remember the normal
in-game commands. They also execute with full permission checks
etc., making it relatively safe for builders to use. The drawback is
that in-game there is really a builder-character walking around
building things, and it can be important to create rooms and objects
in the right order, so the character can move between them. Also,
objects that affect players (such as mobs, dark rooms etc.) will
affect the building character too, requiring extra care to turn them
off/on.
The Batch-code processor is a more advanced system that accepts full
Python code, executing in chunks. The advantage of this is much more
power; practically anything imaginable can be coded and handled using
the batch-code processor. There is no in-game character that moves and
that can be affected by what is being built - the database is
populated on the fly. The drawback is safety and entry threshold - the
code is executed as would any server code, without mud-specific
permission checks and you have full access to modifying objects
etc. You also need to know Python and Evennia's API. Hence it's
recommended that the batch-code processor is limited only to
superusers or highly trusted staff.
=======================================================================
Batch-command processor file syntax
The batch-command processor accepts 'batchcommand files', e.g.
'batch.ev', containing a sequence of valid evennia commands in a
simple format. The engine runs each command in sequence, as if they
had been run at the game prompt.
Each evennia command must be delimited by a line comment to mark its
end.
#INSERT path.batchcmdfile - this as the first entry on a line will
import and run a batch.ev file in this position, as if it was
written in this file.
This way entire game worlds can be created and planned offline; it is
especially useful in order to create long room descriptions where a
real offline text editor is often much better than any online text
editor or prompt.
Example of batch.ev file:
----------------------------
# batch file
# all lines starting with # are comments; they also indicate
# that a command definition is over.
@create box
# this comment ends the @create command.
@set box/desc = A large box.
Inside are some scattered piles of clothing.
It seems the bottom of the box is a bit loose.
# Again, this comment indicates the @set command is over. Note how
# the description could be freely added. Excess whitespace on a line
# is ignored. An empty line in the command definition is parsed as a \n
# (so two empty lines becomes a new paragraph).
@teleport #221
# (Assuming #221 is a warehouse or something.)
# (remember, this comment ends the @teleport command! Don't forget it)
# Example of importing another file at this point.
#INSERT examples.batch
@drop box
# Done, the box is in the warehouse! (this last comment is not necessary to
# close the @drop command since it's the end of the file)
-------------------------
An example batch file is game/gamesrc/commands/examples/batch_example.ev.
==========================================================================
Batch-code processor file syntax
The Batch-code processor accepts full python modules (e.g. "batch.py")
that look identical to normal Python files with a few exceptions that
allow them to be executed in blocks. This way of working assures a
sequential execution of the file and allows for features like stepping
from block to block (without executing those coming before), as well
as automatic deletion of created objects etc. You can, however, also run
a batch-code python file directly using Python.
Code blocks are separated by python comments starting with special
code words.
#HEADER - this denotes commands global to the entire file, such as
import statements and global variables. They will
automatically be pasted at the top of all code
blocks. Observe that changes to these variables made in one
block are not preserved between blocks!
#CODE
#CODE (info)
#CODE (info) objname1, objname2, ... -
This designates a code block that will be executed like a
stand-alone piece of code together with any #HEADER
defined. (info) text is used by the interactive mode to
display info about the node to run. <objname>s mark the
(variable-)names of objects created in the code, and which
may be auto-deleted by the processor if desired (such as
when debugging the script). E.g., if the code contains the
command myobj = create.create_object(...), you could put
'myobj' in the #CODE header regardless of what the created
object is actually called in-game.
#INSERT path.filename - This imports another batch_code.py file and
runs it in the given position. Paths are given as python
paths. The inserted file will retain its own HEADERs which
will not be mixed with the HEADERs of the file importing
this file.
The following variables are automatically made available for the script:
caller - the object executing the script
Example batch.py file
-----------------------------------
#HEADER
import traceback
from django.conf import settings
from src.utils import create
from game.gamesrc.typeclasses import basetypes
GOLD = 10
#CODE obj, obj2
obj = create.create_object(basetypes.Object)
obj2 = create.create_object(basetypes.Object)
obj.location = caller.location
obj.db.gold = GOLD
caller.msg("The object was created!")
#INSERT another_batch_file
#CODE
script = create.create_script()
""" |
"""
Interface to the UMFPACK library
================================
:Contains: UmfpackContext class
Parameters
----------
sys : constant,
one of UMFPACK system description constants, like
UMFPACK_A, UMFPACK_At, see umfSys list and UMFPACK docs
mtx : sparse matrix (CSR or CSC)
Input.
rhs : right hand side vector
Right Hand Side
autoTranspose : bool
Automatically changes `sys` to the transposed type, if `mtx` is in CSR,
since UMFPACK assumes CSC internally
Description
-----------
Routines for symbolic and numeric LU factorization of sparse
matrices and for solving systems of linear equations with sparse matrices.
Tested with UMFPACK V4.4 (Jan. 28, 2005), V5.0 (May 5, 2006)
Copyright (c) 2005 by NAME All Rights Reserved.
UMFPACK homepage: http://www.cise.ufl.edu/research/sparse/umfpack
Use 'print UmfpackContext().funs' to see all UMFPACK library functions the
module exposes, if you need something not covered by the examples below.
Installation
------------
Example site.cfg entry:
UMFPACK v4.4 in <dir>::
[amd]
library_dirs = <dir>/UMFPACK/AMD/Lib
include_dirs = <dir>/UMFPACK/AMD/Include
amd_libs = amd
[umfpack]
library_dirs = <dir>/UMFPACK/UMFPACK/Lib
include_dirs = <dir>/UMFPACK/UMFPACK/Include
umfpack_libs = umfpack
UMFPACK v5.0 (as part of UFsparse package) in <dir>:
[amd]
library_dirs = <dir>/UFsparse/AMD/Lib
include_dirs = <dir>/UFsparse/AMD/Include, <dir>/UFsparse/UFconfig
amd_libs = amd
[umfpack]
library_dirs = <dir>/UFsparse/UMFPACK/Lib
include_dirs = <dir>/UFsparse/UMFPACK/Include, <dir>/UFsparse/UFconfig
umfpack_libs = umfpack
Examples
--------
Assuming this module imported as um (import scipy.sparse.linalg.dsolve.umfpack as um)
Sparse matrix in CSR or CSC format: mtx
Right-hand side: rhs
Solution: sol
::
# Construct the solver.
umfpack = um.UmfpackContext() # Use default 'di' family of UMFPACK routines.
# One-shot solution.
sol = umfpack( um.UMFPACK_A, mtx, rhs, autoTranspose = True )
# same as:
sol = umfpack.linsolve( um.UMFPACK_A, mtx, rhs, autoTranspose = True )
-or-
::
# Make LU decomposition.
umfpack.numeric( mtx )
...
# Use already LU-decomposed matrix.
sol1 = umfpack( um.UMFPACK_A, mtx, rhs1, autoTranspose = True )
sol2 = umfpack( um.UMFPACK_A, mtx, rhs2, autoTranspose = True )
# same as:
sol1 = umfpack.solve( um.UMFPACK_A, mtx, rhs1, autoTranspose = True )
sol2 = umfpack.solve( um.UMFPACK_A, mtx, rhs2, autoTranspose = True )
-or-
::
# Make symbolic decomposition.
umfpack.symbolic( mtx0 )
# Print statistics.
umfpack.report_symbolic()
# ...
# Make LU decomposition of mtx1 which has same structure as mtx0.
umfpack.numeric( mtx1 )
# Print statistics.
umfpack.report_numeric()
# Use already LU-decomposed matrix.
sol1 = umfpack( um.UMFPACK_A, mtx1, rhs1, autoTranspose = True )
# ...
# Make LU decomposition of mtx2 which has same structure as mtx0.
umfpack.numeric( mtx2 )
sol2 = umfpack.solve( um.UMFPACK_A, mtx2, rhs2, autoTranspose = True )
# Print all statistics.
umfpack.report_info()
-or-
::
# Get LU factors and permutation matrices of a matrix.
L, U, P, Q, R, do_recip = umfpack.lu( mtx )
Returns
-------
L : (M, min(M, N)) CSR matrix
Lower triangle
U : (min(M, N), N) CSC matrix
Upper triangle
P :
Vector of row permutations
Q :
Vector of column permutations
R :
Vector of diagonal row scalings
do_recip : bool
do_recip
Notes
-----
For a given matrix A, the decomposition satisfies:
$LU = PRAQ$ when do_recip is true,
$LU = P(R^{-1})AQ$ when do_recip is false
See Also
--------
umfpack, umfpack.linsolve, umfpack.solve
Setting control parameters
--------------------------
Assuming this module imported as um:
List of control parameter names is accessible as 'um.umfControls' - their
meaning and possible values are described in the UMFPACK documentation.
To each name corresponds an attribute of the 'um' module, such as,
for example 'um.UMFPACK_PRL' (controlling the verbosity of umfpack report
functions). These attributes are in fact indices into the control array
- to set the corresponding control array value, just do the following:
::
umfpack = um.UmfpackContext()
umfpack.control[um.UMFPACK_PRL] = 4 # Let's be more verbose.
""" |
# 25Apr.2016
# hy:24Jan.2017 v0.48
# Added feed augmented data online, no disk space consumption for these data
# Added test_model_online, replacing previous way of viewing test accuracy online.
# sudo apt-get install python-h5py
# Added eva.sh to run evaluation of multiple models
# Added function for evaluating multiple models, their result file names contain accuracy result.
# Added functionality to set different dropout rate for each layer for 3conv net
# Moved auxiliary functions to a new file tools.py
# Added function to obtain images of estimated receptive fields/active fields
# Added function to save all models and specified names according to training status
# Added graph 3conv, 4conv
# Added real batch training functionality
# Added functionality of feeding a tensor name
# Added function to save tensorflow models with max precision for a class, not overwritten by following data
# Added function do_crop2_parts to get parts in different sizes
# Added function for displaying evaluation results in a worksheet (result_for_table = 0).
# Added similarity.py to analyse similarity between classes, CAD samples and camera test images
# Created tensor_cnn_evaluate.py. It is used for testing multiple models. Input of each evaluation function includes:
# session,num_class,img_list,_labels
# Added stop condition to avoid overfitting
# Added function to load two models of different graphs. requirement: install tensorflow version > 0.8, numpy > 1.11.2
# Added display of all defined results for training, validation and test in one graph in tensorboard
# Added optimizer Adam and its parameters
# Added display of test result in RETRAIN
# Added a function to add more training data during a training. This data contains random noise.
# Added display of test result in CONTINUE_TRAIN. Some new variables are created for tensorflow for this purpose.
# Created a function for importing data, import_data(). This is used for displaying test result parallel to validation result.
# Added function to evaluate two models of same graph
# Added adaptive testing - evaluate_image_vague, create_test_slices to get top,bottom, left, right, center parts of a test image
# Added formula for calculating window size when webcam is used, also for rectangular form
# Added functions: random crop, random rotation, set scale, remove small object area
# Added def convert_result for converting sub-class to main-class result.
# Changed tensorboard backup path and added sub-folder to store tensorboard logs so that the logs can be compared easily.
# Changed model name to include specification info of a model.
# Specification information of a model such as number of hidden layers and tensor size must be set as the same when this model is reused later.
# Added functionality of continuing a broken training
# Added distortion tools for automatically generating and moving/removing data
# Added tensorboard log timestamp for comparing different model in live time, changed tensorboard log path
# Added function to do tracking in terms of shift mean #
# Added date time for log
# Training set: CAD samples for all six classes
# Added functionality of saving first convolutional layer feature output in training phase and test phase
# Added function to evaluate model with webcam
# Prepare_list is activated according to action selected for training or test
# Test set: lego positive samples for all six classes
# Added output info: when evaluating with images, proportion of correctly classified is included
# Added sequence configurations for based on training or test which is selected
# Added function to save correctly classified images/frames
# Added function to save misclassified images to folder ../MisClassifed, upper limit can be set
# Added log function, time count for training duration
# Test_Images: stored under ../Test_Images, they are lego positive samples that are not included in training set.
# Added the functionality to evaluate model with images
# Changed prepare_list to a global function to make test run smoothly.
# Changed condition for label, predict
# Changed display precision of matrix outputs to 2
# Added a formula to calculate shape, in settings.py
# Added a formula to set cropped frame to show ROI in demo
# Tested video_crop_tool.py, it does not require strict parameter for width as in this script
# Added global variables for width, height, crop sizes, defined in settings.py
# Changed places to adapt to lego data
# - All file paths in tensor_cnn_video.py, prepare_list.py, image_distortions.py, test.py
# - LABELS(=6), which is the number of sub-folders under ../Data
# To see the TensorBoard output use the following command
# $ tensorboard --logdir='enter_the_path_of_tensorboard_log'
#####################################################################################################
#import Image
#import ImageFilter
|
"""
This module contains the machinery handling assumptions.
All symbolic objects have assumption attributes that can be accessed via
.is_<assumption name> attribute.
Assumptions determine certain properties of symbolic objects and can
have 3 possible values: True, False, None. True is returned if the
object has the property and False is returned if it doesn't or can't
(i.e. doesn't make sense):
>>> from sympy import I
>>> I.is_algebraic
True
>>> I.is_real
False
>>> I.is_prime
False
When the property cannot be determined (or when a method is not
implemented) None will be returned, e.g. a generic symbol, x, may or
may not be positive so a value of None is returned for x.is_positive.
By default, all symbolic values are in the largest set in the given
context without specifying the property. For example, a symbol declared
to be integer is also real, complex, etc.
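For instance, with a standard ``Symbol``:
>>> from sympy import Symbol
>>> n = Symbol('n', integer=True)
>>> n.is_integer
True
>>> n.is_rational
True
>>> n.is_real
True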
Here follows a list of possible assumption names:
.. glossary::
commutative
object commutes with any other object with
respect to multiplication operation.
complex
object can have only values from the set
of complex numbers.
imaginary
object value is a number that can be written as a real
number multiplied by the imaginary unit ``I``. See
[3]_. Please note, that ``0`` is not considered to be an
imaginary number, see
`issue #7649 <https://github.com/sympy/sympy/issues/7649>`_.
real
object can have only values from the set
of real numbers.
integer
object can have only values from the set
of integers.
odd
even
object can have only values from the set of
odd (even) integers [2]_.
prime
object is a natural number greater than ``1`` that has
no positive divisors other than ``1`` and itself. See [6]_.
composite
object is a positive integer that has at least one positive
divisor other than ``1`` or the number itself. See [4]_.
zero
nonzero
object is zero (not zero).
rational
object can have only values from the set
of rationals.
algebraic
object can have only values from the set
of algebraic numbers [11]_.
transcendental
object can have only values from the set
of transcendental numbers [10]_.
irrational
object value cannot be represented exactly by Rational, see [5]_.
finite
infinite
object absolute value is bounded (respectively, for infinite, the value
can be arbitrarily large). See [7]_, [8]_, [9]_.
negative
nonnegative
object can have only negative (only
nonnegative) values [1]_.
positive
nonpositive
object can have only positive (only
nonpositive) values.
hermitian
antihermitian
object belongs to the field of hermitian
(antihermitian) operators.
Examples
========
>>> from sympy import Symbol
>>> x = Symbol('x', real=True); x
x
>>> x.is_real
True
>>> x.is_complex
True
See Also
========
.. seealso::
:py:class:`sympy.core.numbers.ImaginaryUnit`
:py:class:`sympy.core.numbers.Zero`
:py:class:`sympy.core.numbers.One`
Notes
=====
Assumption values are stored in obj._assumptions dictionary or
are returned by getter methods (with property decorators) or are
attributes of objects/classes.
References
==========
.. [1] http://en.wikipedia.org/wiki/Negative_number
.. [2] http://en.wikipedia.org/wiki/Parity_%28mathematics%29
.. [3] http://en.wikipedia.org/wiki/Imaginary_number
.. [4] http://en.wikipedia.org/wiki/Composite_number
.. [5] http://en.wikipedia.org/wiki/Irrational_number
.. [6] http://en.wikipedia.org/wiki/Prime_number
.. [7] http://en.wikipedia.org/wiki/Finite
.. [8] https://docs.python.org/3/library/math.html#math.isfinite
.. [9] http://docs.scipy.org/doc/numpy/reference/generated/numpy.isfinite.html
.. [10] http://en.wikipedia.org/wiki/Transcendental_number
.. [11] http://en.wikipedia.org/wiki/Algebraic_number
""" |
"""
Define a simple format for saving numpy arrays to disk with the full
information about them.
The ``.npy`` format is the standard binary file format in NumPy for
persisting a *single* arbitrary NumPy array on disk. The format stores all
of the shape and dtype information necessary to reconstruct the array
correctly even on another machine with a different architecture.
The format is designed to be as simple as possible while achieving
its limited goals.
The ``.npz`` format is the standard format for persisting *multiple* NumPy
arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy``
files, one for each array.
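In day-to-day use these files are read and written through `numpy.save`,
`numpy.savez` and `numpy.load`; a small sketch (file names are illustrative)::

    import numpy as np

    a = np.arange(6).reshape(2, 3)
    np.save('single_array.npy', a)              # one array   -> .npy
    np.savez('several_arrays.npz', a=a, b=a.T)  # many arrays -> .npz (a zip of .npy files)

    assert (np.load('single_array.npy') == a).all()
    archive = np.load('several_arrays.npz')
    assert (archive['b'] == a.T).all()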
Capabilities
------------
- Can represent all NumPy arrays including nested record arrays and
object arrays.
- Represents the data in its native binary form.
- Supports Fortran-contiguous arrays directly.
- Stores all of the necessary information to reconstruct the array
including shape and dtype on a machine of a different
architecture. Both little-endian and big-endian arrays are
supported, and a file with little-endian numbers will yield
a little-endian array on any machine reading the file. The
types are described in terms of their actual sizes. For example,
if a machine with a 64-bit C "long int" writes out an array with
"long ints", a reading machine with 32-bit C "long ints" will yield
an array with 64-bit integers.
- Is straightforward to reverse engineer. Datasets often live longer than
the programs that created them. A competent developer should be
able to create a solution in their preferred programming language to
read most ``.npy`` files that they have been given without much
documentation.
- Allows memory-mapping of the data. See `open_memmap`.
- Can be read from a filelike stream object instead of an actual file.
- Stores object arrays, i.e. arrays containing elements that are arbitrary
Python objects. Files with object arrays are not mmapable, but
can be read from and written to disk.
Limitations
-----------
- Arbitrary subclasses of numpy.ndarray are not completely preserved.
Subclasses will be accepted for writing, but only the array data will
be written out. A regular numpy.ndarray object will be created
upon reading the file.
.. warning::
Due to limitations in the interpretation of structured dtypes, dtypes
with fields with empty names will have the names replaced by 'f0', 'f1',
etc. Such arrays will not round-trip through the format entirely
accurately. The data is intact; only the field names will differ. We are
working on a fix for this. This fix will not require a change in the
file format. The arrays with such structures can still be saved and
restored, and the correct dtype may be restored by using the
``loadedarray.view(correct_dtype)`` method.
File extensions
---------------
We recommend using the ``.npy`` and ``.npz`` extensions for files saved
in this format. This is by no means a requirement; applications may wish
to use these file formats but use an extension specific to the
application. In the absence of an obvious alternative, however,
we suggest using ``.npy`` and ``.npz``.
Version numbering
-----------------
The version numbering of these formats is independent of NumPy version
numbering. If the format is upgraded, the code in `numpy.io` will still
be able to read and write Version 1.0 files.
Format Version 1.0
------------------
The first 6 bytes are a magic string: exactly ``\\x93NUMPY``.
The next 1 byte is an unsigned byte: the major version number of the file
format, e.g. ``\\x01``.
The next 1 byte is an unsigned byte: the minor version number of the file
format, e.g. ``\\x00``. Note: the version of the file format is not tied
to the version of the numpy package.
The next 2 bytes form a little-endian unsigned short int: the length of
the header data HEADER_LEN.
The next HEADER_LEN bytes form the header data describing the array's
format. It is an ASCII string which contains a Python literal expression
of a dictionary. It is terminated by a newline (``\\n``) and padded with
spaces (``\\x20``) to make the total length of
``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment
purposes.
The dictionary contains three keys:
"descr" : dtype.descr
An object that can be passed as an argument to the `numpy.dtype`
constructor to create the array's dtype.
"fortran_order" : bool
Whether the array data is Fortran-contiguous or not. Since
Fortran-contiguous arrays are a common form of non-C-contiguity,
we allow them to be written directly to disk for efficiency.
"shape" : tuple of int
The shape of the array.
For repeatability and readability, the dictionary keys are sorted in
alphabetic order. This is for convenience only. A writer SHOULD implement
this if possible. A reader MUST NOT depend on this.
Following the header comes the array data. If the dtype contains Python
objects (i.e. ``dtype.hasobject is True``), then the data is a Python
pickle of the array. Otherwise the data is the contiguous (either C-
or Fortran-, depending on ``fortran_order``) bytes of the array.
Consumers can figure out the number of bytes by multiplying the number
of elements given by the shape (noting that ``shape=()`` means there is
1 element) by ``dtype.itemsize``.
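As a rough illustration of the layout above, a version 1.0 header can be
parsed by hand as follows (a sketch only; `numpy.lib.format` is the
authoritative implementation)::

    import ast
    import struct

    def read_npy_header_v1(path):
        with open(path, 'rb') as f:
            magic = f.read(6)
            if magic[0] != 0x93 or magic[1:] != b'NUMPY':
                raise ValueError('not a .npy file')
            major, minor = f.read(1)[0], f.read(1)[0]
            # version 1.0: HEADER_LEN is a little-endian unsigned short
            (header_len,) = struct.unpack('<H', f.read(2))
            header = ast.literal_eval(f.read(header_len).decode('ascii'))
            # e.g. {'descr': '<i8', 'fortran_order': False, 'shape': (2, 3)}
            return (major, minor), header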
Format Version 2.0
------------------
The version 1.0 format only allowed the array header to have a total size of
65535 bytes. This can be exceeded by structured arrays with a large number of
columns. The version 2.0 format extends the header size to 4 GiB.
`numpy.save` will automatically save in 2.0 format if the data requires it,
else it will always use the more compatible 1.0 format.
The description of the fourth element of the header therefore has become:
"The next 4 bytes form a little-endian unsigned int: the length of the header
data HEADER_LEN."
Notes
-----
The ``.npy`` format, including reasons for creating it and a comparison of
alternatives, is described fully in the "npy-format" NEP.
""" |
"""
=============
Miscellaneous
=============
IEEE 754 Floating Point Special Values
--------------------------------------
Special values defined in numpy: nan, inf.
NaNs can be used as a poor-man's mask (if you don't care what the
original value was)
Note: cannot use equality to test NaNs. E.g.: ::
>>> myarr = np.array([1., 0., np.nan, 3.])
>>> np.where(myarr == np.nan)
>>> np.nan == np.nan # is always False! Use special numpy functions instead.
False
>>> myarr[myarr == np.nan] = 0. # doesn't work
>>> myarr
array([ 1., 0., NaN, 3.])
>>> myarr[np.isnan(myarr)] = 0. # use this instead
>>> myarr
array([ 1., 0., 0., 3.])
Other related special value functions: ::
isinf(): True if value is inf
isfinite(): True if not nan or inf
nan_to_num(): Map nan to 0, inf to max float, -inf to min float
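For example (illustrative; exact array formatting may differ between numpy
versions): ::
>>> vals = np.array([1., np.inf, np.nan])
>>> np.isinf(vals)
array([False,  True, False])
>>> np.isfinite(vals)
array([ True, False, False])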
The following corresponds to the usual functions except that nans are excluded
from the results: ::
nansum()
nanmax()
nanmin()
nanargmax()
nanargmin()
>>> x = np.arange(10.)
>>> x[3] = np.nan
>>> x.sum()
nan
>>> np.nansum(x)
42.0
How numpy handles numerical exceptions
--------------------------------------
The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow``
and ``'ignore'`` for ``underflow``. But this can be changed, and it can be
set individually for different kinds of exceptions. The different behaviors
are:
- 'ignore' : Take no action when the exception occurs.
- 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module).
- 'raise' : Raise a `FloatingPointError`.
- 'call' : Call a function specified using the `seterrcall` function.
- 'print' : Print a warning directly to ``stdout``.
- 'log' : Record error in a Log object specified by `seterrcall`.
These behaviors can be set for all kinds of errors or specific ones:
- all : apply to all numeric exceptions
- invalid : when NaNs are generated
- divide : divide by zero (for integers as well!)
- overflow : floating point overflows
- underflow : floating point underflows
Note that integer divide-by-zero is handled by the same machinery.
These behaviors are set on a per-thread basis.
Examples
--------
::
>>> oldsettings = np.seterr(all='warn')
>>> np.zeros(5,dtype=np.float32)/0.
invalid value encountered in divide
>>> j = np.seterr(under='ignore')
>>> np.array([1.e-100])**10
>>> j = np.seterr(invalid='raise')
>>> np.sqrt(np.array([-1.]))
FloatingPointError: invalid value encountered in sqrt
>>> def errorhandler(errstr, errflag):
... print "saw stupid error!"
>>> np.seterrcall(errorhandler)
<function err_handler at 0x...>
>>> j = np.seterr(all='call')
>>> np.zeros(5, dtype=np.int32)/0
FloatingPointError: invalid value encountered in divide
saw stupid error!
>>> j = np.seterr(**oldsettings) # restore previous
... # error-handling settings
Interfacing to C
----------------
Only a survey of the choices. Little detail on how each works.
1) Bare metal, wrap your own C-code manually.
- Plusses:
- Efficient
- No dependencies on other tools
- Minuses:
- Lots of learning overhead:
- need to learn basics of Python C API
- need to learn basics of numpy C API
- need to learn how to handle reference counting and love it.
- Reference counting often difficult to get right.
- getting it wrong leads to memory leaks, and worse, segfaults
- API will change for Python 3.0!
2) Cython
- Plusses:
- avoid learning C API's
- no dealing with reference counting
- can code in pseudo python and generate C code
- can also interface to existing C code
- should shield you from changes to Python C api
- has become the de-facto standard within the scientific Python community
- fast indexing support for arrays
- Minuses:
- Can write code in non-standard form which may become obsolete
- Not as flexible as manual wrapping
3) ctypes
- Plusses:
- part of Python standard library
- good for interfacing to existing sharable libraries, particularly
Windows DLLs
- avoids API/reference counting issues
- good numpy support: arrays have all these in their ctypes
attribute: ::
a.ctypes.data a.ctypes.get_strides
a.ctypes.data_as a.ctypes.shape
a.ctypes.get_as_parameter a.ctypes.shape_as
a.ctypes.get_data a.ctypes.strides
a.ctypes.get_shape a.ctypes.strides_as
- Minuses:
- can't use for writing code to be turned into C extensions, only a wrapper
tool.
4) SWIG (automatic wrapper generator)
- Plusses:
- around a long time
- multiple scripting language support
- C++ support
- Good for wrapping large (many functions) existing C libraries
- Minuses:
- generates lots of code between Python and the C code
- can cause performance problems that are nearly impossible to optimize
out
- interface files can be hard to write
- doesn't necessarily avoid reference counting issues or needing to know
API's
5) scipy.weave
- Plusses:
- can turn many numpy expressions into C code
- dynamic compiling and loading of generated C code
- can embed pure C code in Python module and have weave extract, generate
interfaces and compile, etc.
- Minuses:
- Future very uncertain: it's the only part of Scipy not ported to Python 3
and is effectively deprecated in favor of Cython.
6) Psyco
- Plusses:
- Turns pure python into efficient machine code through jit-like
optimizations
- very fast when it optimizes well
- Minuses:
- Only on intel (windows?)
- Doesn't do much for numpy?
Interfacing to Fortran:
-----------------------
The clear choice to wrap Fortran code is
`f2py <http://docs.scipy.org/doc/numpy-dev/f2py/>`_.
Pyfort is an older alternative, but not supported any longer.
Fwrap is a newer project that looked promising but isn't being developed any
longer.
Interfacing to C++:
-------------------
1) Cython
2) CXX
3) Boost.python
4) SWIG
5) SIP (used mainly in PyQT)
""" |
"""
=====================================================
Optimization and root finding (:mod:`scipy.optimize`)
=====================================================
.. currentmodule:: scipy.optimize
Optimization
============
Local Optimization
------------------
.. autosummary::
:toctree: generated/
minimize - Unified interface for minimizers of multivariate functions
minimize_scalar - Unified interface for minimizers of univariate functions
OptimizeResult - The optimization result returned by some optimizers
OptimizeWarning - The optimization encountered problems
The `minimize` function supports the following methods:
.. toctree::
optimize.minimize-neldermead
optimize.minimize-powell
optimize.minimize-cg
optimize.minimize-bfgs
optimize.minimize-newtoncg
optimize.minimize-lbfgsb
optimize.minimize-tnc
optimize.minimize-cobyla
optimize.minimize-slsqp
optimize.minimize-dogleg
optimize.minimize-trustncg
optimize.minimize-trustkrylov
optimize.minimize-trustexact
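As a quick sketch of the unified interface (using the Rosenbrock helpers
listed further below)::

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
    res = minimize(rosen, x0, method='BFGS', jac=rosen_der)
    # res.x is approximately all ones, the minimum of the Rosenbrock function
    print(res.x, res.success)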
The `minimize_scalar` function supports the following methods:
.. toctree::
optimize.minimize_scalar-brent
optimize.minimize_scalar-bounded
optimize.minimize_scalar-golden
The specific optimization method interfaces below in this subsection are
not recommended for use in new scripts; all of these methods are accessible
via a newer, more consistent interface provided by the functions above.
General-purpose multivariate methods:
.. autosummary::
:toctree: generated/
fmin - Nelder-Mead Simplex algorithm
fmin_powell - Powell's (modified) level set method
fmin_cg - Non-linear (Polak-Ribiere) conjugate gradient algorithm
fmin_bfgs - Quasi-Newton method (Broyden-Fletcher-Goldfarb-Shanno)
fmin_ncg - Line-search Newton Conjugate Gradient
Constrained multivariate methods:
.. autosummary::
:toctree: generated/
fmin_l_bfgs_b - Zhu, Byrd, and Nocedal's constrained optimizer
fmin_tnc - Truncated Newton code
fmin_cobyla - Constrained optimization by linear approximation
fmin_slsqp - Minimization using sequential least-squares programming
differential_evolution - stochastic minimization using differential evolution
Univariate (scalar) minimization methods:
.. autosummary::
:toctree: generated/
fminbound - Bounded minimization of a scalar function
brent - 1-D function minimization using Brent method
golden - 1-D function minimization using Golden Section method
Equation (Local) Minimizers
---------------------------
.. autosummary::
:toctree: generated/
leastsq - Minimize the sum of squares of M equations in N unknowns
least_squares - Feature-rich least-squares minimization.
nnls - Linear least-squares problem with non-negativity constraint
lsq_linear - Linear least-squares problem with bound constraints
Global Optimization
-------------------
.. autosummary::
:toctree: generated/
basinhopping - Basinhopping stochastic optimizer
brute - Brute force searching optimizer
differential_evolution - stochastic minimization using differential evolution
Rosenbrock function
-------------------
.. autosummary::
:toctree: generated/
rosen - The Rosenbrock function.
rosen_der - The derivative of the Rosenbrock function.
rosen_hess - The Hessian matrix of the Rosenbrock function.
rosen_hess_prod - Product of the Rosenbrock Hessian with a vector.
Fitting
=======
.. autosummary::
:toctree: generated/
curve_fit -- Fit curve to a set of points
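A small sketch of `curve_fit` usage (the model and data are purely
illustrative)::

    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b):
        return a * np.exp(-b * x)

    xdata = np.linspace(0, 4, 50)
    ydata = model(xdata, 2.5, 1.3) + 0.05 * np.random.normal(size=xdata.size)

    popt, pcov = curve_fit(model, xdata, ydata)  # popt is roughly (2.5, 1.3)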
Root finding
============
Scalar functions
----------------
.. autosummary::
:toctree: generated/
brentq - quadratic interpolation Brent method
brenth - Brent method, modified by Harris with hyperbolic extrapolation
ridder - Ridder's method
bisect - Bisection method
newton - Secant method or Newton's method
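For example, the bracketing solvers only need a sign-changing interval; a
minimal `brentq` sketch::

    from scipy.optimize import brentq

    # root of x**2 - 2 between 0 and 2, i.e. sqrt(2)
    root = brentq(lambda x: x**2 - 2, 0, 2)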
Fixed point finding:
.. autosummary::
:toctree: generated/
fixed_point - Single-variable fixed-point solver
Multidimensional
----------------
General nonlinear solvers:
.. autosummary::
:toctree: generated/
root - Unified interface for nonlinear solvers of multivariate functions
fsolve - Non-linear multi-variable equation solver
broyden1 - Broyden's first method
broyden2 - Broyden's second method
The `root` function supports the following methods:
.. toctree::
optimize.root-hybr
optimize.root-lm
optimize.root-broyden1
optimize.root-broyden2
optimize.root-anderson
optimize.root-linearmixing
optimize.root-diagbroyden
optimize.root-excitingmixing
optimize.root-krylov
optimize.root-dfsane
Large-scale nonlinear solvers:
.. autosummary::
:toctree: generated/
newton_krylov
anderson
Simple iterations:
.. autosummary::
:toctree: generated/
excitingmixing
linearmixing
diagbroyden
:mod:`Additional information on the nonlinear solvers <scipy.optimize.nonlin>`
Linear Programming
==================
General linear programming solver:
.. autosummary::
:toctree: generated/
linprog -- Unified interface for minimizers of linear programming problems
The `linprog` function supports the following methods:
.. toctree::
optimize.linprog-simplex
optimize.linprog-interior-point
The simplex method supports callback functions, such as:
.. autosummary::
:toctree: generated/
linprog_verbose_callback -- Sample callback function for linprog (simplex)
Assignment problems:
.. autosummary::
:toctree: generated/
linear_sum_assignment -- Solves the linear-sum assignment problem
Utilities
=========
.. autosummary::
:toctree: generated/
approx_fprime - Approximate the gradient of a scalar function
bracket - Bracket a minimum, given two starting points
check_grad - Check the supplied derivative using finite differences
line_search - Return a step that satisfies the strong Wolfe conditions
show_options - Show specific options of the optimization solvers
LbfgsInvHessProduct - Linear operator for L-BFGS approximate inverse Hessian
""" |
#!/usr/bin/env python
# (c) 2013, NAME <EMAIL>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
#
# Author: NAME <EMAIL>
#
# Description:
# This module queries local or remote Docker daemons and generates
# inventory information.
#
# This plugin does not support targeting of specific hosts using the --host
# flag. Instead, it queries the Docker API for each container, running
# or not, and returns this data all once.
#
# The plugin returns the following custom attributes on Docker containers:
# docker_args
# docker_config
# docker_created
# docker_driver
# docker_exec_driver
# docker_host_config
# docker_hostname_path
# docker_hosts_path
# docker_id
# docker_image
# docker_name
# docker_network_settings
# docker_path
# docker_resolv_conf_path
# docker_state
# docker_volumes
# docker_volumes_rw
#
# Requirements:
# The docker-py module: https://github.com/dotcloud/docker-py
#
# Notes:
# A config file can be used to configure this inventory module, and there
# are several environment variables that can be set to modify the behavior
# of the plugin at runtime:
# DOCKER_CONFIG_FILE
# DOCKER_HOST
# DOCKER_VERSION
# DOCKER_TIMEOUT
# DOCKER_PRIVATE_SSH_PORT
# DOCKER_DEFAULT_IP
#
# Environment Variables:
# environment variable: DOCKER_CONFIG_FILE
# description:
# - A path to a Docker inventory hosts/defaults file in YAML format
# - A sample file has been provided, colocated with the inventory
# file called 'docker.yml'
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_HOST
# description:
# - The socket on which to connect to a Docker daemon API
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_VERSION
# description:
# - Version of the Docker API to use
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_TIMEOUT
# description:
# - Timeout in seconds for connections to Docker daemon API
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_PRIVATE_SSH_PORT
# description:
# - The private port (container port) on which SSH is listening
# for connections
# default: 22
# required: false
# environment variable: DOCKER_DEFAULT_IP
# description:
# - This environment variable overrides the container SSH connection
# IP address (aka, 'ansible_ssh_host')
#
# This option allows one to override the ansible_ssh_host whenever
# Docker has exercised its default behavior of binding private ports
# to all interfaces of the Docker host. This behavior, when dealing
# with remote Docker hosts, does not allow Ansible to determine
# a proper host IP address on which to connect via SSH to containers.
# By default, this inventory module assumes all IP_ADDRESS-exposed
# ports to be bound to localhost:<port>. To override this
# behavior, for example, to bind a container's SSH port to the public
# interface of its host, one must manually set this IP.
#
# It is preferable to begin to launch Docker containers with
# ports exposed on publicly accessible IP addresses, particularly
# if the containers are to be targeted by Ansible for remote
# configuration, not accessible via localhost SSH connections.
#
# Docker containers can be explicitly exposed on IP addresses by
# a) starting the daemon with the --ip argument
# b) running containers with the -P/--publish ip::containerPort
# argument
# default: IP_ADDRESS if port exposed on IP_ADDRESS by Docker
# required: false
#
# Examples:
# Use the config file:
# DOCKER_CONFIG_FILE=./docker.yml docker.py --list
#
# Connect to docker instance on localhost port 4243
# DOCKER_HOST=tcp://localhost:4243 docker.py --list
#
# Any container's ssh port exposed on IP_ADDRESS will be mapped to
# another IP address (where Ansible will attempt to connect via SSH)
# DOCKER_DEFAULT_IP=IP_ADDRESS docker.py --list
|
# # ===============================================================================
# # Copyright 2013 NAME #
# # Licensed under the Apache License, Version 2.0 (the "License");
# # you may not use this file except in compliance with the License.
# # You may obtain a copy of the License at
# #
# # http://www.apache.org/licenses/LICENSE-2.0
# #
# # Unless required by applicable law or agreed to in writing, software
# # distributed under the License is distributed on an "AS IS" BASIS,
# # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# # See the License for the specific language governing permissions and
# # limitations under the License.
# # ===============================================================================
#
# # ============= enthought library imports =======================
# from chaco.array_data_source import ArrayDataSource
# from pyface.action.menu_manager import MenuManager
# from traits.api import HasTraits, List, Int, Any, Str, Event, on_trait_change, Instance
# from traitsui.api import View, UItem, TabularEditor, VSplit, \
# Handler, HGroup
# from traitsui.menu import Action
# from traitsui.tabular_adapter import TabularAdapter
# # ============= standard library imports ========================
# # ============= local library imports ==========================
# from pychron.core.helpers.isotope_utils import sort_isotopes
# from pychron.graph.stacked_regression_graph import StackedRegressionGraph
# from pychron.processing.analyses.changes import BlankChange, FitChange
#
#
# class ChangeAdapter(TabularAdapter):
# font = 'arial 10'
# create_date_width = Int(120)
#
#
# class BlankHistoryAdapter(ChangeAdapter):
# columns = [('Date', 'create_date'), ('Summary', 'summary')]
#
# def get_bg_color(self, object, trait, row, column=0):
# color = 'white'
# if self.item.active:
# color = '#B0C4DE'
# return color
#
# def get_menu(self, obj, trait, row, column):
# enabled = True
# if self.item.selected:
# enabled = bool(self.item.selected.values)
#
# diffable = len(obj.blank_selected) == 2
#
# return MenuManager(Action(name='Show Time Series',
# action='show_blank_time_series',
# enabled=enabled),
# Action(name='Apply Change', action='apply_blank_change'),
# Action(name='Apply Change to Session', action='apply_session_blank_change'),
# Action(name='Diff Selected', action='diff_blank_histories',
# enabled=diffable))
#
#
# class FitHistoryAdapter(ChangeAdapter):
# columns = [('Date', 'create_date'), ('Summary', 'summary')]
#
#
# class FitAdapter(TabularAdapter):
# font = 'arial 10'
# columns = [('Isotope', 'isotope'), ('Fit', 'fit')]
#
#
# class IsotopeBlankAdapter(TabularAdapter):
# font = 'arial 10'
# columns = [('Isotope', 'isotope'), ('Blank Method', 'fit')]
# isotope_width = Int(80)
#
# class AnalysesAdapter(TabularAdapter):
# font = 'arial 10'
# columns = [('Run ID', 'record_id')]
#
#
# class HistoryHandler(Handler):
# def show_blank_time_series(self, info, obj):
# obj.show_blank_time_series()
#
# def apply_blank_change(self, info, obj):
# # obj.apply_blank_change()
# obj.apply_blank_change_needed = (False, obj.blank_selected[0])
#
# def apply_session_blank_change(self, info, obj):
# # obj.apply_session_blank_change()
# obj.apply_blank_change_needed = (True, obj.blank_selected[0])
#
# def diff_blank_histories(self, info, obj):
# obj.diff_blank_histories()
#
#
# class HistoryView(HasTraits):
# name = 'History'
#
# blank_changes = List
# fit_changes = List
# tag_changes = List
# commits = List
#
# blank_selected = List
# blank_right_clicked = Any
# fit_selected = Any
#
# analysis_uuid = Str
# apply_blank_change_needed = Event
# refresh_needed = Event
# load_ages_needed = Event
# # update_blank_selected = Event
# blank_selected_ = Instance(BlankChange)
# # blank_selected_ = Property(depends_on='update_blank_selected')
#
# @on_trait_change('blank_selected[]')
# def _handle_blank_selected(self):
# # print self.blank_selected
# self.blank_selected_ = self.blank_selected[0]
#
# def __init__(self, an, *args, **kw):
# super(HistoryView, self).__init__(*args, **kw)
# self._load(an)
#
# def load(self, an):
# self._load(an)
#
# def diff_blank_histories(self):
# from pychron.processing.analyses.view.blank_diff_view import BlankDiffView
#
# c = BlankDiffView()
# left, right = self.blank_selected
#
# self.load_ages_needed = left, right
# c.load(left, right)
# c.edit_traits()
#
# def show_blank_time_series(self):
# g = StackedRegressionGraph(window_height=0.75)
# isotopes = self.blank_selected[0].isotopes
# keys = sort_isotopes([iso.isotope for iso in isotopes], reverse=False)
# _mi, _ma = None, None
#
# for k in keys:
# iso = next((i for i in isotopes if i.isotope == k))
# # print iso.analyses
# g.new_plot(padding_right=10)
# g.set_time_xaxis()
# g.set_y_title(iso.isotope)
#
# g.new_series([self.timestamp], [iso.value],
# marker_size=3,
# fit=False,
# type='scatter', marker_color='black')
# vs = iso.values
# if vs:
# ts = [vi.timestamp for vi in vs]
# _mi = min(ts)
# _ma = max(ts)
# g.new_series(ts,
# [vi.value for vi in vs],
# yerror=ArrayDataSource([vi.error for vi in vs]),
# marker_size=3,
# fit=(iso.fit, 'SD'),
# type='scatter', marker_color='red')
#
# if not _mi:
# _mi, _ma = self.timestamp - 86400, self.timestamp + 86400
#
# g.set_x_limits(_mi, _ma, pad='0.1')
# g.refresh()
#
# g.set_x_title('Time', plotid=0)
# g.edit_traits()
#
# def _load(self, an):
# self.analysis_uuid = an.uuid
#
# self.blank_changes = an.blank_changes
# self.fit_changes = an.fit_changes
# if self.fit_changes:
# self.fit_selected = self.fit_changes[-1]
# else:
# self.fit_selected = FitChange()
#
# if self.blank_changes:
# self.blank_selected = next(([bi] for bi in self.blank_changes if bi.id == an.selected_blanks_id),
# self.blank_changes[-1:])
# else:
# self.blank_selected = [BlankChange()]
#
# self.timestamp = an.timestamp
#
# def traits_view(self):
# v = View(VSplit(UItem('blank_changes', editor=TabularEditor(adapter=BlankHistoryAdapter(),
# selected='blank_selected',
# refresh='refresh_needed',
# multi_select=True,
# editable=False)),
# HGroup(
# UItem('object.blank_selected_.isotopes', editor=TabularEditor(adapter=IsotopeBlankAdapter(),
# refresh='refresh_needed',
# selected='object.blank_selected_.selected',
# editable=False)),
# UItem('object.blank_selected_.selected.analyses',
# editor=TabularEditor(adapter=AnalysesAdapter(),
# refresh='refresh_needed',
# editable=False))),
# label='Blanks'),
# VSplit(
# UItem('fit_changes', editor=TabularEditor(adapter=FitHistoryAdapter(),
# selected='fit_selected',
# editable=False)),
# UItem('object.fit_selected.fits', editor=TabularEditor(adapter=FitAdapter(),
# editable=False)),
# label='Iso. Fits'),
# handler=HistoryHandler())
# return v
#
# # ============= EOF =============================================
#
|
"""
===============
Array Internals
===============
Internal organization of numpy arrays
=====================================
It helps to understand a bit about how numpy arrays are handled under the covers
in order to understand numpy better. This section will not go into great detail.
Those wishing to understand the full details are referred to Travis Oliphant's
book "Guide to NumPy".
NumPy arrays consist of two major components, the raw array data (from now on,
referred to as the data buffer), and the information about the raw array data.
The data buffer is typically what people think of as arrays in C or Fortran,
a contiguous (and fixed) block of memory containing fixed sized data items.
NumPy also contains a significant set of data that describes how to interpret
the data in the data buffer. This extra information contains (among other things):
1) The basic data element's size in bytes
2) The start of the data within the data buffer (an offset relative to the
beginning of the data buffer).
3) The number of dimensions and the size of each dimension
4) The separation between elements for each dimension (the 'stride'). This
does not have to be a multiple of the element size
5) The byte order of the data (which may not be the native byte order)
6) Whether the buffer is read-only
7) Information (via the dtype object) about the interpretation of the basic
data element. The basic data element may be as simple as an int or a float,
or it may be a compound object (e.g., struct-like), a fixed character field,
or Python object pointers.
8) Whether the array is to be interpreted as C-order or Fortran-order.
This arrangement allows for very flexible use of arrays. One thing that it allows
is simple changes of the metadata to change the interpretation of the array buffer.
Changing the byteorder of the array is a simple change involving no rearrangement
of the data. The shape of the array can be changed very easily without changing
anything in the data buffer or any data copying at all.
Among other things that are made possible is that one can create a new array metadata
object that uses the same data buffer
to create a new view of that data buffer that has a different interpretation
of the buffer (e.g., different shape, offset, byte order, strides, etc) but
shares the same data bytes. Many operations in numpy do just this such as
slices. Other operations, such as transpose, don't move data elements
around in the array, but rather change the information about the shape and strides
so that the indexing of the array changes, but the data in the buffer doesn't move.
Typically these new versions of the array metadata, which share the same data buffer, are
new 'views' into the data buffer. There is a different ndarray object, but it
uses the same data buffer. This is why it is necessary to force copies through
use of the .copy() method if one really wants to make a new and independent
copy of the data buffer.
New views into arrays mean the object reference counts for the data buffer
increase. Simply doing away with the original array object will not remove the
data buffer if other views of it still exist.
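A small sketch of the view/copy distinction described above: ::
>>> import numpy as np
>>> a = np.arange(6)
>>> v = a[1:4]            # a view: new metadata, same data buffer
>>> v[0] = 99
>>> a                     # the change is visible through the original array
array([ 0, 99,  2,  3,  4,  5])
>>> c = a[1:4].copy()     # an independent copy of the data
>>> c[0] = -1
>>> a[1]                  # the original is unchanged this time
99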
Multidimensional Array Indexing Order Issues
============================================
What is the right way to index
multi-dimensional arrays? Before you jump to conclusions about the one and
true way to index multi-dimensional arrays, it pays to understand why this is
a confusing issue. This section will try to explain in detail how numpy
indexing works and why we adopt the convention we do for images, and when it
may be appropriate to adopt other conventions.
The first thing to understand is
that there are two conflicting conventions for indexing 2-dimensional arrays.
Matrix notation uses the first index to indicate which row is being selected and
the second index to indicate which column is selected. This is opposite the
geometrically oriented-convention for images where people generally think the
first index represents x position (i.e., column) and the second represents y
position (i.e., row). This alone is the source of much confusion;
matrix-oriented users and image-oriented users expect two different things with
regard to indexing.
The second issue to understand is how indices correspond
to the order the array is stored in memory. In Fortran the first index is the
most rapidly varying index when moving through the elements of a two
dimensional array as it is stored in memory. If you adopt the matrix
convention for indexing, then this means the matrix is stored one column at a
time (since the first index moves to the next row as it changes). Thus Fortran
is considered a Column-major language. C has just the opposite convention. In
C, the last index changes most rapidly as one moves through the array as
stored in memory. Thus C is a Row-major language. The matrix is stored by
rows. Note that in both cases it presumes that the matrix convention for
indexing is being used, i.e., for both Fortran and C, the first index is the
row. Note this convention implies that the indexing convention is invariant
and that the data order changes to keep that so.
But that's not the only way
to look at it. Suppose one has large two-dimensional arrays (images or
matrices) stored in data files. Suppose the data are stored by rows rather than
by columns. If we are to preserve our index convention (whether matrix or
image) that means that depending on the language we use, we may be forced to
reorder the data if it is read into memory to preserve our indexing
convention. For example if we read row-ordered data into memory without
reordering, it will match the matrix indexing convention for C, but not for
Fortran. Conversely, it will match the image indexing convention for Fortran,
but not for C. For C, if one is using data stored in row order, and one wants
to preserve the image index convention, the data must be reordered when
reading into memory.
In the end, which you do for Fortran or C depends on
which is more important, not reordering data or preserving the indexing
convention. For large images, reordering data is potentially expensive, and
often the indexing convention is inverted to avoid that.
The situation with
numpy makes this issue yet more complicated. The internal machinery of numpy
arrays is flexible enough to accept any ordering of indices. One can simply
reorder indices by manipulating the internal stride information for arrays
without reordering the data at all. NumPy will know how to map the new index
order to the data without moving the data.
So if this is true, why not choose
the index order that matches what you most expect? In particular, why not define
row-ordered images to use the image convention? (This is sometimes referred
to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
order options for array ordering in numpy.) The drawback of doing this is
potential performance penalties. It's common to access the data sequentially,
either implicitly in array operations or explicitly by looping over rows of an
image. When that is done, then the data will be accessed in non-optimal order.
As the first index is incremented, what is actually happening is that elements
spaced far apart in memory are being sequentially accessed, with usually poor
memory access speeds. For example, for a two dimensional image 'im' defined so
that im[0, 10] represents the value at x=0, y=10. To be consistent with usual
Python behavior then im[0] would represent a column at x=0. Yet that data
would be spread over the whole array since the data are stored in row order.
Despite the flexibility of numpy's indexing, it can't really paper over the fact
that basic operations are rendered inefficient because of data order, or that
getting contiguous subarrays is still awkward (e.g., im[:,0] for the first row,
vs im[0]). Thus one can't use an idiom such as 'for row in im'; 'for col in im'
does work, but doesn't yield contiguous column data.
As it turns out, numpy is
smart enough when dealing with ufuncs to determine which index is the most
rapidly varying one in memory and uses that for the innermost loop. Thus for
ufuncs there is no large intrinsic advantage to either approach in most cases.
On the other hand, use of .flat with a FORTRAN-ordered array will lead to
non-optimal memory access as adjacent elements in the flattened array (iterator,
actually) are not contiguous in memory.
Indeed, the fact is that Python
indexing on lists and other sequences naturally leads to an outside-to-inside
ordering (the first index gets the largest grouping, the next the next largest,
and the last gets the smallest element). Since image data are normally stored
by rows, this corresponds to position within rows being the last item indexed.
If you do want to use Fortran ordering realize that
there are two approaches to consider: 1) accept that the first index is just not
the most rapidly changing in memory and have all your I/O routines reorder
your data when going from memory to disk or vice versa, or 2) use numpy's
mechanism for mapping the first index to the most rapidly varying data. We
recommend the former if possible. The disadvantage of the latter is that many
of numpy's functions will yield arrays without Fortran ordering unless you are
careful to use the 'order' keyword. Doing this would be highly inconvenient.
Otherwise we recommend simply learning to reverse the usual order of indices
when accessing elements of an array. Granted, it goes against the grain, but
it is more in line with Python semantics and the natural order of the data.
""" |
"""
Writing Plugins
---------------
nose supports plugins for test collection, selection, observation and
reporting. There are two basic rules for plugins:
* Plugin classes should subclass :class:`nose.plugins.Plugin`.
* Plugins may implement any of the methods described in the class
:doc:`IPluginInterface <interface>` in nose.plugins.base. Please note that
this class is for documentary purposes only; plugins may not subclass
IPluginInterface.
Hello World
===========
Here's a basic plugin. It doesn't do much so read on for more ideas or dive
into the :doc:`IPluginInterface <interface>` to see all available hooks.
.. code-block:: python
import logging
import os
from nose.plugins import Plugin
log = logging.getLogger('nose.plugins.helloworld')
class HelloWorld(Plugin):
name = 'helloworld'
def options(self, parser, env=os.environ):
super(HelloWorld, self).options(parser, env=env)
def configure(self, options, conf):
super(HelloWorld, self).configure(options, conf)
if not self.enabled:
return
def finalize(self, result):
log.info('Hello pluginized world!')
Registering
===========
.. Note::
Important note: the following applies only to the default
plugin manager. Other plugin managers may use different means to
locate and load plugins.
For nose to find a plugin, it must be part of a package that uses
setuptools_, and the plugin must be included in the entry points defined
in the setup.py for the package:
.. code-block:: python
setup(name='Some plugin',
# ...
entry_points = {
'nose.plugins.0.10': [
'someplugin = someplugin:SomePlugin'
]
},
# ...
)
Once the package is installed with install or develop, nose will be able
to load the plugin.
.. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools
Registering a plugin without setuptools
=======================================
It is currently possible to register a plugin programmatically by
creating a custom nose runner like this:
.. code-block:: python
import nose
from yourplugin import YourPlugin
if __name__ == '__main__':
nose.main(addplugins=[YourPlugin()])
Defining options
================
All plugins must implement the methods ``options(self, parser, env)``
and ``configure(self, options, conf)``. Subclasses of nose.plugins.Plugin
that want the standard options should call the superclass methods.
nose uses optparse.OptionParser from the standard library to parse
arguments. A plugin's ``options()`` method receives a parser
instance. It's good form for a plugin to use that instance only to add
additional arguments that take only long arguments (--like-this). Most
of nose's built-in arguments get their default value from an environment
variable.
A plugin's ``configure()`` method receives the parsed ``OptionParser`` options
object, as well as the current config object. Plugins should configure their
behavior based on the user-selected settings, and may raise exceptions
if the configured behavior is nonsensical.
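For instance, here is a minimal sketch of a plugin that adds one long option and reads its default from an environment variable (the option and variable names are made up for illustration):
.. code-block:: python
    import os
    from nose.plugins import Plugin
    class Timestamper(Plugin):
        name = 'timestamper'
        def options(self, parser, env=os.environ):
            super(Timestamper, self).options(parser, env=env)
            parser.add_option('--timestamp-format', dest='timestamp_format',
                              default=env.get('NOSE_TIMESTAMP_FORMAT', '%H:%M:%S'),
                              help='strftime format used for timestamps')
        def configure(self, options, conf):
            super(Timestamper, self).configure(options, conf)
            if self.enabled:
                self.fmt = options.timestamp_format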
Logging
=======
nose uses the logging classes from the standard library. To enable users
to view debug messages easily, plugins should use ``logging.getLogger()`` to
acquire a logger in the ``nose.plugins`` namespace.
Recipes
=======
* Writing a plugin that monitors or controls test result output
Implement any or all of ``addError``, ``addFailure``, etc., to monitor test
results. If you also want to monitor output, implement
``setOutputStream`` and keep a reference to the output stream. If you
want to prevent the builtin ``TextTestResult`` output, implement
``setOutputStream`` and *return a dummy stream*. The default output will go
to the dummy stream, while you send your desired output to the real stream.
Example: `examples/html_plugin/htmlplug.py`_
* Writing a plugin that handles exceptions
Subclass :doc:`ErrorClassPlugin <errorclasses>`.
Examples: :doc:`nose.plugins.deprecated <deprecated>`,
:doc:`nose.plugins.skip <skip>`
* Writing a plugin that adds detail to error reports
Implement ``formatError`` and/or ``formatFailure``. The error tuple
you return (error class, error message, traceback) will replace the
original error tuple.
Examples: :doc:`nose.plugins.capture <capture>`,
:doc:`nose.plugins.failuredetail <failuredetail>`
* Writing a plugin that loads tests from files other than python modules
Implement ``wantFile`` and ``loadTestsFromFile``. In ``wantFile``,
return True for files that you want to examine for tests. In
``loadTestsFromFile``, for those files, return an iterable
containing TestCases (or yield them as you find them;
``loadTestsFromFile`` may also be a generator).
Example: :doc:`nose.plugins.doctests <doctests>`
* Writing a plugin that prints a report
Implement ``begin`` if you need to perform setup before testing
begins. Implement ``report`` and output your report to the provided stream.
Examples: :doc:`nose.plugins.cover <cover>`, :doc:`nose.plugins.prof <prof>`
* Writing a plugin that selects or rejects tests
Implement any or all ``want*`` methods. Return False to reject the test
candidate, True to accept it -- which means that the test candidate
will pass through the rest of the system, so you must be prepared to
load tests from it if tests can't be loaded by the core loader or
another plugin -- and None if you don't care.
Examples: :doc:`nose.plugins.attrib <attrib>`,
:doc:`nose.plugins.doctests <doctests>`, :doc:`nose.plugins.testid <testid>`
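As a sketch of the selection recipe above (the ``smoke`` marker attribute is hypothetical):
.. code-block:: python
    from nose.plugins import Plugin
    class OnlySmokeTests(Plugin):
        name = 'only-smoke'
        def wantFunction(self, function):
            # True accepts the candidate, False rejects it outright;
            # returning None would mean "no opinion".
            return bool(getattr(function, 'smoke', False))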
More Examples
=============
See any builtin plugin or example plugin in the examples_ directory in
the nose source distribution. There is a list of third-party plugins
`on jottit`_.
.. _examples/html_plugin/htmlplug.py: http://python-nose.googlecode.com/svn/trunk/examples/html_plugin/htmlplug.py
.. _examples: http://python-nose.googlecode.com/svn/trunk/examples
.. _on jottit: http://nose-plugins.jottit.com/
""" |
"""
=====================================================
Optimization and root finding (:mod:`scipy.optimize`)
=====================================================
.. currentmodule:: scipy.optimize
Optimization
============
Local Optimization
------------------
.. autosummary::
:toctree: generated/
minimize - Unified interface for minimizers of multivariate functions
minimize_scalar - Unified interface for minimizers of univariate functions
OptimizeResult - The optimization result returned by some optimizers
OptimizeWarning - The optimization encountered problems
The `minimize` function supports the following methods:
.. toctree::
optimize.minimize-neldermead
optimize.minimize-powell
optimize.minimize-cg
optimize.minimize-bfgs
optimize.minimize-newtoncg
optimize.minimize-lbfgsb
optimize.minimize-tnc
optimize.minimize-cobyla
optimize.minimize-slsqp
optimize.minimize-dogleg
optimize.minimize-trustncg
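A minimal sketch of the unified interface, using the Rosenbrock helpers listed further below: ::
>>> from scipy.optimize import minimize, rosen, rosen_der
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0, method='BFGS', jac=rosen_der)
>>> res.success
True
>>> res.x.round(2)             # the Rosenbrock minimum is at all ones
array([ 1.,  1.,  1.,  1.,  1.])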
The `minimize_scalar` function supports the following methods:
.. toctree::
optimize.minimize_scalar-brent
optimize.minimize_scalar-bounded
optimize.minimize_scalar-golden
The specific optimization method interfaces below in this subsection are
not recommended for use in new scripts; all of these methods are accessible
via a newer, more consistent interface provided by the functions above.
General-purpose multivariate methods:
.. autosummary::
:toctree: generated/
fmin - Nelder-Mead Simplex algorithm
fmin_powell - Powell's (modified) conjugate direction method
fmin_cg - Non-linear (Polak-Ribiere) conjugate gradient algorithm
fmin_bfgs - Quasi-Newton method (Broyden-Fletcher-Goldfarb-Shanno)
fmin_ncg - Line-search Newton Conjugate Gradient
Constrained multivariate methods:
.. autosummary::
:toctree: generated/
fmin_l_bfgs_b - Zhu, Byrd, and Nocedal's constrained optimizer
fmin_tnc - Truncated Newton code
fmin_cobyla - Constrained optimization by linear approximation
fmin_slsqp - Minimization using sequential least-squares programming
differential_evolution - stochastic minimization using differential evolution
Univariate (scalar) minimization methods:
.. autosummary::
:toctree: generated/
fminbound - Bounded minimization of a scalar function
brent - 1-D function minimization using Brent method
golden - 1-D function minimization using Golden Section method
Equation (Local) Minimizers
---------------------------
.. autosummary::
:toctree: generated/
leastsq - Minimize the sum of squares of M equations in N unknowns
least_squares - Feature-rich least-squares minimization.
nnls - Linear least-squares problem with non-negativity constraint
lsq_linear - Linear least-squares problem with bound constraints
Global Optimization
-------------------
.. autosummary::
:toctree: generated/
basinhopping - Basinhopping stochastic optimizer
brute - Brute force searching optimizer
differential_evolution - stochastic minimization using differential evolution
Rosenbrock function
-------------------
.. autosummary::
:toctree: generated/
rosen - The Rosenbrock function.
rosen_der - The derivative of the Rosenbrock function.
rosen_hess - The Hessian matrix of the Rosenbrock function.
rosen_hess_prod - Product of the Rosenbrock Hessian with a vector.
Fitting
=======
.. autosummary::
:toctree: generated/
curve_fit -- Fit curve to a set of points
Root finding
============
Scalar functions
----------------
.. autosummary::
:toctree: generated/
brentq - quadratic interpolation Brent method
brenth - Brent method, modified by Harris with hyperbolic extrapolation
ridder - Ridder's method
bisect - Bisection method
newton - Secant method or Newton's method
Fixed point finding:
.. autosummary::
:toctree: generated/
fixed_point - Single-variable fixed-point solver
Multidimensional
----------------
General nonlinear solvers:
.. autosummary::
:toctree: generated/
root - Unified interface for nonlinear solvers of multivariate functions
fsolve - Non-linear multi-variable equation solver
broyden1 - Broyden's first method
broyden2 - Broyden's second method
The `root` function supports the following methods:
.. toctree::
optimize.root-hybr
optimize.root-lm
optimize.root-broyden1
optimize.root-broyden2
optimize.root-anderson
optimize.root-linearmixing
optimize.root-diagbroyden
optimize.root-excitingmixing
optimize.root-krylov
optimize.root-dfsane
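A minimal sketch of `root` on a small two-variable system (chosen purely for illustration): ::
>>> from scipy.optimize import root
>>> def fun(p):
...     x, y = p
...     return [x + 0.5 * (x - y)**3 - 1.0,
...             0.5 * (y - x)**3 + y]
>>> sol = root(fun, [0, 0], method='hybr')
>>> sol.x
array([ 0.8411639,  0.1588361])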
Large-scale nonlinear solvers:
.. autosummary::
:toctree: generated/
newton_krylov
anderson
Simple iterations:
.. autosummary::
:toctree: generated/
excitingmixing
linearmixing
diagbroyden
:mod:`Additional information on the nonlinear solvers <scipy.optimize.nonlin>`
Linear Programming
==================
Simplex Algorithm:
.. autosummary::
:toctree: generated/
linprog -- Linear programming using the simplex algorithm
linprog_verbose_callback -- Sample callback function for linprog
The `linprog` function supports the following methods:
.. toctree::
optimize.linprog-simplex
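A minimal sketch (the coefficients are made up): minimize -x0 - 2*x1 subject to x0 + x1 <= 4 and x0 <= 3, with x0, x1 >= 0 by default: ::
>>> from scipy.optimize import linprog
>>> res = linprog(c=[-1, -2], A_ub=[[1, 1], [1, 0]], b_ub=[4, 3], method='simplex')
>>> res.x
array([ 0.,  4.])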
Assignment problems:
.. autosummary::
:toctree: generated/
linear_sum_assignment -- Solves the linear-sum assignment problem
Utilities
=========
.. autosummary::
:toctree: generated/
approx_fprime - Approximate the gradient of a scalar function
bracket - Bracket a minimum, given two starting points
check_grad - Check the supplied derivative using finite differences
line_search - Return a step that satisfies the strong Wolfe conditions
show_options - Show specific options of optimization solvers
LbfgsInvHessProduct - Linear operator for L-BFGS approximate inverse Hessian
""" |
"""
Discrete Fourier Transform (:mod:`numpy.fft`)
=============================================
.. currentmodule:: numpy.fft
Standard FFTs
-------------
.. autosummary::
:toctree: generated/
fft Discrete Fourier transform.
ifft Inverse discrete Fourier transform.
fft2 Discrete Fourier transform in two dimensions.
ifft2 Inverse discrete Fourier transform in two dimensions.
fftn Discrete Fourier transform in N-dimensions.
ifftn Inverse discrete Fourier transform in N dimensions.
Real FFTs
---------
.. autosummary::
:toctree: generated/
rfft Real discrete Fourier transform.
irfft Inverse real discrete Fourier transform.
rfft2 Real discrete Fourier transform in two dimensions.
irfft2 Inverse real discrete Fourier transform in two dimensions.
rfftn Real discrete Fourier transform in N dimensions.
irfftn Inverse real discrete Fourier transform in N dimensions.
Hermitian FFTs
--------------
.. autosummary::
:toctree: generated/
hfft Hermitian discrete Fourier transform.
ihfft Inverse Hermitian discrete Fourier transform.
Helper routines
---------------
.. autosummary::
:toctree: generated/
fftfreq Discrete Fourier Transform sample frequencies.
rfftfreq DFT sample frequencies (for usage with rfft, irfft).
fftshift Shift zero-frequency component to center of spectrum.
ifftshift Inverse of fftshift.
Background information
----------------------
Fourier analysis is fundamentally a method for expressing a function as a
sum of periodic components, and for recovering the function from those
components. When both the function and its Fourier transform are
replaced with discretized counterparts, it is called the discrete Fourier
transform (DFT). The DFT has become a mainstay of numerical computing in
part because of a very fast algorithm for computing it, called the Fast
Fourier Transform (FFT), which was known to Gauss (1805) and was brought
to light in its current form by NAME and NAME [CT]_. Press et al. [NR]_
provide an accessible introduction to Fourier analysis and its
applications.
Because the discrete Fourier transform separates its input into
components that contribute at discrete frequencies, it has a great number
of applications in digital signal processing, e.g., for filtering, and in
this context the discretized input to the transform is customarily
referred to as a *signal*, which exists in the *time domain*. The output
is called a *spectrum* or *transform* and exists in the *frequency
domain*.
Implementation details
----------------------
There are many ways to define the DFT, varying in the sign of the
exponent, normalization, etc. In this implementation, the DFT is defined
as
.. math::
A_k = \\sum_{m=0}^{n-1} a_m \\exp\\left\\{-2\\pi i{mk \\over n}\\right\\}
\\qquad k = 0,\\ldots,n-1.
The DFT is in general defined for complex inputs and outputs, and a
single-frequency component at linear frequency :math:`f` is
represented by a complex exponential
:math:`a_m = \\exp\\{2\\pi i\\,f m\\Delta t\\}`, where :math:`\\Delta t`
is the sampling interval.
The values in the result follow so-called "standard" order: If ``A =
fft(a, n)``, then ``A[0]`` contains the zero-frequency term (the sum of
the signal), which is always purely real for real inputs. Then ``A[1:n/2]``
contains the positive-frequency terms, and ``A[n/2+1:]`` contains the
negative-frequency terms, in order of decreasingly negative frequency.
For an even number of input points, ``A[n/2]`` represents both positive and
negative Nyquist frequency, and is also purely real for real input. For
an odd number of input points, ``A[(n-1)/2]`` contains the largest positive
frequency, while ``A[(n+1)/2]`` contains the largest negative frequency.
The routine ``np.fft.fftfreq(n)`` returns an array giving the frequencies
of corresponding elements in the output. The routine
``np.fft.fftshift(A)`` shifts transforms and their frequencies to put the
zero-frequency components in the middle, and ``np.fft.ifftshift(A)`` undoes
that shift.
When the input `a` is a time-domain signal and ``A = fft(a)``, ``np.abs(A)``
is its amplitude spectrum and ``np.abs(A)**2`` is its power spectrum.
The phase spectrum is obtained by ``np.angle(A)``.
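A short sketch of these conventions (the 2 Hz tone is made up for illustration): ::
>>> import numpy as np
>>> t = np.arange(8) / 8.0                # 8 samples, sampling interval 1/8 s
>>> a = np.sin(2 * np.pi * 2 * t)         # a 2 Hz tone
>>> A = np.fft.fft(a)
>>> np.fft.fftfreq(8, d=1/8.0)            # frequency of each output bin
array([ 0.,  1.,  2.,  3., -4., -3., -2., -1.])
>>> np.abs(A).round(3)                    # amplitude spectrum peaks at +/- 2 Hz
array([ 0.,  0.,  4.,  0.,  0.,  0.,  4.,  0.])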
The inverse DFT is defined as
.. math::
a_m = \\frac{1}{n}\\sum_{k=0}^{n-1}A_k\\exp\\left\\{2\\pi i{mk\\over n}\\right\\}
\\qquad m = 0,\\ldots,n-1.
It differs from the forward transform by the sign of the exponential
argument and the default normalization by :math:`1/n`.
Normalization
-------------
The default normalization has the direct transforms unscaled and the inverse
transforms are scaled by :math:`1/n`. It is possible to obtain unitary
transforms by setting the keyword argument ``norm`` to ``"ortho"`` (default is
`None`) so that both direct and inverse transforms will be scaled by
:math:`1/\\sqrt{n}`.
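For example (a minimal sketch): ::
>>> a = np.ones(4)
>>> np.fft.fft(a, norm="ortho")           # scaled by 1/sqrt(4) instead of left unscaled
array([ 2.+0.j,  0.+0.j,  0.+0.j,  0.+0.j])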
Real and Hermitian transforms
-----------------------------
When the input is purely real, its transform is Hermitian, i.e., the
component at frequency :math:`f_k` is the complex conjugate of the
component at frequency :math:`-f_k`, which means that for real
inputs there is no information in the negative frequency components that
is not already available from the positive frequency components.
The family of `rfft` functions is
designed to operate on real inputs, and exploits this symmetry by
computing only the positive frequency components, up to and including the
Nyquist frequency. Thus, ``n`` input points produce ``n/2+1`` complex
output points. The inverses of this family assume the same symmetry of
their input, and for an output of ``n`` points use ``n/2+1`` input points.
Correspondingly, when the spectrum is purely real, the signal is
Hermitian. The `hfft` family of functions exploits this symmetry by
using ``n/2+1`` complex points in the input (time) domain for ``n`` real
points in the frequency domain.
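A small sketch of the real-input case: ::
>>> a = np.array([0.0, 1.0, 0.0, -1.0])   # n = 4 real input points
>>> np.fft.rfft(a)                        # n/2 + 1 = 3 complex output points
array([ 0.+0.j,  0.-2.j,  0.+0.j])
>>> np.fft.irfft(np.fft.rfft(a), n=4)     # round trip back to the real signal
array([ 0.,  1.,  0., -1.])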
In higher dimensions, FFTs are used, e.g., for image analysis and
filtering. The computational efficiency of the FFT means that it can
also be a faster way to compute large convolutions, using the property
that a convolution in the time domain is equivalent to a point-by-point
multiplication in the frequency domain.
Higher dimensions
-----------------
In two dimensions, the DFT is defined as
.. math::
A_{kl} = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1}
a_{mn}\\exp\\left\\{-2\\pi i \\left({mk\\over M}+{nl\\over N}\\right)\\right\\}
\\qquad k = 0, \\ldots, M-1;\\quad l = 0, \\ldots, N-1,
which extends in the obvious way to higher dimensions, and the inverses
in higher dimensions also extend in the same way.
References
----------
.. [CT] NAME, NAME and John W. NAME, 1965, "An algorithm for the
machine calculation of complex Fourier series," *Math. Comput.*
19: 297-301.
.. [NR] NAME NAME NAME and NAME
2007, *Numerical Recipes: The Art of Scientific Computing*, ch.
12-13. Cambridge Univ. Press, Cambridge, UK.
Examples
--------
For examples, see the various functions.
""" |
"""
==============
Array Creation
==============
Introduction
============
There are 5 general mechanisms for creating arrays:
1) Conversion from other Python structures (e.g., lists, tuples)
2) Intrinsic numpy array creation objects (e.g., arange, ones, zeros,
etc.)
3) Reading arrays from disk, either from standard or custom formats
4) Creating arrays from raw bytes through the use of strings or buffers
5) Use of special library functions (e.g., random)
This section will not cover means of replicating, joining, or otherwise
expanding or mutating existing arrays. Nor will it cover creating object
arrays or structured arrays. Both of those are covered in their own sections.
Converting Python array_like Objects to Numpy Arrays
====================================================
In general, numerical data arranged in an array-like structure in Python can
be converted to arrays through the use of the array() function. The most
obvious examples are lists and tuples. See the documentation for array() for
details for its use. Some objects may support the array-protocol and allow
conversion to arrays this way. A simple way to find out if the object can be
converted to a numpy array using array() is simply to try it interactively and
see if it works! (The Python Way).
Examples: ::
>>> x = np.array([2,3,1,0])
>>> x = np.array([2, 3, 1, 0])
>>> x = np.array([[1,2.0],[0,0],(1+1j,3.)])  # note mix of tuple and lists, and types
>>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]])
Intrinsic Numpy Array Creation
==============================
Numpy has built-in functions for creating arrays from scratch:
zeros(shape) will create an array filled with 0 values with the specified
shape. The default dtype is float64.
::
>>> np.zeros((2, 3))
array([[ 0., 0., 0.], [ 0., 0., 0.]])
ones(shape) will create an array filled with 1 values. It is identical to
zeros in all other respects.
arange() will create arrays with regularly incrementing values. Check the
docstring for complete information on the various ways it can be used. A few
examples will be given here: ::
>>> np.arange(10)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> np.arange(2, 10, dtype=np.float)
array([ 2., 3., 4., 5., 6., 7., 8., 9.])
>>> np.arange(2, 3, 0.1)
array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9])
Note that there are some subtleties regarding the last usage that the user
should be aware of that are described in the arange docstring.
linspace() will create arrays with a specified number of elements, and
spaced equally between the specified beginning and end values. For
example: ::
>>> np.linspace(1., 4., 6)
array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ])
The advantage of this creation function is that one can guarantee the
number of elements and the starting and end point, which arange()
generally will not do for arbitrary start, stop, and step values.
indices() will create a set of arrays (stacked as a one-higher dimensioned
array), one per dimension with each representing variation in that dimension.
An example illustrates much better than a verbal description: ::
>>> np.indices((3,3))
array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]])
This is particularly useful for evaluating functions of multiple dimensions on
a regular grid.
Reading Arrays From Disk
========================
This is presumably the most common case of large array creation. The details,
of course, depend greatly on the format of data on disk and so this section
can only give general pointers on how to handle various formats.
Standard Binary Formats
-----------------------
Various fields have standard formats for array data. The following lists the
ones with known python libraries to read them and return numpy arrays (there
may be others for which it is possible to read and convert to numpy arrays so
check the last section as well).
::
HDF5: PyTables
FITS: PyFITS
Examples of formats that cannot be read directly but for which it is not hard to
convert are those formats supported by libraries like PIL (able to read and
write many image formats such as jpg, png, etc).
Common ASCII Formats
------------------------
Comma Separated Value files (CSV) are widely used (and an export and import
option for programs like Excel). There are a number of ways of reading these
files in Python. There are CSV functions in Python and functions in pylab
(part of matplotlib).
More generic ascii files can be read using the io package in scipy.
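For instance, a CSV file can often be read directly with numpy's own text reader (the
filename and layout here are hypothetical): ::
>>> data = np.genfromtxt('measurements.csv', delimiter=',', skip_header=1)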
Custom Binary Formats
---------------------
There are a variety of approaches one can use. If the file has a relatively
simple format then one can write a simple I/O library and use the numpy
fromfile() function and .tofile() method to read and write numpy arrays
directly (mind your byteorder though!) If a good C or C++ library exists that
reads the data, one can wrap that library with a variety of techniques though
that certainly is much more work and requires significantly more advanced
knowledge to interface with C or C++.
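A minimal sketch of the fromfile/tofile round trip mentioned above (the filename is made up;
no dtype or shape information is stored in the file, so the dtype must be supplied again): ::
>>> a = np.arange(10, dtype=np.float32)
>>> a.tofile('raw.dat')                            # raw bytes, no header
>>> b = np.fromfile('raw.dat', dtype=np.float32)   # same dtype must be given back
>>> np.array_equal(a, b)
True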
Use of Special Libraries
------------------------
There are libraries that can be used to generate arrays for special purposes
and it isn't possible to enumerate all of them. The most common case is the use
of the many array generation functions in random that can generate arrays of
random values, and some utility functions to generate special matrices (e.g.
diagonal).
""" |
"""
===============
Array Internals
===============
Internal organization of numpy arrays
=====================================
It helps to understand a bit about how numpy arrays are handled under the covers
in order to understand numpy better. This section will not go into great detail.
Those wishing to understand the full details are referred to Travis Oliphant's
book "Guide to Numpy".
Numpy arrays consist of two major components, the raw array data (from now on,
referred to as the data buffer), and the information about the raw array data.
The data buffer is typically what people think of as arrays in C or Fortran,
a contiguous (and fixed) block of memory containing fixed sized data items.
Numpy also contains a significant set of data that describes how to interpret
the data in the data buffer. This extra information contains (among other things):
1) The basic data element's size in bytes
2) The start of the data within the data buffer (an offset relative to the
beginning of the data buffer).
3) The number of dimensions and the size of each dimension
4) The separation between elements for each dimension (the 'stride'). This
does not have to be a multiple of the element size
5) The byte order of the data (which may not be the native byte order)
6) Whether the buffer is read-only
7) Information (via the dtype object) about the interpretation of the basic
data element. The basic data element may be as simple as an int or a float,
or it may be a compound object (e.g., struct-like), a fixed character field,
or Python object pointers.
8) Whether the array is to be interpreted as C-order or Fortran-order.
This arrangement allows for very flexible use of arrays. One thing that it allows
is simple changes of the metadata to change the interpretation of the array buffer.
Changing the byteorder of the array is a simple change involving no rearrangement
of the data. The shape of the array can be changed very easily without changing
anything in the data buffer or any data copying at all.
Among other things that are made possible is that one can create a new array metadata
object that uses the same data buffer
to create a new view of that data buffer that has a different interpretation
of the buffer (e.g., different shape, offset, byte order, strides, etc) but
shares the same data bytes. Many operations in numpy do just this such as
slices. Other operations, such as transpose, don't move data elements
around in the array, but rather change the information about the shape and strides
so that the indexing of the array changes, but the data in the buffer doesn't move.
Typically these new versions of the array metadata, which share the same data buffer, are
new 'views' into the data buffer. There is a different ndarray object, but it
uses the same data buffer. This is why it is necessary to force copies through
use of the .copy() method if one really wants to make a new and independent
copy of the data buffer.
New views into arrays mean the object reference counts for the data buffer
increase. Simply doing away with the original array object will not remove the
data buffer if other views of it still exist.
Multidimensional Array Indexing Order Issues
============================================
What is the right way to index
multi-dimensional arrays? Before you jump to conclusions about the one and
true way to index multi-dimensional arrays, it pays to understand why this is
a confusing issue. This section will try to explain in detail how numpy
indexing works and why we adopt the convention we do for images, and when it
may be appropriate to adopt other conventions.
The first thing to understand is
that there are two conflicting conventions for indexing 2-dimensional arrays.
Matrix notation uses the first index to indicate which row is being selected and
the second index to indicate which column is selected. This is opposite the
geometrically oriented-convention for images where people generally think the
first index represents x position (i.e., column) and the second represents y
position (i.e., row). This alone is the source of much confusion;
matrix-oriented users and image-oriented users expect two different things with
regard to indexing.
The second issue to understand is how indices correspond
to the order the array is stored in memory. In Fortran the first index is the
most rapidly varying index when moving through the elements of a two
dimensional array as it is stored in memory. If you adopt the matrix
convention for indexing, then this means the matrix is stored one column at a
time (since the first index moves to the next row as it changes). Thus Fortran
is considered a Column-major language. C has just the opposite convention. In
C, the last index changes most rapidly as one moves through the array as
stored in memory. Thus C is a Row-major language. The matrix is stored by
rows. Note that in both cases it presumes that the matrix convention for
indexing is being used, i.e., for both Fortran and C, the first index is the
row. Note this convention implies that the indexing convention is invariant
and that the data order changes to keep that so.
But that's not the only way
to look at it. Suppose one has large two-dimensional arrays (images or
matrices) stored in data files. Suppose the data are stored by rows rather than
by columns. If we are to preserve our index convention (whether matrix or
image) that means that depending on the language we use, we may be forced to
reorder the data if it is read into memory to preserve our indexing
convention. For example if we read row-ordered data into memory without
reordering, it will match the matrix indexing convention for C, but not for
Fortran. Conversely, it will match the image indexing convention for Fortran,
but not for C. For C, if one is using data stored in row order, and one wants
to preserve the image index convention, the data must be reordered when
reading into memory.
In the end, which you do for Fortran or C depends on
which is more important, not reordering data or preserving the indexing
convention. For large images, reordering data is potentially expensive, and
often the indexing convention is inverted to avoid that.
The situation with
numpy makes this issue yet more complicated. The internal machinery of numpy
arrays is flexible enough to accept any ordering of indices. One can simply
reorder indices by manipulating the internal stride information for arrays
without reordering the data at all. Numpy will know how to map the new index
order to the data without moving the data.
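A small sketch of this: transposing an array only swaps the shape and strides, the data buffer is untouched: ::
>>> import numpy as np
>>> a = np.arange(6, dtype=np.int64).reshape(2, 3)
>>> a.strides                 # (bytes to step one row, bytes to step one column)
(24, 8)
>>> t = a.T                   # a view with shape and strides swapped
>>> t.strides
(8, 24)
>>> t.base is a               # same underlying data buffer
True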
So if this is true, why not choose
the index order that matches what you most expect? In particular, why not define
row-ordered images to use the image convention? (This is sometimes referred
to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN'
order options for array ordering in numpy.) The drawback of doing this is
potential performance penalties. It's common to access the data sequentially,
either implicitly in array operations or explicitly by looping over rows of an
image. When that is done, then the data will be accessed in non-optimal order.
As the first index is incremented, what is actually happening is that elements
spaced far apart in memory are being sequentially accessed, with usually poor
memory access speeds. For example, for a two dimensional image 'im' defined so
that im[0, 10] represents the value at x=0, y=10. To be consistent with usual
Python behavior then im[0] would represent a column at x=0. Yet that data
would be spread over the whole array since the data are stored in row order.
Despite the flexibility of numpy's indexing, it can't really paper over the fact
that basic operations are rendered inefficient because of data order, or that
getting contiguous subarrays is still awkward (e.g., im[:,0] for the first row,
vs im[0]). Thus one can't use an idiom such as 'for row in im'; 'for col in im'
does work, but doesn't yield contiguous column data.
As it turns out, numpy is
smart enough when dealing with ufuncs to determine which index is the most
rapidly varying one in memory and uses that for the innermost loop. Thus for
ufuncs there is no large intrinsic advantage to either approach in most cases.
On the other hand, use of .flat with a FORTRAN-ordered array will lead to
non-optimal memory access as adjacent elements in the flattened array (iterator,
actually) are not contiguous in memory.
Indeed, the fact is that Python
indexing on lists and other sequences naturally leads to an outside-to-inside
ordering (the first index gets the largest grouping, the next the next largest,
and the last gets the smallest element). Since image data are normally stored
by rows, this corresponds to position within rows being the last item indexed.
If you do want to use Fortran ordering realize that
there are two approaches to consider: 1) accept that the first index is just not
the most rapidly changing in memory and have all your I/O routines reorder
your data when going from memory to disk or vice versa, or 2) use numpy's
mechanism for mapping the first index to the most rapidly varying data. We
recommend the former if possible. The disadvantage of the latter is that many
of numpy's functions will yield arrays without Fortran ordering unless you are
careful to use the 'order' keyword. Doing this would be highly inconvenient.
Otherwise we recommend simply learning to reverse the usual order of indices
when accessing elements of an array. Granted, it goes against the grain, but
it is more in line with Python semantics and the natural order of the data.
""" |
"""
LAADS (Level 1 Atmosphere Archive and Distribution System)
NASA's LAADS hosts petabytes of land and atmospheric data obtained from a
wide array of different sources
--------------------------------------------------------------------------------------------
MODIS (Moderate Resolution Imaging Spectroradiometer)
- MODIS Collection 3 - Atmosphere 2002 Golden Data Set
- MODIS Collection 4 - Atmosphere 2002 Golden Data set
- MODIS Collection 4 - Gap Filled and Smoothed Land Products for NACP
- MODIS Collection 4.1 - Land Surface Temperature
- MODIS Collection 5 - L1, Atmosphere (Aqua 2002-2008, Terra 2000-March 2010) and Land
- MODIS Collection 5.1 - Selected Atmosphere and Land Products
- MODIS Collection 5.5 - Selected Land Products
- MODIS Collection 6 - L1. Atmosphere and Land data will be available after 2012
TERRA (EOS AM) and AQUA (EOS PM) Satellites
- Terra & Aqua Level 0 Products
- Terra & Aqua Level 1 Products
- Terra & Aqua Atmosphere Level 2 Products
- Terra & Aqua/Combined Atmosphere Level 3 Products
- Terra & Aqua Land Level 2 Products
- Terra & Aqua/Combined Land Level 3/Level 4 CMG Products
- Terra & Aqua Land Level 3/Level 4 Daily Tiled LST Products
- Combined Land Level 3/Level 4 4-Day Tiled Products
- Terra & Aqua/Combined Land Level 3/Level 4 8-Day Tiled Products
- Terra & Aqua/Combined Land Level 3/Level 4 16-Day Tiled Products
- Terra & Aqua/Combined Land Level 3/Level 4 Monthly Tiled Products
- Terra & Aqua/Combined Land Level 3/Level 4 Yearly Tiled Products
- Terra NACP 8-Day and Annual Tiled Products
-------------------------------------------------------------------------------------------
MAIAC MODIS (MultiAngle Implementation of Atmospheric Correction MODIS)
- MAIACBRF: Daily Surface Reflectance
- MAIACAOT: Daily Aerosol Optical Thickness
- MAIACRTLS: 8-day BRDF model parameters
-------------------------------------------------------------------------------------------
VIIRS (Visible Infrared Imaging Radiometer Suite)
- NPP Level 0 Products
- NPP Level 1 Products
- NPP Level 1 5-Minute Products
- NPP Level 1 Daily Products
- NPP Level 2 Products
- NPP Level 2 5-Minute Products
- NPP Level 2G Daily Products
- NPP Level 3 Products
- NPP Level 3 Daily Products
- NPP Level 3 Daily Tiled Products
- NPP Level 3 8-Day Tiled Products
- NPP Level 3 16-Day Tiled Products
- NPP Level 3 17-Day Tiled Products
- NPP Level 3 Monthly Tiled Products
- NPP Level 3 Quarterly Tiled Products
-------------------------------------------------------------------------------------------
eMAS (enhanced MODIS Airborne Simulator)
+----------------+-------------+----------------------------------------------+
| Campaign | Dates | Flight Numbers |
+----------------+-------------+----------------------------------------------+
| TC4 | 07/07/2007- | 07-915, 07-916, 07-918, 07-919, 07-920, |
| | 08/08/2007 | 07-921 |
+----------------+-------------+----------------------------------------------+
| CLASIC | 06/07/2007- | 07-619, 07-621, 07-622, 07-625, 07-626, |
| | 06/30/2007 | 07-627, 07-628, 07-630, 07-631, 07-632 |
+----------------+-------------+----------------------------------------------+
| CCVEX | 07/24/2006- | 06-610, 06-611, 06-612, 06-613, 06-614, |
| | 08/15/2006 | 06-615, 06-616, 06-617, 06-618, 06-619, |
| | | 06-620, 06-621, 06-622 |
+----------------+-------------+----------------------------------------------+
| DFRC 05 | 10/19/2005- | 06-901, 06-907, 06-908 |
| | 12/09/2005 | |
+----------------+-------------+----------------------------------------------+
| DFRC 06 | 09/26/2006- | 06-602, 06-603, 06-630 |
| | 10/13/2006 | |
+----------------+-------------+----------------------------------------------+
| TCSP | 07/01/2005- | 05-921, 05-922, 05-923, 05-924, 05-925, |
| | 07/28/2005 | 05-926, 05-927, 05-928, 05-929, 05-930, |
| | | 05-931, 05-932, 05-933, 05-934, 05-935, |
| | | 05-936 |
+----------------+-------------+----------------------------------------------+
| SSMIS #3 | 03/07/2005- | 05-910, 05-911, 05-912 |
| | 03/16/2005 | |
+----------------+-------------+----------------------------------------------+
| SSMIS #2 | 12/02/2004- | 05-904, 05-905 |
| | 12/20/2004 | |
+----------------+-------------+----------------------------------------------+
| SSMIS #1 | 03/15/2004- | 04-912, 04-913, 04-914, 04-917, 04-918 |
| | 03/26/2004 | |
+----------------+-------------+----------------------------------------------+
| DFRC 04 | 02/11/2004- | 04-906, 04-907, 04-919, 04-920, 04-921, |
| | 10/28/2004 | 04-940, 04-941, 04-942, 04-943, 04-944, |
| | | 04-945, 04-946, 04-953, 04-954, 04-955, |
| | | 04-956, 04-959, 05-901, 05-902 |
+----------------+-------------+----------------------------------------------+
| ATOST | 11/17/2003- | 04-615, 04-616, 04-619, 04-621, 04-622, |
| | 12/17/2003 | 04-623, 04-624, 04-625, 04-626, 04-627 |
+----------------+-------------+----------------------------------------------+
| GLAS | 10/16/2003- | 04-605, 04-606, 04-607 |
| | 10/18/2003 | |
+----------------+-------------+----------------------------------------------+
| DFRC 03 | 04/09/2003- | 03-623, 03-624, 03-942, 03-943, 03-944, |
| | 06/29/2003 | 03-945, 03-946, 03-947 |
+----------------+-------------+----------------------------------------------+
| USERNAME | 02/18/2003- | 03-610, 03-611, 03-612, 03-613, 03-614, |
| | 04/07/2003 | 03-615, 03-616, 03-617, 03-618, 03-619, |
| | | 03-622, 03-625, 03-931, 03-932, 03-933, |
| | | 03-934, 03-935 |
+----------------+-------------+----------------------------------------------+
| TX-2002 | 11/20/2002- | 03-911, 03-912, 03-913, 03-914, 03-915, |
| | 12/13/2002 | 03-916, 03-917, 03-918, 03-919, 03-920, |
| | | 03-921, 03-922, 03-923 |
+----------------+-------------+----------------------------------------------+
| DFRC 02 | 08/07/2002- | 02-926, 02-927, 02-928, 02-929, 02-930, |
| | 08/10/2002 | 02-931, 02-932, 02-959, 02-960, 02-961 |
+----------------+-------------+----------------------------------------------+
| CRYSTAL-FACE | 07/01/2002- | 02-941, 02-942, 02-943, 02-944, 02-945, |
| | 07/31/2002 | 02-946, 02-947, 02-948, 02-949, 02-950, |
| | | 02-951, 02-952, 02-953, 02-954, 02-955, |
| | | 02-956, 02-957 |
+----------------+-------------+----------------------------------------------+
| CAMEX 4 | 08/13/2001- | 01-122, 01-130, 01-131, 01-132, 01-133, |
| | 09/26/2001 | 01-135, 01-136, 01-137, 01-138, 01-139, |
| | | 01-140, 01-141, 01-142, 01-143 |
+----------------+-------------+----------------------------------------------+
| DFRC 01 | 03/08/2001- | 01-046, 01-047, 01-048, 01-059, 01-061, |
| | 10/03/2001 | 01-062, 01-093, 01-099, 02-602, 02-603 |
+----------------+-------------+----------------------------------------------+
| NAME | 07/09/2001- | 01-100, 01-101, 01-102, 01-103, 01-104, |
| | 08/03/2001 | 01-105, 01-106, 01-107, 01-108, 01-109, |
| | | 01-110 |
+----------------+-------------+----------------------------------------------+
| TX-2001 | 03/14/2001- | 01-049, 01-050, 01-051, 01-052, 01-053, |
| | 04/05/2001 | 01-054, 01-055, 01-056, 01-057, 01-058 |
+----------------+-------------+----------------------------------------------+
| Pre-SAFARI | 07/25/2000- | 00-137, 00-140, 00-142 |
| | 08/03/2000 | |
+----------------+-------------+----------------------------------------------+
| SAFARI 2000 | 08/06/2000- | 00-143, 00-147, 00-148, 00-149, 00-150, |
| | 09/25/2000 | 00-151, 00-152, 00-153, 00-155, 00-156, |
| | | 00-157, 00-158, 00-160, 00-175, 00-176, |
| | | 00-177, 00-178, 00-179, 00-180 |
+----------------+-------------+----------------------------------------------+
| WISC-T2000 | 02/24/2000- | 00-062, 00-063, 00-064, 00-065, 00-066, |
| | 03/13/2000 | 00-067, 00-068, 00-069, 00-070, 00-071 |
+----------------+-------------+----------------------------------------------+
| Wallops 2000 | 05/24/2000- | 00-110, 00-111 |
| | 05/25/2000 | |
+----------------+-------------+----------------------------------------------+
| DFRC 00 | 02/03/2000- | 00-057, 00-058, 00-059, 00-060, 00-112, |
| | 10/13/2000 | 00-113, 00-114, 00-115, 00-116, 00-117, |
| | | 00-118, 00-119, 01-001, 01-003 |
+----------------+-------------+----------------------------------------------+
| Hawaii 2000 | 04/04/2000- | 00-077, 00-079, 00-080, 00-081, 00-082, |
| | 04/27/2000 | 00-083, 00-084, 00-086, 00-087, 00-088, |
| | | 00-089, 00-090, 00-091, 00-092, 00-093 |
+----------------+-------------+----------------------------------------------+
| NAME | 05/07/1999- | 99-065, 99-067, 99-068, 99-069, 99-070, |
| | 05/27/1999 | 99-071, 99-072, 99-073, 99-074, 99-075, |
| | | 99-076, 99-077 |
+----------------+-------------+----------------------------------------------+
| DFRC 99 | 06/30/1999- | 99-091, 99-090, 99-087, 99-086, 99-085, |
| | 10/19/1999 | 00-013, 00-011, 00-008 |
+----------------+-------------+----------------------------------------------+
| WINTEX | 03/15/1999- | 99-050, 99-051, 99-053, 99-054, 99-055, |
| | 04/03/1999 | 99-056, 99-057, 99-058, 99-059, 99-060 |
+----------------+-------------+----------------------------------------------+
| TRMM-LBA | 01/22/1999- | 99-029, 99-030, 99-031, 99-032, 99-033, |
| | 02/23/1999 | 99-034, 99-037, 99-038, 99-039, 99-040, |
| | | 99-042, 99-043, 99-044, 99-045 |
+----------------+-------------+----------------------------------------------+
| DFRC 98 | 03/09/1998- | 98-031, 98-032, 98-033, 98-036, 98-040, |
| | 01/04/1999 | 98-041, 98-043, 98-078, 98-079, 98-080, |
| | | 99-017, 99-018, 99-019, 99-020, 99-022, |
| | | 99-023, 99-024 |
+----------------+-------------+----------------------------------------------+
| Wallops 98 | 07/11/1998- | 98-086, 98-087, 98-088, 98-089, 98-090 |
| | 07/16/1998 | |
+----------------+-------------+----------------------------------------------+
| FIRE-ACE | 05/13/1998- | 98-063, 98-064, 98-065, 98-066, 98-067, |
| | 06/08/1998 | 98-068, 98-069, 98-070, 98-071, 98-072, |
| | | 98-073, 98-074, 98-075, 98-076, 98-077 |
+----------------+-------------+----------------------------------------------+
| Wallops 97 | 08/01/1997- | 97-135, 97-136, 97-137, 97-138, 97-139, |
| | 08/21/1997 | 97-140, 97-141, 97-142 |
+----------------+-------------+----------------------------------------------+
| ARC 97 | 03/03/1997- | 97-062, 97-063, 97-064, 97-113, 97-114, |
| | 07/30/1997 | 97-126, 97-127, 97-128, 97-133 |
+----------------+-------------+----------------------------------------------+
| WINCE | 01/28/1997- | 97-041, 97-042, 97-043, 97-044, 97-045, |
| | 02/13/1997 | 97-046, 97-047, 97-048, 97-049, 97-050 |
+----------------+-------------+----------------------------------------------+
| ARC 96 | 09/09/1996- | 96-176, 96-177, 97-012, 97-014 |
| | 11/11/1996 | |
+----------------+-------------+----------------------------------------------+
| Spokane 96 | 07/30/1996- | 96-156, 96-158, 96-159, 96-160, 96-162 |
| | 08/20/1996 | |
+----------------+-------------+----------------------------------------------+
| TARFOX | 07/07/1996- | 96-145, 96-146, 96-147, 96-148, 96-149, |
| | 07/26/1996 | 96-150, 96-151, 96-152, 96-153, 96-154 |
+----------------+-------------+----------------------------------------------+
| Wallops 96 | 07/05/1996 | 96-144 |
+----------------+-------------+----------------------------------------------+
| ARC 96 | 04/02/1996- | 96-088, 96-089, 96-126, 96-127, 96-128 |
| | 06/05/1996 | |
+----------------+-------------+----------------------------------------------+
| SUCCESS | 04/08/1996- | 96-100, 96-101, 96-102, 96-103, 96-104, |
| | 05/15/1996 | 96-105, 96-106, 96-107, 96-108, 96-109, |
| | | 96-110, 96-111, 96-112, 96-113, 96-114, |
| | | 96-115, 96-116, 96-117 |
+----------------+-------------+----------------------------------------------+
| BOREAS 96 | 08/14/1996 | 96-161 |
+----------------+-------------+----------------------------------------------+
| ARESE | 09/25/1995- | 95-175, 95-176, 95-197, 95-198, 95-199, |
| | 10/23/1995 | 96-001, 96-002, 96-003, 96-004, 96-006, |
| | | 96-007, 96-008, 96-009, 96-010, 96-020 |
+----------------+-------------+----------------------------------------------+
| PR 95 | 09/21/1995- | 95-172, 95-173, 95-174 |
| | 09/23/1995 | |
+----------------+-------------+----------------------------------------------+
| SCAR-B | 08/13/1995- | 95-158, 95-160, 95-161, 95-162, 95-163, |
| | 09/11/1995 | 95-164, 95-165, 95-166, 95-167, 95-168, 95-169, |
| | | 95-170 |
+----------------+-------------+----------------------------------------------+
| ARC 95 | 06/19/1995- | 95-122, 95-123, 95-124, 95-125, 95-126, |
| | 08/10/1995 | 95-153, 95-156, 95-157 |
+----------------+-------------+----------------------------------------------+
| ARMCAS | 06/02/1995- | 95-112, 95-113, 95-115, 95-116, 95-117, |
| | 06/16/1995 | 95-118, 95-119, 95-120, 95-121 |
+----------------+-------------+----------------------------------------------+
| ALASKA-April95 | 03/29/1995- | 95-069, 95-070, 95-071, 95-072, 95-073, |
| | 04/25/1995 | 95-074, 95-075, 95-076, 95-077, 95-078, |
| | | 95-079 |
+----------------+-------------+----------------------------------------------+
... And so on ...
--------------------------------------------------------------------------------------------
In a final product we would continue listing the various products
provided by the LAADS, Landsat, and EOS projects. We propose a logical
and intuitive database structure which would abstract the satellite/sensor
relationships and simplify the process of acquiring data for practitioners.
Data field = {Land, Atmospheric, Ocean}
Data type = {Temperature, Height, Color, Velocity, etc...}
Sensors = {MODIS, VIIRS, AVHRR, SeaWifs, TOMS, NSCAT, MISR, MERIS, etc...}
There will be one-to-many relationships, as each dictionary can have multiple
children in the succeeding dictionary. Next, each sensor would be broken down
into Collections/(Archive Set), and then Products/Levels, distinguishing between
the missions/projects and post-processing algorithms respectively. Finally, the
date and time of acquisition. The Collections/(Archive Set) and Products/Levels
would not be required, but would provide extra levels of filtering for the more
savvy users.
...For now we will stop here and continue development of the prototype. """ |
"""
Define a simple format for saving numpy arrays to disk with the full
information about them.
The ``.npy`` format is the standard binary file format in NumPy for
persisting a *single* arbitrary NumPy array on disk. The format stores all
of the shape and dtype information necessary to reconstruct the array
correctly even on another machine with a different architecture.
The format is designed to be as simple as possible while achieving
its limited goals.
The ``.npz`` format is the standard format for persisting *multiple* NumPy
arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy``
files, one for each array.
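A brief usage sketch (standard ``numpy`` calls; the file names are
arbitrary): ::
>>> import numpy as np
>>> a = np.arange(6).reshape(2, 3)
>>> np.save('a.npy', a)                # one array -> .npy
>>> np.savez('ab.npz', a=a, b=a * 2)   # several arrays -> .npz (zip of .npy files)
>>> np.load('a.npy').shape
(2, 3)
>>> sorted(np.load('ab.npz').files)
['a', 'b']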
Capabilities
------------
- Can represent all NumPy arrays including nested record arrays and
object arrays.
- Represents the data in its native binary form.
- Supports Fortran-contiguous arrays directly.
- Stores all of the necessary information to reconstruct the array
including shape and dtype on a machine of a different
architecture. Both little-endian and big-endian arrays are
supported, and a file with little-endian numbers will yield
a little-endian array on any machine reading the file. The
types are described in terms of their actual sizes. For example,
if a machine with a 64-bit C "long int" writes out an array with
"long ints", a reading machine with 32-bit C "long ints" will yield
an array with 64-bit integers.
- Is straightforward to reverse engineer. Datasets often live longer than
the programs that created them. A competent developer should be
able to create a solution in his preferred programming language to
read most ``.npy`` files that he has been given without much
documentation.
- Allows memory-mapping of the data. See `open_memmap`.
- Can be read from a filelike stream object instead of an actual file.
- Stores object arrays, i.e. arrays containing elements that are arbitrary
Python objects. Files with object arrays are not to be mmapable, but
can be read and written to disk.
Limitations
-----------
- Arbitrary subclasses of numpy.ndarray are not completely preserved.
Subclasses will be accepted for writing, but only the array data will
be written out. A regular numpy.ndarray object will be created
upon reading the file.
.. warning::
Due to limitations in the interpretation of structured dtypes, dtypes
with fields with empty names will have the names replaced by 'f0', 'f1',
etc. Such arrays will not round-trip through the format entirely
accurately. The data is intact; only the field names will differ. We are
working on a fix for this. This fix will not require a change in the
file format. The arrays with such structures can still be saved and
restored, and the correct dtype may be restored by using the
``loadedarray.view(correct_dtype)`` method.
File extensions
---------------
We recommend using the ``.npy`` and ``.npz`` extensions for files saved
in this format. This is by no means a requirement; applications may wish
to use these file formats but use an extension specific to the
application. In the absence of an obvious alternative, however,
we suggest using ``.npy`` and ``.npz``.
Version numbering
-----------------
The version numbering of these formats is independent of NumPy version
numbering. If the format is upgraded, the code in `numpy.io` will still
be able to read and write Version 1.0 files.
Format Version 1.0
------------------
The first 6 bytes are a magic string: exactly ``\\x93NUMPY``.
The next 1 byte is an unsigned byte: the major version number of the file
format, e.g. ``\\x01``.
The next 1 byte is an unsigned byte: the minor version number of the file
format, e.g. ``\\x00``. Note: the version of the file format is not tied
to the version of the numpy package.
The next 2 bytes form a little-endian unsigned short int: the length of
the header data HEADER_LEN.
The next HEADER_LEN bytes form the header data describing the array's
format. It is an ASCII string which contains a Python literal expression
of a dictionary. It is terminated by a newline (``\\n``) and padded with
spaces (``\\x20``) to make the total length of
``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment
purposes.
The dictionary contains three keys:
"descr" : dtype.descr
An object that can be passed as an argument to the `numpy.dtype`
constructor to create the array's dtype.
"fortran_order" : bool
Whether the array data is Fortran-contiguous or not. Since
Fortran-contiguous arrays are a common form of non-C-contiguity,
we allow them to be written directly to disk for efficiency.
"shape" : tuple of int
The shape of the array.
For repeatability and readability, the dictionary keys are sorted in
alphabetic order. This is for convenience only. A writer SHOULD implement
this if possible. A reader MUST NOT depend on this.
Following the header comes the array data. If the dtype contains Python
objects (i.e. ``dtype.hasobject is True``), then the data is a Python
pickle of the array. Otherwise the data is the contiguous (either C-
or Fortran-, depending on ``fortran_order``) bytes of the array.
Consumers can figure out the number of bytes by multiplying the number
of elements given by the shape (noting that ``shape=()`` means there is
1 element) by ``dtype.itemsize``.
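As a rough sketch only (the reference implementation lives in
``numpy.lib.format``), a Version 1.0 header written by ``np.save`` can be
inspected with the standard library: ::
>>> import ast, struct
>>> with open('a.npy', 'rb') as f:
...     magic = f.read(6)                    # b'\\x93NUMPY'
...     major, minor = f.read(1)[0], f.read(1)[0]
...     (header_len,) = struct.unpack('<H', f.read(2))
...     header = ast.literal_eval(f.read(header_len).decode('latin1'))
>>> sorted(header)
['descr', 'fortran_order', 'shape']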
Notes
-----
The ``.npy`` format, including reasons for creating it and a comparison of
alternatives, is described fully in the "npy-format" NEP.
""" |
"""Test module for the noddy examples
Noddy 1:
>>> import noddy
>>> n1 = noddy.Noddy()
>>> n2 = noddy.Noddy()
>>> del n1
>>> del n2
Noddy 2
>>> import noddy2
>>> n1 = noddy2.Noddy('jim', 'fulton', 42)
>>> n1.first
'jim'
>>> n1.last
'NAME'
>>> n1.number
42
>>> n1.name()
'jim NAME'
>>> n1.first = 'will'
>>> n1.name()
'will NAME'
>>> n1.last = 'NAME'
>>> n1.name()
'will NAME'
>>> del n1.first
>>> n1.name()
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first = 'drew'
>>> n1.first
'drew'
>>> del n1.number
Traceback (most recent call last):
...
TypeError: can't delete numeric/char attribute
>>> n1.number=2
>>> n1.number
2
>>> n1.first = 42
>>> n1.name()
'42 NAME'
>>> n2 = noddy2.Noddy()
>>> n2.name()
' '
>>> n2.first
''
>>> n2.last
''
>>> del n2.first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.name()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: first
>>> n2.number
0
>>> n3 = noddy2.Noddy('jim', 'fulton', 'waaa')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: an integer is required
>>> del n1
>>> del n2
Noddy 3
>>> import noddy3
>>> n1 = noddy3.Noddy('jim', 'fulton', 42)
>>> n1 = noddy3.Noddy('jim', 'fulton', 42)
>>> n1.name()
'jim NAME'
>>> del n1.first
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: Cannot delete the first attribute
>>> n1.first = 42
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: The first attribute value must be a string
>>> n1.first = 'will'
>>> n1.name()
'will NAME'
>>> n2 = noddy3.Noddy()
>>> n2 = noddy3.Noddy()
>>> n2 = noddy3.Noddy()
>>> n3 = noddy3.Noddy('jim', 'fulton', 'waaa')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: an integer is required
>>> del n1
>>> del n2
Noddy 4
>>> import noddy4
>>> n1 = noddy4.Noddy('jim', 'fulton', 42)
>>> n1.first
'jim'
>>> n1.last
'NAME'
>>> n1.number
42
>>> n1.name()
'jim NAME'
>>> n1.first = 'will'
>>> n1.name()
'will NAME'
>>> n1.last = 'NAME'
>>> n1.name()
'will NAME'
>>> del n1.first
>>> n1.name()
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first
Traceback (most recent call last):
...
AttributeError: first
>>> n1.first = 'drew'
>>> n1.first
'drew'
>>> del n1.number
Traceback (most recent call last):
...
TypeError: can't delete numeric/char attribute
>>> n1.number=2
>>> n1.number
2
>>> n1.first = 42
>>> n1.name()
'42 NAME'
>>> n2 = noddy4.Noddy()
>>> n2 = noddy4.Noddy()
>>> n2 = noddy4.Noddy()
>>> n2 = noddy4.Noddy()
>>> n2.name()
' '
>>> n2.first
''
>>> n2.last
''
>>> del n2.first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.first
Traceback (most recent call last):
...
AttributeError: first
>>> n2.name()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
AttributeError: first
>>> n2.number
0
>>> n3 = noddy4.Noddy('jim', 'fulton', 'waaa')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: an integer is required
Test cyclic gc(?)
>>> import gc
>>> import sys
>>> gc.disable()
>>> x = []
>>> l = [x]
>>> n2.first = l
>>> n2.first
[[]]
>>> l.append(n2)
>>> del l
>>> del n1
>>> del n2
>>> sys.getrefcount(x)
3
>>> ignore = gc.collect()
>>> sys.getrefcount(x)
2
>>> gc.enable()
""" |
"""
==============
Array Creation
==============
Introduction
============
There are 5 general mechanisms for creating arrays:
1) Conversion from other Python structures (e.g., lists, tuples)
2) Intrinsic numpy array creation objects (e.g., arange, ones, zeros,
etc.)
3) Reading arrays from disk, either from standard or custom formats
4) Creating arrays from raw bytes through the use of strings or buffers
5) Use of special library functions (e.g., random)
This section will not cover means of replicating, joining, or otherwise
expanding or mutating existing arrays. Nor will it cover creating object
arrays or structured arrays. Both of those are covered in their own sections.
Converting Python array_like Objects to Numpy Arrays
====================================================
In general, numerical data arranged in an array-like structure in Python can
be converted to arrays through the use of the array() function. The most
obvious examples are lists and tuples. See the documentation for array() for
details for its use. Some objects may support the array-protocol and allow
conversion to arrays this way. A simple way to find out if the object can be
converted to a numpy array using array() is simply to try it interactively and
see if it works! (The Python Way).
Examples: ::
>>> x = np.array([2,3,1,0])
>>> x = np.array([2, 3, 1, 0])
>>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types
>>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]])
Intrinsic Numpy Array Creation
==============================
Numpy has built-in functions for creating arrays from scratch:
zeros(shape) will create an array filled with 0 values with the specified
shape. The default dtype is float64.
``>>> np.zeros((2, 3))
array([[ 0., 0., 0.], [ 0., 0., 0.]])``
ones(shape) will create an array filled with 1 values. It is identical to
zeros in all other respects.
arange() will create arrays with regularly incrementing values. Check the
docstring for complete information on the various ways it can be used. A few
examples will be given here: ::
>>> np.arange(10)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> np.arange(2, 10, dtype=np.float)
array([ 2., 3., 4., 5., 6., 7., 8., 9.])
>>> np.arange(2, 3, 0.1)
array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9])
Note that there are some subtleties regarding the last usage that the user
should be aware of that are described in the arange docstring.
linspace() will create arrays with a specified number of elements, and
spaced equally between the specified beginning and end values. For
example: ::
>>> np.linspace(1., 4., 6)
array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ])
The advantage of this creation function is that one can guarantee the
number of elements and the starting and end point, which arange()
generally will not do for arbitrary start, stop, and step values.
indices() will create a set of arrays (stacked as a one-higher dimensioned
array), one per dimension with each representing variation in that dimension.
An example illustrates much better than a verbal description: ::
>>> np.indices((3,3))
array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]])
This is particularly useful for evaluating functions of multiple dimensions on
a regular grid.
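For example, a simple function of the row and column indices evaluated on a
3x3 grid: ::
>>> row, col = np.indices((3, 3))
>>> row + 10 * col
array([[ 0, 10, 20], [ 1, 11, 21], [ 2, 12, 22]])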
Reading Arrays From Disk
========================
This is presumably the most common case of large array creation. The details,
of course, depend greatly on the format of data on disk and so this section
can only give general pointers on how to handle various formats.
Standard Binary Formats
-----------------------
Various fields have standard formats for array data. The following lists the
ones with known python libraries to read them and return numpy arrays (there
may be others for which it is possible to read and convert to numpy arrays so
check the last section as well)
::
HDF5: PyTables
FITS: PyFITS
Examples of formats that cannot be read directly but for which it is not hard to
convert are those formats supported by libraries like PIL (able to read and
write many image formats such as jpg, png, etc).
Common ASCII Formats
------------------------
Comma Separated Value files (CSV) are widely used (and an export and import
option for programs like Excel). There are a number of ways of reading these
files in Python. There are CSV functions in Python and functions in pylab
(part of matplotlib).
More generic ascii files can be read using the io package in scipy.
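For instance, numpy itself can read a simple comma-separated file directly
(a sketch; the file name and layout are arbitrary): ::
>>> data = np.loadtxt('measurements.csv', delimiter=',', skiprows=1)
>>> named = np.genfromtxt('measurements.csv', delimiter=',', names=True)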
Custom Binary Formats
---------------------
There are a variety of approaches one can use. If the file has a relatively
simple format then one can write a simple I/O library and use the numpy
fromfile() function and .tofile() method to read and write numpy arrays
directly (mind your byteorder though!) If a good C or C++ library exists that
reads the data, one can wrap that library with a variety of techniques, though
that certainly is much more work and requires significantly more advanced
knowledge to interface with C or C++.
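A minimal round trip with tofile()/fromfile() might look like this; note that
neither the dtype nor the shape is stored in the raw file, so both must be
supplied again when reading: ::
>>> a = np.arange(12, dtype=np.int32).reshape(3, 4)
>>> a.tofile('raw.dat')
>>> b = np.fromfile('raw.dat', dtype=np.int32).reshape(3, 4)
>>> (a == b).all()
True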
Use of Special Libraries
------------------------
There are libraries that can be used to generate arrays for special purposes
and it isn't possible to enumerate all of them. The most common uses are use
of the many array generation functions in random that can generate arrays of
random values, and some utility functions to generate special matrices (e.g.
diagonal).
""" |
# -*- coding: utf-8 -*-
#
# hill_tononi_Vp.py
#
# This file is part of NEST.
#
# Copyright (C) 2004 The NEST Initiative
#
# NEST is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# NEST is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with NEST. If not, see <http://www.gnu.org/licenses/>.
# ! ===========================================
# ! NEST Topology Module: A Case-Based Tutorial
# ! ===========================================
# !
# ! :Author: Hans Ekkehard Plesser
# ! :Institution: Norwegian University of Life Sciences
# ! :Version: 0.4
# ! :Date: 21 November 2012
# ! :Copyright: The NEST Initiative (2004)
# ! :License: Creative Commons Attribution License
# !
# ! **NOTE:** The network generated by this script does generate
# ! dynamics in which the activity of the entire system, especially
# ! Rp and Vp oscillates with approx 5 Hz. This is different from
# ! the full model. Deviations are due to the different model type
# ! and the elimination of a number of connections, with no changes
# ! to the weights.
# !
# ! Introduction
# ! ============
# !
# ! This tutorial shows you how to implement a simplified version of the
# ! Hill-Tononi model of the early visual pathway using the NEST Topology
# ! module. The model is described in the paper
# !
# ! NAME and G. Tononi.
# ! Modeling Sleep and Wakefulness in the Thalamocortical System.
# ! J Neurophysiology **93**:1671-1698 (2005).
# ! Freely available via `doi 10.1152/jn.00915.2004
# ! <http://dx.doi.org/10.1152/jn.00915.2004>`_.
# !
# ! We simplify the model somewhat both to keep this tutorial a bit
# ! shorter, and because some details of the Hill-Tononi model are not
# ! currently supported by NEST. Simplifications include:
# !
# ! 1. We use the ``iaf_cond_alpha`` neuron model, which is
# ! simpler than the Hill-Tononi model.
# !
# ! #. As the ``iaf_cond_alpha`` neuron model only supports two
# ! synapses (labeled "ex" and "in"), we only include AMPA and
# ! GABA_A synapses.
# !
# ! #. We ignore the secondary pathway (Ts, Rs, Vs), since it adds just
# ! more of the same from a technical point of view.
# !
# ! #. Synaptic delays follow a Gaussian distribution in the HT
# ! model. This actually implies a Gaussian distribution clipped at
# ! some small, non-zero delay, since delays must be
# ! positive. Currently, there is a bug in the Topology module when
# ! using clipped Gaussian distribution. We therefore draw delays from a
# ! uniform distribution.
# !
# ! #. Some further adaptations are given at the appropriate locations in
# ! the script.
# !
# ! This tutorial is divided in the following sections:
# !
# ! Philosophy_
# ! Discusses the philosophy applied to model implementation in this
# ! tutorial
# !
# ! Preparations_
# ! Necessary steps to use NEST and the Topology Module
# !
# ! `Configurable Parameters`_
# ! Define adjustable network parameters
# !
# ! `Neuron Models`_
# ! Define the neuron models needed by the network model
# !
# ! Populations_
# ! Create Populations
# !
# ! `Synapse models`_
# ! Define the synapse models used in the network model
# !
# ! Connections_
# ! Create Connections
# !
# ! `Example simulation`_
# ! Perform a small simulation for illustration. This
# ! section also discusses the setup for recording.
# ! Philosophy
# ! ==========
# ! A network model has two essential components: *populations* and
# ! *projections*. We first use NEST's ``CopyModel()`` mechanism to
# ! create specific models for all populations and subpopulations in
# ! the network, and then create the populations using the Topology
# ! module's ``CreateLayer()`` function.
# !
# ! We use a two-stage process to create the connections, mainly
# ! because the same configurations are required for a number of
# ! projections: we first define dictionaries specifying the
# ! connections, then apply these dictionaries later.
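# !
# ! As a minimal illustration only (the NEST 2.x Topology API is assumed here,
# ! and the parameter values are placeholders rather than this tutorial's
# ! actual settings), the two steps could look like:

import nest
import nest.topology as topo

# Stage 1: derive a specific neuron model from a built-in one.
nest.CopyModel('iaf_cond_alpha', 'CtxExNeuron', {'V_m': -70.0})

# Stage 2: create a layer of these neurons and connect it to itself
# using a reusable connection dictionary.
ctx = topo.CreateLayer({'rows': 40, 'columns': 40, 'extent': [8.0, 8.0],
                        'elements': 'CtxExNeuron'})
conn = {'connection_type': 'divergent',
        'mask': {'circular': {'radius': 2.0}},
        'synapse_model': 'static_synapse'}
topo.ConnectLayers(ctx, ctx, conn)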
# !
# ! The way in which we declare the network model here is an
# ! example. You should not consider it the last word: we expect to see
# ! a significant development in strategies and tools for network
# ! descriptions in the future. The following contributions to CNS\*09
# ! seem particularly interesting
# !
# ! - NAME & NAME Declarative model description and
# ! code generation for hybrid individual- and population-based
# ! simulations of the early visual system (P57);
# ! - NAMEok, NAME & NAME Describing
# ! and exchanging models of neurons and neuronal networks with
# ! NeuroML (F1);
# !
# ! as well as the following paper which will appear in PLoS
# ! Computational Biology shortly:
# !
# ! - NAME NAME & Hans Ekkehard Plesser.
# ! Towards reproducible descriptions of neuronal network models.
# ! Preparations
# ! ============
# ! Please make sure that your ``PYTHONPATH`` is set correctly, so
# ! that Python can find the NEST Python module.
# ! **Note:** By default, the script does not show any graphics.
# ! Set ``SHOW_FIGURES`` to ``True`` to activate graphics.
# ! This example uses the function GetLeaves, which is deprecated. A
# ! deprecation warning is therefore issued. For details about deprecated
# ! functions, see documentation.
|
#!/usr/bin/env python
# ***** BEGIN LICENSE BLOCK *****
# Version: MPL 1.1/GPL 2.0/LGPL 2.1
#
# The contents of this file are subject to the Mozilla Public License Version
# 1.1 (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
# http://www.mozilla.org/MPL/
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
# for the specific language governing rights and limitations under the
# License.
#
# The Original Code is font utility code.
#
# The Initial Developer of the Original Code is Mozilla Corporation.
# Portions created by the Initial Developer are Copyright (C) 2009
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# NAME <EMAIL>
#
# Alternatively, the contents of this file may be used under the terms of
# either the GNU General Public License Version 2 or later (the "GPL"), or
# the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
# in which case the provisions of the GPL or the LGPL are applicable instead
# of those above. If you wish to allow use of your version of this file only
# under the terms of either the GPL or the LGPL, and not to allow others to
# use your version of this file under the terms of the MPL, indicate your
# decision by deleting the provisions above and replace them with the notice
# and other provisions required by the GPL or the LGPL. If you do not delete
# the provisions above, a recipient may use your version of this file under
# the terms of any one of the MPL, the GPL or the LGPL.
#
# ***** END LICENSE BLOCK *****
# eotlitetool.py - create EOT version of OpenType font for use with IE
#
# Usage: eotlitetool.py [-o output-filename] font1 [font2 ...]
#
# OpenType file structure
# http://www.microsoft.com/typography/otspec/otff.htm
#
# Types:
#
# BYTE 8-bit unsigned integer.
# CHAR 8-bit signed integer.
# USHORT 16-bit unsigned integer.
# SHORT 16-bit signed integer.
# ULONG 32-bit unsigned integer.
# Fixed 32-bit signed fixed-point number (16.16)
# LONGDATETIME Date represented in number of seconds since 12:00 midnight, January 1, 1904. The value is represented as a signed 64-bit integer.
#
# SFNT Header
#
# Fixed sfnt version // 0x00010000 for version 1.0.
# USHORT numTables // Number of tables.
# USHORT searchRange // (Maximum power of 2 <= numTables) x 16.
# USHORT entrySelector // Log2(maximum power of 2 <= numTables).
# USHORT rangeShift // NumTables x 16-searchRange.
#
# Table Directory
#
# ULONG tag // 4-byte identifier.
# ULONG checkSum // CheckSum for this table.
# ULONG offset // Offset from beginning of TrueType font file.
# ULONG length // Length of this table.
#
# OS/2 Table (Version 4)
#
# USHORT version // 0x0004
# SHORT xAvgCharWidth
# USHORT usWeightClass
# USHORT usWidthClass
# USHORT fsType
# SHORT ySubscriptXSize
# SHORT ySubscriptYSize
# SHORT ySubscriptXOffset
# SHORT ySubscriptYOffset
# SHORT ySuperscriptXSize
# SHORT ySuperscriptYSize
# SHORT ySuperscriptXOffset
# SHORT ySuperscriptYOffset
# SHORT yStrikeoutSize
# SHORT yStrikeoutPosition
# SHORT sFamilyClass
# BYTE panose[10]
# ULONG ulUnicodeRange1 // Bits 0-31
# ULONG ulUnicodeRange2 // Bits 32-63
# ULONG ulUnicodeRange3 // Bits 64-95
# ULONG ulUnicodeRange4 // Bits 96-127
# CHAR achVendID[4]
# USHORT fsSelection
# USHORT usFirstCharIndex
# USHORT usLastCharIndex
# SHORT sTypoAscender
# SHORT sTypoDescender
# SHORT sTypoLineGap
# USHORT usWinAscent
# USHORT usWinDescent
# ULONG ulCodePageRange1 // Bits 0-31
# ULONG ulCodePageRange2 // Bits 32-63
# SHORT sxHeight
# SHORT sCapHeight
# USHORT usDefaultChar
# USHORT usBreakChar
# USHORT usMaxContext
#
#
# The Naming Table is organized as follows:
#
# [name table header]
# [name records]
# [string data]
#
# Name Table Header
#
# USHORT format // Format selector (=0).
# USHORT count // Number of name records.
# USHORT stringOffset // Offset to start of string storage (from start of table).
#
# Name Record
#
# USHORT platformID // Platform ID.
# USHORT encodingID // Platform-specific encoding ID.
# USHORT languageID // Language ID.
# USHORT nameID // Name ID.
# USHORT length // String length (in bytes).
# USHORT offset // String offset from start of storage area (in bytes).
#
# head Table
#
# Fixed tableVersion // Table version number 0x00010000 for version 1.0.
# Fixed fontRevision // Set by font manufacturer.
# ULONG checkSumAdjustment // To compute: set it to 0, sum the entire font as ULONG, then store 0xB1B0AFBA - sum.
# ULONG magicNumber // Set to 0x5F0F3CF5.
# USHORT flags
# USHORT unitsPerEm // Valid range is from 16 to 16384. This value should be a power of 2 for fonts that have TrueType outlines.
# LONGDATETIME created // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# LONGDATETIME modified // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# SHORT xMin // For all glyph bounding boxes.
# SHORT yMin
# SHORT xMax
# SHORT yMax
# USHORT macStyle
# USHORT lowestRecPPEM // Smallest readable size in pixels.
# SHORT fontDirectionHint
# SHORT indexToLocFormat // 0 for short offsets, 1 for long.
# SHORT glyphDataFormat // 0 for current format.
#
#
#
# Embedded OpenType (EOT) file format
# http://www.w3.org/Submission/EOT/
#
# EOT version 0x00020001
#
# An EOT font consists of a header with the original OpenType font
# appended at the end. Most of the data in the EOT header is simply a
# copy of data from specific tables within the font data. The exceptions
# are the 'Flags' field and the root string name field. The root string
# is a set of names indicating domains for which the font data can be
# used. A null root string implies the font data can be used anywhere.
# The EOT header is in little-endian byte order but the font data remains
# in big-endian order as specified by the OpenType spec.
#
# Overall structure:
#
# [EOT header]
# [EOT name records]
# [font data]
#
# EOT header
#
# ULONG eotSize // Total structure length in bytes (including string and font data)
# ULONG fontDataSize // Length of the OpenType font (FontData) in bytes
# ULONG version // Version number of this format - 0x00020001
# ULONG flags // Processing Flags (0 == no special processing)
# BYTE fontPANOSE[10] // OS/2 Table panose
# BYTE charset // DEFAULT_CHARSET (0x01)
# BYTE italic // 0x01 if ITALIC in OS/2 Table fsSelection is set, 0 otherwise
# ULONG weight // OS/2 Table usWeightClass
# USHORT fsType // OS/2 Table fsType (specifies embedding permission flags)
# USHORT magicNumber // Magic number for EOT file - 0x504C.
# ULONG unicodeRange1 // OS/2 Table ulUnicodeRange1
# ULONG unicodeRange2 // OS/2 Table ulUnicodeRange2
# ULONG unicodeRange3 // OS/2 Table ulUnicodeRange3
# ULONG unicodeRange4 // OS/2 Table ulUnicodeRange4
# ULONG codePageRange1 // OS/2 Table ulCodePageRange1
# ULONG codePageRange2 // OS/2 Table ulCodePageRange2
# ULONG checkSumAdjustment // head Table CheckSumAdjustment
# ULONG reserved[4] // Reserved - must be 0
# USHORT padding1 // Padding - must be 0
#
# EOT name records
#
# USHORT FamilyNameSize // Font family name size in bytes
# BYTE FamilyName[FamilyNameSize] // Font family name (name ID = 1), little-endian UTF-16
# USHORT Padding2 // Padding - must be 0
#
# USHORT StyleNameSize // Style name size in bytes
# BYTE StyleName[StyleNameSize] // Style name (name ID = 2), little-endian UTF-16
# USHORT Padding3 // Padding - must be 0
#
# USHORT VersionNameSize // Version name size in bytes
# bytes VersionName[VersionNameSize] // Version name (name ID = 5), little-endian UTF-16
# USHORT Padding4 // Padding - must be 0
#
# USHORT FullNameSize // Full name size in bytes
# BYTE FullName[FullNameSize] // Full name (name ID = 4), little-endian UTF-16
# USHORT Padding5 // Padding - must be 0
#
# USHORT RootStringSize // Root string size in bytes
# BYTE RootString[RootStringSize] // Root string, little-endian UTF-16
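#
# As a rough illustration of the structures above (this is not the tool's
# actual code, and error handling is omitted), the SFNT header and table
# directory can be read with the struct module:
#
#   import struct
#
#   def read_sfnt_tables(data):
#       # SFNT header: Fixed sfntVersion, then USHORT numTables,
#       # searchRange, entrySelector, rangeShift -- all big-endian.
#       version, num_tables = struct.unpack('>IH', data[:6])
#       tables = {}
#       for i in range(num_tables):
#           off = 12 + i * 16   # table directory starts after the 12-byte header
#           tag, checksum, offset, length = struct.unpack('>4sIII', data[off:off + 16])
#           tables[tag.decode('ascii')] = (offset, length)
#       return version, tables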
|
#!/usr/bin/env python
# (c) 2013, NAME <EMAIL>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
#
# Author: NAME <EMAIL>
#
# Description:
# This module queries local or remote Docker daemons and generates
# inventory information.
#
# This plugin does not support targeting of specific hosts using the --host
# flag. Instead, it queries the Docker API for each container, running
# or not, and returns this data all at once.
#
# The plugin returns the following custom attributes on Docker containers:
# docker_args
# docker_config
# docker_created
# docker_driver
# docker_exec_driver
# docker_host_config
# docker_hostname_path
# docker_hosts_path
# docker_id
# docker_image
# docker_name
# docker_network_settings
# docker_path
# docker_resolv_conf_path
# docker_state
# docker_volumes
# docker_volumes_rw
#
# Requirements:
# The docker-py module: https://github.com/dotcloud/docker-py
#
# Notes:
# A config file can be used to configure this inventory module, and there
# are several environment variables that can be set to modify the behavior
# of the plugin at runtime:
# DOCKER_CONFIG_FILE
# DOCKER_HOST
# DOCKER_VERSION
# DOCKER_TIMEOUT
# DOCKER_PRIVATE_SSH_PORT
# DOCKER_DEFAULT_IP
#
# Environment Variables:
# environment variable: DOCKER_CONFIG_FILE
# description:
# - A path to a Docker inventory hosts/defaults file in YAML format
# - A sample file has been provided, colocated with the inventory
# file called 'docker.yml'
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_HOST
# description:
# - The socket on which to connect to a Docker daemon API
# required: false
# default: Uses docker.docker.Client constructor defaults
# environment variable: DOCKER_VERSION
# description:
# - Version of the Docker API to use
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_TIMEOUT
# description:
# - Timeout in seconds for connections to Docker daemon API
# default: Uses docker.docker.Client constructor defaults
# required: false
# environment variable: DOCKER_PRIVATE_SSH_PORT
# description:
# - The private port (container port) on which SSH is listening
# for connections
# default: 22
# required: false
# environment variable: DOCKER_DEFAULT_IP
# description:
# - This environment variable overrides the container SSH connection
# IP address (aka, 'ansible_ssh_host')
#
# This option allows one to override the ansible_ssh_host whenever
# Docker has exercised its default behavior of binding private ports
# to all interfaces of the Docker host. This behavior, when dealing
# with remote Docker hosts, does not allow Ansible to determine
# a proper host IP address on which to connect via SSH to containers.
# By default, this inventory module assumes all IP_ADDRESS-exposed
# ports to be bound to localhost:<port>. To override this
# behavior, for example, to bind a container's SSH port to the public
# interface of its host, one must manually set this IP.
#
# It is preferable to begin to launch Docker containers with
# ports exposed on publicly accessible IP addresses, particularly
# if the containers are to be targeted by Ansible for remote
# configuration, not accessible via localhost SSH connections.
#
# Docker containers can be explicitly exposed on IP addresses by
# a) starting the daemon with the --ip argument
# b) running containers with the -P/--publish ip::containerPort
# argument
# default: IP_ADDRESS if port exposed on IP_ADDRESS by Docker
# required: false
#
# Examples:
# Use the config file:
# DOCKER_CONFIG_FILE=./docker.yml docker.py --list
#
# Connect to docker instance on localhost port 4243
# DOCKER_HOST=tcp://localhost:4243 docker.py --list
#
# Any container's ssh port exposed on IP_ADDRESS will be mapped to
# another IP address (where Ansible will attempt to connect via SSH)
# DOCKER_DEFAULT_IP=IP_ADDRESS docker.py --list
|
"""
===================
Universal Functions
===================
Ufuncs are, generally speaking, mathematical functions or operations that are
applied element-by-element to the contents of an array. That is, the result
in each output array element only depends on the value in the corresponding
input array (or arrays) and on no other array elements. NumPy comes with a
large suite of ufuncs, and scipy extends that suite substantially. The simplest
example is the addition operator: ::
>>> np.array([0,2,3,4]) + np.array([1,1,-1,2])
array([1, 3, 2, 6])
The ufunc module lists all the available ufuncs in numpy. Documentation on
the specific ufuncs may be found in those modules. This documentation is
intended to address the more general aspects of ufuncs common to most of
them. All of the ufuncs that make use of Python operators (e.g., +, -, etc.)
have equivalent functions defined (e.g. add() for +)
Type coercion
=============
What happens when a binary operator (e.g., +,-,\\*,/, etc) deals with arrays of
two different types? What is the type of the result? Typically, the result is
the higher of the two types. For example: ::
float32 + float64 -> float64
int8 + int32 -> int32
int16 + float32 -> float32
float32 + complex64 -> complex64
There are some less obvious cases generally involving mixes of types
(e.g. uints, ints and floats) where equal bit sizes for each are not
capable of saving all the information in a different type of equivalent
bit size. Some examples are int32 vs float32 or uint32 vs int32.
Generally, the result is the higher type of larger size than both
(if available). So: ::
int32 + float32 -> float64
uint32 + int32 -> int64
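These coercions can be checked directly with ``np.result_type``: ::
>>> np.result_type(np.int32, np.float32)
dtype('float64')
>>> np.result_type(np.uint32, np.int32)
dtype('int64')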
Finally, the type coercion behavior when expressions involve Python
scalars is different than that seen for arrays. Since Python has a
limited number of types, combining a Python int with a dtype=np.int8
array does not coerce to the higher type but instead, the type of the
array prevails. So the rules for Python scalars combined with arrays is
that the result will be that of the array equivalent the Python scalar
if the Python scalar is of a higher 'kind' than the array (e.g., float
vs. int), otherwise the resultant type will be that of the array.
For example: ::
Python int + int8 -> int8
Python float + int8 -> float64
ufunc methods
=============
Binary ufuncs support 4 methods.
**.reduce(arr)** applies the binary operator to elements of the array in
sequence. For example: ::
>>> np.add.reduce(np.arange(10)) # adds all elements of array
45
For multidimensional arrays, the first dimension is reduced by default: ::
>>> np.add.reduce(np.arange(10).reshape(2,5))
array([ 5, 7, 9, 11, 13])
The axis keyword can be used to specify different axes to reduce: ::
>>> np.add.reduce(np.arange(10).reshape(2,5),axis=1)
array([10, 35])
**.accumulate(arr)** applies the binary operator and generates an
equivalently shaped array that includes the accumulated amount for each
element of the array. A couple examples: ::
>>> np.add.accumulate(np.arange(10))
array([ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45])
>>> np.multiply.accumulate(np.arange(1,9))
array([ 1, 2, 6, 24, 120, 720, 5040, 40320])
The behavior for multidimensional arrays is the same as for .reduce(),
as is the use of the axis keyword.
**.reduceat(arr,indices)** allows one to apply reduce to selected parts
of an array. It is a difficult method to understand; see the ufunc
documentation for the full details.
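A small illustration (each index marks the start of a segment; here the
segments are ``arr[0:5]`` and ``arr[5:10]``): ::
>>> np.add.reduceat(np.arange(10), [0, 5])
array([10, 35])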
**.outer(arr1,arr2)** generates an outer operation on the two arrays arr1 and
arr2. It will work on multidimensional arrays (the shape of the result is
the concatenation of the two input shapes): ::
>>> np.multiply.outer(np.arange(3),np.arange(4))
array([[0, 0, 0, 0],
[0, 1, 2, 3],
[0, 2, 4, 6]])
Output arguments
================
All ufuncs accept an optional output array. The array must be of the expected
output shape. Beware that if the type of the output array is of a different
(and lower) type than the output result, the results may be silently truncated
or otherwise corrupted in the downcast to the lower type. This usage is useful
when one wants to avoid creating large temporary arrays and instead allows one
to reuse the same array memory repeatedly (at the expense of not being able to
use more convenient operator notation in expressions). Note that when the
output argument is used, the ufunc still returns a reference to the result.
>>> x = np.arange(2)
>>> np.add(np.arange(2),np.arange(2.),x)
array([0, 2])
>>> x
array([0, 2])
and & or as ufuncs
==================
Invariably people try to use the python 'and' and 'or' as logical operators
(and quite understandably). But these operators do not behave as normal
operators since Python treats these quite differently. They cannot be
overloaded with array equivalents. Thus using 'and' or 'or' with an array
results in an error. There are two alternatives:
1) use the ufunc functions logical_and() and logical_or().
2) use the bitwise operators & and \\|. The drawback of these is that if
the arguments to these operators are not boolean arrays, the result is
likely incorrect. On the other hand, most usages of logical_and and
logical_or are with boolean arrays. As long as one is careful, this is
a convenient way to apply these operators.
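For example, with boolean arrays both alternatives give the same result: ::
>>> a = np.array([True, True, False])
>>> b = np.array([True, False, False])
>>> np.logical_and(a, b)
array([ True, False, False])
>>> a & b
array([ True, False, False])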
""" |
"""Stuff to parse AIFF-C and AIFF files.
Unless explicitly stated otherwise, the description below is true
both for AIFF-C files and AIFF files.
An AIFF-C file has the following structure.
+-----------------+
| FORM |
+-----------------+
| <size> |
+----+------------+
| | AIFC |
| +------------+
| | <chunks> |
| | . |
| | . |
| | . |
+----+------------+
An AIFF file has the string "AIFF" instead of "AIFC".
A chunk consists of an identifier (4 bytes) followed by a size (4 bytes,
big endian order), followed by the data. The size field does not include
the size of the 8 byte header.
The following chunk types are recognized.
FVER
<version number of AIFF-C defining document> (AIFF-C only).
MARK
<# of markers> (2 bytes)
list of markers:
<marker ID> (2 bytes, must be > 0)
<position> (4 bytes)
<marker name> ("pstring")
COMM
<# of channels> (2 bytes)
<# of sound frames> (4 bytes)
<size of the samples> (2 bytes)
<sampling frequency> (10 bytes, IEEE 80-bit extended
floating point)
in AIFF-C files only:
<compression type> (4 bytes)
<human-readable version of compression type> ("pstring")
SSND
<offset> (4 bytes, not used by this program)
<blocksize> (4 bytes, not used by this program)
<sound data>
A pstring consists of 1 byte length, a string of characters, and 0 or 1
byte pad to make the total length even.
Usage.
Reading AIFF files:
f = aifc.open(file, 'r')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods read(), seek(), and close().
In some types of audio files, if the setpos() method is not used,
the seek() method is not necessary.
This returns an instance of a class with the following public methods:
getnchannels() -- returns number of audio channels (1 for
mono, 2 for stereo)
getsampwidth() -- returns sample width in bytes
getframerate() -- returns sampling frequency
getnframes() -- returns number of audio frames
getcomptype() -- returns compression type ('NONE' for AIFF files)
getcompname() -- returns human-readable version of
compression type ('not compressed' for AIFF files)
getparams() -- returns a tuple consisting of all of the
above in the above order
getmarkers() -- get the list of marks in the audio file or None
if there are no marks
getmark(id) -- get mark with the specified id (raises an error
if the mark does not exist)
readframes(n) -- returns at most n frames of audio
rewind() -- rewind to the beginning of the audio stream
setpos(pos) -- seek to the specified position
tell() -- return the current position
close() -- close the instance (make it unusable)
The position returned by tell(), the position given to setpos() and
the position of marks are all compatible and have nothing to do with
the actual position in the file.
The close() method is called automatically when the class instance
is destroyed.
Writing AIFF files:
f = aifc.open(file, 'w')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods write(), tell(), seek(), and
close().
This returns an instance of a class with the following public methods:
aiff() -- create an AIFF file (AIFF-C default)
aifc() -- create an AIFF-C file
setnchannels(n) -- set the number of channels
setsampwidth(n) -- set the sample width
setframerate(n) -- set the frame rate
setnframes(n) -- set the number of frames
setcomptype(type, name)
-- set the compression type and the
human-readable compression type
setparams(tuple)
-- set all parameters at once
setmark(id, pos, name)
-- add specified mark to the list of marks
tell() -- return current position in output file (useful
in combination with setmark())
writeframesraw(data)
-- write audio frames without patching up the
file header
writeframes(data)
-- write audio frames and patch up the file header
close() -- patch up the file header and close the
output file
You should set the parameters before the first writeframesraw or
writeframes. The total number of frames does not need to be set,
but when it is set to the correct value, the header does not have to
be patched up.
It is best to first set all parameters, perhaps possibly the
compression type, and then write audio frames using writeframesraw.
When all frames have been written, either call writeframes('') or
close() to patch up the sizes in the header.
Marks can be added anytime. If there are any marks, you must call
close() after all frames have been written.
The close() method is called automatically when the class instance
is destroyed.
When a file is opened with the extension '.aiff', an AIFF file is
written, otherwise an AIFF-C file is written. This default can be
changed by calling aiff() or aifc() before the first writeframes or
writeframesraw.
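A short reading sketch (the file name is arbitrary):
    import aifc
    f = aifc.open('sound.aiff', 'r')
    print(f.getnchannels(), f.getsampwidth(), f.getframerate())
    data = f.readframes(f.getnframes())
    f.close()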
""" |
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# See LICENSE for more details.
#
# Copyright (c) 2020 ScyllaDB
# Data validation module may be used with cassandra-stress user profile only
#
# **************** Caution **************************************************************
# BE AWARE: During data validation all materialized views/expected table rows will be read into memory
# Be sure your dataset will be less than 2 GB.
# ****************************************************************************************
#
# Here is described Data validation module and requirements for user profile.
# Please, read the explanation and requirements
#
#
# 2 kinds of updates may be tested:
#
# 1. update rows
# Example:
# update one column:
# set lwt_indicator=30000000,
# condition: for all rows where if lwt_indicator < 0
#
# update two columns:
# set lwt_indicator=20000000 and author='text',
# condition: for all rows where if lwt_indicator > 0 and lwt_indicator <= 1000000 and author != 'text'
#
# 2. delete rows
# Example:
# deletes:
# delete from blogposts where
# condition: lwt_indicator > 10000 and lwt_indicator < 100000
#
#
# Additionally, it is expected that all rows in a certain range won't be updated
# Example:
# where lwt_indicator > 1000000 and lwt_indicator < 20000000
#
# Based on this, 3 types of validation will be performed:
#
# - Rows which are expected to stay intact are not updated
# REQUIREMENTS:
# 1. create view with substring "_not_updated" in the name. Example: blogposts_not_updated_lwt_indicator
# 2. view name length has to be less than 40 characters
# - Rows for a certain row range, that were updated
# REQUIREMENTS:
# 1. create 2 views:
# 1 - first one holds rows that are candidates for this update. View name length has to be less than 27
# characters. Example: blogposts_update_one_column_lwt_indicator
# 2 - Second one holds rows after update.
# The name of this view has to be: <name of first view>+<_after_update>
# Example: blogposts_update_one_column_lwt_indicator_after_update
#
# - Rows for another row range, part of which were deleted
# REQUIREMENT:
# a view with the substring "_deletions" in the name should be created (example, blogposts_deletions)
#
#
# ***Validation implementation***
#
# 1. Rows which are expected to stay intact.
# To be able to validate that, the MV with substring "_not_updated" in the name should be created (example,
# blogposts_not_updated_lwt_indicator)
#
# Example:
# - create MATERIALIZED VIEW blogposts_not_updated_lwt_indicator as select lwt_indicator, author from blogposts
# where domain is not null and published_date is not null and lwt_indicator > 1000000 and lwt_indicator < 20000000
# PRIMARY KEY(lwt_indicator, domain, published_date);
# This MV holds all rows that shouldn't be updated.
#
# Once the prepare_write_cmd part is completed, all data from the view
# blogposts_not_updated_lwt_indicator will be copied to a side table called
# blogposts_not_updated_lwt_indicator_expect (created by the test). This table is used as the expected data for this
# validation.
#
# When the test is finished, it checks that the data in blogposts_not_updated_lwt_indicator and
# blogposts_not_updated_lwt_indicator_expect are the same.
#
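# As an illustration only (the driver calls, address and keyspace below are
# placeholders, not this module's actual implementation), the comparison can
# be sketched with the Python cassandra-driver:
#
#   from cassandra.cluster import Cluster
#
#   session = Cluster(['127.0.0.1']).connect('keyspace1')
#   query = 'SELECT lwt_indicator, domain, published_date FROM {}'
#   expected = {tuple(r) for r in session.execute(
#       query.format('blogposts_not_updated_lwt_indicator_expect'))}
#   actual = {tuple(r) for r in session.execute(
#       query.format('blogposts_not_updated_lwt_indicator'))}
#   assert expected == actual, 'rows that must stay intact were modified'
#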
# 2. For the rows where one or more columns were updated, the data validation behaves as follows:
#
# Two more materialized views have to be added:
# 1 - First one holds rows that are candidates for this update (before the update):
# ****IMPORTANT: the name of this view must not be longer than 27 characters****
# Example:
# create MATERIALIZED VIEW blogposts_update_one_column_lwt_indicator as select domain, lwt_indicator, author
# from blogposts where domain is not null and published_date is not null and lwt_indicator < 0
# PRIMARY KEY(lwt_indicator, domain, published_date);
#
#     2 - The second one holds rows after the update.
#         The name of this view must be: <name of first view> + "_after_update"
# Example:
# create MATERIALIZED VIEW blogposts_update_one_column_lwt_indicator_after_update as
# select lwt_indicator, author from blogposts
# where domain is not null and published_date is not null and
# lwt_indicator = 30000000 PRIMARY KEY(lwt_indicator, domain, published_date);
#
# Once the prepare_write_cmd part has completed, all primary key data from the first view
# will be copied to a side table called <view name> + "_expect"
# (example, blogposts_update_one_column_lwt_indicator_expect), created by the test. This table is used as the
# expected data for this validation.
#
# Ideally, the view blogposts_update_one_column_lwt_indicator should be empty after the update and
# the view blogposts_update_one_column_lwt_indicator_after_update should hold the same amount of data as
# blogposts_update_one_column_lwt_indicator had before the update.
#
# However, because we cannot force the c-s workload to go over all the rows (it should visit all the rows
# with the appropriate value), the test validates that the first and second views together hold the same primary
# keys as the expected data table.
#
#
# 3. Rows which are deleted.
#    To be able to validate that, an MV with the substring "_deletions" in the name should be created (example,
# blogposts_deletions)
#
# Example:
# - create MATERIALIZED VIEW blogposts_deletions as select lwt_indicator, author from blogposts
# where domain is not null and published_date is not null and lwt_indicator > 10000 and lwt_indicator < 100000
# PRIMARY KEY(lwt_indicator, domain, published_date);
#   This MV holds all rows that may be deleted.
#
# Once the prepare_write_cmd part has completed, rows in the view will be counted and saved.
#
# When the test is finished, rows in the view will be counted again and the test validates that this count is less
# than the count before running the stress.
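#
# As a rough illustration (not part of the module), the deletion check could be
# sketched with the Python cassandra-driver; the node address, keyspace, and view
# name below are assumptions taken from the examples above:
#
#   from cassandra.cluster import Cluster
#
#   session = Cluster(['127.0.0.1']).connect('my_keyspace')  # assumed contact point and keyspace
#   before = session.execute('SELECT count(*) FROM blogposts_deletions').one()[0]
#   # ... run the stress commands that perform the deletions ...
#   after = session.execute('SELECT count(*) FROM blogposts_deletions').one()[0]
#   assert after < before, 'expected some rows to be deleted from the view'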
#
|
"""
============
Array basics
============
Array types and conversions between types
=========================================
Numpy supports a much greater variety of numerical types than Python does.
This section shows which are available, and how to modify an array's data-type.
========== =========================================================
Data type Description
========== =========================================================
bool Boolean (True or False) stored as a byte
int Platform integer (normally either ``int32`` or ``int64``)
int8 Byte (-128 to 127)
int16 Integer (-32768 to 32767)
int32 Integer (-2147483648 to 2147483647)
int64      Integer (-9223372036854775808 to 9223372036854775807)
uint8 Unsigned integer (0 to 255)
uint16 Unsigned integer (0 to 65535)
uint32 Unsigned integer (0 to 4294967295)
uint64 Unsigned integer (0 to 18446744073709551615)
float Shorthand for ``float64``.
float16 Half precision float: sign bit, 5 bits exponent,
10 bits mantissa
float32 Single precision float: sign bit, 8 bits exponent,
23 bits mantissa
float64 Double precision float: sign bit, 11 bits exponent,
52 bits mantissa
complex Shorthand for ``complex128``.
complex64 Complex number, represented by two 32-bit floats (real
and imaginary components)
complex128 Complex number, represented by two 64-bit floats (real
and imaginary components)
========== =========================================================
Numpy numerical types are instances of ``dtype`` (data-type) objects, each
having unique characteristics. Once you have imported NumPy using
::
>>> import numpy as np
the dtypes are available as ``np.bool``, ``np.float32``, etc.
Advanced types, not listed in the table above, are explored in
section :ref:`structured_arrays`.
There are 5 basic numerical types representing booleans (bool), integers (int),
unsigned integers (uint), floating point (float) and complex. Those with numbers
in their name indicate the bitsize of the type (i.e. how many bits are needed
to represent a single value in memory). Some types, such as ``int`` and
``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit
vs. 64-bit machines). This should be taken into account when interfacing
with low-level code (such as C or Fortran) where the raw memory is addressed.
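For instance, a quick illustrative check of this platform-dependent size (the
value shown assumes a 64-bit platform)::
>>> np.dtype(np.intp).itemsize
8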
Data-types can be used as functions to convert python numbers to array scalars
(see the array scalar section for an explanation), python sequences of numbers
to arrays of that type, or as arguments to the dtype keyword that many numpy
functions or methods accept. Some examples::
>>> import numpy as np
>>> x = np.float32(1.0)
>>> x
1.0
>>> y = np.int_([1,2,4])
>>> y
array([1, 2, 4])
>>> z = np.arange(3, dtype=np.uint8)
>>> z
array([0, 1, 2], dtype=uint8)
Array types can also be referred to by character codes, mostly to retain
backward compatibility with older packages such as Numeric. Some
documentation may still refer to these, for example::
>>> np.array([1, 2, 3], dtype='f')
array([ 1., 2., 3.], dtype=float32)
We recommend using dtype objects instead.
To convert the type of an array, use the .astype() method (preferred) or
the type itself as a function. For example: ::
>>> z.astype(float) #doctest: +NORMALIZE_WHITESPACE
array([ 0., 1., 2.])
>>> np.int8(z)
array([0, 1, 2], dtype=int8)
Note that, above, we use the *Python* float object as a dtype. NumPy knows
that ``int`` refers to ``np.int``, ``bool`` means ``np.bool`` and
that ``float`` is ``np.float``. The other data-types do not have Python
equivalents.
To determine the type of an array, look at the dtype attribute::
>>> z.dtype
dtype('uint8')
dtype objects also contain information about the type, such as its bit-width
and its byte-order. The data type can also be used indirectly to query
properties of the type, such as whether it is an integer::
>>> d = np.dtype(int)
>>> d
dtype('int32')
>>> np.issubdtype(d, int)
True
>>> np.issubdtype(d, float)
False
Array Scalars
=============
Numpy generally returns elements of arrays as array scalars (a scalar
with an associated dtype). Array scalars differ from Python scalars, but
for the most part they can be used interchangeably (the primary
exception is for versions of Python older than v2.x, where integer array
scalars cannot act as indices for lists and tuples). There are some
exceptions, such as when code requires very specific attributes of a scalar
or when it checks specifically whether a value is a Python scalar. Generally,
problems are easily fixed by explicitly converting array scalars
to Python scalars, using the corresponding Python type function
(e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``).
The primary advantage of using array scalars is that
they preserve the array type (Python may not have a matching scalar type
available, e.g. ``int16``). Therefore, the use of array scalars ensures
identical behaviour between arrays and scalars, irrespective of whether the
value is inside an array or not. NumPy scalars also have many of the same
methods arrays do.
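A short illustration (doctest-style; the dtype here is just an example)::
>>> a = np.array([1, 2, 3], dtype=np.int16)
>>> s = a[0]
>>> s.dtype            # the scalar keeps the array's data-type
dtype('int16')
>>> int(s)             # explicit conversion to a Python scalar
1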
""" |
"""
Basic functions used by several sub-packages and
useful to have in the main name-space.
Type Handling
-------------
================ ===================
iscomplexobj Test for complex object, scalar result
isrealobj Test for real object, scalar result
iscomplex Test for complex elements, array result
isreal Test for real elements, array result
imag Imaginary part
real Real part
real_if_close Turns complex number with tiny imaginary part to real
isneginf Tests for negative infinity, array result
isposinf Tests for positive infinity, array result
isnan Tests for nans, array result
isinf Tests for infinity, array result
isfinite Tests for finite numbers, array result
isscalar True if argument is a scalar
nan_to_num Replaces NaN's with 0 and infinities with large numbers
cast Dictionary of functions to force cast to each type
common_type Determine the minimum common type code for a group
of arrays
mintypecode Return minimal allowed common typecode.
================ ===================
Index Tricks
------------
================ ===================
mgrid Method which allows easy construction of N-d
'mesh-grids'
``r_`` Append and construct arrays: turns slice objects into
ranges and concatenates them, for 2d arrays appends rows.
index_exp Konrad Hinsen's index_expression class instance which
can be useful for building complicated slicing syntax.
================ ===================
Useful Functions
----------------
================ ===================
select Extension of where to multiple conditions and choices
extract Extract 1d array from flattened array according to mask
insert Insert 1d array of values into Nd array according to mask
linspace Evenly spaced samples in linear space
logspace Evenly spaced samples in logarithmic space
fix Round x to nearest integer towards zero
mod Modulo mod(x,y) = x % y except keeps sign of y
amax Array maximum along axis
amin Array minimum along axis
ptp Array max-min along axis
cumsum Cumulative sum along axis
prod Product of elements along axis
cumprod          Cumulative product along axis
diff Discrete differences along axis
angle Returns angle of complex argument
unwrap Unwrap phase along given axis (1-d algorithm)
sort_complex Sort a complex-array (based on real, then imaginary)
trim_zeros Trim the leading and trailing zeros from 1D array.
vectorize A class that wraps a Python function taking scalar
arguments into a generalized function which can handle
arrays of arguments using the broadcast rules of
numerix Python.
================ ===================
Shape Manipulation
------------------
================ ===================
squeeze          Return an array with length-one dimensions removed.
atleast_1d Force arrays to be > 1D
atleast_2d Force arrays to be > 2D
atleast_3d Force arrays to be > 3D
vstack Stack arrays vertically (row on row)
hstack Stack arrays horizontally (column on column)
column_stack Stack 1D arrays as columns into 2D array
dstack Stack arrays depthwise (along third dimension)
split Divide array into a list of sub-arrays
hsplit Split into columns
vsplit Split into rows
dsplit Split along third dimension
================ ===================
Matrix (2D Array) Manipulations
-------------------------------
================ ===================
fliplr 2D array with columns flipped
flipud 2D array with rows flipped
rot90 Rotate a 2D array a multiple of 90 degrees
eye Return a 2D array with ones down a given diagonal
diag Construct a 2D array from a vector, or return a given
diagonal from a 2D array.
mat Construct a Matrix
bmat Build a Matrix from blocks
================ ===================
Polynomials
-----------
================ ===================
poly1d A one-dimensional polynomial class
poly Return polynomial coefficients from roots
roots Find roots of polynomial given coefficients
polyint Integrate polynomial
polyder Differentiate polynomial
polyadd Add polynomials
polysub          Subtract polynomials
polymul Multiply polynomials
polydiv Divide polynomials
polyval Evaluate polynomial at given argument
================ ===================
Import Tricks
-------------
================ ===================
ppimport Postpone module import until trying to use it
ppimport_attr Postpone module import until trying to use its attribute
ppresolve Import postponed module and return it.
================ ===================
Machine Arithmetics
-------------------
================ ===================
machar_single Single precision floating point arithmetic parameters
machar_double Double precision floating point arithmetic parameters
================ ===================
Threading Tricks
----------------
================ ===================
ParallelExec Execute commands in parallel thread.
================ ===================
1D Array Set Operations
-----------------------
Set operations for 1D numeric arrays based on sort() function.
================ ===================
ediff1d Array difference (auxiliary function).
unique Unique elements of an array.
intersect1d Intersection of 1D arrays with unique elements.
setxor1d Set exclusive-or of 1D arrays with unique elements.
in1d Test whether elements in a 1D array are also present in
another array.
union1d Union of 1D arrays with unique elements.
setdiff1d Set difference of 1D arrays with unique elements.
================ ===================
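For example (a small doctest-style sketch)::
>>> import numpy as np
>>> np.intersect1d([1, 3, 4, 3], [3, 1, 2, 1])
array([1, 3])
>>> np.setdiff1d([1, 2, 3, 4], [3, 4, 5])
array([1, 2])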
""" |
"""
=============
Miscellaneous
=============
IEEE 754 Floating Point Special Values
--------------------------------------
Special values defined in numpy: nan, inf.
NaNs can be used as a poor-man's mask (if you don't care what the
original value was)
Note: cannot use equality to test NaNs. E.g.: ::
>>> myarr = np.array([1., 0., np.nan, 3.])
>>> np.where(myarr == np.nan)
>>> np.nan == np.nan # is always False! Use special numpy functions instead.
False
>>> myarr[myarr == np.nan] = 0. # doesn't work
>>> myarr
array([ 1., 0., NaN, 3.])
>>> myarr[np.isnan(myarr)] = 0. # use this instead
>>> myarr
array([ 1., 0., 0., 3.])
Other related special value functions: ::
isinf(): True if value is inf
isfinite(): True if not nan or inf
nan_to_num(): Map nan to 0, inf to max float, -inf to min float
The following corresponds to the usual functions except that nans are excluded
from the results: ::
nansum()
nanmax()
nanmin()
nanargmax()
nanargmin()
>>> x = np.arange(10.)
>>> x[3] = np.nan
>>> x.sum()
nan
>>> np.nansum(x)
42.0
How numpy handles numerical exceptions
--------------------------------------
The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow``
and ``'ignore'`` for ``underflow``. But this can be changed, and it can be
set individually for different kinds of exceptions. The different behaviors
are:
- 'ignore' : Take no action when the exception occurs.
- 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module).
- 'raise' : Raise a `FloatingPointError`.
- 'call' : Call a function specified using the `seterrcall` function.
- 'print' : Print a warning directly to ``stdout``.
- 'log' : Record error in a Log object specified by `seterrcall`.
These behaviors can be set for all kinds of errors or specific ones:
- all : apply to all numeric exceptions
- invalid : when NaNs are generated
- divide : divide by zero (for integers as well!)
- overflow : floating point overflows
- underflow : floating point underflows
Note that integer divide-by-zero is handled by the same machinery.
These behaviors are set on a per-thread basis.
Examples
--------
::
>>> oldsettings = np.seterr(all='warn')
>>> np.zeros(5,dtype=np.float32)/0.
invalid value encountered in divide
>>> j = np.seterr(under='ignore')
>>> np.array([1.e-100])**10
>>> j = np.seterr(invalid='raise')
>>> np.sqrt(np.array([-1.]))
FloatingPointError: invalid value encountered in sqrt
>>> def errorhandler(errstr, errflag):
... print "saw stupid error!"
>>> np.seterrcall(errorhandler)
<function errorhandler at 0x...>
>>> j = np.seterr(all='call')
>>> np.zeros(5, dtype=np.int32)/0
FloatingPointError: invalid value encountered in divide
saw stupid error!
>>> j = np.seterr(**oldsettings) # restore previous
... # error-handling settings
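A scoped alternative, assuming a NumPy version that provides the ``errstate``
context manager, looks like: ::
>>> with np.errstate(divide='ignore'):
...     np.float64(1.) / 0.
inf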
Interfacing to C
----------------
Only a survey of the choices. Little detail on how each works.
1) Bare metal, wrap your own C-code manually.
- Plusses:
- Efficient
- No dependencies on other tools
- Minuses:
- Lots of learning overhead:
- need to learn basics of Python C API
- need to learn basics of numpy C API
- need to learn how to handle reference counting and love it.
- Reference counting often difficult to get right.
- getting it wrong leads to memory leaks, and worse, segfaults
- API will change for Python 3.0!
2) Cython
- Plusses:
- avoid learning C API's
- no dealing with reference counting
- can code in pseudo python and generate C code
- can also interface to existing C code
- should shield you from changes to Python C api
- has become the de-facto standard within the scientific Python community
- fast indexing support for arrays
- Minuses:
- Can write code in non-standard form which may become obsolete
- Not as flexible as manual wrapping
3) ctypes
- Plusses:
- part of Python standard library
- good for interfacing to existing sharable libraries, particularly
Windows DLLs
- avoids API/reference counting issues
- good numpy support: arrays have all these in their ctypes
attribute: ::
a.ctypes.data a.ctypes.get_strides
a.ctypes.data_as a.ctypes.shape
a.ctypes.get_as_parameter a.ctypes.shape_as
a.ctypes.get_data a.ctypes.strides
a.ctypes.get_shape a.ctypes.strides_as
- Minuses:
- can't use for writing code to be turned into C extensions, only a wrapper
tool.
4) SWIG (automatic wrapper generator)
- Plusses:
- around a long time
- multiple scripting language support
- C++ support
- Good for wrapping large (many functions) existing C libraries
- Minuses:
- generates lots of code between Python and the C code
- can cause performance problems that are nearly impossible to optimize
out
- interface files can be hard to write
- doesn't necessarily avoid reference counting issues or needing to know
API's
5) scipy.weave
- Plusses:
- can turn many numpy expressions into C code
- dynamic compiling and loading of generated C code
- can embed pure C code in Python module and have weave extract, generate
interfaces and compile, etc.
- Minuses:
- Future very uncertain: it's the only part of Scipy not ported to Python 3
and is effectively deprecated in favor of Cython.
6) Psyco
- Plusses:
- Turns pure python into efficient machine code through jit-like
optimizations
- very fast when it optimizes well
- Minuses:
- Only on intel (windows?)
- Doesn't do much for numpy?
Interfacing to Fortran:
-----------------------
The clear choice to wrap Fortran code is
`f2py <http://docs.scipy.org/doc/numpy-dev/f2py/>`_.
Pyfort is an older alternative, but not supported any longer.
Fwrap is a newer project that looked promising but isn't being developed any
longer.
Interfacing to C++:
-------------------
1) Cython
2) CXX
3) Boost.python
4) SWIG
5) SIP (used mainly in PyQT)
""" |
"""==============
Array indexing
==============
Array indexing refers to any use of the square brackets ([]) to index
array values. There are many options to indexing, which give numpy
indexing great power, but with power comes some complexity and the
potential for confusion. This section is just an overview of the
various options and issues related to indexing. Aside from single
element indexing, the details on most of these options are to be
found in related sections.
Assignment vs referencing
=========================
Most of the following examples show the use of indexing when
referencing data in an array. The examples work just as well
when assigning to an array. See the section at the end for
specific examples and explanations on how assignments work.
Single element indexing
=======================
Single element indexing for a 1-D array is what one expects. It works
exactly like that for other standard Python sequences. It is 0-based,
and accepts negative indices for indexing from the end of the array. ::
>>> x = np.arange(10)
>>> x[2]
2
>>> x[-2]
8
Unlike lists and tuples, numpy arrays support multidimensional indexing
for multidimensional arrays. That means that it is not necessary to
separate each dimension's index into its own set of square brackets. ::
>>> x.shape = (2,5) # now x is 2-dimensional
>>> x[1,3]
8
>>> x[1,-1]
9
Note that if one indexes a multidimensional array with fewer indices
than dimensions, one gets a subdimensional array. For example: ::
>>> x[0]
array([0, 1, 2, 3, 4])
That is, each index specified selects the array corresponding to the
rest of the dimensions selected. In the above example, choosing 0
means that the remaining dimension of length 5 is being left unspecified,
and that what is returned is an array of that dimensionality and size.
It must be noted that the returned array is not a copy of the original,
but points to the same values in memory as does the original array.
In this case, the 1-D array at the first position (0) is returned.
So using a single index on the returned array results in a single
element being returned. That is: ::
>>> x[0][2]
2
So note that ``x[0,2] == x[0][2]``, though the second case is more
inefficient, as a new temporary array is created after the first index
that is subsequently indexed by 2.
Note to those used to IDL or Fortran memory order as it relates to
indexing. Numpy uses C-order indexing. That means that the last
index usually represents the most rapidly changing memory location,
unlike Fortran or IDL, where the first index represents the most
rapidly changing location in memory. This difference represents a
great potential for confusion.
Other indexing options
======================
It is possible to slice and stride arrays to extract arrays of the
same number of dimensions, but of different sizes than the original.
The slicing and striding works exactly the same way it does for lists
and tuples except that they can be applied to multiple dimensions as
well. A few examples illustrate this best: ::
>>> x = np.arange(10)
>>> x[2:5]
array([2, 3, 4])
>>> x[:-7]
array([0, 1, 2])
>>> x[1:7:2]
array([1, 3, 5])
>>> y = np.arange(35).reshape(5,7)
>>> y[1:5:2,::3]
array([[ 7, 10, 13],
[21, 24, 27]])
Note that slices of arrays do not copy the internal array data but
only produce new views of the original data.
It is possible to index arrays with other arrays for the purposes of
selecting lists of values out of arrays into new arrays. There are
two different ways of accomplishing this. One uses one or more arrays
of index values. The other involves giving a boolean array of the proper
shape to indicate the values to be selected. Index arrays are a very
powerful tool that allow one to avoid looping over individual elements in
arrays and thus greatly improve performance.
It is possible to use special features to effectively increase the
number of dimensions in an array through indexing so the resulting
array acquires the shape needed for use in an expression or with a
specific function.
Index arrays
============
Numpy arrays may be indexed with other arrays (or any other sequence-
like object that can be converted to an array, such as lists, with the
exception of tuples; see the end of this document for why this is). The
use of index arrays ranges from simple, straightforward cases to
complex, hard-to-understand cases. For all cases of index arrays, what
is returned is a copy of the original data, not a view as one gets for
slices.
Index arrays must be of integer type. Each value in the array indicates
which value in the array to use in place of the index. To illustrate: ::
>>> x = np.arange(10,1,-1)
>>> x
array([10, 9, 8, 7, 6, 5, 4, 3, 2])
>>> x[np.array([3, 3, 1, 8])]
array([7, 7, 9, 2])
The index array consisting of the values 3, 3, 1 and 8 correspondingly
creates an array of length 4 (the same as the index array) where each index
is replaced by the value that the index array has in the array being indexed.
Negative values are permitted and work as they do with single indices
or slices: ::
>>> x[np.array([3,3,-3,8])]
array([7, 7, 4, 2])
It is an error to have index values out of bounds: ::
>>> x[np.array([3, 3, 20, 8])]
<type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9
Generally speaking, what is returned when index arrays are used is
an array with the same shape as the index array, but with the type
and values of the array being indexed. As an example, we can use a
multidimensional index array instead: ::
>>> x[np.array([[1,1],[2,3]])]
array([[9, 9],
[8, 7]])
Indexing Multi-dimensional arrays
=================================
Things become more complex when multidimensional arrays are indexed,
particularly with multidimensional index arrays. These tend to be
more unusual uses, but they are permitted, and they are useful for some
problems. We'll start with the simplest multidimensional case (using
the array y from the previous examples): ::
>>> y[np.array([0,2,4]), np.array([0,1,2])]
array([ 0, 15, 30])
In this case, if the index arrays have a matching shape, and there is
an index array for each dimension of the array being indexed, the
resultant array has the same shape as the index arrays, and the values
correspond to the index set for each position in the index arrays. In
this example, the first index value is 0 for both index arrays, and
thus the first value of the resultant array is y[0,0]. The next value
is y[2,1], and the last is y[4,2].
If the index arrays do not have the same shape, there is an attempt to
broadcast them to the same shape. If they cannot be broadcast to the
same shape, an exception is raised: ::
>>> y[np.array([0,2,4]), np.array([0,1])]
<type 'exceptions.ValueError'>: shape mismatch: objects cannot be
broadcast to a single shape
The broadcasting mechanism permits index arrays to be combined with
scalars for other indices. The effect is that the scalar value is used
for all the corresponding values of the index arrays: ::
>>> y[np.array([0,2,4]), 1]
array([ 1, 15, 29])
Jumping to the next level of complexity, it is possible to only
partially index an array with index arrays. It takes a bit of thought
to understand what happens in such cases. For example if we just use
one index array with y: ::
>>> y[np.array([0,2,4])]
array([[ 0, 1, 2, 3, 4, 5, 6],
[14, 15, 16, 17, 18, 19, 20],
[28, 29, 30, 31, 32, 33, 34]])
What results is the construction of a new array where each value of
the index array selects one row from the array being indexed and the
resultant array has the resulting shape (number of index elements,
size of row).
An example of where this may be useful is for a color lookup table
where we want to map the values of an image into RGB triples for
display. The lookup table could have a shape (nlookup, 3). Indexing
such an array with an image with shape (ny, nx) with dtype=np.uint8
(or any integer type so long as values are within the bounds of the
lookup table) will result in an array of shape (ny, nx, 3) where a
triple of RGB values is associated with each pixel location.
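A minimal sketch of that lookup-table idea (the palette values are made up)::
>>> palette = np.array([[0, 0, 0],
...                     [255, 0, 0],
...                     [0, 255, 0]])
>>> image = np.array([[0, 1], [2, 0]], dtype=np.uint8)
>>> palette[image].shape
(2, 2, 3)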
In general, the shape of the resultant array will be the concatenation
of the shape of the index array (or the shape that all the index arrays
were broadcast to) with the shape of any unused dimensions (those not
indexed) in the array being indexed.
Boolean or "mask" index arrays
==============================
Boolean arrays used as indices are treated in a different manner
entirely than index arrays. Boolean arrays must be of the same shape
as the initial dimensions of the array being indexed. In the
most straightforward case, the boolean array has the same shape: ::
>>> b = y>20
>>> y[b]
array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])
Unlike in the case of integer index arrays, in the boolean case, the
result is a 1-D array containing all the elements in the indexed array
corresponding to all the true elements in the boolean array. The
elements in the indexed array are always iterated and returned in
:term:`row-major` (C-style) order. The result is also identical to
``y[np.nonzero(b)]``. As with index arrays, what is returned is a copy
of the data, not a view as one gets with slices.
The result will be multidimensional if y has more dimensions than b.
For example: ::
>>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
array([False, False, False, True, True], dtype=bool)
>>> y[b[:,5]]
array([[21, 22, 23, 24, 25, 26, 27],
[28, 29, 30, 31, 32, 33, 34]])
Here the 4th and 5th rows are selected from the indexed array and
combined to make a 2-D array.
In general, when the boolean array has fewer dimensions than the array
being indexed, this is equivalent to y[b, ...], which means
y is indexed by b followed by as many : as are needed to fill
out the rank of y.
Thus the shape of the result is one dimension containing the number
of True elements of the boolean array, followed by the remaining
dimensions of the array being indexed.
For example, using a 2-D boolean array of shape (2,3)
with four True elements to select rows from a 3-D array of shape
(2,3,5) results in a 2-D result of shape (4,5): ::
>>> x = np.arange(30).reshape(2,3,5)
>>> x
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]],
[[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]]])
>>> b = np.array([[True, True, False], [False, True, True]])
>>> x[b]
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]])
For further details, consult the numpy reference documentation on array indexing.
Combining index arrays with slices
==================================
Index arrays may be combined with slices. For example: ::
>>> y[np.array([0,2,4]),1:3]
array([[ 1, 2],
[15, 16],
[29, 30]])
In effect, the slice is converted to an index array
np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array
to produce a resultant array of shape (3,2).
Likewise, slicing can be combined with broadcasted boolean indices: ::
>>> y[b[:,5],1:3]
array([[22, 23],
[29, 30]])
Structural indexing tools
=========================
To facilitate easy matching of array shapes with expressions and in
assignments, the np.newaxis object can be used within array indices
to add new dimensions with a size of 1. For example: ::
>>> y.shape
(5, 7)
>>> y[:,np.newaxis,:].shape
(5, 1, 7)
Note that there are no new elements in the array, just that the
dimensionality is increased. This can be handy to combine two
arrays in a way that otherwise would require explicitly reshaping
operations. For example: ::
>>> x = np.arange(5)
>>> x[:,np.newaxis] + x[np.newaxis,:]
array([[0, 1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7],
[4, 5, 6, 7, 8]])
The ellipsis syntax may be used to indicate selecting in full any
remaining unspecified dimensions. For example: ::
>>> z = np.arange(81).reshape(3,3,3,3)
>>> z[1,...,2]
array([[29, 32, 35],
[38, 41, 44],
[47, 50, 53]])
This is equivalent to: ::
>>> z[1,:,:,2]
array([[29, 32, 35],
[38, 41, 44],
[47, 50, 53]])
Assigning values to indexed arrays
==================================
As mentioned, one can select a subset of an array to assign to using
a single index, slices, and index and mask arrays. The value being
assigned to the indexed array must be shape consistent (the same shape
or broadcastable to the shape the index produces). For example, it is
permitted to assign a constant to a slice: ::
>>> x = np.arange(10)
>>> x[2:7] = 1
or an array of the right size: ::
>>> x[2:7] = np.arange(5)
Note that assignments may result in changes if assigning
higher types to lower types (like floats to ints) or even
exceptions (assigning complex to floats or ints): ::
>>> x[1] = 1.2
>>> x[1]
1
>>> x[1] = 1.2j
<type 'exceptions.TypeError'>: can't convert complex to long; use
long(abs(z))
Unlike some of the references (such as array and mask indices)
assignments are always made to the original data in the array
(indeed, nothing else would make sense!). Note though, that some
actions may not work as one may naively expect. This particular
example is often surprising to people: ::
>>> x = np.arange(0, 50, 10)
>>> x
array([ 0, 10, 20, 30, 40])
>>> x[np.array([1, 1, 3, 1])] += 1
>>> x
array([ 0, 11, 20, 31, 40])
People often expect that the 1st location will be incremented by 3;
in fact, it will only be incremented by 1. The reason is that
a new array is extracted from the original (as a temporary) containing
the values at 1, 1, 3, 1, then the value 1 is added to the temporary,
and then the temporary is assigned back to the original array. Thus
the value of the array at x[1]+1 is assigned to x[1] three times,
rather than being incremented 3 times.
Dealing with variable numbers of indices within programs
========================================================
The index syntax is very powerful but limiting when dealing with
a variable number of indices. For example, if you want to write
a function that can handle arguments with various numbers of
dimensions without having to write special case code for each
number of possible dimensions, how can that be done? If one
supplies to the index a tuple, the tuple will be interpreted
as a list of indices. For example (using the previous definition
for the array z): ::
>>> indices = (1,1,1,1)
>>> z[indices]
40
So one can use code to construct tuples of any number of indices
and then use these within an index.
Slices can be specified within programs by using the slice() function
in Python. For example: ::
>>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
>>> z[indices]
array([39, 40])
Likewise, ellipsis can be specified by code by using the Ellipsis
object: ::
>>> indices = (1, Ellipsis, 1) # same as [1,...,1]
>>> z[indices]
array([[28, 31, 34],
[37, 40, 43],
[46, 49, 52]])
For this reason it is possible to use the output from the np.where()
function directly as an index since it always returns a tuple of index
arrays.
Because of the special treatment of tuples, they are not automatically
converted to an array as a list would be. As an example: ::
>>> z[[1,1,1,1]] # produces a large array
array([[[[27, 28, 29],
[30, 31, 32], ...
>>> z[(1,1,1,1)] # returns a single value
40
""" |
#-----------------------------------------------------------------------------
# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.
# All rights reserved.
#
# The full license is in the file LICENSE.txt, distributed with this software.
#-----------------------------------------------------------------------------
# License regarding the Viridis, Magma, Plasma and Inferno colormaps:
#
# New matplotlib colormaps by NAME NAME and (in the case of viridis) NAME.
# The Viridis, Magma, Plasma, and Inferno colormaps are released under the
# CC0 license / public domain dedication. We would appreciate credit if you
# use or redistribute these colormaps, but do not impose any legal
# restrictions.
#
# To the extent possible under law, the persons who associated CC0 with
# mpl-colormaps have waived all copyright and related or neighboring rights
# to mpl-colormaps.
#
# You should have received a copy of the CC0 legalcode along with this
# work. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
#-----------------------------------------------------------------------------
# License regarding the brewer palettes:
#
# This product includes color specifications and designs developed by
# NAME (http://colorbrewer2.org/). The Brewer colormaps are
# licensed under the Apache v2 license. You may obtain a copy of the
# License at http://www.apache.org/licenses/LICENSE-2.0
#-----------------------------------------------------------------------------
# License regarding the cividis palette from https://github.com/pnnl/cmaputil
#
# Copyright (c) 2017, Battelle Memorial Institute
#
# 1. Battelle Memorial Institute (hereinafter Battelle) hereby grants
# permission to any person or entity lawfully obtaining a copy of this software
# and associated documentation files (hereinafter "the Software") to
# redistribute and use the Software in source and binary forms, with or without
# modification. Such person or entity may use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and may permit
# others to do so, subject to the following conditions:
#
# + Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimers.
#
# + Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# + Other than as used herein, neither the name Battelle Memorial Institute or
# Battelle may be used in any form whatsoever without the express written
# consent of Battelle.
#
# 2. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL BATTELLE OR CONTRIBUTORS BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#-----------------------------------------------------------------------------
# License regarding the D3 color palettes (Category10, Category20,
# Category20b, and Category20c):
#
# Copyright 2010-2015 NAME All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# * Neither the name of the author nor the names of contributors may be used to
# endorse or promote products derived from this software without specific
# prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#-----------------------------------------------------------------------------
|
"""Blaze integration with the Pipeline API.
For an overview of the blaze project, see blaze.pydata.org
The blaze loader for the Pipeline API is designed to allow us to load
data from arbitrary sources as long as we can execute the needed expressions
against the data with blaze.
Data Format
-----------
The blaze Pipeline API loader expects that data is formatted in a tabular way.
The only required column in your table is ``asof_date`` where this column
represents the date this data is referencing. For example, one might have a CSV
like:
asof_date,value
2014-01-06,0
2014-01-07,1
2014-01-08,2
This says that the value on 2014-01-06 was 0 and so on.
Optionally, we may provide a ``timestamp`` column to be used to represent
point in time data. This column tells us when the data was known, or became
available for use. Using our same CSV, we could write this with a timestamp
like:
asof_date,timestamp,value
2014-01-06,2014-01-07,0
2014-01-07,2014-01-08,1
2014-01-08,2014-01-09,2
This says that the value was 0 on 2014-01-06; however, we did not learn this
until 2014-01-07. This is useful for avoiding look-ahead bias in your
pipelines. If this column does not exist, the ``asof_date`` column will be used
instead.
If your data references a particular asset, you can add a ``sid`` column to
your dataset to represent this. For example:
asof_date,value,sid
2014-01-06,0,10
2014-01-06,1,20
2014-01-07,1,10
2014-01-07,2,20
2014-01-08,2,10
2014-01-08,3,20
This says that on 2014-01-06, the asset with id 10 had a value of 0, and the
asset with id 20 had a value of 1.
One of the key features of the Pipeline API is the handling of adjustments and
restatements. Often our data will be amended after the fact and we would like
to trade on the newest information; however, we do not want to introduce this
knowledge to our model too early. The blaze loader handles this case by
accepting a second ``deltas`` expression that contains all of the restatements
in the original expression.
For example, let's use our table from above:
asof_date,value
2014-01-06,0
2014-01-07,1
2014-01-08,2
Imagine that on the ninth the vendor realized that the calculation was
incorrect and the value on the sixth was actually -1. Then, on the tenth, they
realized that the value for the eighth was actually 3. We can construct a
``deltas`` expression to pass to our blaze loader that has the same shape as
our baseline table but only contains these new values like:
asof_date,timestamp,value
2014-01-06,2014-01-09,-1
2014-01-08,2014-01-10,3
This shows that we learned on the ninth that the value on the sixth was
actually -1 and that we learned on the tenth that the value on the eighth was
actually 3. By pulling our data into these two tables and not silently updating
our original table we can run our pipelines using the information we would
have had on that day, and we can prevent lookahead bias in the pipelines.
Conversion from Blaze to the Pipeline API
-----------------------------------------
Now that our data is structured in the way that the blaze loader expects, we
are ready to convert our blaze expressions into Pipeline API objects.
This module (zipline.pipeline.loaders.blaze) exports a function called
``from_blaze`` which performs this mapping.
The expression that you are trying to convert must either be tabular or
array-like. This means the ``dshape`` must be like:
``Dim * {A: B}`` or ``Dim * A``.
This represents an expression of dimension 1 which may be fixed or variable,
whose measure is either some record or a scalar.
The record case defines the entire table with all of the columns, this maps the
blaze expression into a pipeline DataSet. This dataset will have a column for
each field of the record. Some datashape types cannot be coerced into Pipeline
API compatible types and in that case, a column cannot be constructed.
Currently any numeric type that may be promoted to a float64 is compatible with
the Pipeline API.
The scalar case defines a single column pulled out of a table. For example, let
``expr = bz.symbol('s', 'var * {field: int32, asof_date: datetime}')``.
When we pass ``expr.field`` to ``from_blaze``, we will walk back up the
expression tree until we find the table that ``field`` is defined on. We will
then proceed with the record case to construct a dataset; however, before
returning the dataset we will pull out only the column that was passed in.
For full documentation, see ``help(from_blaze)`` or ``from_blaze?`` in IPython.
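A minimal sketch of that mapping (the dshape below is illustrative; in practice
the expression must be bound to real data, for example through the resources
argument)::
    import blaze as bz
    from zipline.pipeline.loaders.blaze import from_blaze
    expr = bz.symbol('expr', 'var * {asof_date: datetime, value: float64}')
    MyDataSet = from_blaze(expr)            # record dshape -> a Pipeline DataSet
    value_column = from_blaze(expr.value)   # scalar dshape -> a single column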
Using our Pipeline DataSets and Columns
---------------------------------------
Once we have mapped our blaze expressions into Pipeline API objects, we may
use them just like any other datasets or columns. For more information on how
to run a pipeline or using the Pipeline API, see:
www.quantopian.com/help#pipeline-api
""" |
"""
Numerical python functions written for compatibility with matlab(TM)
commands with the same names.
Matlab(TM) compatible functions
-------------------------------
:func:`cohere`
Coherence (normalized cross spectral density)
:func:`csd`
Cross spectral density using Welch's average periodogram
:func:`detrend`
Remove the mean or best fit line from an array
:func:`find`
Return the indices where some condition is true;
numpy.nonzero is similar but more general.
:func:`griddata`
interpolate irregularly distributed data to a
regular grid.
:func:`prctile`
find the percentiles of a sequence
:func:`prepca`
Principal Component Analysis
:func:`psd`
Power spectral density using Welch's average periodogram (see the short sketch after this list)
:func:`rk4`
A 4th order Runge-Kutta integrator for 1D or ND systems
:func:`specgram`
Spectrogram (power spectral density over segments of time)
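A small illustrative call (the signal is made up; keyword names follow the usual
``NFFT``/``Fs`` convention but may differ slightly between matplotlib versions)::
    import numpy as np
    from matplotlib import mlab
    t = np.arange(0, 1, 1.0 / 1000)
    x = np.sin(2 * np.pi * 50 * t)                 # a 50 Hz tone sampled at 1 kHz
    power, freqs = mlab.psd(x, NFFT=256, Fs=1000)  # power spectral density estimate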
Miscellaneous functions
-------------------------
Functions that don't exist in matlab(TM), but are useful anyway:
:meth:`cohere_pairs`
Coherence over all pairs. This is not a matlab function, but we
compute coherence a lot in my lab, and we compute it for a lot of
pairs. This function is optimized to do this efficiently by
caching the direct FFTs.
:meth:`rk4`
A 4th order Runge-Kutta ODE integrator in case you ever find
yourself stranded without scipy (and the far superior
scipy.integrate tools)
record array helper functions
-------------------------------
A collection of helper methods for numpy record arrays
.. _htmlonly::
See :ref:`misc-examples-index`
:meth:`rec2txt`
pretty print a record array
:meth:`rec2csv`
store record array in CSV file
:meth:`csv2rec`
import record array from CSV file with type inspection
:meth:`rec_append_fields`
adds field(s)/array(s) to record array
:meth:`rec_drop_fields`
drop fields from record array
:meth:`rec_join`
join two record arrays on sequence of fields
:meth:`rec_groupby`
summarize data by groups (similar to SQL GROUP BY)
:meth:`rec_summarize`
helper code to filter rec array fields into new fields
For the rec viewer functions (e.g., rec2csv), there are a bunch of Format
objects you can pass into the functions that will do things like color
negative values red, set percent formatting and scaling, etc.
Example usage::
r = csv2rec('somefile.csv', checkrows=0)
formatd = dict(
weight = FormatFloat(2),
change = FormatPercent(2),
cost = FormatThousands(2),
)
rec2excel(r, 'test.xls', formatd=formatd)
rec2csv(r, 'test.csv', formatd=formatd)
scroll = rec2gtk(r, formatd=formatd)
win = gtk.Window()
win.set_size_request(600,800)
win.add(scroll)
win.show_all()
gtk.main()
Deprecated functions
---------------------
The following are deprecated; please import directly from numpy (with
care--function signatures may differ):
:meth:`conv`
convolution (numpy.convolve)
:meth:`corrcoef`
The matrix of correlation coefficients
:meth:`hist`
Histogram (numpy.histogram)
:meth:`linspace`
Linear spaced array from min to max
:meth:`load`
load ASCII file - use numpy.loadtxt
:meth:`meshgrid`
Make a 2D grid from two 1D arrays (numpy.meshgrid)
:meth:`polyfit`
least squares best polynomial fit of x to y (numpy.polyfit)
:meth:`polyval`
evaluate a vector for a vector of polynomial coeffs (numpy.polyval)
:meth:`save`
save ASCII file - use numpy.savetxt
:meth:`trapz`
trapezoidal integration (trapz(x,y) -> numpy.trapz(y,x))
:meth:`vander`
the Vandermonde matrix (numpy.vander)
""" |
"""automatically manage newlines in repository files
This extension allows you to manage the type of line endings (CRLF or
LF) that are used in the repository and in the local working
directory. That way you can get CRLF line endings on Windows and LF on
Unix/Mac, thereby letting everybody use their OS native line endings.
The extension reads its configuration from a versioned ``.hgeol``
configuration file found in the root of the working copy. The
``.hgeol`` file uses the same syntax as all other Mercurial
configuration files. It uses two sections, ``[patterns]`` and
``[repository]``.
The ``[patterns]`` section specifies how line endings should be
converted between the working copy and the repository. The format is
specified by a file pattern. The first match is used, so put more
specific patterns first. The available line endings are ``LF``,
``CRLF``, and ``BIN``.
Files with the declared format of ``CRLF`` or ``LF`` are always
checked out and stored in the repository in that format and files
declared to be binary (``BIN``) are left unchanged. Additionally,
``native`` is an alias for checking out in the platform's default line
ending: ``LF`` on Unix (including Mac OS X) and ``CRLF`` on
Windows. Note that ``BIN`` (do nothing to line endings) is Mercurial's
default behaviour; it is only needed if you need to override a later,
more general pattern.
The optional ``[repository]`` section specifies the line endings to
use for files stored in the repository. It has a single setting,
``native``, which determines the storage line endings for files
declared as ``native`` in the ``[patterns]`` section. It can be set to
``LF`` or ``CRLF``. The default is ``LF``. For example, this means
that on Windows, files configured as ``native`` (``CRLF`` by default)
will be converted to ``LF`` when stored in the repository. Files
declared as ``LF``, ``CRLF``, or ``BIN`` in the ``[patterns]`` section
are always stored as-is in the repository.
Example versioned ``.hgeol`` file::
[patterns]
**.py = native
**.vcproj = CRLF
**.txt = native
Makefile = LF
**.jpg = BIN
[repository]
native = LF
.. note::
The rules will first apply when files are touched in the working
copy, e.g. by updating to null and back to tip to touch all files.
The extension uses an optional ``[eol]`` section read from both the
normal Mercurial configuration files and the ``.hgeol`` file, with the
latter overriding the former. You can use that section to control the
overall behavior. There are three settings:
- ``eol.native`` (default ``os.linesep``) can be set to ``LF`` or
``CRLF`` to override the default interpretation of ``native`` for
checkout. This can be used with :hg:`archive` on Unix, say, to
generate an archive where files have line endings for Windows.
- ``eol.only-consistent`` (default True) can be set to False to make
the extension convert files with inconsistent EOLs. Inconsistent
means that there is both ``CRLF`` and ``LF`` present in the file.
Such files are normally not touched under the assumption that they
have mixed EOLs on purpose.
- ``eol.fix-trailing-newline`` (default False) can be set to True to
ensure that converted files end with an EOL character (either ``\\n``
or ``\\r\\n`` as per the configured patterns).
The extension provides ``cleverencode:`` and ``cleverdecode:`` filters
like the deprecated win32text extension does. This means that you can
disable win32text and enable eol and your filters will still work. You
only need these filters until you have prepared a ``.hgeol`` file.
The ``win32text.forbid*`` hooks provided by the win32text extension
have been unified into a single hook named ``eol.checkheadshook``. The
hook will look up the expected line endings from the ``.hgeol`` file,
which means you must migrate to a ``.hgeol`` file first before using
the hook. ``eol.checkheadshook`` only checks heads, intermediate
invalid revisions will be pushed. To forbid them completely, use the
``eol.checkallhook`` hook. These hooks are best used as
``pretxnchangegroup`` hooks.
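For example, a server repository might enable the head check with an hgrc
snippet along these lines (the ``python:`` hook spelling is an assumption based
on Mercurial's standard in-process hook syntax)::
  [hooks]
  pretxnchangegroup = python:hgext.eol.checkheadshook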
See :hg:`help patterns` for more information about the glob patterns
used.
""" |
"""
Basic functions used by several sub-packages and
useful to have in the main name-space.
Type Handling
-------------
================ ===================
iscomplexobj Test for complex object, scalar result
isrealobj Test for real object, scalar result
iscomplex Test for complex elements, array result
isreal Test for real elements, array result
imag Imaginary part
real Real part
real_if_close Turns complex number with tiny imaginary part to real
isneginf Tests for negative infinity, array result
isposinf Tests for positive infinity, array result
isnan Tests for nans, array result
isinf Tests for infinity, array result
isfinite Tests for finite numbers, array result
isscalar True if argument is a scalar
nan_to_num Replaces NaN's with 0 and infinities with large numbers
cast Dictionary of functions to force cast to each type
common_type Determine the minimum common type code for a group
of arrays
mintypecode Return minimal allowed common typecode.
================ ===================
Index Tricks
------------
================ ===================
mgrid Method which allows easy construction of N-d
'mesh-grids'
``r_`` Append and construct arrays: turns slice objects into
ranges and concatenates them, for 2d arrays appends rows.
index_exp Konrad Hinsen's index_expression class instance which
can be useful for building complicated slicing syntax.
================ ===================
Useful Functions
----------------
================ ===================
select Extension of where to multiple conditions and choices
extract Extract 1d array from flattened array according to mask
insert Insert 1d array of values into Nd array according to mask
linspace Evenly spaced samples in linear space
logspace Evenly spaced samples in logarithmic space
fix Round x to nearest integer towards zero
mod Modulo mod(x,y) = x % y except keeps sign of y
amax Array maximum along axis
amin Array minimum along axis
ptp Array max-min along axis
cumsum Cumulative sum along axis
prod Product of elements along axis
cumprod          Cumulative product along axis
diff Discrete differences along axis
angle Returns angle of complex argument
unwrap Unwrap phase along given axis (1-d algorithm)
sort_complex Sort a complex-array (based on real, then imaginary)
trim_zeros Trim the leading and trailing zeros from 1D array.
vectorize A class that wraps a Python function taking scalar
arguments into a generalized function which can handle
arrays of arguments using the broadcast rules of
numerix Python.
================ ===================
Shape Manipulation
------------------
================ ===================
squeeze          Return an array with length-one dimensions removed.
atleast_1d Force arrays to be > 1D
atleast_2d Force arrays to be > 2D
atleast_3d Force arrays to be > 3D
vstack Stack arrays vertically (row on row)
hstack Stack arrays horizontally (column on column)
column_stack Stack 1D arrays as columns into 2D array
dstack Stack arrays depthwise (along third dimension)
split Divide array into a list of sub-arrays
hsplit Split into columns
vsplit Split into rows
dsplit Split along third dimension
================ ===================
Matrix (2D Array) Manipulations
-------------------------------
================ ===================
fliplr 2D array with columns flipped
flipud 2D array with rows flipped
rot90 Rotate a 2D array a multiple of 90 degrees
eye Return a 2D array with ones down a given diagonal
diag Construct a 2D array from a vector, or return a given
diagonal from a 2D array.
mat Construct a Matrix
bmat Build a Matrix from blocks
================ ===================
Polynomials
-----------
================ ===================
poly1d A one-dimensional polynomial class
poly Return polynomial coefficients from roots
roots Find roots of polynomial given coefficients
polyint Integrate polynomial
polyder Differentiate polynomial
polyadd Add polynomials
polysub          Subtract polynomials
polymul Multiply polynomials
polydiv Divide polynomials
polyval Evaluate polynomial at given argument
================ ===================
Import Tricks
-------------
================ ===================
ppimport Postpone module import until trying to use it
ppimport_attr Postpone module import until trying to use its attribute
ppresolve Import postponed module and return it.
================ ===================
Machine Arithmetics
-------------------
================ ===================
machar_single Single precision floating point arithmetic parameters
machar_double Double precision floating point arithmetic parameters
================ ===================
Threading Tricks
----------------
================ ===================
ParallelExec Execute commands in parallel thread.
================ ===================
1D Array Set Operations
-----------------------
Set operations for 1D numeric arrays based on sort() function.
================ ===================
ediff1d Array difference (auxiliary function).
unique Unique elements of an array.
intersect1d Intersection of 1D arrays with unique elements.
setxor1d Set exclusive-or of 1D arrays with unique elements.
in1d Test whether elements in a 1D array are also present in
another array.
union1d Union of 1D arrays with unique elements.
setdiff1d Set difference of 1D arrays with unique elements.
================ ===================
""" |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# adapted from http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
# -thepaul
# This is an implementation of wcwidth() and wcswidth() (defined in
# IEEE Std 1003.1-2001) for Unicode.
#
# http://www.opengroup.org/onlinepubs/007904975/functions/wcwidth.html
# http://www.opengroup.org/onlinepubs/007904975/functions/wcswidth.html
#
# In fixed-width output devices, Latin characters all occupy a single
# "cell" position of equal width, whereas ideographic CJK characters
# occupy two such cells. Interoperability between terminal-line
# applications and (teletype-style) character terminals using the
# UTF-8 encoding requires agreement on which character should advance
# the cursor by how many cell positions. No established formal
# standards exist at present on which Unicode character shall occupy
# how many cell positions on character terminals. These routines are
# a first attempt of defining such behavior based on simple rules
# applied to data provided by the Unicode Consortium.
#
# For some graphical characters, the Unicode standard explicitly
# defines a character-cell width via the definition of the East Asian
# FullWidth (F), Wide (W), Half-width (H), and Narrow (Na) classes.
# In all these cases, there is no ambiguity about which width a
# terminal shall use. For characters in the East Asian Ambiguous (A)
# class, the width choice depends purely on a preference of backward
# compatibility with either historic CJK or Western practice.
# Choosing single-width for these characters is easy to justify as
# the appropriate long-term solution, as the CJK practice of
# displaying these characters as double-width comes from historic
# implementation simplicity (8-bit encoded characters were displayed
# single-width and 16-bit ones double-width, even for Greek,
# Cyrillic, etc.) and not any typographic considerations.
#
# Much less clear is the choice of width for the Not East Asian
# (Neutral) class. Existing practice does not dictate a width for any
# of these characters. It would nevertheless make sense
# typographically to allocate two character cells to characters such
# as for instance EM SPACE or VOLUME INTEGRAL, which cannot be
# represented adequately with a single-width glyph. The following
# routines at present merely assign a single-cell width to all
# neutral characters, in the interest of simplicity. This is not
# entirely satisfactory and should be reconsidered before
# establishing a formal standard in this area. At the moment, the
# decision which Not East Asian (Neutral) characters should be
# represented by double-width glyphs cannot yet be answered by
# applying a simple rule from the Unicode database content. Setting
# up a proper standard for the behavior of UTF-8 character terminals
# will require a careful analysis not only of each Unicode character,
# but also of each presentation form, something the author of these
# routines has so far avoided doing.
#
# http://www.unicode.org/unicode/reports/tr11/
#
# NAME -- 2007-05-26 (Unicode 5.0)
#
# Permission to use, copy, modify, and distribute this software
# for any purpose and without fee is hereby granted. The author
# disclaims all warranties with regard to this software.
#
# Latest C version: http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
# auxiliary function for binary search in interval table
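# A minimal sketch of that auxiliary function and of a wcwidth() built on
# top of it. The interval data and range checks below are illustrative,
# incomplete stand-ins for the full sorted Unicode tables carried by the
# real implementation.

def _bisearch(ucs, table):
    """Return 1 if ucs falls inside one of the sorted (first, last) intervals."""
    if not table or ucs < table[0][0] or ucs > table[-1][1]:
        return 0
    lo, hi = 0, len(table) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if ucs > table[mid][1]:
            lo = mid + 1
        elif ucs < table[mid][0]:
            hi = mid - 1
        else:
            return 1
    return 0

# Tiny stand-in for the real table of zero-width combining characters.
SAMPLE_COMBINING = [(0x0300, 0x036F), (0x200B, 0x200F)]

def wcwidth_sketch(ch):
    ucs = ord(ch)
    if ucs == 0:
        return 0                      # NUL occupies no cells
    if ucs < 32 or 0x7F <= ucs < 0xA0:
        return -1                     # other control characters are undefined
    if _bisearch(ucs, SAMPLE_COMBINING):
        return 0                      # combining marks occupy no cell
    # Abbreviated check for East Asian Wide (W) and FullWidth (F) ranges,
    # which advance the cursor by two cells.
    if (0x1100 <= ucs <= 0x115F or 0x2E80 <= ucs <= 0xA4CF or
            0xAC00 <= ucs <= 0xD7A3 or 0xF900 <= ucs <= 0xFAFF or
            0xFF00 <= ucs <= 0xFF60 or 0xFFE0 <= ucs <= 0xFFE6):
        return 2
    return 1                          # everything else: one cell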
|
"""Search and retreive information using the EUtils history.
EUtils has two major modes. One uses history while the other uses
database identifiers. This is a high-level interface for working with
the history. You should use this module if you expect to work with
a large or unknown number of identifiers.
See DBIdsClient if you want to get information about a set of known
database identifiers.
>>> from Bio.EUtils import HistoryClient
>>> client = HistoryClient.HistoryClient()
>>> cancer = client.search("cancer")
>>> print len(cancer)
1458353
>>>
That's quite a few hits. Most people would like to see the first few
records and then refine the search.
>>> print cancer[:5].efetch(retmode = "text", rettype = "docsum").read()
1: NAME reply: Adjuvant therapy for rectal cancer cannot be based on the
results of other surgeons (Br J Surg 2002; 89: 946-947).
Br J Surg. 2003 Jan;90(1):121-122.
PMID: 12520589 [PubMed - as supplied by publisher]
2: NAME NAME therapy for rectal cancer cannot be based on the results of other
surgeons (Br J Surg 2002; 89: 946-947).
Br J Surg. 2003 Jan;90(1):121.
PMID: 12520588 [PubMed - as supplied by publisher]
3: NAME NAME NAME NAME NAME NAME comparison of video-assisted thoracoscopic oesophagectomy and radical lymph
node dissection for squamous cell cancer of the oesophagus with open operation.
Br J Surg. 2003 Jan;90(1):108-13.
PMID: 12520585 [PubMed - in process]
4: NAME NAME NAME NAME evaluation of mucin antigen and E-cadherin expression may help select
patients with gastric cancer suitable for minimally invasive therapy.
Br J Surg. 2003 Jan;90(1):95-101.
PMID: 12520583 [PubMed - in process]
5: NAME NAME NAME NAME NAME NAME of surgical procedure for gastric cancer on quality of life.
Br J Surg. 2003 Jan;90(1):91-4.
PMID: 12520582 [PubMed - in process]
>>>
Now refine the query to publications from the last day
>>> from Bio import EUtils
>>> recent_cancer = client.search("#%s" % (cancer.query_key,),
... daterange = EUtils.WithinNDays(1))
>>> len(recent_cancer)
106
>>>
Still quite a few. What's the last one about?
>>> for k, v in recent_cancer[-1].summary().dataitems.allitems():
... print k, "=", v
...
PubDate = 2002/12/01
Source = Nippon Shokakibyo Gakkai NAME
NAME = NAME
Title = [Strategy against cancer in 21 century, with emphasis of cancer prevention and refractory cancer]
Volume = 99
Pages = 1423-7
EntrezDate = 2003/01/10
PubMedId = 12518389
MedlineId = 22406828
Lang = Japanese
PubType =
RecordStatus = PubMed - in process
Issue = 12
SO = 2002 Dec;99(12):1423-7
DOI =
JTA = KJY
ISSN = 0446-6586
PubId =
PubStatus = 4
Status = 6
HasAbstract = 0
ArticleIds = {'MedlineUID': u'22406828', 'PubMedId': u'12518389'}
>>>
Here's an interesting one. Which articles are related to this one but
are not about cancer? First, get the related articles.
>>> neighbors = recent_cancer[-1].neighbor_links()
>>> dbids = neighbors.linksetdbs["pubmed_pubmed"].dbids
>>> len(dbids)
10296
>>>
Upload that back to the server
>>> related_result = client.post(dbids)
>>>
>>> non_cancer = client.search("#%s NOT #%s" % (related_result.query_key,
... cancer.query_key))
>>> len(non_cancer)
4000
>>>
The HistoryClient instance has an attribute named 'query_history'
which stores the searches done so far, keyed by the query_key value
assigned by the server. The history on the server can expire. If
that is detected during a search then previous results are invalidated
and removed from the query_history. Future requests from invalidated
results will raise an error.
If a request is made from a search which has not been invalidated but
whose history has expired, then queries like 'summary' will raise an
error, while other requests (like 'dbids') may appear to succeed but
return undefined information.
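For example, an illustrative sketch of using query_history (this assumes it
behaves as a mapping from query_key to the corresponding search result, as
described above; the names here are only for illustration):

>>> history = client.query_history
>>> earlier = history[cancer.query_key]   # the original 'cancer' search, if still valid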
""" |