#!/usr/bin/env python # -*- coding: utf-8 -*- # ***********************IMPORTANT NMAP LICENSE TERMS************************ # * * # * The Nmap Security Scanner is (C) 1996-2013 Insecure.Com LLC. Nmap is * # * also a registered trademark of Insecure.Com LLC. This program is free * # * software; you may redistribute and/or modify it under the terms of the * # * GNU General Public License as published by the Free Software * # * Foundation; Version 2 ("GPL"), BUT ONLY WITH ALL OF THE CLARIFICATIONS * # * AND EXCEPTIONS DESCRIBED HEREIN. This guarantees your right to use, * # * modify, and redistribute this software under certain conditions. If * # * you wish to embed Nmap technology into proprietary software, we sell * # * alternative licenses (contact EMAIL Dozens of software * # * vendors already license Nmap technology such as host discovery, port * # * scanning, OS detection, version detection, and the Nmap Scripting * # * Engine. * # * * # * Note that the GPL places important restrictions on "derivative works", * # * yet it does not provide a detailed definition of that term. To avoid * # * misunderstandings, we interpret that term as broadly as copyright law * # * allows. For example, we consider an application to constitute a * # * derivative work for the purpose of this license if it does any of the * # * following with any software or content covered by this license * # * ("Covered Software"): * # * * # * o Integrates source code from Covered Software. * # * * # * o Reads or includes copyrighted data files, such as Nmap's nmap-os-db * # * or nmap-service-probes. * # * * # * o Is designed specifically to execute Covered Software and parse the * # * results (as opposed to typical shell or execution-menu apps, which will * # * execute anything you tell them to). * # * * # * o Includes Covered Software in a proprietary executable installer. The * # * installers produced by InstallShield are an example of this. Including * # * Nmap with other software in compressed or archival form does not * # * trigger this provision, provided appropriate open source decompression * # * or de-archiving software is widely available for no charge. For the * # * purposes of this license, an installer is considered to include Covered * # * Software even if it actually retrieves a copy of Covered Software from * # * another source during runtime (such as by downloading it from the * # * Internet). * # * * # * o Links (statically or dynamically) to a library which does any of the * # * above. * # * * # * o Executes a helper program, module, or script to do any of the above. * # * * # * This list is not exclusive, but is meant to clarify our interpretation * # * of derived works with some common examples. Other people may interpret * # * the plain GPL differently, so we consider this a special exception to * # * the GPL that we apply to Covered Software. Works which meet any of * # * these conditions must conform to all of the terms of this license, * # * particularly including the GPL Section 3 requirements of providing * # * source code and allowing free redistribution of the work as a whole. * # * * # * As another special exception to the GPL terms, Insecure.Com LLC grants * # * permission to link the code of this program with any version of the * # * OpenSSL library which is distributed under a license identical to that * # * listed in the included docs/licenses/OpenSSL.txt file, and distribute * # * linked combinations including the two. 
* # * * # * Any redistribution of Covered Software, including any derived works, * # * must obey and carry forward all of the terms of this license, including * # * obeying all GPL rules and restrictions. For example, source code of * # * the whole work must be provided and free redistribution must be * # * allowed. All GPL references to "this License", are to be treated as * # * including the terms and conditions of this license text as well. * # * * # * Because this license imposes special exceptions to the GPL, Covered * # * Work may not be combined (even as part of a larger work) with plain GPL * # * software. The terms, conditions, and exceptions of this license must * # * be included as well. This license is incompatible with some other open * # * source licenses as well. In some cases we can relicense portions of * # * Nmap or grant special permissions to use it in other open source * # * software. Please contact EMAIL with any such requests. * # * Similarly, we don't incorporate incompatible open source software into * # * Covered Software without special permission from the copyright holders. * # * * # * If you have any questions about the licensing restrictions on using * # * Nmap in other works, we are happy to help. As mentioned above, we also * # * offer alternative licenses to integrate Nmap into proprietary * # * applications and appliances. These contracts have been sold to dozens * # * of software vendors, and generally include a perpetual license as well * # * as providing for priority support and updates. They also fund the * # * continued development of Nmap. Please email EMAIL for further * # * information. * # * * # * If you have received a written license agreement or contract for * # * Covered Software stating terms other than these, you may choose to use * # * and redistribute Covered Software under those terms instead of these. * # * * # * Source is provided to this software because we believe users have a * # * right to know exactly what a program is going to do before they run it. * # * This also allows you to audit the software for security holes (none * # * have been found so far). * # * * # * Source code also allows you to port Nmap to new platforms, fix bugs, * # * and add new features. You are highly encouraged to send your changes * # * to the EMAIL mailing list for possible incorporation into the * # * main distribution. By sending these changes to Fyodor or one of the * # * Insecure.Org development mailing lists, or checking them into the Nmap * # * source code repository, it is understood (unless you specify otherwise) * # * that you are offering the Nmap Project (Insecure.Com LLC) the * # * unlimited, non-exclusive right to reuse, modify, and relicense the * # * code. Nmap will always be available Open Source, but this is important * # * because the inability to relicense code has caused devastating problems * # * for other Free Software projects (such as KDE and NASM). We also * # * occasionally relicense the code to third parties as discussed above. * # * If you wish to specify special license conditions of your * # * contributions, just say so when you send them. * # * * # * This program is distributed in the hope that it will be useful, but * # * WITHOUT ANY WARRANTY; without even the implied warranty of * # * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the Nmap * # * license file for more details (it's in a COPYING file included with * # * Nmap, and also available from https://svn.nmap.org/nmap/COPYING * # * * # ***************************************************************************/
""" ============ Array basics ============ Array types and conversions between types ========================================= Numpy supports a much greater variety of numerical types than Python does. This section shows which are available, and how to modify an array's data-type. ========== ========================================================= Data type Description ========== ========================================================= bool Boolean (True or False) stored as a byte int Platform integer (normally either ``int32`` or ``int64``) int8 Byte (-128 to 127) int16 Integer (-32768 to 32767) int32 Integer (-2147483648 to 2147483647) int64 Integer (9223372036854775808 to 9223372036854775807) uint8 Unsigned integer (0 to 255) uint16 Unsigned integer (0 to 65535) uint32 Unsigned integer (0 to 4294967295) uint64 Unsigned integer (0 to 18446744073709551615) float Shorthand for ``float64``. float32 Single precision float: sign bit, 8 bits exponent, 23 bits mantissa float64 Double precision float: sign bit, 11 bits exponent, 52 bits mantissa complex Shorthand for ``complex128``. complex64 Complex number, represented by two 32-bit floats (real and imaginary components) complex128 Complex number, represented by two 64-bit floats (real and imaginary components) ========== ========================================================= Numpy numerical types are instances of ``dtype`` (data-type) objects, each having unique characteristics. Once you have imported NumPy using :: >>> import numpy as np the dtypes are available as ``np.bool``, ``np.float32``, etc. Advanced types, not listed in the table above, are explored in section :ref:`structured_arrays`. There are 5 basic numerical types representing booleans (bool), integers (int), unsigned integers (uint) floating point (float) and complex. Those with numbers in their name indicate the bitsize of the type (i.e. how many bits are needed to represent a single value in memory). Some types, such as ``int`` and ``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit vs. 64-bit machines). This should be taken into account when interfacing with low-level code (such as C or Fortran) where the raw memory is addressed. Data-types can be used as functions to convert python numbers to array scalars (see the array scalar section for an explanation), python sequences of numbers to arrays of that type, or as arguments to the dtype keyword that many numpy functions or methods accept. Some examples:: >>> import numpy as np >>> x = np.float32(1.0) >>> x 1.0 >>> y = np.int_([1,2,4]) >>> y array([1, 2, 4]) >>> z = np.arange(3, dtype=np.uint8) >>> z array([0, 1, 2], dtype=uint8) Array types can also be referred to by character codes, mostly to retain backward compatibility with older packages such as Numeric. Some documentation may still refer to these, for example:: >>> np.array([1, 2, 3], dtype='f') array([ 1., 2., 3.], dtype=float32) We recommend using dtype objects instead. To convert the type of an array, use the .astype() method (preferred) or the type itself as a function. For example: :: >>> z.astype(float) #doctest: +NORMALIZE_WHITESPACE array([ 0., 1., 2.]) >>> np.int8(z) array([0, 1, 2], dtype=int8) Note that, above, we use the *Python* float object as a dtype. NumPy knows that ``int`` refers to ``np.int``, ``bool`` means ``np.bool`` and that ``float`` is ``np.float``. The other data-types do not have Python equivalents. 
To determine the type of an array, look at the dtype attribute::

  >>> z.dtype
  dtype('uint8')

dtype objects also contain information about the type, such as its bit-width
and its byte-order. The data type can also be used indirectly to query
properties of the type, such as whether it is an integer::

  >>> d = np.dtype(int)
  >>> d
  dtype('int32')

  >>> np.issubdtype(d, int)
  True

  >>> np.issubdtype(d, float)
  False

Array Scalars
=============

NumPy generally returns elements of arrays as array scalars (a scalar with an
associated dtype). Array scalars differ from Python scalars, but for the most
part they can be used interchangeably (the primary exception is for versions
of Python older than v2.x, where integer array scalars cannot act as indices
for lists and tuples). There are some exceptions, such as when code requires
very specific attributes of a scalar or when it checks specifically whether a
value is a Python scalar. Generally, problems are easily fixed by explicitly
converting array scalars to Python scalars, using the corresponding Python
type function (e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``).

The primary advantage of using array scalars is that they preserve the array
type (Python may not have a matching scalar type available, e.g. ``int16``).
Therefore, the use of array scalars ensures identical behaviour between arrays
and scalars, irrespective of whether the value is inside an array or not.
NumPy scalars also have many of the same methods arrays do. """
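As mentioned in the Array Scalars section above, converting an array scalar to
a plain Python scalar is a direct call to the matching Python type function (a
minimal sketch; the values are illustrative)::

  >>> import numpy as np
  >>> x = np.float32(5.5)
  >>> float(x)          # a plain Python float
  5.5
  >>> int(np.int16(7))  # a plain Python int
  7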
""" TestCmd.py: a testing framework for commands and scripts. The TestCmd module provides a framework for portable automated testing of executable commands and scripts (in any language, not just Python), especially commands and scripts that require file system interaction. In addition to running tests and evaluating conditions, the TestCmd module manages and cleans up one or more temporary workspace directories, and provides methods for creating files and directories in those workspace directories from in-line data, here-documents), allowing tests to be completely self-contained. A TestCmd environment object is created via the usual invocation: import TestCmd test = TestCmd.TestCmd() There are a bunch of keyword arguments available at instantiation: test = TestCmd.TestCmd(description = 'string', program = 'program_or_script_to_test', interpreter = 'script_interpreter', workdir = 'prefix', subdir = 'subdir', verbose = Boolean, match = default_match_function, diff = default_diff_function, combine = Boolean) There are a bunch of methods that let you do different things: test.verbose_set(1) test.description_set('string') test.program_set('program_or_script_to_test') test.interpreter_set('script_interpreter') test.interpreter_set(['script_interpreter', 'arg']) test.workdir_set('prefix') test.workdir_set('') test.workpath('file') test.workpath('subdir', 'file') test.subdir('subdir', ...) test.rmdir('subdir', ...) test.write('file', "contents\n") test.write(['subdir', 'file'], "contents\n") test.read('file') test.read(['subdir', 'file']) test.read('file', mode) test.read(['subdir', 'file'], mode) test.writable('dir', 1) test.writable('dir', None) test.preserve(condition, ...) test.cleanup(condition) test.command_args(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program') test.run(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program', chdir = 'directory_to_chdir_to', stdin = 'input to feed to the program\n') universal_newlines = True) p = test.start(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program', universal_newlines = None) test.finish(self, p) test.pass_test() test.pass_test(condition) test.pass_test(condition, function) test.fail_test() test.fail_test(condition) test.fail_test(condition, function) test.fail_test(condition, function, skip) test.no_result() test.no_result(condition) test.no_result(condition, function) test.no_result(condition, function, skip) test.stdout() test.stdout(run) test.stderr() test.stderr(run) test.symlink(target, link) test.banner(string) test.banner(string, width) test.diff(actual, expected) test.match(actual, expected) test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n") test.match_exact(["actual 1\n", "actual 2\n"], ["expected 1\n", "expected 2\n"]) test.match_re("actual 1\nactual 2\n", regex_string) test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes) test.match_re_dotall("actual 1\nactual 2\n", regex_string) test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes) test.tempdir() test.tempdir('temporary-directory') test.sleep() test.sleep(seconds) test.where_is('foo') test.where_is('foo', 'PATH1:PATH2') test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4') test.unlink('file') test.unlink('subdir', 'file') The TestCmd module provides pass_test(), fail_test(), and no_result() unbound functions that report test results for use with the 
Aegis change management system. These methods terminate the test immediately,
reporting PASSED, FAILED, or NO RESULT respectively, and exiting with status 0
(success), 1 or 2 respectively. This allows for a distinction between an
actual failed test and a test that could not be properly evaluated because of
an external condition (such as a full file system or incorrect permissions).

    import TestCmd

    TestCmd.pass_test()
    TestCmd.pass_test(condition)
    TestCmd.pass_test(condition, function)

    TestCmd.fail_test()
    TestCmd.fail_test(condition)
    TestCmd.fail_test(condition, function)
    TestCmd.fail_test(condition, function, skip)

    TestCmd.no_result()
    TestCmd.no_result(condition)
    TestCmd.no_result(condition, function)
    TestCmd.no_result(condition, function, skip)

The TestCmd module also provides unbound functions that handle matching in the
same way as the match_*() methods described above.

    import TestCmd

    test = TestCmd.TestCmd(match = TestCmd.match_exact)
    test = TestCmd.TestCmd(match = TestCmd.match_re)
    test = TestCmd.TestCmd(match = TestCmd.match_re_dotall)

The TestCmd module provides unbound functions that can be used for the "diff"
argument to TestCmd.TestCmd instantiation:

    import TestCmd

    test = TestCmd.TestCmd(match = TestCmd.match_re,
                           diff = TestCmd.diff_re)
    test = TestCmd.TestCmd(diff = TestCmd.simple_diff)

The "diff" argument can also be used with standard difflib functions:

    import difflib

    test = TestCmd.TestCmd(diff = difflib.context_diff)
    test = TestCmd.TestCmd(diff = difflib.unified_diff)

Lastly, the where_is() method also exists in an unbound function version.

    import TestCmd

    TestCmd.where_is('foo')
    TestCmd.where_is('foo', 'PATH1:PATH2')
    TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4')
"""
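A minimal self-contained test built from the API above might look like the
following (a sketch only; the program under test and its expected output are
hypothetical, and the status/stdout checks assume the attributes documented
above are populated by run()):

    import TestCmd

    test = TestCmd.TestCmd(program = 'echo', workdir = '')    # fresh temporary workdir
    test.run(arguments = 'hello')                             # executes: echo hello
    test.fail_test(test.status != 0)                          # FAILED if exit status non-zero
    test.fail_test(not test.match(test.stdout(), "hello\n"))  # FAILED if output differs
    test.pass_test()                                          # otherwise PASSED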
""" ============================= Byteswapping and byte order ============================= Introduction to byte ordering and ndarrays ========================================== The ``ndarray`` is an object that provide a python array interface to data in memory. It often happens that the memory that you want to view with an array is not of the same byte ordering as the computer on which you are running Python. For example, I might be working on a computer with a little-endian CPU - such as an Intel Pentium, but I have loaded some data from a file written by a computer that is big-endian. Let's say I have loaded 4 bytes from a file written by a Sun (big-endian) computer. I know that these 4 bytes represent two 16-bit integers. On a big-endian machine, a two-byte integer is stored with the Most Significant Byte (MSB) first, and then the Least Significant Byte (LSB). Thus the bytes are, in memory order: #. MSB integer 1 #. LSB integer 1 #. MSB integer 2 #. LSB integer 2 Let's say the two integers were in fact 1 and 770. Because 770 = 256 * 3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2. The bytes I have loaded from the file would have these contents: >>> big_end_str = chr(0) + chr(1) + chr(3) + chr(2) >>> big_end_str '\\x00\\x01\\x03\\x02' We might want to use an ``ndarray`` to access these integers. In that case, we can create an array around this memory, and tell numpy that there are two integers, and that they are 16 bit and big-endian: >>> import numpy as np >>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_str) >>> big_end_arr[0] 1 >>> big_end_arr[1] 770 Note the array ``dtype`` above of ``>i2``. The ``>`` means 'big-endian' (``<`` is little-endian) and ``i2`` means 'signed 2-byte integer'. For example, if our data represented a single unsigned 4-byte little-endian integer, the dtype string would be ``<u4``. In fact, why don't we try that? >>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_str) >>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3 True Returning to our ``big_end_arr`` - in this case our underlying data is big-endian (data endianness) and we've set the dtype to match (the dtype is also big-endian). However, sometimes you need to flip these around. Changing byte ordering ====================== As you can imagine from the introduction, there are two ways you can affect the relationship between the byte ordering of the array and the underlying memory it is looking at: * Change the byte-ordering information in the array dtype so that it interprets the undelying data as being in a different byte order. This is the role of ``arr.newbyteorder()`` * Change the byte-ordering of the underlying data, leaving the dtype interpretation as it was. This is what ``arr.byteswap()`` does. The common situations in which you need to change byte ordering are: #. Your data and dtype endianess don't match, and you want to change the dtype so that it matches the data. #. Your data and dtype endianess don't match, and you want to swap the data so that they match the dtype #. 
#. Your data and dtype endianness match, but you want the data swapped and
   the dtype to reflect this

Data and dtype endianness don't match, change dtype to match data
------------------------------------------------------------------

We make something where they don't match:

>>> wrong_end_dtype_arr = np.ndarray(shape=(2,), dtype='<i2', buffer=big_end_str)
>>> wrong_end_dtype_arr[0]
256

The obvious fix for this situation is to change the dtype so it gives the
correct endianness:

>>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder()
>>> fixed_end_dtype_arr[0]
1

Note that the array has not changed in memory:

>>> fixed_end_dtype_arr.tobytes() == big_end_str
True

Data and type endianness don't match, change data to match dtype
----------------------------------------------------------------

You might want to do this if you need the data in memory to be a certain
ordering. For example you might be writing the memory out to a file that needs
a certain byte ordering.

>>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap()
>>> fixed_end_mem_arr[0]
1

Now the array *has* changed in memory:

>>> fixed_end_mem_arr.tobytes() == big_end_str
False

Data and dtype endianness match, swap data and dtype
----------------------------------------------------

You may have a correctly specified array dtype, but you need the array to have
the opposite byte order in memory, and you want the dtype to match so the
array values make sense. In this case you just do both of the previous
operations:

>>> swapped_end_arr = big_end_arr.byteswap().newbyteorder()
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_str
False

An easier way of casting the data to a specific dtype and byte ordering can be
achieved with the ndarray astype method:

>>> swapped_end_arr = big_end_arr.astype('<i2')
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_str
False
"""
""" Basic functions used by several sub-packages and useful to have in the main name-space. Type Handling ------------- ================ =================== iscomplexobj Test for complex object, scalar result isrealobj Test for real object, scalar result iscomplex Test for complex elements, array result isreal Test for real elements, array result imag Imaginary part real Real part real_if_close Turns complex number with tiny imaginary part to real isneginf Tests for negative infinity, array result isposinf Tests for positive infinity, array result isnan Tests for nans, array result isinf Tests for infinity, array result isfinite Tests for finite numbers, array result isscalar True if argument is a scalar nan_to_num Replaces NaN's with 0 and infinities with large numbers cast Dictionary of functions to force cast to each type common_type Determine the minimum common type code for a group of arrays mintypecode Return minimal allowed common typecode. ================ =================== Index Tricks ------------ ================ =================== mgrid Method which allows easy construction of N-d 'mesh-grids' ``r_`` Append and construct arrays: turns slice objects into ranges and concatenates them, for 2d arrays appends rows. index_exp Konrad Hinsen's index_expression class instance which can be useful for building complicated slicing syntax. ================ =================== Useful Functions ---------------- ================ =================== select Extension of where to multiple conditions and choices extract Extract 1d array from flattened array according to mask insert Insert 1d array of values into Nd array according to mask linspace Evenly spaced samples in linear space logspace Evenly spaced samples in logarithmic space fix Round x to nearest integer towards zero mod Modulo mod(x,y) = x % y except keeps sign of y amax Array maximum along axis amin Array minimum along axis ptp Array max-min along axis cumsum Cumulative sum along axis prod Product of elements along axis cumprod Cumluative product along axis diff Discrete differences along axis angle Returns angle of complex argument unwrap Unwrap phase along given axis (1-d algorithm) sort_complex Sort a complex-array (based on real, then imaginary) trim_zeros Trim the leading and trailing zeros from 1D array. vectorize A class that wraps a Python function taking scalar arguments into a generalized function which can handle arrays of arguments using the broadcast rules of numerix Python. ================ =================== Shape Manipulation ------------------ ================ =================== squeeze Return a with length-one dimensions removed. atleast_1d Force arrays to be > 1D atleast_2d Force arrays to be > 2D atleast_3d Force arrays to be > 3D vstack Stack arrays vertically (row on row) hstack Stack arrays horizontally (column on column) column_stack Stack 1D arrays as columns into 2D array dstack Stack arrays depthwise (along third dimension) stack Stack arrays along a new axis split Divide array into a list of sub-arrays hsplit Split into columns vsplit Split into rows dsplit Split along third dimension ================ =================== Matrix (2D Array) Manipulations ------------------------------- ================ =================== fliplr 2D array with columns flipped flipud 2D array with rows flipped rot90 Rotate a 2D array a multiple of 90 degrees eye Return a 2D array with ones down a given diagonal diag Construct a 2D array from a vector, or return a given diagonal from a 2D array. 
mat              Construct a Matrix
bmat             Build a Matrix from blocks
================ ===================

Polynomials
-----------
================ ===================
poly1d           A one-dimensional polynomial class
poly             Return polynomial coefficients from roots
roots            Find roots of polynomial given coefficients
polyint          Integrate polynomial
polyder          Differentiate polynomial
polyadd          Add polynomials
polysub          Subtract polynomials
polymul          Multiply polynomials
polydiv          Divide polynomials
polyval          Evaluate polynomial at given argument
================ ===================

Iterators
---------
================ ===================
Arrayterator     A buffered iterator for big arrays.
================ ===================

Import Tricks
-------------
================ ===================
ppimport         Postpone module import until trying to use it
ppimport_attr    Postpone module import until trying to use its attribute
ppresolve        Import postponed module and return it.
================ ===================

Machine Arithmetics
-------------------
================ ===================
machar_single    Single precision floating point arithmetic parameters
machar_double    Double precision floating point arithmetic parameters
================ ===================

Threading Tricks
----------------
================ ===================
ParallelExec     Execute commands in parallel thread.
================ ===================

1D Array Set Operations
-----------------------
Set operations for 1D numeric arrays based on sort() function.

================ ===================
ediff1d          Array difference (auxiliary function).
unique           Unique elements of an array.
intersect1d      Intersection of 1D arrays with unique elements.
setxor1d         Set exclusive-or of 1D arrays with unique elements.
in1d             Test whether elements in a 1D array are also present in
                 another array.
union1d          Union of 1D arrays with unique elements.
setdiff1d        Set difference of 1D arrays with unique elements.
================ ===================
"""
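A quick illustration of the 1D set operations listed above (a minimal sketch;
the exact ``array(...)`` formatting varies slightly across NumPy versions)::

  >>> import numpy as np
  >>> a = np.array([1, 2, 3, 4])
  >>> b = np.array([3, 4, 5])
  >>> np.intersect1d(a, b)   # elements common to both
  array([3, 4])
  >>> np.setdiff1d(a, b)     # elements of a not in b
  array([1, 2])
  >>> np.union1d(a, b)       # sorted union
  array([1, 2, 3, 4, 5])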
# # ElementTree # $Id: ElementTree.py 2326 2005-03-17 07:45:21Z USERNAME $ # # light-weight XML support for Python 1.5.2 and later. # # history: # 2001-10-20 fl created (from various sources) # 2001-11-01 fl return root from parse method # 2002-02-16 fl sort attributes in lexical order # 2002-04-06 fl TreeBuilder refactoring, added PythonDoc markup # 2002-05-01 fl finished TreeBuilder refactoring # 2002-07-14 fl added basic namespace support to ElementTree.write # 2002-07-25 fl added QName attribute support # 2002-10-20 fl fixed encoding in write # 2002-11-24 fl changed default encoding to ascii; fixed attribute encoding # 2002-11-27 fl accept file objects or file names for parse/write # 2002-12-04 fl moved XMLTreeBuilder back to this module # 2003-01-11 fl fixed entity encoding glitch for us-ascii # 2003-02-13 fl added XML literal factory # 2003-02-21 fl added ProcessingInstruction/PI factory # 2003-05-11 fl added tostring/fromstring helpers # 2003-05-26 fl added ElementPath support # 2003-07-05 fl added makeelement factory method # 2003-07-28 fl added more well-known namespace prefixes # 2003-08-15 fl fixed typo in ElementTree.findtext (Thomas NAME 2003-09-04 fl fall back on emulator if ElementPath is not installed # 2003-10-31 fl markup updates # 2003-11-15 fl fixed nested namespace bug # 2004-03-28 fl added XMLID helper # 2004-06-02 fl added default support to findtext # 2004-06-08 fl fixed encoding of non-ascii element/attribute names # 2004-08-23 fl take advantage of post-2.1 expat features # 2005-02-01 fl added iterparse implementation # 2005-03-02 fl fixed iterparse support for pre-2.2 versions # # Copyright (c) 1999-2005 by NAME All rights reserved. # # EMAIL # http://www.pythonware.com # # -------------------------------------------------------------------- # The ElementTree toolkit is # # Copyright (c) 1999-2005 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. # # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # --------------------------------------------------------------------
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
    getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a float.

    getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a boolean (currently case
        insensitively defined as 0, false, no, off for False, and 1, true,
        yes, on for True). Returns False or True.

    items(section=_UNSET, raw=False, vars=None)
        If section is given, return a list of tuples with (name, value) for
        each option in the section. Otherwise, return a list of tuples with
        (section_name, section_proxy) for each section, including DEFAULTSECT.

    remove_section(section)
        Remove the given file section and all its options.

    remove_option(section, option)
        Remove the given option from the given section.

    set(section, option, value)
        Set the given option.

    write(fp, space_around_delimiters=True)
        Write the configuration state in .ini format. If
        `space_around_delimiters' is True (the default), delimiters between
        keys and values are surrounded by spaces.
"""
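A minimal round-trip with this parser (a sketch; the section and option names
are hypothetical):

    import configparser

    # A sample .ini-style configuration, held in a string for brevity.
    SAMPLE = "[server]\nhost = example.com\nport = 8080\n"

    parser = configparser.ConfigParser()
    parser.read_string(SAMPLE)

    print(parser.sections())                 # -> ['server']
    print(parser.get('server', 'host'))      # -> example.com
    print(parser.getint('server', 'port'))   # -> 8080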
""" ===================================================== Optimization and root finding (:mod:`scipy.optimize`) ===================================================== .. currentmodule:: scipy.optimize Optimization ============ Local Optimization ------------------ .. autosummary:: :toctree: generated/ minimize - Unified interface for minimizers of multivariate functions minimize_scalar - Unified interface for minimizers of univariate functions OptimizeResult - The optimization result returned by some optimizers OptimizeWarning - The optimization encountered problems The `minimize` function supports the following methods: .. toctree:: optimize.minimize-neldermead optimize.minimize-powell optimize.minimize-cg optimize.minimize-bfgs optimize.minimize-newtoncg optimize.minimize-lbfgsb optimize.minimize-tnc optimize.minimize-cobyla optimize.minimize-slsqp optimize.minimize-dogleg optimize.minimize-trustncg The `minimize_scalar` function supports the following methods: .. toctree:: optimize.minimize_scalar-brent optimize.minimize_scalar-bounded optimize.minimize_scalar-golden The specific optimization method interfaces below in this subsection are not recommended for use in new scripts; all of these methods are accessible via a newer, more consistent interface provided by the functions above. General-purpose multivariate methods: .. autosummary:: :toctree: generated/ fmin - Nelder-Mead Simplex algorithm fmin_powell - Powell's (modified) level set method fmin_cg - Non-linear (Polak-Ribiere) conjugate gradient algorithm fmin_bfgs - Quasi-Newton method (Broydon-Fletcher-Goldfarb-Shanno) fmin_ncg - Line-search Newton Conjugate Gradient Constrained multivariate methods: .. autosummary:: :toctree: generated/ fmin_l_bfgs_b - Zhu, Byrd, and Nocedal's constrained optimizer fmin_tnc - Truncated Newton code fmin_cobyla - Constrained optimization by linear approximation fmin_slsqp - Minimization using sequential least-squares programming differential_evolution - stochastic minimization using differential evolution Univariate (scalar) minimization methods: .. autosummary:: :toctree: generated/ fminbound - Bounded minimization of a scalar function brent - 1-D function minimization using Brent method golden - 1-D function minimization using Golden Section method Equation (Local) Minimizers --------------------------- .. autosummary:: :toctree: generated/ leastsq - Minimize the sum of squares of M equations in N unknowns least_squares - Feature-rich least-squares minimization. nnls - Linear least-squares problem with non-negativity constraint lsq_linear - Linear least-squares problem with bound constraints Global Optimization ------------------- .. autosummary:: :toctree: generated/ basinhopping - Basinhopping stochastic optimizer brute - Brute force searching optimizer differential_evolution - stochastic minimization using differential evolution Rosenbrock function ------------------- .. autosummary:: :toctree: generated/ rosen - The Rosenbrock function. rosen_der - The derivative of the Rosenbrock function. rosen_hess - The Hessian matrix of the Rosenbrock function. rosen_hess_prod - Product of the Rosenbrock Hessian with a vector. Fitting ======= .. autosummary:: :toctree: generated/ curve_fit -- Fit curve to a set of points Root finding ============ Scalar functions ---------------- .. 
.. autosummary::
   :toctree: generated/

   brentq - quadratic interpolation Brent method
   brenth - Brent method, modified by Harris with hyperbolic extrapolation
   ridder - Ridder's method
   bisect - Bisection method
   newton - Secant method or Newton's method

Fixed point finding:

.. autosummary::
   :toctree: generated/

   fixed_point - Single-variable fixed-point solver

Multidimensional
----------------

General nonlinear solvers:

.. autosummary::
   :toctree: generated/

   root - Unified interface for nonlinear solvers of multivariate functions
   fsolve - Non-linear multi-variable equation solver
   broyden1 - Broyden's first method
   broyden2 - Broyden's second method

The `root` function supports the following methods:

.. toctree::

   optimize.root-hybr
   optimize.root-lm
   optimize.root-broyden1
   optimize.root-broyden2
   optimize.root-anderson
   optimize.root-linearmixing
   optimize.root-diagbroyden
   optimize.root-excitingmixing
   optimize.root-krylov
   optimize.root-dfsane

Large-scale nonlinear solvers:

.. autosummary::
   :toctree: generated/

   newton_krylov
   anderson

Simple iterations:

.. autosummary::
   :toctree: generated/

   excitingmixing
   linearmixing
   diagbroyden

:mod:`Additional information on the nonlinear solvers <scipy.optimize.nonlin>`

Linear Programming
==================

Simplex Algorithm:

.. autosummary::
   :toctree: generated/

   linprog -- Linear programming using the simplex algorithm
   linprog_verbose_callback -- Sample callback function for linprog

The `linprog` function supports the following methods:

.. toctree::

   optimize.linprog-simplex

Assignment problems:

.. autosummary::
   :toctree: generated/

   linear_sum_assignment -- Solves the linear-sum assignment problem

Utilities
=========

.. autosummary::
   :toctree: generated/

   approx_fprime - Approximate the gradient of a scalar function
   bracket - Bracket a minimum, given two starting points
   check_grad - Check the supplied derivative using finite differences
   line_search - Return a step that satisfies the strong Wolfe conditions
   show_options - Show specific options for optimization solvers
   LbfgsInvHessProduct - Linear operator for L-BFGS approximate inverse Hessian
"""
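A brief illustration of the unified `minimize` interface described above, using
the built-in Rosenbrock helpers (a sketch; the starting point is arbitrary)::

    from scipy.optimize import minimize, rosen, rosen_der

    x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
    # BFGS quasi-Newton minimization with an analytic gradient.
    res = minimize(rosen, x0, method='BFGS', jac=rosen_der)
    print(res.x)        # close to [1. 1. 1. 1. 1.], the global minimum
    print(res.success)  # True when the optimizer reports convergence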
# # XML-RPC CLIENT LIBRARY # $Id$ # # an XML-RPC client interface for Python. # # the marshalling and response parser code can also be used to # implement XML-RPC servers. # # Notes: # this version is designed to work with Python 2.1 or newer. # # History: # 1999-01-14 fl Created # 1999-01-15 fl Changed dateTime to use localtime # 1999-01-16 fl Added Binary/base64 element, default to RPC2 service # 1999-01-19 fl Fixed array data element (from Skip Montanaro) # 1999-01-21 fl Fixed dateTime constructor, etc. # 1999-02-02 fl Added fault handling, handle empty sequences, etc. # 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro) # 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8) # 2000-11-28 fl Changed boolean to check the truth value of its argument # 2001-02-24 fl Added encoding/Unicode/SafeTransport patches # 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1) # 2001-03-28 fl Make sure response tuple is a singleton # 2001-03-29 fl Don't require empty params element (from NAME 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2) # 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME 2001-09-03 fl Allow Transport subclass to override getparser # 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup) # 2001-10-01 fl Remove containers from memo cache when done with them # 2001-10-01 fl Use faster escape method (80% dumps speedup) # 2001-10-02 fl More dumps microtuning # 2001-10-04 fl Make sure import expat gets a parser (from NAME 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow # 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems) # 2001-11-12 fl Use repr() to marshal doubles (from NAME 2002-03-17 fl Avoid buffered read when possible (from NAME 2002-04-07 fl Added pythondoc comments # 2002-04-16 fl Added __str__ methods to datetime/binary wrappers # 2002-05-15 fl Added error constants (from NAME 2002-06-27 fl Merged with Python CVS version # 2002-10-22 fl Added basic authentication (based on code from NAME 2003-01-22 sm Add support for the bool type # 2003-02-27 gvr Remove apply calls # 2003-04-24 sm Use cStringIO if available # 2003-04-25 ak Add support for nil # 2003-06-15 gn Add support for time.struct_time # 2003-07-12 gp Correct marshalling of Faults # 2003-10-31 mvl Add multicall support # 2004-08-20 mvl Bump minimum supported Python version to 2.1 # # Copyright (c) 1999-2002 by Secret Labs AB. # Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com # # -------------------------------------------------------------------- # The XML-RPC client interface is # # Copyright (c) 1999-2002 by Secret Labs AB # Copyright (c) 1999-2002 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. 
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # --------------------------------------------------------------------
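Given the client interface described in the header above, a minimal usage
sketch may help (Python 2 era, matching this module; the server URL and the
add() method are hypothetical and must be provided by a real XML-RPC server):

    import xmlrpclib

    # Connect to a (hypothetical) XML-RPC endpoint.
    proxy = xmlrpclib.ServerProxy("http://localhost:8000/RPC2")

    # Remote methods are invoked as ordinary attributes; xmlrpclib
    # marshals the arguments to XML and parses the XML response.
    result = proxy.add(2, 3)  # assumes the server registered an add() function
    print result              # -> 5, if the server implements add()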
# -*- coding: utf-8 -*- ############################ Copyrights and license ############################ # # # Copyright 2012 NAME <EMAIL> # # Copyright 2012 NAME <EMAIL> # # Copyright 2012 NAME <EMAIL> # # Copyright 2012 Zearin <EMAIL> # # Copyright 2013 AKFish <EMAIL> # # Copyright 2013 NAME <EMAIL> # # Copyright 2013 NAME <EMAIL> # # Copyright 2013 NAME <EMAIL> # # Copyright 2013 NAME <EMAIL> # # Copyright 2013 NAME <EMAIL> # # Copyright 2013 martinqt <EMAIL> # # Copyright 2014 NAME <EMAIL> # # Copyright 2015 NAME <EMAIL> # # Copyright 2015 NAME <EMAIL> # # Copyright 2015 NAME <EMAIL> # # Copyright 2015 NAME <EMAIL> # # Copyright 2015 NAME <EMAIL> # # Copyright 2015 NAME <EMAIL> # # Copyright 2015 NAME <EMAIL> # # Copyright 2015 NAME <EMAIL> # # Copyright 2015 NAME <EMAIL> # # Copyright 2015 NAME <EMAIL> # # Copyright 2015 edhollandAL <EMAIL> # # Copyright 2016 USERNAME <EMAIL> # # Copyright 2016 NAME <EMAIL> # # Copyright 2016 NAME <EMAIL> # # Copyright 2016 NAME <EMAIL> # # Copyright 2016 Per Øyvind Karlsen <EMAIL> # # Copyright 2016 NAME <EMAIL> # # Copyright 2016 NAME <EMAIL> # # Copyright 2016 NAME <EMAIL> # # Copyright 2016 USERNAME <EMAIL> # # Copyright 2017 NAME <EMAIL> # # Copyright 2017 NAME <EMAIL> # # Copyright 2017 NAME <EMAIL> # # Copyright 2017 NAME <EMAIL> # # Copyright 2017 NAME <EMAIL> # # Copyright 2017 NAME <EMAIL> # # Copyright 2017 NAME <EMAIL> # # Copyright 2017 NAME [Vauxoo] <EMAIL> # # Copyright 2017 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 USERNAME <EMAIL> # # Copyright 2018 USERNAME <EMAIL> # # Copyright 2018 USERNAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # Copyright 2018 NAME <EMAIL> # # # # This file is part of PyGithub. # # http://pygithub.readthedocs.io/ # # # # PyGithub is free software: you can redistribute it and/or modify it under # # the terms of the GNU Lesser General Public License as published by the Free # # Software Foundation, either version 3 of the License, or (at your option) # # any later version. # # # # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY # # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more # # details. # # # # You should have received a copy of the GNU Lesser General Public License # # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. # # # ################################################################################
""" Work out the first ten digits of the sum of the following one-hundred 50-digit numbers. 37107287533902102798797998220837590246510135740250 46376937677490009712648124896970078050417018260538 74324986199524741059474233309513058123726617309629 91942213363574161572522430563301811072406154908250 23067588207539346171171980310421047513778063246676 89261670696623633820136378418383684178734361726757 28112879812849979408065481931592621691275889832738 44274228917432520321923589422876796487670272189318 47451445736001306439091167216856844588711603153276 70386486105843025439939619828917593665686757934951 62176457141856560629502157223196586755079324193331 64906352462741904929101432445813822663347944758178 92575867718337217661963751590579239728245598838407 58203565325359399008402633568948830189458628227828 80181199384826282014278194139940567587151170094390 35398664372827112653829987240784473053190104293586 86515506006295864861532075273371959191420517255829 71693888707715466499115593487603532921714970056938 54370070576826684624621495650076471787294438377604 53282654108756828443191190634694037855217779295145 36123272525000296071075082563815656710885258350721 45876576172410976447339110607218265236877223636045 17423706905851860660448207621209813287860733969412 81142660418086830619328460811191061556940512689692 51934325451728388641918047049293215058642563049483 62467221648435076201727918039944693004732956340691 15732444386908125794514089057706229429197107928209 55037687525678773091862540744969844508330393682126 18336384825330154686196124348767681297534375946515 80386287592878490201521685554828717201219257766954 78182833757993103614740356856449095527097864797581 16726320100436897842553539920931837441497806860984 48403098129077791799088218795327364475675590848030 87086987551392711854517078544161852424320693150332 59959406895756536782107074926966537676326235447210 69793950679652694742597709739166693763042633987085 41052684708299085211399427365734116182760315001271 65378607361501080857009149939512557028198746004375 35829035317434717326932123578154982629742552737307 94953759765105305946966067683156574377167401875275 88902802571733229619176668713819931811048770190271 25267680276078003013678680992525463401061632866526 36270218540497705585629946580636237993140746255962 24074486908231174977792365466257246923322810917141 91430288197103288597806669760892938638285025333403 34413065578016127815921815005561868836468420090470 23053081172816430487623791969842487255036638784583 11487696932154902810424020138335124462181441773470 63783299490636259666498587618221225225512486764533 67720186971698544312419572409913959008952310058822 95548255300263520781532296796249481641953868218774 76085327132285723110424803456124867697064507995236 37774242535411291684276865538926205024910326572967 23701913275725675285653248258265463092207058596522 29798860272258331913126375147341994889534765745501 18495701454879288984856827726077713721403798879715 38298203783031473527721580348144513491373226651381 34829543829199918180278916522431027392251122869539 40957953066405232632538044100059654939159879593635 29746152185502371307642255121183693803580388584903 41698116222072977186158236678424689157993532961922 62467957194401269043877107275048102390895523597457 23189706772547915061505504953922979530901129967519 86188088225875314529584099251203829009407770775672 11306739708304724483816533873502340845647058077308 82959174767140363198008187129011875491310547126581 97623331044818386269515456334926366572897563400500 
42846280183517070527831839425882145521227251250327 55121603546981200581762165212827652751691296897789 32238195734329339946437501907836945765883352399886 75506164965184775180738168837861091527357929701337 62177842752192623401942399639168044983993173312731 32924185707147349566916674687634660915035914677504 99518671430235219628894890102423325116913619626622 73267460800591547471830798392868535206946944540724 76841822524674417161514036427982273348055556214818 97142617910342598647204516893989422179826088076852 87783646182799346313767754307809363333018982642090 10848802521674670883215120185883543223812876952786 71329612474782464538636993009049310363619763878039 62184073572399794223406235393808339651327408011116 66627891981488087797941876876144230030984490851411 60661826293682836764744779239180335110989069790714 85786944089552990653640447425576083659976645795096 66024396409905389607120198219976047599490197230297 64913982680032973156037120041377903785566085089252 16730939319872750275468906903707539413042652315011 94809377245048795150954100921645863754710598436791 78639167021187492431995700641917969777599028300699 15368713711936614952811305876380278410754449733078 40789923115535562561142322423255033685442488917353 44889911501440648020369068063960672322193204149535 41503128880339536053299340368006977710650566631954 81234880673210146739058568557934581403627822703280 82616570773948327592232845941706525094512325230608 22918802058777319719839450180888072429661980811197 77158542502016545090413245809786882778948721859617 72107838435069186155435662884062257473692284509516 20849603980134001723930671666823555245252804609722 53503534226472524250874054075591789781264330331690 """
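The stated problem reduces to a few lines of Python, since Python integers
have arbitrary precision (a sketch; ``NUMBERS`` is a hypothetical variable
assumed to hold the whitespace-separated block of one hundred numbers above):

    # NUMBERS: the one hundred 50-digit numbers above, as a single string.
    total = sum(int(token) for token in NUMBERS.split())
    print(str(total)[:10])  # the first ten digits of the sum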
""" Examples ======== :: # Single worker with explicit name and events enabled. $ celeryd-multi start NAME -E # Pidfiles and logfiles are stored in the current directory # by default. Use --pidfile and --logfile argument to change # this. The abbreviation %n will be expanded to the current # node name. $ celeryd-multi start NAME -E --pidfile=/var/run/celery/%n.pid --logfile=/var/log/celery/%n.log # You need to add the same arguments when you restart, # as these are not persisted anywhere. $ celeryd-multi restart NAME -E --pidfile=/var/run/celery/%n.pid --logfile=/var/run/celery/%n.log # To stop the node, you need to specify the same pidfile. $ celeryd-multi stop NAME --pidfile=/var/run/celery/%n.pid # 3 workers, with 3 processes each $ celeryd-multi start 3 -c 3 celeryd -n celeryd1.myhost -c 3 celeryd -n celeryd2.myhost -c 3 celeryd- n celeryd3.myhost -c 3 # start 3 named workers $ celeryd-multi start image video data -c 3 celeryd -n image.myhost -c 3 celeryd -n video.myhost -c 3 celeryd -n data.myhost -c 3 # specify custom hostname $ celeryd-multi start 2 -n worker.example.com -c 3 celeryd -n celeryd1.worker.example.com -c 3 celeryd -n celeryd2.worker.example.com -c 3 # Advanced example starting 10 workers in the background: # * Three of the workers processes the images and video queue # * Two of the workers processes the data queue with loglevel DEBUG # * the rest processes the default' queue. $ celeryd-multi start 10 -l INFO -Q:1-3 images,video -Q:4,5:data -Q default -L:4,5 DEBUG # You can show the commands necessary to start the workers with # the "show" command: $ celeryd-multi show 10 -l INFO -Q:1-3 images,video -Q:4,5:data -Q default -L:4,5 DEBUG # Additional options are added to each celeryd', # but you can also modify the options for ranges of, or specific workers # 3 workers: Two with 3 processes, and one with 10 processes. $ celeryd-multi start 3 -c 3 -c:1 10 celeryd -n celeryd1.myhost -c 10 celeryd -n celeryd2.myhost -c 3 celeryd -n celeryd3.myhost -c 3 # can also specify options for named workers $ celeryd-multi start image video data -c 3 -c:image 10 celeryd -n image.myhost -c 10 celeryd -n video.myhost -c 3 celeryd -n data.myhost -c 3 # ranges and lists of workers in options is also allowed: # (-c:1-3 can also be written as -c:1,2,3) $ celeryd-multi start 5 -c 3 -c:1-3 10 celeryd -n celeryd1.myhost -c 10 celeryd -n celeryd2.myhost -c 10 celeryd -n celeryd3.myhost -c 10 celeryd -n celeryd4.myhost -c 3 celeryd -n celeryd5.myhost -c 3 # lists also works with named workers $ celeryd-multi start foo bar baz xuzzy -c 3 -c:foo,bar,baz 10 celeryd -n foo.myhost -c 10 celeryd -n bar.myhost -c 10 celeryd -n baz.myhost -c 10 celeryd -n xuzzy.myhost -c 3 """
"""Higher-level Query wrapper. There are perhaps too many query APIs in the world. The fundamental API here overloads the 6 comparisons operators to represent filters on property values, and supports AND and OR operations (implemented as functions -- Python's 'and' and 'or' operators cannot be overloaded, and the '&' and '|' operators have a priority that conflicts with the priority of comparison operators). For example: class Employee(Model): name = StringProperty() age = IntegerProperty() rank = IntegerProperty() @classmethod def demographic(cls, min_age, max_age): return cls.query().filter(AND(cls.age >= min_age, cls.age <= max_age)) @classmethod def ranked(cls, rank): return cls.query(cls.rank == rank).order(cls.age) for emp in Employee.seniors(42, 5): print emp.name, emp.age, emp.rank The 'in' operator cannot be overloaded, but is supported through the IN() method. For example: Employee.query().filter(Employee.rank.IN([4, 5, 6])) Sort orders are supported through the order() method; unary minus is overloaded on the Property class to represent a descending order: Employee.query().order(Employee.name, -Employee.age) Besides using AND() and OR(), filters can also be combined by repeatedly calling .filter(): q1 = Employee.query() # A query that returns all employees q2 = q1.filter(Employee.age >= 30) # Only those over 30 q3 = q2.filter(Employee.age < 40) # Only those in their 30s A further shortcut is calling .filter() with multiple arguments; this implies AND(): q1 = Employee.query() # A query that returns all employees q3 = q1.filter(Employee.age >= 30, Employee.age < 40) # Only those in their 30s And finally you can also pass one or more filter expressions directly to the .query() method: q3 = Employee.query(Employee.age >= 30, Employee.age < 40) # Only those in their 30s Query objects are immutable, so these methods always return a new Query object; the above calls to filter() do not affect q1. (On the other hand, operations that are effectively no-ops may return the original Query object.) Sort orders can also be combined this way, and .filter() and .order() calls may be intermixed: q4 = q3.order(-Employee.age) q5 = q4.order(Employee.name) q6 = q5.filter(Employee.rank == 5) Again, multiple .order() calls can be combined: q5 = q3.order(-Employee.age, Employee.name) The simplest way to retrieve Query results is a for-loop: for emp in q3: print emp.name, emp.age Some other methods to run a query and access its results: q.iter() # Return an iterator; same as iter(q) but more flexible q.map(callback) # Call the callback function for each query result q.fetch(N) # Return a list of the first N results q.get() # Return the first result q.count(N) # Return the number of results, with a maximum of N q.fetch_page(N, start_cursor=cursor) # Return (results, cursor, has_more) All of the above methods take a standard set of additional query options, either in the form of keyword arguments such as keys_only=True, or as QueryOptions object passed with options=QueryOptions(...). 
The most important query options are: keys_only: bool, if set the results are keys instead of entities limit: int, limits the number of results returned offset: int, skips this many results first start_cursor: Cursor, start returning results after this position end_cursor: Cursor, stop returning results after this position batch_size: int, hint for the number of results returned per RPC prefetch_size: int, hint for the number of results in the first RPC produce_cursors: bool, return Cursor objects with the results For additional (obscure) query options and more details on them, including an explanation of Cursors, see datastore_query.py. All of the above methods except for iter() have asynchronous variants as well, which return a Future; to get the operation's ultimate result, yield the Future (when inside a tasklet) or call the Future's get_result() method (outside a tasklet): q.map_async(callback) # Callback may be a tasklet or a plain function q.fetch_async(N) q.get_async() q.count_async(N) q.fetch_page_async(N, start_cursor=cursor) Finally, there's an idiom to efficiently loop over the Query results in a tasklet, properly yielding when appropriate: it = q.iter() while (yield it.has_next_async()): emp = it.next() print emp.name, emp.age """
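For example, a minimal sketch of cursor-based pagination built from the fetch_page() method described above (the page size and the Employee query are illustrative, not part of the original API docs):

  q = Employee.query().order(Employee.name)
  results, cursor, more = q.fetch_page(20)
  while more:
    # Resume where the previous page left off.
    results, cursor, more = q.fetch_page(20, start_cursor=cursor)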
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, option, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
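For illustration, a minimal usage sketch of the methods described above (the section and option names are invented):

  from configparser import ConfigParser

  parser = ConfigParser()
  parser.read_string("[server]\nhost = example.com\nport = 8080\n")
  host = parser.get('server', 'host')      # 'example.com'
  port = parser.getint('server', 'port')   # 8080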
""" ============ Array basics ============ Array types and conversions between types ========================================= NumPy supports a much greater variety of numerical types than Python does. This section shows which are available, and how to modify an array's data-type. ========== ========================================================== Data type Description ========== ========================================================== bool_ Boolean (True or False) stored as a byte int_ Default integer type (same as C ``long``; normally either ``int64`` or ``int32``) intc Identical to C ``int`` (normally ``int32`` or ``int64``) intp Integer used for indexing (same as C ``ssize_t``; normally either ``int32`` or ``int64``) int8 Byte (-128 to 127) int16 Integer (-32768 to 32767) int32 Integer (-2147483648 to 2147483647) int64 Integer (-9223372036854775808 to 9223372036854775807) uint8 Unsigned integer (0 to 255) uint16 Unsigned integer (0 to 65535) uint32 Unsigned integer (0 to 4294967295) uint64 Unsigned integer (0 to 18446744073709551615) float_ Shorthand for ``float64``. float16 Half precision float: sign bit, 5 bits exponent, 10 bits mantissa float32 Single precision float: sign bit, 8 bits exponent, 23 bits mantissa float64 Double precision float: sign bit, 11 bits exponent, 52 bits mantissa complex_ Shorthand for ``complex128``. complex64 Complex number, represented by two 32-bit floats (real and imaginary components) complex128 Complex number, represented by two 64-bit floats (real and imaginary components) ========== ========================================================== Additionally to ``intc`` the platform dependent C integer types ``short``, ``long``, ``longlong`` and their unsigned versions are defined. NumPy numerical types are instances of ``dtype`` (data-type) objects, each having unique characteristics. Once you have imported NumPy using :: >>> import numpy as np the dtypes are available as ``np.bool_``, ``np.float32``, etc. Advanced types, not listed in the table above, are explored in section :ref:`structured_arrays`. There are 5 basic numerical types representing booleans (bool), integers (int), unsigned integers (uint) floating point (float) and complex. Those with numbers in their name indicate the bitsize of the type (i.e. how many bits are needed to represent a single value in memory). Some types, such as ``int`` and ``intp``, have differing bitsizes, dependent on the platforms (e.g. 32-bit vs. 64-bit machines). This should be taken into account when interfacing with low-level code (such as C or Fortran) where the raw memory is addressed. Data-types can be used as functions to convert python numbers to array scalars (see the array scalar section for an explanation), python sequences of numbers to arrays of that type, or as arguments to the dtype keyword that many numpy functions or methods accept. Some examples:: >>> import numpy as np >>> x = np.float32(1.0) >>> x 1.0 >>> y = np.int_([1,2,4]) >>> y array([1, 2, 4]) >>> z = np.arange(3, dtype=np.uint8) >>> z array([0, 1, 2], dtype=uint8) Array types can also be referred to by character codes, mostly to retain backward compatibility with older packages such as Numeric. Some documentation may still refer to these, for example:: >>> np.array([1, 2, 3], dtype='f') array([ 1., 2., 3.], dtype=float32) We recommend using dtype objects instead. To convert the type of an array, use the .astype() method (preferred) or the type itself as a function. 
For example: :: >>> z.astype(float) #doctest: +NORMALIZE_WHITESPACE array([ 0., 1., 2.]) >>> np.int8(z) array([0, 1, 2], dtype=int8) Note that, above, we use the *Python* float object as a dtype. NumPy knows that ``int`` refers to ``np.int_``, ``bool`` means ``np.bool_``, that ``float`` is ``np.float_`` and ``complex`` is ``np.complex_``. The other data-types do not have Python equivalents. To determine the type of an array, look at the dtype attribute:: >>> z.dtype dtype('uint8') dtype objects also contain information about the type, such as its bit-width and its byte-order. The data type can also be used indirectly to query properties of the type, such as whether it is an integer:: >>> d = np.dtype(int) >>> d dtype('int32') >>> np.issubdtype(d, int) True >>> np.issubdtype(d, float) False Array Scalars ============= NumPy generally returns elements of arrays as array scalars (a scalar with an associated dtype). Array scalars differ from Python scalars, but for the most part they can be used interchangeably (the primary exception is for versions of Python older than 2.5, where integer array scalars cannot act as indices for lists and tuples). There are some exceptions, such as when code requires very specific attributes of a scalar or when it checks specifically whether a value is a Python scalar. Generally, problems are easily fixed by explicitly converting array scalars to Python scalars, using the corresponding Python type function (e.g., ``int``, ``float``, ``complex``, ``str``, ``unicode``). The primary advantage of using array scalars is that they preserve the array type (Python may not have a matching scalar type available, e.g. ``int16``). Therefore, the use of array scalars ensures identical behaviour between arrays and scalars, irrespective of whether the value is inside an array or not. NumPy scalars also have many of the same methods arrays do. Extended Precision ================== Python's floating-point numbers are usually 64-bit floating-point numbers, nearly equivalent to ``np.float64``. In some unusual situations it may be useful to use floating-point numbers with more precision. Whether this is possible in numpy depends on the hardware and on the development environment: specifically, x86 machines provide hardware floating-point with 80-bit precision, and while most C compilers provide this as their ``long double`` type, MSVC (standard for Windows builds) makes ``long double`` identical to ``double`` (64 bits). NumPy makes the compiler's ``long double`` available as ``np.longdouble`` (and ``np.clongdouble`` for the complex numbers). You can find out what your numpy provides with ``np.finfo(np.longdouble)``. NumPy does not provide a dtype with more precision than C ``long double``s; in particular, the 128-bit IEEE quad precision data type (FORTRAN's ``REAL*16``) is not available. For efficient memory alignment, ``np.longdouble`` is usually stored padded with zero bits, either to 96 or 128 bits. Which is more efficient depends on hardware and development environment; typically on 32-bit systems they are padded to 96 bits, while on 64-bit systems they are typically padded to 128 bits. ``np.longdouble`` is padded to the system default; ``np.float96`` and ``np.float128`` are provided for users who want specific padding. In spite of the names, ``np.float96`` and ``np.float128`` provide only as much precision as ``np.longdouble``, that is, 80 bits on most x86 machines and 64 bits in standard Windows builds.
Be warned that even if ``np.longdouble`` offers more precision than python ``float``, it is easy to lose that extra precision, since python often forces values to pass through ``float``. For example, the ``%`` formatting operator requires its arguments to be converted to standard python types, and it is therefore impossible to preserve extended precision even if many decimal places are requested. It can be useful to test your code with the value ``1 + np.finfo(np.longdouble).eps``. """
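For example, a sketch of how round-tripping through Python ``float`` silently discards the extra precision (outputs assume a platform where ``np.longdouble`` is wider than ``np.float64``; on standard Windows builds the second comparison is False instead)::

  >>> x = np.longdouble(1) + np.finfo(np.longdouble).eps
  >>> x == 1
  False
  >>> np.longdouble(float(x)) == 1   # float() rounds the epsilon away
  True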
""" Purpose: Linear Algebra Parser Based on: SimpleCalc.py example (author NAME in pyparsing-1.3.3 Author: NAME Ellis & Grant, Inc. 2005 License: You may freely use, modify, and distribute this software. Warranty: THIS SOFTWARE HAS NO WARRANTY WHATSOEVER. USE AT YOUR OWN RISK. Notes: Parses infix linear algebra (LA) notation for vectors, matrices, and scalars. Output is C code function calls. The parser can be run as an interactive interpreter or included as module to use for in-place substitution into C files containing LA equations. Supported operations are: OPERATION: INPUT OUTPUT Scalar addition: "a = b+c" "a=(b+c)" Scalar subtraction: "a = b-c" "a=(b-c)" Scalar multiplication: "a = b*c" "a=b*c" Scalar division: "a = b/c" "a=b/c" Scalar exponentiation: "a = b^c" "a=pow(b,c)" Vector scaling: "V3_a = V3_b * c" "vCopy(a,vScale(b,c))" Vector addition: "V3_a = V3_b + V3_c" "vCopy(a,vAdd(b,c))" Vector subtraction: "V3_a = V3_b - V3_c" "vCopy(a,vSubtract(b,c))" Vector dot product: "a = V3_b * V3_c" "a=vDot(b,c)" Vector outer product: "M3_a = V3_b @ V3_c" "a=vOuterProduct(b,c)" Vector magn. squared: "a = V3_b^Mag2" "a=vMagnitude2(b)" Vector magnitude: "a = V3_b^Mag" "a=sqrt(vMagnitude2(b))" Matrix scaling: "M3_a = M3_b * c" "mCopy(a,mScale(b,c))" Matrix addition: "M3_a = M3_b + M3_c" "mCopy(a,mAdd(b,c))" Matrix subtraction: "M3_a = M3_b - M3_c" "mCopy(a,mSubtract(b,c))" Matrix multiplication: "M3_a = M3_b * M3_c" "mCopy(a,mMultiply(b,c))" Matrix by vector mult.: "V3_a = M3_b * V3_c" "vCopy(a,mvMultiply(b,c))" Matrix inversion: "M3_a = M3_b^-1" "mCopy(a,mInverse(b))" Matrix transpose: "M3_a = M3_b^T" "mCopy(a,mTranspose(b))" Matrix determinant: "a = M3_b^Det" "a=mDeterminant(b)" The parser requires the expression to be an equation. Each non-scalar variable must be prefixed with a type tag, 'M3_' for 3x3 matrices and 'V3_' for 3-vectors. For proper compilation of the C code, the variables need to be declared without the prefix as float[3] for vectors and float[3][3] for matrices. The operations do not modify any variables on the right-hand side of the equation. Equations may include nested expressions within parentheses. The allowed binary operators are '+-*/^' for scalars, and '+-*^@' for vectors and matrices with the meanings defined in the table above. Specifying an improper combination of operands, e.g. adding a vector to a matrix, is detected by the parser and results in a Python TypeError Exception. The usual cause of this is omitting one or more tag prefixes. The parser knows nothing about a a variable's C declaration and relies entirely on the type tags. Errors in C declarations are not caught until compile time. Usage: To process LA equations embedded in source files, import this module and pass input and output file objects to the fprocess() function. You can can also invoke the parser from the command line, e.g. 'python LAparser.py', to run a small test suite and enter an interactive loop where you can enter LA equations and see the resulting C code. """
""" ============================= Subclassing ndarray in python ============================= Credits ------- This page is based with thanks on the wiki page on subclassing by NAME - http://www.scipy.org/Subclasses. Introduction ------------ Subclassing ndarray is relatively simple, but it has some complications compared to other Python objects. On this page we explain the machinery that allows you to subclass ndarray, and the implications for implementing a subclass. ndarrays and object creation ============================ Subclassing ndarray is complicated by the fact that new instances of ndarray classes can come about in three different ways. These are: #. Explicit constructor call - as in ``MySubClass(params)``. This is the usual route to Python instance creation. #. View casting - casting an existing ndarray as a given subclass #. New from template - creating a new instance from a template instance. Examples include returning slices from a subclassed array, creating return types from ufuncs, and copying arrays. See :ref:`new-from-template` for more details The last two are characteristics of ndarrays - in order to support things like array slicing. The complications of subclassing ndarray are due to the mechanisms numpy has to support these latter two routes of instance creation. .. _view-casting: View casting ------------ *View casting* is the standard ndarray mechanism by which you take an ndarray of any subclass, and return a view of the array as another (specified) subclass: >>> import numpy as np >>> # create a completely useless ndarray subclass >>> class C(np.ndarray): pass >>> # create a standard ndarray >>> arr = np.zeros((3,)) >>> # take a view of it, as our useless subclass >>> c_arr = arr.view(C) >>> type(c_arr) <class 'C'> .. _new-from-template: Creating new from template -------------------------- New instances of an ndarray subclass can also come about by a very similar mechanism to :ref:`view-casting`, when numpy finds it needs to create a new instance from a template instance. The most obvious place this has to happen is when you are taking slices of subclassed arrays. For example: >>> v = c_arr[1:] >>> type(v) # the view is of type 'C' <class 'C'> >>> v is c_arr # but it's a new instance False The slice is a *view* onto the original ``c_arr`` data. So, when we take a view from the ndarray, we return a new ndarray, of the same class, that points to the data in the original. There are other points in the use of ndarrays where we need such views, such as copying arrays (``c_arr.copy()``), creating ufunc output arrays (see also :ref:`array-wrap`), and reducing methods (like ``c_arr.mean()``. Relationship of view casting and new-from-template -------------------------------------------------- These paths both use the same machinery. We make the distinction here, because they result in different input to your methods. Specifically, :ref:`view-casting` means you have created a new instance of your array type from any potential subclass of ndarray. :ref:`new-from-template` means you have created a new instance of your class from a pre-existing instance, allowing you - for example - to copy across attributes that are particular to your subclass. Implications for subclassing ---------------------------- If we subclass ndarray, we need to deal not only with explicit construction of our array type, but also :ref:`view-casting` or :ref:`new-from-template`. Numpy has the machinery to do this, and this machinery that makes subclassing slightly non-standard. 
There are two aspects to the machinery that ndarray uses to support views and new-from-template in subclasses. The first is the use of the ``ndarray.__new__`` method for the main work of object initialization, rather than the more usual ``__init__`` method. The second is the use of the ``__array_finalize__`` method to allow subclasses to clean up after the creation of views and new instances from templates. A brief Python primer on ``__new__`` and ``__init__`` ===================================================== ``__new__`` is a standard Python method, and, if present, is called before ``__init__`` when we create a class instance. See the `python __new__ documentation <http://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail. For example, consider the following Python code: .. testcode:: class C(object): def __new__(cls, *args): print 'Cls in __new__:', cls print 'Args in __new__:', args return object.__new__(cls, *args) def __init__(self, *args): print 'type(self) in __init__:', type(self) print 'Args in __init__:', args meaning that we get: >>> c = C('hello') Cls in __new__: <class 'C'> Args in __new__: ('hello',) type(self) in __init__: <class 'C'> Args in __init__: ('hello',) When we call ``C('hello')``, the ``__new__`` method gets its own class as first argument, and the passed argument, which is the string ``'hello'``. After python calls ``__new__``, it usually (see below) calls our ``__init__`` method, with the output of ``__new__`` as the first argument (now a class instance), and the passed arguments following. As you can see, the object can be initialized in the ``__new__`` method or the ``__init__`` method, or both, and in fact ndarray does not have an ``__init__`` method, because all the initialization is done in the ``__new__`` method. Why use ``__new__`` rather than just the usual ``__init__``? Because in some cases, as for ndarray, we want to be able to return an object of some other class. Consider the following: .. testcode:: class D(C): def __new__(cls, *args): print 'D cls is:', cls print 'D args in __new__:', args return C.__new__(C, *args) def __init__(self, *args): # we never get here print 'In D __init__' meaning that: >>> obj = D('hello') D cls is: <class 'D'> D args in __new__: ('hello',) Cls in __new__: <class 'C'> Args in __new__: ('hello',) >>> type(obj) <class 'C'> The definition of ``C`` is the same as before, but for ``D``, the ``__new__`` method returns an instance of class ``C`` rather than ``D``. Note that the ``__init__`` method of ``D`` does not get called. In general, when the ``__new__`` method returns an object of class other than the class in which it is defined, the ``__init__`` method of that class is not called. This is how subclasses of the ndarray class are able to return views that preserve the class type. When taking a view, the standard ndarray machinery creates the new ndarray object with something like:: obj = ndarray.__new__(subtype, shape, ... where ``subtype`` is the subclass. Thus the returned view is of the same class as the subclass, rather than being of class ``ndarray``. That solves the problem of returning views of the same type, but now we have a new problem. The machinery of ndarray can set the class this way, in its standard methods for taking views, but the ndarray ``__new__`` method knows nothing of what we have done in our own ``__new__`` method in order to set attributes, and so on. (Aside - why not call ``obj = subtype.__new__(...`` then?
Because we may not have a ``__new__`` method with the same call signature). The role of ``__array_finalize__`` ================================== ``__array_finalize__`` is the mechanism that numpy provides to allow subclasses to handle the various ways that new instances get created. Remember that subclass instances can come about in these three ways: #. explicit constructor call (``obj = MySubClass(params)``). This will call the usual sequence of ``MySubClass.__new__`` then (if it exists) ``MySubClass.__init__``. #. :ref:`view-casting` #. :ref:`new-from-template` Our ``MySubClass.__new__`` method only gets called in the case of the explicit constructor call, so we can't rely on ``MySubClass.__new__`` or ``MySubClass.__init__`` to deal with the view casting and new-from-template. It turns out that ``MySubClass.__array_finalize__`` *does* get called for all three methods of object creation, so this is where our object creation housekeeping usually goes. * For the explicit constructor call, our subclass will need to create a new ndarray instance of its own class. In practice this means that we, the authors of the code, will need to make a call to ``ndarray.__new__(MySubClass,...)``, or do view casting of an existing array (see below) * For view casting and new-from-template, the equivalent of ``ndarray.__new__(MySubClass,...`` is called, at the C level. The arguments that ``__array_finalize__`` receives differ for the three methods of instance creation above. The following code allows us to look at the call sequences and arguments: .. testcode:: import numpy as np class C(np.ndarray): def __new__(cls, *args, **kwargs): print 'In __new__ with class %s' % cls return np.ndarray.__new__(cls, *args, **kwargs) def __init__(self, *args, **kwargs): # in practice you probably will not need or want an __init__ # method for your subclass print 'In __init__ with class %s' % self.__class__ def __array_finalize__(self, obj): print 'In array_finalize:' print ' self type is %s' % type(self) print ' obj type is %s' % type(obj) Now: >>> # Explicit constructor >>> c = C((10,)) In __new__ with class <class 'C'> In array_finalize: self type is <class 'C'> obj type is <type 'NoneType'> In __init__ with class <class 'C'> >>> # View casting >>> a = np.arange(10) >>> cast_a = a.view(C) In array_finalize: self type is <class 'C'> obj type is <type 'numpy.ndarray'> >>> # Slicing (example of new-from-template) >>> cv = c[:1] In array_finalize: self type is <class 'C'> obj type is <class 'C'> The signature of ``__array_finalize__`` is:: def __array_finalize__(self, obj): ``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our own class (``self``) as well as the object from which the view has been taken (``obj``). As you can see from the output above, the ``self`` is always a newly created instance of our subclass, and the type of ``obj`` differs for the three instance creation methods: * When called from the explicit constructor, ``obj`` is ``None`` * When called from view casting, ``obj`` can be an instance of any subclass of ndarray, including our own. * When called in new-from-template, ``obj`` is another instance of our own subclass, that we might use to update the new ``self`` instance. Because ``__array_finalize__`` is the only method that always sees new instances being created, it is the sensible place to fill in instance defaults for new object attributes, among other tasks. This may be clearer with an example.
Simple example - adding an extra attribute to ndarray ----------------------------------------------------- .. testcode:: import numpy as np class InfoArray(np.ndarray): def __new__(subtype, shape, dtype=float, buffer=None, offset=0, strides=None, order=None, info=None): # Create the ndarray instance of our type, given the usual # ndarray input arguments. This will call the standard # ndarray constructor, but return an object of our type. # It also triggers a call to InfoArray.__array_finalize__ obj = np.ndarray.__new__(subtype, shape, dtype, buffer, offset, strides, order) # set the new 'info' attribute to the value passed obj.info = info # Finally, we must return the newly created object: return obj def __array_finalize__(self, obj): # ``self`` is a new object resulting from # ndarray.__new__(InfoArray, ...), therefore it only has # attributes that the ndarray.__new__ constructor gave it - # i.e. those of a standard ndarray. # # We could have got to the ndarray.__new__ call in 3 ways: # From an explicit constructor - e.g. InfoArray(): # obj is None # (we're in the middle of the InfoArray.__new__ # constructor, and self.info will be set when we return to # InfoArray.__new__) if obj is None: return # From view casting - e.g. arr.view(InfoArray): # obj is arr # (type(obj) can be InfoArray) # From new-from-template - e.g. infoarr[:3] # type(obj) is InfoArray # # Note that it is here, rather than in the __new__ method, # that we set the default value for 'info', because this # method sees all creation of default objects - with the # InfoArray.__new__ constructor, but also with # arr.view(InfoArray). self.info = getattr(obj, 'info', None) # We do not need to return anything Using the object looks like this: >>> obj = InfoArray(shape=(3,)) # explicit constructor >>> type(obj) <class 'InfoArray'> >>> obj.info is None True >>> obj = InfoArray(shape=(3,), info='information') >>> obj.info 'information' >>> v = obj[1:] # new-from-template - here - slicing >>> type(v) <class 'InfoArray'> >>> v.info 'information' >>> arr = np.arange(10) >>> cast_arr = arr.view(InfoArray) # view casting >>> type(cast_arr) <class 'InfoArray'> >>> cast_arr.info is None True This class isn't very useful, because it has the same constructor as the bare ndarray object, including passing in buffers and shapes and so on. We would probably prefer the constructor to be able to take an already formed ndarray from the usual numpy calls to ``np.array`` and return an object. Slightly more realistic example - attribute added to existing array ------------------------------------------------------------------- Here is a class that takes a standard ndarray that already exists, casts it to our type, and adds an extra attribute. .. testcode:: import numpy as np class RealisticInfoArray(np.ndarray): def __new__(cls, input_array, info=None): # Input array is an already formed ndarray instance # We first cast to be our class type obj = np.asarray(input_array).view(cls) # add the new attribute to the created instance obj.info = info # Finally, we must return the newly created object: return obj def __array_finalize__(self, obj): # see InfoArray.__array_finalize__ for comments if obj is None: return self.info = getattr(obj, 'info', None) So: >>> arr = np.arange(5) >>> obj = RealisticInfoArray(arr, info='information') >>> type(obj) <class 'RealisticInfoArray'> >>> obj.info 'information' >>> v = obj[1:] >>> type(v) <class 'RealisticInfoArray'> >>> v.info 'information' ..
_array-wrap: ``__array_wrap__`` for ufuncs ------------------------------------------------------- ``__array_wrap__`` gets called at the end of numpy ufuncs and other numpy functions, to allow a subclass to set the type of the return value and update attributes and metadata. Let's show how this works with an example. First we make the same subclass as above, but with a different name and some print statements: .. testcode:: import numpy as np class MySubClass(np.ndarray): def __new__(cls, input_array, info=None): obj = np.asarray(input_array).view(cls) obj.info = info return obj def __array_finalize__(self, obj): print 'In __array_finalize__:' print ' self is %s' % repr(self) print ' obj is %s' % repr(obj) if obj is None: return self.info = getattr(obj, 'info', None) def __array_wrap__(self, out_arr, context=None): print 'In __array_wrap__:' print ' self is %s' % repr(self) print ' arr is %s' % repr(out_arr) # then just call the parent return np.ndarray.__array_wrap__(self, out_arr, context) We run a ufunc on an instance of our new array: >>> obj = MySubClass(np.arange(5), info='spam') In __array_finalize__: self is MySubClass([0, 1, 2, 3, 4]) obj is array([0, 1, 2, 3, 4]) >>> arr2 = np.arange(5)+1 >>> ret = np.add(arr2, obj) In __array_wrap__: self is MySubClass([0, 1, 2, 3, 4]) arr is array([1, 3, 5, 7, 9]) In __array_finalize__: self is MySubClass([1, 3, 5, 7, 9]) obj is MySubClass([0, 1, 2, 3, 4]) >>> ret MySubClass([1, 3, 5, 7, 9]) >>> ret.info 'spam' Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method of the input with the highest ``__array_priority__`` value, in this case ``MySubClass.__array_wrap__``, with arguments ``self`` as ``obj``, and ``out_arr`` as the (ndarray) result of the addition. In turn, the default ``__array_wrap__`` (``ndarray.__array_wrap__``) has cast the result to class ``MySubClass``, and called ``__array_finalize__`` - hence the copying of the ``info`` attribute. This has all happened at the C level. But, we could do anything we wanted: .. testcode:: class SillySubClass(np.ndarray): def __array_wrap__(self, arr, context=None): return 'I lost your data' >>> arr1 = np.arange(5) >>> obj = arr1.view(SillySubClass) >>> arr2 = np.arange(5) >>> ret = np.multiply(obj, arr2) >>> ret 'I lost your data' So, by defining a specific ``__array_wrap__`` method for our subclass, we can tweak the output from ufuncs. The ``__array_wrap__`` method requires ``self``, then an argument - which is the result of the ufunc - and an optional parameter *context*. This parameter is passed by some ufuncs as a 3-element tuple: (name of the ufunc, arguments of the ufunc, domain of the ufunc). ``__array_wrap__`` should return an instance of its containing class. See the masked array subclass for an implementation. In addition to ``__array_wrap__``, which is called on the way out of the ufunc, there is also an ``__array_prepare__`` method which is called on the way into the ufunc, after the output arrays are created but before any computation has been performed. The default implementation does nothing but pass through the array. ``__array_prepare__`` should not attempt to access the array data or resize the array; it is intended for setting the output array type, updating attributes and metadata, and performing any checks based on the input that may be desired before computation begins. Like ``__array_wrap__``, ``__array_prepare__`` must return an ndarray or subclass thereof or raise an error.
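By analogy with the examples above, a minimal sketch of an ``__array_prepare__`` hook (illustrative only; the class name and the logging are invented):

.. testcode::

    class LoggedArray(np.ndarray):

        def __array_prepare__(self, out_arr, context=None):
            # Runs before the ufunc computes; out_arr is the freshly
            # allocated (still uncomputed) output array, so we do not
            # touch its data, only observe and pass it through.
            print 'In __array_prepare__ with type %s' % type(self)
            return np.ndarray.__array_prepare__(self, out_arr, context)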
Extra gotchas - custom ``__del__`` methods and ndarray.base ----------------------------------------------------------- One of the problems that ndarray solves is keeping track of memory ownership of ndarrays and their views. Consider the case where we have created an ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``. The two objects are looking at the same memory. Numpy keeps track of where the data came from for a particular array or view, with the ``base`` attribute: >>> # A normal ndarray, that owns its own data >>> arr = np.zeros((4,)) >>> # In this case, base is None >>> arr.base is None True >>> # We take a view >>> v1 = arr[1:] >>> # base now points to the array that it derived from >>> v1.base is arr True >>> # Take a view of a view >>> v2 = v1[1:] >>> # base points to the view it derived from >>> v2.base is v1 True In general, if the array owns its own memory, as for ``arr`` in this case, then ``arr.base`` will be None - there are some exceptions to this - see the numpy book for more details. The ``base`` attribute is useful in being able to tell whether we have a view or the original array. This in turn can be useful if we need to know whether or not to do some specific cleanup when the subclassed array is deleted. For example, we may only want to do the cleanup if the original array is deleted, but not the views. For an example of how this can work, have a look at the ``memmap`` class in ``numpy.core``. """
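For instance, a minimal sketch of base-aware cleanup (the resource handling is hypothetical):

.. testcode::

    class OwnedArray(np.ndarray):

        def __del__(self):
            # Views share the owner's memory; only the owning instance
            # (where base is None) should release external resources.
            if self.base is None:
                pass  # hypothetical cleanup, e.g. closing a file handle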
"""This module tests SyntaxErrors. Here's an example of the sort of thing that is tested. >>> def f(x): ... global x Traceback (most recent call last): SyntaxError: name 'x' is parameter and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that raise these exceptions-- ast.c, compile.c, future.c, pythonrun.c, and symtable.c. The parser itself outlaws a lot of invalid syntax. None of these errors are tested here at the moment. We should add some tests; since there are infinitely many programs with invalid syntax, we would need to be judicious in selecting some. The compiler generates a synthetic module name for code executed by doctest. Since all the code comes from the same module, a suffix like [1] is appended to the module name, As a consequence, changing the order of tests in this module means renumbering all the errors after it. (Maybe we should enable the ellipsis option for these tests.) In ast.c, syntax errors are raised by calling ast_error(). Errors from set_context(): >>> obj.None = 1 Traceback (most recent call last): SyntaxError: invalid syntax >>> None = 1 Traceback (most recent call last): SyntaxError: can't assign to keyword It's a syntax error to assign to the empty tuple. Why isn't it an error to assign to the empty list? It will always raise some error at runtime. >>> () = 1 Traceback (most recent call last): SyntaxError: can't assign to () >>> f() = 1 Traceback (most recent call last): SyntaxError: can't assign to function call >>> del f() Traceback (most recent call last): SyntaxError: can't delete function call >>> a + 1 = 2 Traceback (most recent call last): SyntaxError: can't assign to operator >>> (x for x in x) = 1 Traceback (most recent call last): SyntaxError: can't assign to generator expression >>> 1 = 1 Traceback (most recent call last): SyntaxError: can't assign to literal >>> "abc" = 1 Traceback (most recent call last): SyntaxError: can't assign to literal >>> b"" = 1 Traceback (most recent call last): SyntaxError: can't assign to literal >>> `1` = 1 Traceback (most recent call last): SyntaxError: invalid syntax If the left-hand side of an assignment is a list or tuple, an illegal expression inside that contain should still cause a syntax error. This test just checks a couple of cases rather than enumerating all of them. >>> (a, "b", c) = (1, 2, 3) Traceback (most recent call last): SyntaxError: can't assign to literal >>> [a, b, c + 1] = [1, 2, 3] Traceback (most recent call last): SyntaxError: can't assign to operator >>> a if 1 else b = 1 Traceback (most recent call last): SyntaxError: can't assign to conditional expression From compiler_complex_args(): >>> def f(None=1): ... pass Traceback (most recent call last): SyntaxError: invalid syntax From ast_for_arguments(): >>> def f(x, y=1, z): ... pass Traceback (most recent call last): SyntaxError: non-default argument follows default argument >>> def f(x, None): ... pass Traceback (most recent call last): SyntaxError: invalid syntax >>> def f(*None): ... pass Traceback (most recent call last): SyntaxError: invalid syntax >>> def f(**None): ... pass Traceback (most recent call last): SyntaxError: invalid syntax From ast_for_funcdef(): >>> def None(x): ... pass Traceback (most recent call last): SyntaxError: invalid syntax From ast_for_call(): >>> def f(it, *varargs): ... 
return list(it) >>> L = range(10) >>> f(x for x in L) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(x for x in L, 1) Traceback (most recent call last): SyntaxError: Generator expression must be parenthesized if not sole argument >>> f((x for x in L), 1) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ... i109, i110, i111, i112, i113, i114, i115, i116, i117, ... i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ... i244, i245, i246, i247, i248, i249, i250, i251, i252, ... i253, i254, i255) Traceback (most recent call last): SyntaxError: more than 255 arguments The actual error cases counts positional arguments, keyword arguments, and generator expression arguments separately. This test combines the three. >>> f(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, ... i12, i13, i14, i15, i16, i17, i18, i19, i20, i21, i22, ... i23, i24, i25, i26, i27, i28, i29, i30, i31, i32, i33, ... i34, i35, i36, i37, i38, i39, i40, i41, i42, i43, i44, ... i45, i46, i47, i48, i49, i50, i51, i52, i53, i54, i55, ... i56, i57, i58, i59, i60, i61, i62, i63, i64, i65, i66, ... i67, i68, i69, i70, i71, i72, i73, i74, i75, i76, i77, ... i78, i79, i80, i81, i82, i83, i84, i85, i86, i87, i88, ... i89, i90, i91, i92, i93, i94, i95, i96, i97, i98, i99, ... i100, i101, i102, i103, i104, i105, i106, i107, i108, ... i109, i110, i111, i112, i113, i114, i115, i116, i117, ... i118, i119, i120, i121, i122, i123, i124, i125, i126, ... i127, i128, i129, i130, i131, i132, i133, i134, i135, ... i136, i137, i138, i139, i140, i141, i142, i143, i144, ... i145, i146, i147, i148, i149, i150, i151, i152, i153, ... i154, i155, i156, i157, i158, i159, i160, i161, i162, ... i163, i164, i165, i166, i167, i168, i169, i170, i171, ... i172, i173, i174, i175, i176, i177, i178, i179, i180, ... i181, i182, i183, i184, i185, i186, i187, i188, i189, ... i190, i191, i192, i193, i194, i195, i196, i197, i198, ... i199, i200, i201, i202, i203, i204, i205, i206, i207, ... i208, i209, i210, i211, i212, i213, i214, i215, i216, ... i217, i218, i219, i220, i221, i222, i223, i224, i225, ... i226, i227, i228, i229, i230, i231, i232, i233, i234, ... i235, i236, i237, i238, i239, i240, i241, i242, i243, ... 
(x for x in i244), i245, i246, i247, i248, i249, i250, i251, ... i252=1, i253=1, i254=1, i255=1) Traceback (most recent call last): SyntaxError: more than 255 arguments >>> f(lambda x: x[0] = 3) Traceback (most recent call last): SyntaxError: lambda cannot contain assignment The grammar accepts any test (basically, any expression) in the keyword slot of a call site. Test a few different options. >>> f(x()=2) Traceback (most recent call last): SyntaxError: keyword can't be an expression >>> f(a or b=1) Traceback (most recent call last): SyntaxError: keyword can't be an expression >>> f(x.y=1) Traceback (most recent call last): SyntaxError: keyword can't be an expression More set_context(): >>> (x for x in x) += 1 Traceback (most recent call last): SyntaxError: can't assign to generator expression >>> None += 1 Traceback (most recent call last): SyntaxError: can't assign to keyword >>> f() += 1 Traceback (most recent call last): SyntaxError: can't assign to function call Test continue in finally in weird combinations. continue in for loop under finally should be ok. >>> def test(): ... try: ... pass ... finally: ... for abc in range(10): ... continue ... print(abc) >>> test() 9 Start simple, a continue in a finally should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause This is essentially a continue in a finally which should not be allowed. >>> def test(): ... for abc in range(10): ... try: ... pass ... finally: ... try: ... continue ... except: ... pass Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: ... pass ... finally: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: ... pass ... finally: ... try: ... continue ... finally: ... pass Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause >>> def foo(): ... for a in (): ... try: pass ... finally: ... try: ... pass ... except: ... continue Traceback (most recent call last): ... SyntaxError: 'continue' not supported inside 'finally' clause There is one test for a break that is not in a loop. The compiler uses a single data structure to keep track of try-finally and loops, so we need to be sure that a break is actually inside a loop. If it isn't, there should be a syntax error. >>> try: ... print(1) ... break ... print(2) ... finally: ... print(3) Traceback (most recent call last): ... SyntaxError: 'break' outside loop This should probably raise a better error than a SystemError (or none at all). In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 >>> while 1: ... while 2: ... while 3: ... while 4: ... while 5: ... while 6: ... while 8: ... while 9: ... while 10: ... while 11: ... while 12: ... while 13: ... while 14: ... while 15: ... while 16: ... while 17: ... while 18: ... while 19: ... while 20: ... while 21: ... while 22: ... break Traceback (most recent call last): ... SystemError: too many statically nested blocks Misuse of the nonlocal statement can lead to a few unique syntax errors. 
>>> def f(x): ... nonlocal x Traceback (most recent call last): ... SyntaxError: name 'x' is parameter and nonlocal >>> def f(): ... global x ... nonlocal x Traceback (most recent call last): ... SyntaxError: name 'x' is nonlocal and global >>> def f(): ... nonlocal x Traceback (most recent call last): ... SyntaxError: no binding for nonlocal 'x' found From SF bug #1705365 >>> nonlocal x Traceback (most recent call last): ... SyntaxError: nonlocal declaration not allowed at module level TODO(jhylton): Figure out how to test SyntaxWarning with doctest. ## >>> def f(x): ## ... def f(): ## ... print(x) ## ... nonlocal x ## Traceback (most recent call last): ## ... ## SyntaxWarning: name 'x' is assigned to before nonlocal declaration ## >>> def f(): ## ... x = 1 ## ... nonlocal x ## Traceback (most recent call last): ## ... ## SyntaxWarning: name 'x' is assigned to before nonlocal declaration This tests assignment-context; there was a bug in Python 2.5 where compiling a complex 'if' (one with 'elif') would fail to notice an invalid suite, leading to spurious errors. >>> if 1: ... x() = 1 ... elif 1: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... x() = 1 Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... x() = 1 ... elif 1: ... pass ... else: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... x() = 1 ... else: ... pass Traceback (most recent call last): ... SyntaxError: can't assign to function call >>> if 1: ... pass ... elif 1: ... pass ... else: ... x() = 1 Traceback (most recent call last): ... SyntaxError: can't assign to function call Make sure that the old "raise X, Y[, Z]" form is gone: >>> raise X, Y Traceback (most recent call last): ... SyntaxError: invalid syntax >>> raise X, Y, Z Traceback (most recent call last): ... SyntaxError: invalid syntax >>> f(a=23, a=234) Traceback (most recent call last): ... SyntaxError: keyword argument repeated >>> del () Traceback (most recent call last): SyntaxError: can't delete () >>> {1, 2, 3} = 42 Traceback (most recent call last): SyntaxError: can't assign to literal Corner-cases that used to fail to raise the correct error: >>> def f(*, x=lambda __debug__:0): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> def f(*args:(lambda __debug__:0)): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> def f(**kwargs:(lambda __debug__:0)): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> with (lambda *:0): pass Traceback (most recent call last): SyntaxError: named arguments must follow bare * Corner-cases that used to crash: >>> def f(**__debug__): pass Traceback (most recent call last): SyntaxError: assignment to keyword >>> def f(*xx, __debug__): pass Traceback (most recent call last): SyntaxError: assignment to keyword """
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to read all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a synchronous server and doing an
explicit fork in the request handler class handle() method.

Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request).  This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).

Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
  and encryption schemes
- Standard framework for select-based multiplexing

XXX Open problems:
- What to do with out-of-band data?

BaseServer:
- split generic "request" functionality out into BaseServer class.
  Copyright (C) 2000  NAME <EMAIL>

  example: read entries from a SQL database (requires overriding
  get_request() to return a table entry from the database).
  entry is processed by a RequestHandlerClass.
"""
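# Usage sketch (not part of this module): a minimal threading TCP echo
# service combining the pieces described above -- a BaseRequestHandler
# subclass plus ThreadingMixIn and TCPServer.  Assumes the Python 3
# module name "socketserver"; under Python 2 the module is SocketServer.
import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Read one chunk from the client and send it straight back.
        data = self.request.recv(1024)
        self.request.sendall(data)

class ThreadingTCPEchoServer(socketserver.ThreadingMixIn,
                             socketserver.TCPServer):
    pass  # the mix-in must come first, as noted above

if __name__ == "__main__":
    server = ThreadingTCPEchoServer(("localhost", 9999), EchoHandler)
    server.serve_forever()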
# This code is part of Ansible, but is an independent component. # This particular file snippet, and this file snippet only, is BSD licensed. # Modules you write using this snippet, which is embedded dynamically by Ansible # still belong to the author of the module, and may assign their own license # to the complete work. # # Copyright (c), NAME <EMAIL>, 2012-2013 # Copyright (c), NAME <EMAIL>, 2015 # All rights reserved. # # Redistribution and use in source and binary forms, with or without modification, # are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. # IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # The match_hostname function and supporting code is under the terms and # conditions of the Python Software Foundation License. They were taken from # the Python3 standard library and adapted for use in Python2. See comments in the # source for which code precisely is under this License. PSF License text # follows: # # PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 # -------------------------------------------- # # 1. This LICENSE AGREEMENT is between the Python Software Foundation # ("PSF"), and the Individual or Organization ("Licensee") accessing and # otherwise using this software ("Python") in source or binary form and # its associated documentation. # # 2. Subject to the terms and conditions of this License Agreement, PSF hereby # grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, # analyze, test, perform and/or display publicly, prepare derivative works, # distribute, and otherwise use Python alone or in any derivative version, # provided, however, that PSF's License Agreement and PSF's notice of copyright, # i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are # retained in Python alone or in any derivative version prepared by Licensee. # # 3. In the event Licensee prepares a derivative work that is based on # or incorporates Python or any part thereof, and wants to make # the derivative work available to others as provided herein, then # Licensee hereby agrees to include in any such work a brief summary of # the changes made to Python. # # 4. PSF is making Python available to Licensee on an "AS IS" # basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR # IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND # DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS # FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT # INFRINGE ANY THIRD PARTY RIGHTS. # # 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON # FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS # A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, # OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. # # 6. This License Agreement will automatically terminate upon a material # breach of its terms and conditions. # # 7. Nothing in this License Agreement shall be deemed to create any # relationship of agency, partnership, or joint venture between PSF and # Licensee. This License Agreement does not grant permission to use PSF # trademarks or trade name in a trademark sense to endorse or promote # products or services of Licensee, or any third party. # # 8. By copying, installing or otherwise using Python, Licensee # agrees to be bound by the terms and conditions of this License # Agreement.
# # =============================================================================== # # Copyright 2015 NAME # # # Licensed under the Apache License, Version 2.0 (the "License"); # # you may not use this file except in compliance with the License. # # You may obtain a copy of the License at # # # # http://www.apache.org/licenses/LICENSE-2.0 # # # # Unless required by applicable law or agreed to in writing, software # # distributed under the License is distributed on an "AS IS" BASIS, # # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # # See the License for the specific language governing permissions and # # limitations under the License. # # =============================================================================== # # # ============= enthought library imports ======================= # from itertools import groupby # # from traits.api import HasTraits, Str, Bool, Float # from traitsui.api import View, UItem, Item, VGroup, HGroup # # ============= standard library imports ======================== # # ============= local library imports ========================== # # from pychron.pipeline.nodes.data import DVCNode # # # class AnalysisMetadataOption(HasTraits): # sample = Str(analysis=True) # material = Str(analysis=True) # project = Str(analysis=True) # comment = Str(analysis=True) # weight = Float(extraction=True) # identifier = Float(identifier=True) # aliquot = Float(identifier=True) # # sample_enabled = Bool(False) # material_enabled = Bool(False) # project_enabled = Bool(False) # comment_enabled = Bool(False) # weight_enabled = Bool(False) # identifier_enabled = Bool(False) # aliquot_enabled = Bool(False) # # def traits_view(self): # v = View(VGroup(HGroup(Item('sample_enabled', label='Sample'), # UItem('sample', enabled_when='sample_enabled')), # HGroup(Item('material_enabled', label='Material'), # UItem('material', enabled_when='material_enabled')), # HGroup(Item('project_enabled', label='Project'), # UItem('project', enabled_when='project_enabled')), # HGroup(Item('comment_enabled', label='Comment'), # UItem('comment', enabled_when='comment_enabled')), # HGroup(Item('weight_enabled', label='Weight (mg)'), # UItem('weight', enabled_when='weight_enabled'))), # buttons=['OK', 'Cancel']) # return v # # def get_edit_dict(self): # am = {k: getattr(self, k) for k in self.traits(analysis=True) if getattr(self, '{}_enabled'.format(k))} # ext = {k: getattr(self, k) for k in self.traits(extraction=True) if getattr(self, '{}_enabled'.format(k))} # idn = {k: getattr(self, k) for k in self.traits(identifier=True) if getattr(self, '{}_enabled'.format(k))} # return am, ext, idn # # # class AnalysisMetadataNode(DVCNode): # options_klass = AnalysisMetadataOption # name = 'Analysis Metadata' # # def configure(self, *args, **kw): # if self.unknowns: # unk = self.unknowns[0] # self.options.sample = unk.sample # self.options.material = unk.material # self.options.project = unk.project # self.options.comment = unk.comment # self.options.weight = unk.weight # self.options.identifier = unk.identifier # self.options.aliquot = unk.aliquot # # return super(AnalysisMetadataNode, self).configure(*args, **kw) # # def run(self, state): # dvc = self.dvc # # am_dict, ext_dict, idn_dict = self.options.get_edit_dict() # if am_dict or ext_dict or idn_dict: # def key(xi): # return xi.repository_identifier # # for repo_id, ans in groupby(sorted(self.unknowns, key=key), key=key): # # dvc.pull_repository(repo_id) # for ai in self.unknowns: # dvc.analysis_metadata_edit(ai.uuid, ai.record_id, 
ai.repository_identifier, am_dict, ext_dict, # idn_dict) # # dvc.repository_commit(repo_id, 'Analysis Metadata edits') # # # ============= EOF =============================================
# #!/usr/bin/env python # # """ # @package mi.idk.dataset.test.test_package_driver # @file mi.idk/dataset/test/test_package_driver.py # @author NAME @brief test package driver object # """ # # __author__ = 'Emily NAME __license__ = 'Apache 2.0' # # from mi.core.unit_test import MiUnitTest # # from mi.core.log import get_logger ; log = get_logger() # from nose.plugins.attrib import attr # import unittest # # import sys # import os # import re # import pkg_resources # from os.path import exists # # from mi.idk.config import Config # # from mi.idk.dataset.metadata import Metadata # from mi.idk.dataset.package_driver import PackageDriver # # MI_BASE_DIR = "mi/dataset/driver" # DRIVER_DIR = "fake_platform/fake_driver" # CONSTRUCTOR = "FakeDriver" # METADATA_FILE = "metadata.yml" # # @attr('UNIT', group='mi') # class TestPackageDriver(MiUnitTest): # # def setUp(self): # """ # Setup the test case # """ # # create the expected file and directories to go with the metadata # self.createFakeDriver() # # def createMetadataFile(self, version="0.2.2"): # """ # Copied from test_metadata # """ # # full_driver_path = "%s/%s/%s" % (Config().base_dir(), MI_BASE_DIR, DRIVER_DIR) # if(not exists(full_driver_path)): # os.makedirs(full_driver_path) # md_file_path = "%s/%s" % (full_driver_path, METADATA_FILE) # md_file = open(md_file_path, 'w') # # md_file.write("driver_metadata:\n") # md_file.write(" author: NAME path_str = " driver_path: %s\n" % DRIVER_DIR # md_file.write(path_str) # md_file.write(" driver_name: fake_driver\n") # md_file.write(" email: EMAIL md_file.write(" release_notes: some note\n") # version_str = " version: %s\n" % version # md_file.write(version_str) # constr_str = " constructor: %s\n" % CONSTRUCTOR # md_file.write(constr_str) # # md_file.close() # # current_dsa_path = Config().idk_config_dir() + "/current_dsa.yml" # log.info("linking %s to %s", md_file_path, current_dsa_path) # # exists doesn't catch when this link is broken but still there, # # need to figure out how to find and delete # if exists(current_dsa_path): # os.remove(current_dsa_path) # # os.symlink(md_file_path, current_dsa_path) # # def createFakeDriver(self): # """ # Write a fake driver # """ # full_driver_path = "%s/%s/%s" % (Config().base_dir(), MI_BASE_DIR, DRIVER_DIR) # if(not exists(full_driver_path)): # os.makedirs(full_driver_path) # # fake_driver_path = "%s/%s/%s/driver.py" % (Config().base_dir(), MI_BASE_DIR, DRIVER_DIR) # # # create a fake driver file # if not exists(fake_driver_path): # # write a make driver which just has the class constructor in it # fake_driver_file = open(fake_driver_path, 'w') # log.info("creating fake driver %s", fake_driver_path) # fake_driver_file.write("from mi.core.log import get_logger ; log = get_logger()\n\n") # class_line = "class %s(object):\n\n" % CONSTRUCTOR # fake_driver_file.write(class_line) # fake_driver_file.write(" def sayHello(self):\n") # fake_driver_file.write(" log.info('Hello from Dataset Driver')\n") # fake_driver_file.close() # # # make sure we have __init__.py in both dirs # init_path = "%s/%s/%s/__init__.py" % (Config().base_dir(), MI_BASE_DIR, "fake_platform") # touch = open(init_path, "w") # touch.close() # init_path = "%s/%s/%s/__init__.py" % (Config().base_dir(), MI_BASE_DIR, DRIVER_DIR) # touch = open(init_path, "w") # touch.close() # # else: # log.info("fake driver already initialized %s", fake_driver_path) # # # make sure there is a test directory # test_dir_path = "%s/%s/%s/test" % (Config().base_dir(), MI_BASE_DIR, DRIVER_DIR) # if not 
exists(test_dir_path): # os.makedirs(test_dir_path) # # # make sure there is a resource directory # resource_dir_path = "%s/%s/%s/resource" % (Config().base_dir(), MI_BASE_DIR, DRIVER_DIR) # if not exists(resource_dir_path): # os.makedirs(resource_dir_path) # # # the resource directory must contain strings.yml # resource_path = "%s/strings.yml" % resource_dir_path # if not exists(resource_path): # touch = open(resource_path, "w") # touch.close() # # test_driver_path = "%s/test_driver.py" % test_dir_path # if not exists(test_driver_path): # fake_driver_test = open(test_driver_path, 'w') # log.info("creating fake test driver %s", test_driver_path) # fake_driver_test.close() # # make sure we have an __init__.py # init_path = "%s/__init__.py" % test_dir_path # touch = open(init_path, "w") # touch.close() # else: # log.info("fake test driver test already initialized") # # #def test_package_driver(self): # """ # # make a metadata file is initialized, otherwise it might not exist # self.createMetadataFile() # # # create the metadata so we can use it for opening the egg # metadata1 = Metadata() # # package_driver = PackageDriver() # package_driver.run() # # # overwrite the original metadata file for the same driver and change the version # self.createMetadataFile("0.2.5") # # # create the metadata so we can use it for opening the egg # metadata2 = Metadata() # # # run package driver again to create the new driver version # package_driver = PackageDriver() # package_driver.run() # # log.info("Both driver eggs created") # # # load the first driver # cotr1 = self.load_egg(metadata1) # loaded_driver = cotr1() # loaded_driver.sayHello() # log.info("First driver done saying hello") # # # load the second driver # cotr2 = self.load_egg(metadata2) # loaded_driver2 = cotr2() # loaded_driver2.sayHello() # log.info("Second driver done saying hello") # # # just print out some entry point info # for x in pkg_resources.iter_entry_points('drivers.dataset.fake_driver'): # log.info("entry point:%s", x) # """ # # @unittest.skip("requires enter be pressed to run.") # def test_package_driver_real(self): # """ # Test with real hypm ctd driver code # """ # # # link current metadata dsa file to a real driver, the ctd # current_dsa_path = Config().idk_config_dir() + "/current_dsa.yml" # ctd_md_path = "%s/%s/hypm/ctd/metadata.yml" % (Config().base_dir(), MI_BASE_DIR) # log.info("linking %s to %s", ctd_md_path, current_dsa_path) # # exists doesn't catch when this link is broken but still there, # # need to figure out how to find and delete # if exists(current_dsa_path): # os.remove(current_dsa_path) # log.error(current_dsa_path) # os.symlink(ctd_md_path, current_dsa_path) # # # create the metadata so we can use it for opening the egg # metadata = Metadata() # # # create the egg with the package driver # package_driver = PackageDriver() # package_driver.run() # # startup_config = { # 'harvester': # { # 'directory': '/tmp/dsatest', # 'pattern': '*.txt', # 'frequency': 1, # }, # 'parser': {} # } # # # load the driver # cotr = self.load_egg(metadata) # # need to load with the right number of arguments # egg_driver = cotr(startup_config, None, None, None, None) # log.info("driver loaded") # # # def load_egg(self, metadata): # # use metadata to initialize egg name and location, but only the egg name # # should be used to figure out how to load the entry point # egg_name = "%s-%s-py2.7.egg" % (metadata.driver_name_versioned, metadata.version) # egg_cache_dir = "/tmp/%s/dist" % metadata.driver_name_versioned # 
pkg_resources.working_set.add_entry(egg_cache_dir + '/' + egg_name) # first_dash_idx = egg_name.find('-') # second_dash_idx = egg_name[first_dash_idx+1:].find('-') # first_under_idx = egg_name.find('_') # driver_name = egg_name[first_under_idx+1:first_dash_idx] # log.debug("found driver name %s", driver_name) # digit_match = re.search("\d", driver_name) # if digit_match: # first_digit = digit_match.start() # short_driver = driver_name[:first_digit-1] # else: # short_driver = driver_name # log.debug("found short driver name %s", short_driver) # version = egg_name[first_dash_idx+1:(second_dash_idx+first_dash_idx+1)] # log.debug("found version %s", version) # entry_point = 'driver-' + version # group_name = 'drivers.dataset.' + short_driver # log.info("entry point %s, group_name %s", entry_point, group_name) # cotr = pkg_resources.load_entry_point('driver_' + driver_name, group_name, entry_point) # log.info("loaded entry point") # return cotr # # # # # # #
#!/usr/bin/env python

# Try to determine how much RAM is currently being used per program.
# Note per _program_, not per process.  So for example this script
# will report RAM used by all httpd processes together.  In detail it reports:
# sum(private RAM for program processes) + sum(Shared RAM for program processes)
# The shared RAM is problematic to calculate, and this script automatically
# selects the most accurate method available for your kernel.

# Licence: LGPLv2
# Author:  EMAIL
# Source:  http://www.pixelbeat.org/scripts/ps_mem.py

# V1.0  06 Jul 2005  Initial release
# V1.1  11 Aug 2006  root permission required for accuracy
# V1.2  08 Nov 2006  Add total to output
#                    Use KiB,MiB,... for units rather than K,M,...
# V1.3  22 Nov 2006  Ignore shared col from /proc/$pid/statm for
#                    2.6 kernels up to and including 2.6.9.
#                    There it represented the total file backed extent
# V1.4  23 Nov 2006  Remove total from output as it's meaningless
#                    (the shared values overlap with other programs).
#                    Display the shared column.  This extra info is
#                    useful, especially as it overlaps between programs.
# V1.5  26 Mar 2007  Remove redundant recursion from human()
# V1.6  05 Jun 2007  Also report number of processes with a given name.
#                    Patch from EMAIL
# V1.7  20 Sep 2007  Use PSS from /proc/$pid/smaps if available, which
#                    fixes some over-estimation and allows totalling.
#                    Enumerate the PIDs directly rather than using ps,
#                    which fixes the possible race between reading
#                    RSS with ps, and shared memory with this program.
#                    Also we can show non truncated command names.
# V1.8  28 Sep 2007  More accurate matching for stats in /proc/$pid/smaps
#                    as otherwise could match libraries causing a crash.
#                    Patch from EMAIL
# V1.9  20 Feb 2008  Fix invalid values reported when PSS is available.
#                    Reported by NAME <EMAIL>
# V3.3  24 Jun 2014
#   http://github.com/pixelb/scripts/commits/master/scripts/ps_mem.py

# Notes:
#
# All interpreted programs where the interpreter is started
# by the shell or with env, will be merged to the interpreter
# (as that's what's given to exec).  For e.g. all python programs
# starting with "#!/usr/bin/env python" will be grouped under python.
# You can change this by using the full command line but that will
# have the undesirable effect of splitting up programs started with
# differing parameters (for e.g. mingetty tty[1-6]).
#
# For 2.6 kernels up to and including 2.6.13 and later 2.4 redhat kernels
# (rmap vm without smaps) it can not be accurately determined how many pages
# are shared between processes in general or within a program in our case:
# http://lkml.org/lkml/2005/7/6/250
# A warning is printed if overestimation is possible.
# In addition for 2.6 kernels up to 2.6.9 inclusive, the shared
# value in /proc/$pid/statm is the total file-backed extent of a process.
# We ignore that, introducing more overestimation, again printing a warning.
# Since kernel 2.6.23-rc8-mm1 PSS is available in smaps, which allows
# us to calculate a more accurate value for the total RAM used by programs.
#
# Programs that use CLONE_VM without CLONE_THREAD are discounted by assuming
# they're the only programs that have the same /proc/$PID/smaps file for
# each instance.  This will fail if there are multiple real instances of a
# program that then use CLONE_VM without CLONE_THREAD, or if a clone changes
# its memory map while we're checksumming each /proc/$PID/smaps.
#
# I don't take account of memory allocated for a program
# by other programs.  For e.g. memory used in the X server for
# a program could be determined, but is not.
#
# FreeBSD is supported if linprocfs is mounted at /compat/linux/proc/
# FreeBSD 8.0 supports up to a level of Linux 2.6.16
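# Illustrative sketch (not part of ps_mem.py itself): total the Pss:
# fields of one process's /proc/$PID/smaps -- the per-process measure the
# script prefers when the kernel provides it.  Values are in KiB, and
# reading another process's smaps may require root, as noted above.
import sys

def pss_kib(pid):
    total = 0
    with open('/proc/%s/smaps' % pid) as smaps:
        for line in smaps:
            if line.startswith('Pss:'):
                total += int(line.split()[1])
    return total

if __name__ == '__main__':
    pid = sys.argv[1] if len(sys.argv) > 1 else 'self'
    print('%d KiB' % pss_kib(pid))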
""" Interface to :mod:`~pySPACE.environments.chains.node_chain` using the :class:`~pySPACE.environments.chains.node_chain.BenchmarkNodeChain` .. image:: ../../graphics/node_chain.png :width: 500 A process consists of applying the given `node_chain` on an input for a certain parameters setting. Factory methods are provided that create this operation based on a specification in one or more YAML files. .. note:: Complete lists of available nodes and namings can be found at: :ref:`node_list`. .. seealso:: - :mod:`~pySPACE.missions.nodes` - :ref:`node_list` - :mod:`~pySPACE.environments.chains.node_chain` Specification file Parameters +++++++++++++++++++++++++++++ type ---- Specifies which operation is used. For this operation you have to use *node_chain*. (*obligatory, node_chain*) input_path ---------- Path name relative to the *storage* in your configuration file. The input has to fulfill certain rules, specified in the fitting: :mod:`~pySPACE.resources.dataset_defs` subpackage. For each dataset in the input and each parameter combination you get an own result. (*obligatory*) templates --------- List of node chain file templates which shall be evaluated. The final node chain is constructed, by substituting specified parameters of the operation spec file. Therefore you should use the format *__PARAMETER__*. Already fixed and usable keywords are *__INPUT_DATASET__* and *__RESULT_DIRECTORY__*. The spec file name may include a path and it is searched relative the folder *node_chains* in the specs folder, given in your configuration file you used, when starting :ref:`pySPACE` (:mod:`pySPACE.run.launch`). If no template is given, the software tries to use the parameters *node_chain*, which simply includes the *node_chain* description. .. todo:: Check that parameter fulfills format rule! (*recommended, alternative: `node_chain`*) node_chain ---------- Instead of using a template you can use your single node chain directly in the operation spec. (*optional, default: `templates`*) runs ---- Number of repetitions for each running process. The run number is given to the process, to use it for *choosing* random components, especially when using cross-validation. (*optional, default: 1*) parameter_ranges ---------------- Dictionary with parameter names as keys and a list of values it should be replaced with. Finally a grid of all possible combinations is build and afterwards only those remain fulfilling the constraints. (*mandatory if parameter_setting is not used*) parameter_setting ----------------- If you do not use the *parameter_ranges*, this is a list of dictionaries giving the parameter combinations, you want to test the operation on. (*mandatory if parameter_ranges is not used*) constraints ----------- List of strings, where the parameters values are replaced and which is afterwards evaluated to check if is *True*. If it is not, the parameter is rejected. (*optional, default: []*) old_parameter_constraints ------------------------- Same as *constraints*, but here parameters of earlier operation calls, e.g. windowing, can be used in the constraint def. hide_parameters --------------- Normally each parameter is added in the format *{__PARAMETER__: value}* added to the *__RESULT_DIRECTORY__*. Every parameter specified in the list *hide_parameters* is not added to the folder name. Be careful not to have the same name for different results. (*optional, default: []*) storage_format -------------- Some datsets give the opportunity to choose between different formats for storing the result. 
The choice can be specified here. For details look at the :mod:`~pySPACE.resources.dataset_defs` documentation. If the format is not specified, the default of the dataset is used. (*optional, default: default of dataset*) store_node_chain ---------------- option to save as pickle file the total state of the node chain after the processing; separately for each split in cross validation if existing (*optional, default: False*) compression ----------- If your result is a classification summary, all the created sub-folders are zipped with the zipfile module, since they are normally not needed anymore, but you may have numerous folders, making coping difficult. To switch this of, use the value *False* and if you want no compression, use *1*. If all the sub-folders (except a single one) should be deleted, set compression to ``delete``. (*optional, default: 8*) Exemplary Call ++++++++++++++ .. code-block:: yaml type: node_chain input_path: "my_data" runs : 2 templates : ["example_node_chain1.yaml","example_node_chain2.yaml","example_node_chain3.yaml"] .. code-block:: yaml type: node_chain input_path: "my_data" runs : 2 parameter_ranges : __prob__ : eval(range(1,4)) node_chain : - node: FeatureVectorSource - node: CrossValidationSplitter parameters: splits : 5 - node: GaussianFeatureNormalization - node: LinearDiscriminantAnalysisClassifier parameters: class_labels: ["Standard","Target"] prior_probability : [1,__prob__] - node: ThresholdOptimization parameters: metric : Balanced_accuracy - node: Classification_Performance_Sink parameters: ir_class: "Target" .. code-block:: yaml type: node_chain input_path: "my_data" runs : 2 templates : ["example_node_chain.yaml"] parameter_setting : - __prob__ : 1 - __prob__ : 2 - __prob__ : 3 Here `example_node_chain.yaml` is defined in the node chain specification folder as .. literalinclude:: ../../examples/specs/node_chains/example_node_chain.yaml :language: yaml """
""" Limits ====== Implemented according to the PhD thesis http://www.cybertester.com/data/gruntz.pdf, which contains very thorough descriptions of the algorithm including many examples. We summarize here the gist of it. All functions are sorted according to how rapidly varying they are at infinity using the following rules. Any two functions f and g can be compared using the properties of L: L=lim log|f(x)| / log|g(x)| (for x -> oo) We define >, < ~ according to:: 1. f > g .... L=+-oo we say that: - f is greater than any power of g - f is more rapidly varying than g - f goes to infinity/zero faster than g 2. f < g .... L=0 we say that: - f is lower than any power of g 3. f ~ g .... L!=0, +-oo we say that: - both f and g are bounded from above and below by suitable integral powers of the other Examples ======== :: 2 < x < exp(x) < exp(x**2) < exp(exp(x)) 2 ~ 3 ~ -5 x ~ x**2 ~ x**3 ~ 1/x ~ x**m ~ -x exp(x) ~ exp(-x) ~ exp(2x) ~ exp(x)**2 ~ exp(x+exp(-x)) f ~ 1/f So we can divide all the functions into comparability classes (x and x^2 belong to one class, exp(x) and exp(-x) belong to some other class). In principle, we could compare any two functions, but in our algorithm, we don't compare anything below the class 2~3~-5 (for example log(x) is below this), so we set 2~3~-5 as the lowest comparability class. Given the function f, we find the list of most rapidly varying (mrv set) subexpressions of it. This list belongs to the same comparability class. Let's say it is {exp(x), exp(2x)}. Using the rule f ~ 1/f we find an element "w" (either from the list or a new one) from the same comparability class which goes to zero at infinity. In our example we set w=exp(-x) (but we could also set w=exp(-2x) or w=exp(-3x) ...). We rewrite the mrv set using w, in our case {1/w, 1/w^2}, and substitute it into f. Then we expand f into a series in w:: f = c0*w^e0 + c1*w^e1 + ... + O(w^en), where e0<e1<...<en, c0!=0 but for x->oo, lim f = lim c0*w^e0, because all the other terms go to zero, because w goes to zero faster than the ci and ei. So:: for e0>0, lim f = 0 for e0<0, lim f = +-oo (the sign depends on the sign of c0) for e0=0, lim f = lim c0 We need to recursively compute limits at several places of the algorithm, but as is shown in the PhD thesis, it always finishes. Important functions from the implementation: compare(a, b, x) compares "a" and "b" by computing the limit L. mrv(e, x) returns list of most rapidly varying (mrv) subexpressions of "e" rewrite(e, Omega, x, wsym) rewrites "e" in terms of w leadterm(f, x) returns the lowest power term in the series of f mrv_leadterm(e, x) returns the lead term (c0, e0) for e limitinf(e, x) computes lim e (for x->oo) limit(e, z, z0) computes any limit by converting it to the case x->oo All the functions are really simple and straightforward except rewrite(), which is the most difficult/complex part of the algorithm. When the algorithm fails, the bugs are usually in the series expansion (i.e. in SymPy) or in rewrite. This code is almost exact rewrite of the Maple code inside the Gruntz thesis. Debugging --------- Because the gruntz algorithm is highly recursive, it's difficult to figure out what went wrong inside a debugger. Instead, turn on nice debug prints by defining the environment variable SYMPY_DEBUG. 
For example: [user@localhost]: SYMPY_DEBUG=True ./bin/isympy In [1]: limit(sin(x)/x, x, 0) limitinf(_x*sin(1/_x), _x) = 1 +-mrv_leadterm(_x*sin(1/_x), _x) = (1, 0) | +-mrv(_x*sin(1/_x), _x) = set([_x]) | | +-mrv(_x, _x) = set([_x]) | | +-mrv(sin(1/_x), _x) = set([_x]) | | +-mrv(1/_x, _x) = set([_x]) | | +-mrv(_x, _x) = set([_x]) | +-mrv_leadterm(exp(_x)*sin(exp(-_x)), _x, set([exp(_x)])) = (1, 0) | +-rewrite(exp(_x)*sin(exp(-_x)), set([exp(_x)]), _x, _w) = (1/_w*sin(_w), -_x) | +-sign(_x, _x) = 1 | +-mrv_leadterm(1, _x) = (1, 0) +-sign(0, _x) = 0 +-limitinf(1, _x) = 1 And check manually which line is wrong. Then go to the source code and debug this function to figure out the exact problem. """
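# A short demonstration (assuming a standard SymPy installation) of the
# entry points described above: limit() is the general interface, and
# gruntz() drives the algorithm directly for x -> oo.
from sympy import Symbol, exp, limit, oo, sin
from sympy.series.gruntz import gruntz

x = Symbol('x')
print(limit(sin(x)/x, x, 0))        # 1, matching the debug trace above
print(gruntz(exp(x)/x**5, x, oo))   # oo: exp(x) beats any power of x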
""" ======================================= Signal processing (:mod:`scipy.signal`) ======================================= Convolution =========== .. autosummary:: :toctree: generated/ convolve -- N-dimensional convolution. correlate -- N-dimensional correlation. fftconvolve -- N-dimensional convolution using the FFT. convolve2d -- 2-dimensional convolution (more options). correlate2d -- 2-dimensional correlation (more options). sepfir2d -- Convolve with a 2-D separable FIR filter. B-splines ========= .. autosummary:: :toctree: generated/ bspline -- B-spline basis function of order n. cubic -- B-spline basis function of order 3. quadratic -- B-spline basis function of order 2. gauss_spline -- Gaussian approximation to the B-spline basis function. cspline1d -- Coefficients for 1-D cubic (3rd order) B-spline. qspline1d -- Coefficients for 1-D quadratic (2nd order) B-spline. cspline2d -- Coefficients for 2-D cubic (3rd order) B-spline. qspline2d -- Coefficients for 2-D quadratic (2nd order) B-spline. cspline1d_eval -- Evaluate a cubic spline at the given points. qspline1d_eval -- Evaluate a quadratic spline at the given points. spline_filter -- Smoothing spline (cubic) filtering of a rank-2 array. Filtering ========= .. autosummary:: :toctree: generated/ order_filter -- N-dimensional order filter. medfilt -- N-dimensional median filter. medfilt2d -- 2-dimensional median filter (faster). wiener -- N-dimensional wiener filter. symiirorder1 -- 2nd-order IIR filter (cascade of first-order systems). symiirorder2 -- 4th-order IIR filter (cascade of second-order systems). lfilter -- 1-dimensional FIR and IIR digital linear filtering. lfiltic -- Construct initial conditions for `lfilter`. lfilter_zi -- Compute an initial state zi for the lfilter function that -- corresponds to the steady state of the step response. filtfilt -- A forward-backward filter. savgol_filter -- Filter a signal using the Savitzky-Golay filter. deconvolve -- 1-d deconvolution using lfilter. sosfilt -- 1-dimensional IIR digital linear filtering using -- a second-order sections filter representation. sosfilt_zi -- Compute an initial state zi for the sosfilt function that -- corresponds to the steady state of the step response. sosfiltfilt -- A forward-backward filter for second-order sections. hilbert -- Compute 1-D analytic signal, using the Hilbert transform. hilbert2 -- Compute 2-D analytic signal, using the Hilbert transform. decimate -- Downsample a signal. detrend -- Remove linear and/or constant trends from data. resample -- Resample using Fourier method. resample_poly -- Resample using polyphase filtering method. upfirdn -- Upsample, apply FIR filter, downsample. Filter design ============= .. autosummary:: :toctree: generated/ bilinear -- Digital filter from an analog filter using -- the bilinear transform. findfreqs -- Find array of frequencies for computing filter response. firls -- FIR filter design using least-squares error minimization. firwin -- Windowed FIR filter design, with frequency response -- defined as pass and stop bands. firwin2 -- Windowed FIR filter design, with arbitrary frequency -- response. freqs -- Analog filter frequency response. freqz -- Digital filter frequency response. group_delay -- Digital filter group delay. iirdesign -- IIR filter design given bands and gains. iirfilter -- IIR filter design given order and critical frequencies. 
kaiser_atten -- Compute the attenuation of a Kaiser FIR filter, given -- the number of taps and the transition width at -- discontinuities in the frequency response. kaiser_beta -- Compute the Kaiser parameter beta, given the desired -- FIR filter attenuation. kaiserord -- Design a Kaiser window to limit ripple and width of -- transition region. savgol_coeffs -- Compute the FIR filter coefficients for a Savitzky-Golay -- filter. remez -- Optimal FIR filter design. unique_roots -- Unique roots and their multiplicities. residue -- Partial fraction expansion of b(s) / a(s). residuez -- Partial fraction expansion of b(z) / a(z). invres -- Inverse partial fraction expansion for analog filter. invresz -- Inverse partial fraction expansion for digital filter. BadCoefficients -- Warning on badly conditioned filter coefficients Lower-level filter design functions: .. autosummary:: :toctree: generated/ abcd_normalize -- Check state-space matrices and ensure they are rank-2. band_stop_obj -- Band Stop Objective Function for order minimization. besselap -- Return (z,p,k) for analog prototype of Bessel filter. buttap -- Return (z,p,k) for analog prototype of Butterworth filter. cheb1ap -- Return (z,p,k) for type I Chebyshev filter. cheb2ap -- Return (z,p,k) for type II Chebyshev filter. cmplx_sort -- Sort roots based on magnitude. ellipap -- Return (z,p,k) for analog prototype of elliptic filter. lp2bp -- Transform a lowpass filter prototype to a bandpass filter. lp2bs -- Transform a lowpass filter prototype to a bandstop filter. lp2hp -- Transform a lowpass filter prototype to a highpass filter. lp2lp -- Transform a lowpass filter prototype to a lowpass filter. normalize -- Normalize polynomial representation of a transfer function. Matlab-style IIR filter design ============================== .. autosummary:: :toctree: generated/ butter -- Butterworth buttord cheby1 -- Chebyshev Type I cheb1ord cheby2 -- Chebyshev Type II cheb2ord ellip -- Elliptic (Cauer) ellipord bessel -- Bessel (no order selection available -- try butterod) Continuous-Time Linear Systems ============================== .. autosummary:: :toctree: generated/ lti -- Continuous-time linear time invariant system base class. StateSpace -- Linear time invariant system in state space form. TransferFunction -- Linear time invariant system in transfer function form. ZerosPolesGain -- Linear time invariant system in zeros, poles, gain form. lsim -- continuous-time simulation of output to linear system. lsim2 -- like lsim, but `scipy.integrate.odeint` is used. impulse -- impulse response of linear, time-invariant (LTI) system. impulse2 -- like impulse, but `scipy.integrate.odeint` is used. step -- step response of continous-time LTI system. step2 -- like step, but `scipy.integrate.odeint` is used. freqresp -- frequency response of a continuous-time LTI system. bode -- Bode magnitude and phase data (continuous-time LTI). Discrete-Time Linear Systems ============================ .. autosummary:: :toctree: generated/ dlti -- Discrete-time linear time invariant system base class. StateSpace -- Linear time invariant system in state space form. TransferFunction -- Linear time invariant system in transfer function form. ZerosPolesGain -- Linear time invariant system in zeros, poles, gain form. dlsim -- simulation of output to a discrete-time linear system. dimpulse -- impulse response of a discrete-time LTI system. dstep -- step response of a discrete-time LTI system. dfreqresp -- frequency response of a discrete-time LTI system. 
dbode -- Bode magnitude and phase data (discrete-time LTI). LTI Representations =================== .. autosummary:: :toctree: generated/ tf2zpk -- transfer function to zero-pole-gain. tf2sos -- transfer function to second-order sections. tf2ss -- transfer function to state-space. zpk2tf -- zero-pole-gain to transfer function. zpk2sos -- zero-pole-gain to second-order sections. zpk2ss -- zero-pole-gain to state-space. ss2tf -- state-pace to transfer function. ss2zpk -- state-space to pole-zero-gain. sos2zpk -- second-order sections to zero-pole-gain. sos2tf -- second-order sections to transfer function. cont2discrete -- continuous-time to discrete-time LTI conversion. place_poles -- pole placement. Waveforms ========= .. autosummary:: :toctree: generated/ chirp -- Frequency swept cosine signal, with several freq functions. gausspulse -- Gaussian modulated sinusoid max_len_seq -- Maximum length sequence sawtooth -- Periodic sawtooth square -- Square wave sweep_poly -- Frequency swept cosine signal; freq is arbitrary polynomial Window functions ================ .. autosummary:: :toctree: generated/ get_window -- Return a window of a given length and type. barthann -- Bartlett-Hann window bartlett -- Bartlett window blackman -- Blackman window blackmanharris -- Minimum 4-term Blackman-Harris window bohman -- Bohman window boxcar -- Boxcar window chebwin -- Dolph-Chebyshev window cosine -- Cosine window exponential -- Exponential window flattop -- Flat top window gaussian -- Gaussian window general_gaussian -- Generalized Gaussian window hamming -- Hamming window hann -- Hann window hanning -- Hann window kaiser -- Kaiser window nuttall -- Nuttall's minimum 4-term Blackman-Harris window parzen -- Parzen window slepian -- Slepian window triang -- Triangular window tukey -- Tukey window Wavelets ======== .. autosummary:: :toctree: generated/ cascade -- compute scaling function and wavelet from coefficients daub -- return low-pass morlet -- Complex Morlet wavelet. qmf -- return quadrature mirror filter from low-pass ricker -- return ricker wavelet cwt -- perform continuous wavelet transform Peak finding ============ .. autosummary:: :toctree: generated/ find_peaks_cwt -- Attempt to find the peaks in the given 1-D array argrelmin -- Calculate the relative minima of data argrelmax -- Calculate the relative maxima of data argrelextrema -- Calculate the relative extrema of data Spectral Analysis ================= .. autosummary:: :toctree: generated/ periodogram -- Compute a (modified) periodogram welch -- Compute a periodogram using Welch's method csd -- Compute the cross spectral density, using Welch's method coherence -- Compute the magnitude squared coherence, using Welch's method spectrogram -- Compute the spectrogram lombscargle -- Computes the Lomb-Scargle periodogram vectorstrength -- Computes the vector strength """
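# Quick usage sketch (not part of the reference listing above): design a
# 4th-order Butterworth low-pass filter with butter() and apply it with
# zero phase via filtfilt(), two of the routines documented here.
import numpy as np
from scipy import signal

fs = 500.0                                  # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

b, a = signal.butter(4, 20.0 / (fs / 2.0))  # 20 Hz cutoff, normalized
y = signal.filtfilt(b, a, x)                # forward-backward filtering
print(x.shape, y.shape)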
# #!/usr/bin/env python # # -*- coding: utf-8 -*- # import os # import math # import mapnik # import sys # from utilities import execution_path, run_all # from nose.tools import * # def setup(): # # All of the paths used are relative, if we run the tests # # from another directory we need to chdir() # os.chdir(execution_path('.')) # class PointDatasource(mapnik.PythonDatasource): # def __init__(self): # super(PointDatasource, self).__init__( # geometry_type = mapnik.DataGeometryType.Point, # envelope = mapnik.Box2d(0,-10,100,110), # data_type = mapnik.DataType.Vector # ) # def features(self, query): # return mapnik.PythonDatasource.wkt_features( # keys = ('label',), # features = ( # ( 'POINT (5 6)', { 'label': 'foo-bar'} ), # ( 'POINT (60 50)', { 'label': 'buzz-quux'} ), # ) # ) # class ConcentricCircles(object): # def __init__(self, centre, bounds, step=1): # self.centre = centre # self.bounds = bounds # self.step = step # class Iterator(object): # def __init__(self, container): # self.container = container # centre = self.container.centre # bounds = self.container.bounds # step = self.container.step # self.radius = step # def next(self): # points = [] # for alpha in xrange(0, 361, 5): # x = math.sin(math.radians(alpha)) * self.radius + self.container.centre[0] # y = math.cos(math.radians(alpha)) * self.radius + self.container.centre[1] # points.append('%s %s' % (x,y)) # circle = 'POLYGON ((' + ','.join(points) + '))' # # has the circle grown so large that the boundary is entirely within it? # tl = (self.container.bounds.maxx, self.container.bounds.maxy) # tr = (self.container.bounds.maxx, self.container.bounds.maxy) # bl = (self.container.bounds.minx, self.container.bounds.miny) # br = (self.container.bounds.minx, self.container.bounds.miny) # def within_circle(p): # delta_x = p[0] - self.container.centre[0] # delta_y = p[0] - self.container.centre[0] # return delta_x*delta_x + delta_y*delta_y < self.radius*self.radius # if all(within_circle(p) for p in (tl,tr,bl,br)): # raise StopIteration() # self.radius += self.container.step # return ( circle, { } ) # def __iter__(self): # return ConcentricCircles.Iterator(self) # class CirclesDatasource(mapnik.PythonDatasource): # def __init__(self, centre_x=-20, centre_y=0, step=10): # super(CirclesDatasource, self).__init__( # geometry_type = mapnik.DataGeometryType.Polygon, # envelope = mapnik.Box2d(-180, -90, 180, 90), # data_type = mapnik.DataType.Vector # ) # # note that the plugin loader will set all arguments to strings and will not try to parse them # centre_x = int(centre_x) # centre_y = int(centre_y) # step = int(step) # self.centre_x = centre_x # self.centre_y = centre_y # self.step = step # def features(self, query): # centre = (self.centre_x, self.centre_y) # return mapnik.PythonDatasource.wkt_features( # keys = (), # features = ConcentricCircles(centre, query.bbox, self.step) # ) # if 'python' in mapnik.DatasourceCache.plugin_names(): # # make sure we can load from ourself as a module # sys.path.append(execution_path('.')) # def test_python_point_init(): # ds = mapnik.Python(factory='python_plugin_test:PointDatasource') # e = ds.envelope() # assert_almost_equal(e.minx, 0, places=7) # assert_almost_equal(e.miny, -10, places=7) # assert_almost_equal(e.maxx, 100, places=7) # assert_almost_equal(e.maxy, 110, places=7) # def test_python_circle_init(): # ds = mapnik.Python(factory='python_plugin_test:CirclesDatasource') # e = ds.envelope() # assert_almost_equal(e.minx, -180, places=7) # assert_almost_equal(e.miny, -90, places=7) # 
assert_almost_equal(e.maxx, 180, places=7) # assert_almost_equal(e.maxy, 90, places=7) # def test_python_circle_init_with_args(): # ds = mapnik.Python(factory='python_plugin_test:CirclesDatasource', centre_x=40, centre_y=7) # e = ds.envelope() # assert_almost_equal(e.minx, -180, places=7) # assert_almost_equal(e.miny, -90, places=7) # assert_almost_equal(e.maxx, 180, places=7) # assert_almost_equal(e.maxy, 90, places=7) # def test_python_point_rendering(): # m = mapnik.Map(512,512) # mapnik.load_map(m,'../data/python_plugin/python_point_datasource.xml') # m.zoom_all() # im = mapnik.Image(512,512) # mapnik.render(m,im) # actual = '/tmp/mapnik-python-point-render1.png' # expected = 'images/support/mapnik-python-point-render1.png' # im.save(actual) # expected_im = mapnik.Image.open(expected) # eq_(im.tostring('png32'),expected_im.tostring('png32'), # 'failed comparing actual (%s) and expected (%s)' % (actual,'tests/python_tests/'+ expected)) # def test_python_circle_rendering(): # m = mapnik.Map(512,512) # mapnik.load_map(m,'../data/python_plugin/python_circle_datasource.xml') # m.zoom_all() # im = mapnik.Image(512,512) # mapnik.render(m,im) # actual = '/tmp/mapnik-python-circle-render1.png' # expected = 'images/support/mapnik-python-circle-render1.png' # im.save(actual) # expected_im = mapnik.Image.open(expected) # eq_(im.tostring('png32'),expected_im.tostring('png32'), # 'failed comparing actual (%s) and expected (%s)' % (actual,'tests/python_tests/'+ expected)) # if __name__ == "__main__": # setup() # run_all(eval(x) for x in dir() if x.startswith("test_"))
# The following CSI codes supported by xterm are not tested.
# Query ReGIS/Sixel attributes: CSI ? Pi ; Pa ; P vS
# Initiate highlight mouse tracking: CSI Ps ; Ps ; Ps ; Ps ; Ps T
# Media Copy (MC): CSI Pm i
# Media Copy (MC, DEC-specific): CSI ? Pm i
# Character Attributes (SGR): CSI Pm m
# Disable modifiers: CSI > Ps n
# Set pointer mode: CSI > Ps p
# Load LEDs (DECLL): CSI Ps q
# Set cursor style (DECSCUSR): CSI Ps SP q
# Select character protection attribute (DECSCA): CSI Ps " q
#   [This is already tested by DECSED and DECSEL]
# Window manipulation: CSI Ps; Ps; Ps t
# Reverse Attributes in Rectangular Area (DECRARA): CSI Pt ; Pl ; Pb ; Pr ; Ps $ t
# Set warning bell volume (DECSWBV): CSI Ps SP t
# Set margin-bell volume (DECSMBV): CSI Ps SP u
# Enable Filter Rectangle (DECEFR): CSI Pt ; Pl ; Pb ; Pr ' w
# Request Terminal Parameters (DECREQTPARM): CSI Ps x
# Select Attribute Change Extent (DECSACE): CSI Ps * x
# Request Checksum of Rectangular Area (DECRQCRA): CSI Pi ; Pg ; Pt ; Pl ; Pb ; Pr * y
# Select Locator Events (DECSLE): CSI Pm ' {
# Request Locator Position (DECRQLP): CSI Ps ' |
#
# ESC SP L  Set ANSI conformance level 1 (dpANS X3.134.1).
# ESC SP M  Set ANSI conformance level 2 (dpANS X3.134.1).
# ESC SP N  Set ANSI conformance level 3 (dpANS X3.134.1).
# In xterm, all these do is fiddle with character sets, which are not
# testable.
#
# ESC # 3   DEC double-height line, top half (DECDHL).
# ESC # 4   DEC double-height line, bottom half (DECDHL).
# ESC # 5   DEC single-width line (DECSWL).
# ESC # 6   DEC double-width line (DECDWL).
# Double-width affects display only and is generally not introspectable.
# Wrap doesn't work so there's no way to tell where the cursor is
# visually.
#
# ESC % @   Select default character set.  That is ISO 8859-1 (ISO 2022).
# ESC % G   Select UTF-8 character set (ISO 2022).
# ESC ( C   Designate G0 Character Set (ISO 2022, VT100).
# ESC ) C   Designate G1 Character Set (ISO 2022, VT100).
# ESC * C   Designate G2 Character Set (ISO 2022, VT220).
# ESC + C   Designate G3 Character Set (ISO 2022, VT220).
# ESC - C   Designate G1 Character Set (VT300).
# ESC . C   Designate G2 Character Set (VT300).
# ESC / C   Designate G3 Character Set (VT300).
# Character set stuff is not introspectable.
#
# Shift in (SI): ^O
# Shift out (SO): ^N
# Space (SP): 0x20
# Tab (TAB): 0x09 [tested in HTS]
#
# ESC =     Application Keypad (DECKPAM).
# ESC >     Normal Keypad (DECKPNM).
# ESC F     Cursor to lower left corner of screen.  This is enabled by the
#           hpLowerleftBugCompat resource.  (Not worth testing as it's off
#           by default, and silly regardless.)
# ESC l     Memory Lock (per HP terminals).  Locks memory above the cursor.
# ESC m     Memory Unlock (per HP terminals).
# ESC n     Invoke the G2 Character Set as GL (LS2).
# ESC o     Invoke the G3 Character Set as GL (LS3).
# ESC |     Invoke the G3 Character Set as GR (LS3R).
# ESC }     Invoke the G2 Character Set as GR (LS2R).
# ESC ~     Invoke the G1 Character Set as GR (LS1R).
#
# DCS + p Pt ST   Set Termcap/Terminfo Data
# DCS + q Pt ST   Request Termcap/Terminfo String
#
# The following OSC commands are tested in xterm_winops and don't have
# their own test:
#   Ps = 0 -> Change Icon Name and Window Title to Pt.
#   Ps = 1 -> Change Icon Name to Pt.
#   Ps = 2 -> Change Window Title to Pt.
#
# This test is too ill-defined and X-specific, and is not tested:
#   Ps = 3 -> Set X property on top-level window.  Pt should be in the
#   form "prop=value", or just "prop" to delete the property.
#
# No introspection for whether special colors are enabled/disabled:
#   Ps = 6 ; c ; f -> Enable/disable Special Color Number c.  The second
#   parameter tells xterm to enable the corresponding color mode if
#   nonzero, disable it if zero.
#
# Off by default, obvious security issues:
#   Ps = 4 6 -> Change Log File to Pt.  (This is normally disabled by a
#   compile-time option.)
#
# No introspection for fonts:
#   Ps = 5 0 -> Set Font to Pt.
#
# No-op:
#   Ps = 5 1 -> reserved for Emacs shell.
""" ============================= Byteswapping and byte order ============================= Introduction to byte ordering and ndarrays ========================================== The ``ndarray`` is an object that provide a python array interface to data in memory. It often happens that the memory that you want to view with an array is not of the same byte ordering as the computer on which you are running Python. For example, I might be working on a computer with a little-endian CPU - such as an Intel Pentium, but I have loaded some data from a file written by a computer that is big-endian. Let's say I have loaded 4 bytes from a file written by a Sun (big-endian) computer. I know that these 4 bytes represent two 16-bit integers. On a big-endian machine, a two-byte integer is stored with the Most Significant Byte (MSB) first, and then the Least Significant Byte (LSB). Thus the bytes are, in memory order: #. MSB integer 1 #. LSB integer 1 #. MSB integer 2 #. LSB integer 2 Let's say the two integers were in fact 1 and 770. Because 770 = 256 * 3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2. The bytes I have loaded from the file would have these contents: >>> big_end_str = chr(0) + chr(1) + chr(3) + chr(2) >>> big_end_str '\\x00\\x01\\x03\\x02' We might want to use an ``ndarray`` to access these integers. In that case, we can create an array around this memory, and tell numpy that there are two integers, and that they are 16 bit and big-endian: >>> import numpy as np >>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_str) >>> big_end_arr[0] 1 >>> big_end_arr[1] 770 Note the array ``dtype`` above of ``>i2``. The ``>`` means 'big-endian' (``<`` is little-endian) and ``i2`` means 'signed 2-byte integer'. For example, if our data represented a single unsigned 4-byte little-endian integer, the dtype string would be ``<u4``. In fact, why don't we try that? >>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_str) >>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3 True Returning to our ``big_end_arr`` - in this case our underlying data is big-endian (data endianness) and we've set the dtype to match (the dtype is also big-endian). However, sometimes you need to flip these around. .. warning:: Scalars currently do not include byte order information, so extracting a scalar from an array will return an integer in native byte order. Hence: >>> big_end_arr[0].dtype.byteorder == little_end_u4[0].dtype.byteorder True Changing byte ordering ====================== As you can imagine from the introduction, there are two ways you can affect the relationship between the byte ordering of the array and the underlying memory it is looking at: * Change the byte-ordering information in the array dtype so that it interprets the undelying data as being in a different byte order. This is the role of ``arr.newbyteorder()`` * Change the byte-ordering of the underlying data, leaving the dtype interpretation as it was. This is what ``arr.byteswap()`` does. The common situations in which you need to change byte ordering are: #. Your data and dtype endianess don't match, and you want to change the dtype so that it matches the data. #. Your data and dtype endianess don't match, and you want to swap the data so that they match the dtype #. 
Your data and dtype endianess match, but you want the data swapped and the dtype to reflect this Data and dtype endianness don't match, change dtype to match data ----------------------------------------------------------------- We make something where they don't match: >>> wrong_end_dtype_arr = np.ndarray(shape=(2,),dtype='<i2', buffer=big_end_str) >>> wrong_end_dtype_arr[0] 256 The obvious fix for this situation is to change the dtype so it gives the correct endianness: >>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder() >>> fixed_end_dtype_arr[0] 1 Note the the array has not changed in memory: >>> fixed_end_dtype_arr.tobytes() == big_end_str True Data and type endianness don't match, change data to match dtype ---------------------------------------------------------------- You might want to do this if you need the data in memory to be a certain ordering. For example you might be writing the memory out to a file that needs a certain byte ordering. >>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap() >>> fixed_end_mem_arr[0] 1 Now the array *has* changed in memory: >>> fixed_end_mem_arr.tobytes() == big_end_str False Data and dtype endianness match, swap data and dtype ---------------------------------------------------- You may have a correctly specified array dtype, but you need the array to have the opposite byte order in memory, and you want the dtype to match so the array values make sense. In this case you just do both of the previous operations: >>> swapped_end_arr = big_end_arr.byteswap().newbyteorder() >>> swapped_end_arr[0] 1 >>> swapped_end_arr.tobytes() == big_end_str False An easier way of casting the data to a specific dtype and byte ordering can be achieved with the ndarray astype method: >>> swapped_end_arr = big_end_arr.astype('<i2') >>> swapped_end_arr[0] 1 >>> swapped_end_arr.tobytes() == big_end_str False """
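# A complementary cross-check (illustrative, not from the text above):
# the struct module interprets the same four bytes, confirming the
# ndarray views constructed in the examples.
import struct

print(struct.unpack('>2h', b'\x00\x01\x03\x02'))  # (1, 770): big-endian shorts
print(struct.unpack('<I', b'\x00\x01\x03\x02'))   # little-endian uint32,
# equal to 1 * 256**1 + 3 * 256**2 + 2 * 256**3, as computed above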
#!/usr/bin/env python # Try to determine how much RAM is currently being used per program. # Note per _program_, not per process. So for example this script # will report RAM used by all httpd process together. In detail it reports: # sum(private RAM for program processes) + sum(Shared RAM for program processes) # The shared RAM is problematic to calculate, and this script automatically # selects the most accurate method available for your kernel. # Licence: LGPLv2 # Author: EMAIL Source: http://www.pixelbeat.org/scripts/ps_mem.py # V1.0 06 Jul 2005 Initial release # V1.1 11 Aug 2006 root permission required for accuracy # V1.2 08 Nov 2006 Add total to output # Use KiB,MiB,... for units rather than K,M,... # V1.3 22 Nov 2006 Ignore shared col from /proc/$pid/statm for # 2.6 kernels up to and including 2.6.9. # There it represented the total file backed extent # V1.4 23 Nov 2006 Remove total from output as it's meaningless # (the shared values overlap with other programs). # Display the shared column. This extra info is # useful, especially as it overlaps between programs. # V1.5 26 Mar 2007 Remove redundant recursion from human() # V1.6 05 Jun 2007 Also report number of processes with a given name. # Patch from EMAIL V1.7 20 Sep 2007 Use PSS from /proc/$pid/smaps if available, which # fixes some over-estimation and allows totalling. # Enumerate the PIDs directly rather than using ps, # which fixes the possible race between reading # RSS with ps, and shared memory with this program. # Also we can show non truncated command names. # V1.8 28 Sep 2007 More accurate matching for stats in /proc/$pid/smaps # as otherwise could match libraries causing a crash. # Patch from EMAIL V1.9 20 Feb 2008 Fix invalid values reported when PSS is available. # Reported by NAME <EMAIL> # V3.5 29 Sep 2015 # http://github.com/pixelb/scripts/commits/master/scripts/ps_mem.py # Notes: # # All interpreted programs where the interpreter is started # by the shell or with env, will be merged to the interpreter # (as that's what's given to exec). For e.g. all python programs # starting with "#!/usr/bin/env python" will be grouped under python. # You can change this by using the full command line but that will # have the undesirable affect of splitting up programs started with # differing parameters (for e.g. mingetty tty[1-6]). # # For 2.6 kernels up to and including 2.6.13 and later 2.4 redhat kernels # (rmap vm without smaps) it can not be accurately determined how many pages # are shared between processes in general or within a program in our case: # http://lkml.org/lkml/2005/7/6/250 # A warning is printed if overestimation is possible. # In addition for 2.6 kernels up to 2.6.9 inclusive, the shared # value in /proc/$pid/statm is the total file-backed extent of a process. # We ignore that, introducing more overestimation, again printing a warning. # Since kernel 2.6.23-rc8-mm1 PSS is available in smaps, which allows # us to calculate a more accurate value for the total RAM used by programs. # # Programs that use CLONE_VM without CLONE_THREAD are discounted by assuming # they're the only programs that have the same /proc/$PID/smaps file for # each instance. This will fail if there are multiple real instances of a # program that then use CLONE_VM without CLONE_THREAD, or if a clone changes # its memory map while we're checksumming each /proc/$PID/smaps. # # I don't take account of memory allocated for a program # by other programs. For e.g. memory used in the X server for # a program could be determined, but is not. 
# # FreeBSD is supported if linprocfs is mounted at /compat/linux/proc/ # FreeBSD 8.0 supports up to a level of Linux 2.6.16
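# A minimal sketch (not this script's actual code) of the PSS approach
# described above: on kernels that expose PSS, summing the Pss: fields in
# /proc/<pid>/smaps yields per-process totals that can be added up safely,
# because each shared page's cost is divided among the processes mapping it.
def pss_kib(pid):
    total_kib = 0
    with open('/proc/%d/smaps' % pid) as smaps:
        for line in smaps:
            if line.startswith('Pss:'):
                total_kib += int(line.split()[1])  # value is reported in kB
    return total_kib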
""" Tools for 3DCGA (g3c) 3DCGA Tools ========================================================== Generation Methods -------------------- .. autosummary:: :toctree: generated/ random_bivector standard_point_pair_at_origin random_point_pair_at_origin random_point_pair standard_line_at_origin random_line_at_origin random_line random_circle_at_origin random_circle random_sphere_at_origin random_sphere random_plane_at_origin random_plane generate_n_clusters generate_random_object_cluster random_translation_rotor random_rotation_translation_rotor random_conformal_point generate_dilation_rotor generate_translation_rotor Geometry Methods -------------------- .. autosummary:: :toctree: generated/ intersect_line_and_plane_to_point val_intersect_line_and_plane_to_point quaternion_and_vector_to_rotor get_center_from_sphere get_radius_from_sphere point_pair_to_end_points val_point_pair_to_end_points get_circle_in_euc circle_to_sphere line_to_point_and_direction get_plane_origin_distance get_plane_normal get_nearest_plane_point val_convert_2D_polar_line_to_conformal_line convert_2D_polar_line_to_conformal_line val_convert_2D_point_to_conformal convert_2D_point_to_conformal val_distance_point_to_line distance_polar_line_to_euc_point_2d midpoint_between_lines val_midpoint_between_lines midpoint_of_line_cluster val_midpoint_of_line_cluster val_midpoint_of_line_cluster_grad get_line_intersection val_get_line_intersection project_points_to_plane project_points_to_sphere project_points_to_circle project_points_to_line iterative_closest_points_on_circles closest_point_on_line_from_circle closest_point_on_circle_from_line iterative_closest_points_circle_line iterative_furthest_points_on_circles sphere_beyond_plane sphere_behind_plane Misc -------------------- .. autosummary:: :toctree: generated/ meet_val meet normalise_n_minus_1 val_normalise_n_minus_1 val_apply_rotor apply_rotor val_apply_rotor_inv apply_rotor_inv euc_dist mult_with_ninf val_norm norm val_normalised normalised val_up fast_up val_normalInv val_homo val_down fast_down dual_func fast_dual disturb_object project_val get_line_reflection_matrix val_get_line_reflection_matrix val_truncated_get_line_reflection_matrix interpret_multivector_as_object normalise_TR_to_unit_T scale_TR_translation val_unsign_sphere Root Finding -------------------- .. autosummary:: :toctree: generated/ dorst_norm_val check_sigma_for_positive_root_val check_sigma_for_positive_root check_sigma_for_negative_root_val check_sigma_for_negative_root check_infinite_roots_val check_infinite_roots positive_root_val negative_root_val positive_root negative_root general_root_val general_root val_annihilate_k annihilate_k pos_twiddle_root_val neg_twiddle_root_val pos_twiddle_root neg_twiddle_root square_roots_of_rotor n_th_rotor_root interp_objects_root general_object_interpolation average_objects val_average_objects_with_weights val_average_objects rotor_between_objects val_rotor_between_objects_root val_rotor_between_objects_explicit calculate_S_over_mu val_rotor_between_lines rotor_between_lines rotor_between_planes val_rotor_rotor_between_planes Submodules -------------------- .. autosummary:: :toctree: generated/ object_fitting """
# This code is part of Ansible, but is an independent component. # This particular file snippet, and this file snippet only, is BSD licensed. # Modules you write using this snippet, which is embedded dynamically by Ansible # still belong to the author of the module, and may assign their own license # to the complete work. # # Copyright (c), NAME <EMAIL>, 2012-2013 # Copyright (c), NAME <EMAIL>, 2015 # All rights reserved. # # Redistribution and use in source and binary forms, with or without modification, # are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. # IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # The match_hostname function and supporting code is under the terms and # conditions of the Python Software Foundation License. They were taken from # the Python3 standard library and adapted for use in Python2. See comments in the # source for which code precisely is under this License. PSF License text # follows: # # PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 # -------------------------------------------- # # 1. This LICENSE AGREEMENT is between the Python Software Foundation # ("PSF"), and the Individual or Organization ("Licensee") accessing and # otherwise using this software ("Python") in source or binary form and # its associated documentation. # # 2. Subject to the terms and conditions of this License Agreement, PSF hereby # grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, # analyze, test, perform and/or display publicly, prepare derivative works, # distribute, and otherwise use Python alone or in any derivative version, # provided, however, that PSF's License Agreement and PSF's notice of copyright, # i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are # retained in Python alone or in any derivative version prepared by Licensee. # # 3. In the event Licensee prepares a derivative work that is based on # or incorporates Python or any part thereof, and wants to make # the derivative work available to others as provided herein, then # Licensee hereby agrees to include in any such work a brief summary of # the changes made to Python. # # 4. PSF is making Python available to Licensee on an "AS IS" # basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR # IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND # DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS # FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT # INFRINGE ANY THIRD PARTY RIGHTS. # # 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON # FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS # A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, # OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. # # 6. This License Agreement will automatically terminate upon a material # breach of its terms and conditions. # # 7. Nothing in this License Agreement shall be deemed to create any # relationship of agency, partnership, or joint venture between PSF and # Licensee. This License Agreement does not grant permission to use PSF # trademarks or trade name in a trademark sense to endorse or promote # products or services of Licensee, or any third party. # # 8. By copying, installing or otherwise using Python, Licensee # agrees to be bound by the terms and conditions of this License # Agreement.
{ 'name': 'Web', 'category': 'Hidden', 'version': 'IP_ADDRESS', 'description': """ OpenERP Web core module. ======================== This module provides the core of the OpenERP Web Client. """, 'depends': [], 'auto_install': True, 'post_load': 'wsgi_postload', 'js' : [ "static/src/fixbind.js", "static/lib/datejs/globalization/en-US.js", "static/lib/datejs/core.js", "static/lib/datejs/parser.js", "static/lib/datejs/sugarpak.js", "static/lib/datejs/extras.js", "static/lib/jquery/jquery-1.8.3.js", "static/lib/jquery.MD5/jquery.md5.js", "static/lib/jquery.form/jquery.form.js", "static/lib/jquery.validate/jquery.validate.js", "static/lib/jquery.ba-bbq/jquery.ba-bbq.js", "static/lib/spinjs/spin.js", "static/lib/jquery.autosize/jquery.autosize.js", "static/lib/jquery.blockUI/jquery.blockUI.js", "static/lib/jquery.placeholder/jquery.placeholder.js", "static/lib/jquery.ui/js/jquery-ui-1.9.1.custom.js", "static/lib/jquery.ui.timepicker/js/jquery-ui-timepicker-addon.js", "static/lib/jquery.ui.notify/js/jquery.notify.js", "static/lib/jquery.deferred-queue/jquery.deferred-queue.js", "static/lib/jquery.scrollTo/jquery.scrollTo-min.js", "static/lib/jquery.tipsy/jquery.tipsy.js", "static/lib/jquery.textext/jquery.textext.js", "static/lib/jquery.timeago/jquery.timeago.js", "static/lib/qweb/qweb2.js", "static/lib/underscore/underscore.js", "static/lib/underscore/underscore.string.js", "static/lib/backbone/backbone.js", "static/lib/cleditor/jquery.cleditor.js", "static/lib/py.js/lib/py.js", "static/src/js/boot.js", "static/src/js/testing.js", "static/src/js/pyeval.js", "static/src/js/corelib.js", "static/src/js/coresetup.js", "static/src/js/dates.js", "static/src/js/formats.js", "static/src/js/chrome.js", "static/src/js/views.js", "static/src/js/data.js", "static/src/js/data_export.js", "static/src/js/search.js", "static/src/js/view_form.js", "static/src/js/view_list.js", "static/src/js/view_list_editable.js", "static/src/js/view_tree.js", ], 'css' : [ "static/lib/jquery.ui.bootstrap/css/custom-theme/jquery-ui-1.9.0.custom.css", "static/lib/jquery.ui.timepicker/css/jquery-ui-timepicker-addon.css", "static/lib/jquery.ui.notify/css/ui.notify.css", "static/lib/jquery.tipsy/tipsy.css", "static/lib/jquery.textext/jquery.textext.css", "static/src/css/base.css", "static/src/css/data_export.css", "static/lib/cleditor/jquery.cleditor.css", ], 'qweb' : [ "static/src/xml/*.xml", ], 'test': [ "static/test/testing.js", "static/test/class.js", "static/test/registry.js", "static/test/form.js", "static/test/data.js", "static/test/list-utils.js", "static/test/formats.js", "static/test/rpc.js", "static/test/evals.js", "static/test/search.js", "static/test/Widget.js", "static/test/list.js", "static/test/list-editable.js", "static/test/mutex.js" ], 'bootstrap': True, }
"""Stuff to parse AIFF-C and AIFF files. Unless explicitly stated otherwise, the description below is true both for AIFF-C files and AIFF files. An AIFF-C file has the following structure. +-----------------+ | FORM | +-----------------+ | <size> | +----+------------+ | | AIFC | | +------------+ | | <chunks> | | | . | | | . | | | . | +----+------------+ An AIFF file has the string "AIFF" instead of "AIFC". A chunk consists of an identifier (4 bytes) followed by a size (4 bytes, big endian order), followed by the data. The size field does not include the size of the 8 byte header. The following chunk types are recognized. FVER <version number of AIFF-C defining document> (AIFF-C only). MARK <# of markers> (2 bytes) list of markers: <marker ID> (2 bytes, must be > 0) <position> (4 bytes) <marker name> ("pstring") COMM <# of channels> (2 bytes) <# of sound frames> (4 bytes) <size of the samples> (2 bytes) <sampling frequency> (10 bytes, IEEE 80-bit extended floating point) in AIFF-C files only: <compression type> (4 bytes) <human-readable version of compression type> ("pstring") SSND <offset> (4 bytes, not used by this program) <blocksize> (4 bytes, not used by this program) <sound data> A pstring consists of 1 byte length, a string of characters, and 0 or 1 byte pad to make the total length even. Usage. Reading AIFF files: f = aifc.open(file, 'r') where file is either the name of a file or an open file pointer. The open file pointer must have methods read(), seek(), and close(). In some types of audio files, if the setpos() method is not used, the seek() method is not necessary. This returns an instance of a class with the following public methods: getnchannels() -- returns number of audio channels (1 for mono, 2 for stereo) getsampwidth() -- returns sample width in bytes getframerate() -- returns sampling frequency getnframes() -- returns number of audio frames getcomptype() -- returns compression type ('NONE' for AIFF files) getcompname() -- returns human-readable version of compression type ('not compressed' for AIFF files) getparams() -- returns a tuple consisting of all of the above in the above order getmarkers() -- get the list of marks in the audio file or None if there are no marks getmark(id) -- get mark with the specified id (raises an error if the mark does not exist) readframes(n) -- returns at most n frames of audio rewind() -- rewind to the beginning of the audio stream setpos(pos) -- seek to the specified position tell() -- return the current position close() -- close the instance (make it unusable) The position returned by tell(), the position given to setpos() and the position of marks are all compatible and have nothing to do with the actual position in the file. The close() method is called automatically when the class instance is destroyed. Writing AIFF files: f = aifc.open(file, 'w') where file is either the name of a file or an open file pointer. The open file pointer must have methods write(), tell(), seek(), and close(). 
This returns an instance of a class with the following public methods: aiff() -- create an AIFF file (AIFF-C default) aifc() -- create an AIFF-C file setnchannels(n) -- set the number of channels setsampwidth(n) -- set the sample width setframerate(n) -- set the frame rate setnframes(n) -- set the number of frames setcomptype(type, name) -- set the compression type and the human-readable compression type setparams(tuple) -- set all parameters at once setmark(id, pos, name) -- add specified mark to the list of marks tell() -- return current position in output file (useful in combination with setmark()) writeframesraw(data) -- write audio frames without patching up the file header writeframes(data) -- write audio frames and patch up the file header close() -- patch up the file header and close the output file You should set the parameters before the first writeframesraw or writeframes. The total number of frames does not need to be set, but when it is set to the correct value, the header does not have to be patched up. It is best to first set all parameters, including the compression type, and then write audio frames using writeframesraw. When all frames have been written, either call writeframes('') or close() to patch up the sizes in the header. Marks can be added at any time. If there are any marks, you must call close() after all frames have been written. The close() method is called automatically when the class instance is destroyed. When a file is opened with the extension '.aiff', an AIFF file is written, otherwise an AIFF-C file is written. This default can be changed by calling aiff() or aifc() before the first writeframes or writeframesraw.
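For example, reading a file back (a minimal sketch; 'sound.aiff' stands in for a real path): f = aifc.open('sound.aiff', 'r') nchannels, sampwidth, framerate, nframes, comptype, compname = f.getparams() data = f.readframes(nframes) f.close() """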
#Problem 8: #The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832. #73167176531330624919225119674426574742355349194934 #96983520312774506326239578318016984801869478851843 #85861560789112949495459501737958331952853208805511 #12540698747158523863050715693290963295227443043557 #66896648950445244523161731856403098711121722383113 #62229893423380308135336276614282806444486645238749 #30358907296290491560440772390713810515859307960866 #70172427121883998797908792274921901699720888093776 #65727333001053367881220235421809751254540594752243 #52584907711670556013604839586446706324415722155397 #53697817977846174064955149290862569321978468622482 #83972241375657056057490261407972968652414535100474 #82166370484403199890008895243450658541227588666881 #16427171479924442928230863465674813919123162824586 #17866458359124566529476545682848912883142607690042 #24219022671055626321111109370544217506941658960408 #07198403850962455444362981230987879927244284909188 #84580156166097919133875499200524063689912560717606 #05886116467109405077541002256983155200055935729725 #71636269561882670428252483600823257530420752963450 #Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this #product?
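#A minimal sketch of one way to compute the answer (it assumes the twenty
#rows above have been joined into a single 1000-character string named
#digits):
def greatest_product(digits, width=13):
    best = 0
    for i in range(len(digits) - width + 1):
        product = 1
        for d in digits[i:i + width]:
            product *= int(d)
        best = max(best, product)
    return best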
"""Module for interactive demos using IPython. This module implements a few classes for running Python scripts interactively in IPython for demonstrations. With very simple markup (a few tags in comments), you can control points where the script stops executing and returns control to IPython. Provided classes ---------------- The classes are (see their docstrings for further details): - Demo: pure python demos - IPythonDemo: demos with input to be processed by IPython as if it had been typed interactively (so magics work, as well as any other special syntax you may have added via input prefilters). - LineDemo: single-line version of the Demo class. These demos are executed one line at a time, and require no markup. - IPythonLineDemo: IPython version of the LineDemo class (the demo is executed a line at a time, but processed via IPython). - ClearMixin: mixin to make Demo classes with less visual clutter. It declares an empty marquee and a pre_cmd that clears the screen before each block (see Subclassing below). - ClearDemo, ClearIPDemo: mixin-enabled versions of the Demo and IPythonDemo classes. Inheritance diagram: .. inheritance-diagram:: IPython.lib.demo :parts: 3 Subclassing ----------- The classes here all include a few methods meant to make customization by subclassing more convenient. Their docstrings below have some more details: - highlight(): format every block and optionally highlight comments and docstring content. - marquee(): generates a marquee to provide visible on-screen markers at each block start and end. - pre_cmd(): run right before the execution of each block. - post_cmd(): run right after the execution of each block. If the block raises an exception, this is NOT called. Operation --------- The file is run in its own empty namespace (though you can pass it a string of arguments as if in a command line environment, and it will see those as sys.argv). But at each stop, the global IPython namespace is updated with the current internal demo namespace, so you can work interactively with the data accumulated so far. By default, each block of code is printed (with syntax highlighting) before executing it and you have to confirm execution. This is intended to show the code to an audience first so you can discuss it, and only proceed with execution once you agree. There are a few tags which allow you to modify this behavior. The supported tags are: # <demo> stop Defines block boundaries, the points where IPython stops execution of the file and returns to the interactive prompt. You can optionally mark the stop tag with extra dashes before and after the word 'stop', to help visually distinguish the blocks in a text editor: # <demo> --- stop --- # <demo> silent Make a block execute silently (and hence automatically). Typically used in cases where you have some boilerplate or initialization code which you need executed but do not want to be seen in the demo. # <demo> auto Make a block execute automatically, but still being printed. Useful for simple code which does not warrant discussion, since it avoids the extra manual confirmation. # <demo> auto_all This tag can _only_ be in the first block, and if given it overrides the individual auto tags to make the whole demo fully automatic (no block asks for confirmation). It can also be given at creation time (or the attribute set later) to override what's in the file. 
While _any_ python file can be run as a Demo instance, if there are no stop tags the whole file will run in a single block (no different than calling first %pycat and then %run). The minimal markup to make this useful is to place a set of stop tags; the other tags are only there to let you fine-tune the execution. This is probably best explained with the simple example file below. You can copy this into a file named ex_demo.py, and try running it via:: from IPython.lib.demo import Demo d = Demo('ex_demo.py') d() Each time you call the demo object, it runs the next block. The demo object has a few useful methods for navigation, like again(), edit(), jump(), seek() and back(). It can be reset for a new run via reset() or reloaded from disk (in case you've edited the source) via reload(). See their docstrings below. Note: To make this simpler to explore, a file called "demo-exercizer.py" has been added to the "docs/examples/core" directory. Just cd to this directory in an IPython session, and type:: %run demo-exercizer.py and then follow the directions. Example ------- The following is a very simple example of a valid demo file. :: #################### EXAMPLE DEMO <ex_demo.py> ############################### '''A simple interactive demo to illustrate the use of IPython's Demo class.''' print 'Hello, welcome to an interactive IPython demo.' # The mark below defines a block boundary, which is a point where IPython will # stop execution and return to the interactive prompt. The dashes are actually # optional and used only as a visual aid to clearly separate blocks while # editing the demo code. # <demo> stop x = 1 y = 2 # <demo> stop # the mark below marks this block as silent # <demo> silent print 'This is a silent block, which gets executed but not printed.' # <demo> stop # <demo> auto print 'This is an automatic block.' print 'It is executed without asking for confirmation, but printed.' z = x+y print 'z=',z # <demo> stop # This is just another normal block. print 'z is now:', z print 'bye!' ################### END EXAMPLE DEMO <ex_demo.py> ############################
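A short navigation sketch (a hedged example; the method names are those listed above and ex_demo.py is the example file):: from IPython.lib.demo import Demo d = Demo('ex_demo.py') d() # run the first block d() # run the next one d.back() # step back one block d.again() # re-run the current block d.reset() # rewind for a fresh run """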
"""============== Array indexing ============== Array indexing refers to any use of the square brackets ([]) to index array values. There are many options to indexing, which give numpy indexing great power, but with power comes some complexity and the potential for confusion. This section is just an overview of the various options and issues related to indexing. Aside from single element indexing, the details on most of these options are to be found in related sections. Assignment vs referencing ========================= Most of the following examples show the use of indexing when referencing data in an array. The examples work just as well when assigning to an array. See the section at the end for specific examples and explanations on how assignments work. Single element indexing ======================= Single element indexing for a 1-D array is what one expects. It work exactly like that for other standard Python sequences. It is 0-based, and accepts negative indices for indexing from the end of the array. :: >>> x = np.arange(10) >>> x[2] 2 >>> x[-2] 8 Unlike lists and tuples, numpy arrays support multidimensional indexing for multidimensional arrays. That means that it is not necessary to separate each dimension's index into its own set of square brackets. :: >>> x.shape = (2,5) # now x is 2-dimensional >>> x[1,3] 8 >>> x[1,-1] 9 Note that if one indexes a multidimensional array with fewer indices than dimensions, one gets a subdimensional array. For example: :: >>> x[0] array([0, 1, 2, 3, 4]) That is, each index specified selects the array corresponding to the rest of the dimensions selected. In the above example, choosing 0 means that the remaining dimension of length 5 is being left unspecified, and that what is returned is an array of that dimensionality and size. It must be noted that the returned array is not a copy of the original, but points to the same values in memory as does the original array. In this case, the 1-D array at the first position (0) is returned. So using a single index on the returned array, results in a single element being returned. That is: :: >>> x[0][2] 2 So note that ``x[0,2] = x[0][2]`` though the second case is more inefficient as a new temporary array is created after the first index that is subsequently indexed by 2. Note to those used to IDL or Fortran memory order as it relates to indexing. Numpy uses C-order indexing. That means that the last index usually represents the most rapidly changing memory location, unlike Fortran or IDL, where the first index represents the most rapidly changing location in memory. This difference represents a great potential for confusion. Other indexing options ====================== It is possible to slice and stride arrays to extract arrays of the same number of dimensions, but of different sizes than the original. The slicing and striding works exactly the same way it does for lists and tuples except that they can be applied to multiple dimensions as well. A few examples illustrates best: :: >>> x = np.arange(10) >>> x[2:5] array([2, 3, 4]) >>> x[:-7] array([0, 1, 2]) >>> x[1:7:2] array([1, 3, 5]) >>> y = np.arange(35).reshape(5,7) >>> y[1:5:2,::3] array([[ 7, 10, 13], [21, 24, 27]]) Note that slices of arrays do not copy the internal array data but also produce new views of the original data. It is possible to index arrays with other arrays for the purposes of selecting lists of values out of arrays into new arrays. There are two different ways of accomplishing this. One uses one or more arrays of index values. 
The other involves giving a boolean array of the proper shape to indicate the values to be selected. Index arrays are a very powerful tool that allow one to avoid looping over individual elements in arrays and thus greatly improve performance. It is possible to use special features to effectively increase the number of dimensions in an array through indexing so the resulting array acquires the shape needed for use in an expression or with a specific function. Index arrays ============ Numpy arrays may be indexed with other arrays (or any other sequence-like object that can be converted to an array, such as lists, with the exception of tuples; see the end of this document for why this is). The use of index arrays ranges from simple, straightforward cases to complex, hard-to-understand cases. For all cases of index arrays, what is returned is a copy of the original data, not a view as one gets for slices. Index arrays must be of integer type. Each value in the index array indicates which value in the indexed array to use in place of the index. To illustrate: :: >>> x = np.arange(10,1,-1) >>> x array([10, 9, 8, 7, 6, 5, 4, 3, 2]) >>> x[np.array([3, 3, 1, 8])] array([7, 7, 9, 2]) The index array consisting of the values 3, 3, 1 and 8 correspondingly creates an array of length 4 (the same as the index array) where each index is replaced by the value that the array being indexed has at that position. Negative values are permitted and work as they do with single indices or slices: :: >>> x[np.array([3,3,-3,8])] array([7, 7, 4, 2]) It is an error to have index values out of bounds: :: >>> x[np.array([3, 3, 20, 8])] <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9 Generally speaking, what is returned when index arrays are used is an array with the same shape as the index array, but with the type and values of the array being indexed. As an example, we can use a multidimensional index array instead: :: >>> x[np.array([[1,1],[2,3]])] array([[9, 9], [8, 7]]) Indexing Multi-dimensional arrays ================================= Things become more complex when multidimensional arrays are indexed, particularly with multidimensional index arrays. These tend to be more unusual uses, but they are permitted, and they are useful for some problems. We'll start with the simplest multidimensional case (using the array y from the previous examples): :: >>> y[np.array([0,2,4]), np.array([0,1,2])] array([ 0, 15, 30]) In this case, if the index arrays have a matching shape, and there is an index array for each dimension of the array being indexed, the resultant array has the same shape as the index arrays, and the values correspond to the index set for each position in the index arrays. In this example, the first index value is 0 for both index arrays, and thus the first value of the resultant array is y[0,0]. The next value is y[2,1], and the last is y[4,2]. If the index arrays do not have the same shape, there is an attempt to broadcast them to the same shape. If they cannot be broadcast to the same shape, an exception is raised: :: >>> y[np.array([0,2,4]), np.array([0,1])] <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape The broadcasting mechanism permits index arrays to be combined with scalars for other indices. The effect is that the scalar value is used for all the corresponding values of the index arrays: :: >>> y[np.array([0,2,4]), 1] array([ 1, 15, 29]) Jumping to the next level of complexity, it is possible to only partially index an array with index arrays.
It takes a bit of thought to understand what happens in such cases. For example if we just use one index array with y: :: >>> y[np.array([0,2,4])] array([[ 0, 1, 2, 3, 4, 5, 6], [14, 15, 16, 17, 18, 19, 20], [28, 29, 30, 31, 32, 33, 34]]) What results is the construction of a new array where each value of the index array selects one row from the array being indexed and the resultant array has the resulting shape (number of index elements, size of row). An example of where this may be useful is for a color lookup table where we want to map the values of an image into RGB triples for display. The lookup table could have a shape (nlookup, 3). Indexing such an array with an image with shape (ny, nx) with dtype=np.uint8 (or any integer type so long as values are within the bounds of the lookup table) will result in an array of shape (ny, nx, 3) where a triple of RGB values is associated with each pixel location. In general, the shape of the resultant array will be the concatenation of the shape of the index array (or the shape that all the index arrays were broadcast to) with the shape of any unused dimensions (those not indexed) in the array being indexed. Boolean or "mask" index arrays ============================== Boolean arrays used as indices are treated in an entirely different manner than index arrays. Boolean arrays must be of the same shape as the initial dimensions of the array being indexed. In the most straightforward case, the boolean array has the same shape: :: >>> b = y>20 >>> y[b] array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]) Unlike in the case of integer index arrays, in the boolean case, the result is a 1-D array containing all the elements in the indexed array corresponding to all the true elements in the boolean array. The elements in the indexed array are always iterated and returned in :term:`row-major` (C-style) order. The result is also identical to ``y[np.nonzero(b)]``. As with index arrays, what is returned is a copy of the data, not a view as one gets with slices. The result will be multidimensional if y has more dimensions than b. For example: :: >>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y array([False, False, False, True, True], dtype=bool) >>> y[b[:,5]] array([[21, 22, 23, 24, 25, 26, 27], [28, 29, 30, 31, 32, 33, 34]]) Here the 4th and 5th rows are selected from the indexed array and combined to make a 2-D array. In general, when the boolean array has fewer dimensions than the array being indexed, this is equivalent to y[b, ...], which means y is indexed by b followed by as many : as are needed to fill out the rank of y. Thus the shape of the result is one dimension containing the number of True elements of the boolean array, followed by the remaining dimensions of the array being indexed. For example, using a 2-D boolean array of shape (2,3) with four True elements to select rows from a 3-D array of shape (2,3,5) results in a 2-D result of shape (4,5): :: >>> x = np.arange(30).reshape(2,3,5) >>> x array([[[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]], [[15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]]) >>> b = np.array([[True, True, False], [False, True, True]]) >>> x[b] array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]) For further details, consult the numpy reference documentation on array indexing. Combining index arrays with slices ================================== Index arrays may be combined with slices.
For example: :: >>> y[np.array([0,2,4]),1:3] array([[ 1, 2], [15, 16], [29, 30]]) In effect, the slice is converted to an index array np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array to produce a resultant array of shape (3,2). Likewise, slicing can be combined with broadcasted boolean indices: :: >>> y[b[:,5],1:3] array([[22, 23], [29, 30]]) Structural indexing tools ========================= To facilitate easy matching of array shapes with expressions and in assignments, the np.newaxis object can be used within array indices to add new dimensions with a size of 1. For example: :: >>> y.shape (5, 7) >>> y[:,np.newaxis,:].shape (5, 1, 7) Note that there are no new elements in the array, just that the dimensionality is increased. This can be handy to combine two arrays in a way that otherwise would require explicit reshaping operations. For example: :: >>> x = np.arange(5) >>> x[:,np.newaxis] + x[np.newaxis,:] array([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7], [4, 5, 6, 7, 8]]) The ellipsis syntax may be used to indicate selecting in full any remaining unspecified dimensions. For example: :: >>> z = np.arange(81).reshape(3,3,3,3) >>> z[1,...,2] array([[29, 32, 35], [38, 41, 44], [47, 50, 53]]) This is equivalent to: :: >>> z[1,:,:,2] array([[29, 32, 35], [38, 41, 44], [47, 50, 53]]) Assigning values to indexed arrays ================================== As mentioned, one can select a subset of an array to assign to using a single index, slices, and index and mask arrays. The value being assigned to the indexed array must be shape consistent (the same shape or broadcastable to the shape the index produces). For example, it is permitted to assign a constant to a slice: :: >>> x = np.arange(10) >>> x[2:7] = 1 or an array of the right size: :: >>> x[2:7] = np.arange(5) Note that assignments may truncate when assigning higher types to lower types (like floats to ints) or even raise exceptions (assigning complex to floats or ints): :: >>> x[1] = 1.2 >>> x[1] 1 >>> x[1] = 1.2j <type 'exceptions.TypeError'>: can't convert complex to long; use long(abs(z)) Unlike some of the references (such as array and mask indices) assignments are always made to the original data in the array (indeed, nothing else would make sense!). Note though, that some actions may not work as one may naively expect. This particular example is often surprising to people: :: >>> x = np.arange(0, 50, 10) >>> x array([ 0, 10, 20, 30, 40]) >>> x[np.array([1, 1, 3, 1])] += 1 >>> x array([ 0, 11, 20, 31, 40]) People expect that the 1st location will be incremented by 3; in fact, it is only incremented by 1. The reason is that a new array is extracted from the original (as a temporary) containing the values at 1, 1, 3, 1, then the value 1 is added to the temporary, and then the temporary is assigned back to the original array. Thus the value of the array at x[1]+1 is assigned to x[1] three times, rather than being incremented 3 times. Dealing with variable numbers of indices within programs ========================================================= The index syntax is very powerful but limiting when dealing with a variable number of indices. For example, if you want to write a function that can handle arguments with various numbers of dimensions without having to write special case code for each number of possible dimensions, how can that be done? If one supplies to the index a tuple, the tuple will be interpreted as a list of indices.
For example (using the previous definition for the array z): :: >>> indices = (1,1,1,1) >>> z[indices] 40 So one can use code to construct tuples of any number of indices and then use these within an index. Slices can be specified within programs by using the slice() function in Python. For example: :: >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2] >>> z[indices] array([39, 40]) Likewise, ellipsis can be specified by code by using the Ellipsis object: :: >>> indices = (1, Ellipsis, 1) # same as [1,...,1] >>> z[indices] array([[28, 31, 34], [37, 40, 43], [46, 49, 52]]) For this reason it is possible to use the output from the np.where() function directly as an index since it always returns a tuple of index arrays. Because of the special treatment of tuples, they are not automatically converted to an array as a list would be. As an example: :: >>> z[[1,1,1,1]] # produces a large array array([[[[27, 28, 29], [30, 31, 32], ... >>> z[(1,1,1,1)] # returns a single value 40
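To illustrate the np.where() point (a hedged example added here; z is the (3,3,3,3) array defined above, so z > 75 selects its five largest elements): :: >>> z[np.where(z > 75)] array([76, 77, 78, 79, 80]) """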
# rgToolFactoryMultIn.py # see https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home # # copyright NAME (ross stop lazarus at gmail stop com) May 2012 # # all rights reserved # Licensed under the LGPL # suggestions for improvement and bug fixes welcome at https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home # # January 2015 # unified all setups by passing the script on the cl rather than via a PIPE - no need for treat_bash_special so removed # # in the process of building a complex tool # added ability to choose one of the current toolshed package_r or package_perl or package_python dependencies and source that package # add that package to tool_dependencies # Note that once the generated tool is loaded, it will have that package's env.sh loaded automagically so there is no # --envshpath in the parameters for the generated tool and it uses the system one which will be first on the adjusted path. # # sept 2014 added additional params from # https://bitbucket.org/mvdbeek/dockertoolfactory/src/d4863bcf7b521532c7e8c61b6333840ba5393f73/DockerToolFactory.py?at=default # passing them is complex # and they are restricted to NOT contain commas or double quotes to ensure that they can be safely passed together on # the toolfactory command line as a comma delimited double quoted string for parsing and passing to the script # see examples on this tool form # august 2014 # Allows arbitrary number of input files # NOTE positional parameters are now passed to script # and output (may be "None") is *before* arbitrary number of inputs # # march 2014 # had to remove dependencies because cross toolshed dependencies are not possible - can't pre-specify a toolshed url for graphicsmagick and ghostscript # grrrrr - night before a demo # added dependencies to a tool_dependencies.xml if html page generated so generated tool is properly portable # # added ghostscript and graphicsmagick as dependencies # fixed a weird problem where gs was trying to use the new_files_path from universe (database/tmp) as ./database/tmp # errors ensued # # august 2013 # found a problem with GS if $TMP or $TEMP missing - now inject /tmp and warn # # july 2013 # added ability to combine images and individual log files into html output # just make sure there's a log file foo.log and it will be output # together with all images named like "foo_*.pdf # otherwise old format for html # # January 2013 # problem pointed out by NAME added escaping for <>$ - thought I did that ages ago... # # August 11 2012 # changed to use shell=False and cl as a sequence # This is a Galaxy tool factory for simple scripts in python, R or whatever ails ye. # It also serves as the wrapper for the new tool. # # you paste and run your script # Only works for simple scripts that read one input from the history. # Optionally can write one new history dataset, # and optionally collect any number of outputs into links on an autogenerated HTML page. # DO NOT install on a public or important site - please. # installed generated tools are fine if the script is safe. # They just run normally and their user cannot do anything unusually insecure # but please, practice safe toolshed. # Read the fucking code before you install any tool # especially this one # After you get the script working on some test data, you can # optionally generate a toolshed compatible gzip file # containing your script safely wrapped as an ordinary Galaxy script in your local toolshed for # safe and largely automated installation in a production Galaxy.
# If you opt for an HTML output, you get all the script outputs arranged # as a single Html history item - all output files are linked, thumbnails for all the pdfs. # Ugly but really inexpensive. # # Patches appreciated please. # # # long route to June 2012 product # Behold the awesome power of Galaxy and the toolshed with the tool factory to bind them # derived from an integrated script model # called rgBaseScriptWrapper.py # Note to the unwary: # This tool allows arbitrary scripting on your Galaxy as the Galaxy user # There is nothing stopping a malicious user doing whatever they choose # Extremely dangerous!! # Totally insecure. So, trusted users only # # preferred model is a developer using their throw away workstation instance - ie a private site. # no real risk. The universe_wsgi.ini admin_users string is checked - only admin users are permitted to run this tool. #
# # XML-RPC CLIENT LIBRARY # $Id$ # # an XML-RPC client interface for Python. # # the marshalling and response parser code can also be used to # implement XML-RPC servers. # # Notes: # this version is designed to work with Python 2.1 or newer. # # History: # 1999-01-14 fl Created # 1999-01-15 fl Changed dateTime to use localtime # 1999-01-16 fl Added Binary/base64 element, default to RPC2 service # 1999-01-19 fl Fixed array data element (from Skip Montanaro) # 1999-01-21 fl Fixed dateTime constructor, etc. # 1999-02-02 fl Added fault handling, handle empty sequences, etc. # 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro) # 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8) # 2000-11-28 fl Changed boolean to check the truth value of its argument # 2001-02-24 fl Added encoding/Unicode/SafeTransport patches # 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1) # 2001-03-28 fl Make sure response tuple is a singleton # 2001-03-29 fl Don't require empty params element (from NAME 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2) # 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME 2001-09-03 fl Allow Transport subclass to override getparser # 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup) # 2001-10-01 fl Remove containers from memo cache when done with them # 2001-10-01 fl Use faster escape method (80% dumps speedup) # 2001-10-02 fl More dumps microtuning # 2001-10-04 fl Make sure import expat gets a parser (from NAME 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow # 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems) # 2001-11-12 fl Use repr() to marshal doubles (from NAME 2002-03-17 fl Avoid buffered read when possible (from NAME 2002-04-07 fl Added pythondoc comments # 2002-04-16 fl Added __str__ methods to datetime/binary wrappers # 2002-05-15 fl Added error constants (from NAME 2002-06-27 fl Merged with Python CVS version # 2002-10-22 fl Added basic authentication (based on code from NAME 2003-01-22 sm Add support for the bool type # 2003-02-27 gvr Remove apply calls # 2003-04-24 sm Use cStringIO if available # 2003-04-25 ak Add support for nil # 2003-06-15 gn Add support for time.struct_time # 2003-07-12 gp Correct marshalling of Faults # 2003-10-31 mvl Add multicall support # 2004-08-20 mvl Bump minimum supported Python version to 2.1 # # Copyright (c) 1999-2002 by Secret Labs AB. # Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com # # -------------------------------------------------------------------- # The XML-RPC client interface is # # Copyright (c) 1999-2002 by Secret Labs AB # Copyright (c) 1999-2002 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. 
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # -------------------------------------------------------------------- # # things to look into some day: # TODO: sort out True/False/boolean issues for Python 2.3
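# A hedged sketch of typical client usage of this module (the URL and the
# remote method name below are placeholders, not a real service):
#
#     import xmlrpclib
#     proxy = xmlrpclib.ServerProxy("http://localhost:8000/")
#     print proxy.some_remote_method(42)   # hypothetical method on the server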
# -*- coding: utf-8 -*- # Spearmint # # Academic and Non-Commercial Research Use Software License and Terms # of Use # # Spearmint is a software package to perform Bayesian optimization # according to specific algorithms (the “Software”). The Software is # designed to automatically run experiments (thus the code name # 'spearmint') in a manner that iteratively adjusts a number of # parameters so as to minimize some objective in as few runs as # possible. # # The Software was developed by NAME NAME and # NAME at Harvard University, NAME at the # University of Toronto (“Toronto”), and NAME at the # Université de Sherbrooke (“Sherbrooke”), which assigned its rights # in the Software to Socpra Sciences et Génie # S.E.C. (“Socpra”). Pursuant to an inter-institutional agreement # between the parties, it is distributed for free academic and # non-commercial research use by the President and Fellows of Harvard # College (“Harvard”). # # Using the Software indicates your agreement to be bound by the terms # of this Software Use Agreement (“Agreement”). Absent your agreement # to the terms below, you (the “End User”) have no rights to hold or # use the Software whatsoever. # # Harvard agrees to grant hereunder the limited non-exclusive license # to End User for the use of the Software in the performance of End # User’s internal, non-commercial research and academic use at End # User’s academic or not-for-profit research institution # (“Institution”) on the following terms and conditions: # # 1. NO REDISTRIBUTION. The Software remains the property of Harvard, # Toronto and Socpra, and except as set forth in Section 4, End User # shall not publish, distribute, or otherwise transfer or make # available the Software to any other party. # # 2. NO COMMERCIAL USE. End User shall not use the Software for # commercial purposes and any such use of the Software is expressly # prohibited. This includes, but is not limited to, use of the # Software in fee-for-service arrangements, core facilities or # laboratories or to provide research services to (or in collaboration # with) third parties for a fee, and in industry-sponsored # collaborative research projects where any commercial rights are # granted to the sponsor. If End User wishes to use the Software for # commercial purposes or for any other restricted purpose, End User # must execute a separate license agreement with Harvard. # # For requests to use the Software for commercial purposes, please # contact: # # Office of Technology Development # Harvard University # Smith Campus Center, Suite 727E # 1350 Massachusetts Avenue # Cambridge, MA 02138 USA # Telephone: (617) 495-3067 # Facsimile: (617) 495-9568 # E-mail: EMAIL 3. OWNERSHIP AND COPYRIGHT NOTICE. Harvard, Toronto and Socpra own # all intellectual property in the Software. End User shall gain no # ownership to the Software. End User shall not remove or delete and # shall retain in the Software, in any modifications to Software and # in any Derivative Works, the copyright, trademark, or other notices # pertaining to Software as provided with the Software. # # 4. DERIVATIVE WORKS. End User may create and use Derivative Works, # as such term is defined under U.S. copyright laws, provided that any # such Derivative Works shall be restricted to non-commercial, # internal research and academic use at End User’s Institution.
End # User may distribute Derivative Works to other Institutions solely # for the performance of non-commercial, internal research and # academic use on terms substantially similar to this License and # Terms of Use. # # 5. FEEDBACK. In order to improve the Software, comments from End # Users may be useful. End User agrees to provide Harvard with # feedback on the End User’s use of the Software (e.g., any bugs in # the Software, the user experience, etc.). Harvard is permitted to # use such information provided by End User in making changes and # improvements to the Software without compensation or an accounting # to End User. # # 6. NON ASSERT. End User acknowledges that Harvard, Toronto and/or # Sherbrooke or Socpra may develop modifications to the Software that # may be based on the feedback provided by End User under Section 5 # above. Harvard, Toronto and Sherbrooke/Socpra shall not be # restricted in any way by End User regarding their use of such # information. End User acknowledges the right of Harvard, Toronto # and Sherbrooke/Socpra to prepare, publish, display, reproduce, # transmit and or use modifications to the Software that may be # substantially similar or functionally equivalent to End User’s # modifications and/or improvements if any. In the event that End # User obtains patent protection for any modification or improvement # to Software, End User agrees not to allege or enjoin infringement of # End User’s patent against Harvard, Toronto or Sherbrooke or Socpra, # or any of the researchers, medical or research staff, officers, # directors and employees of those institutions. # # 7. PUBLICATION & ATTRIBUTION. End User has the right to publish, # present, or share results from the use of the Software. In # accordance with customary academic practice, End User will # acknowledge Harvard, Toronto and Sherbrooke/Socpra as the providers # of the Software and may cite the relevant reference(s) from the # following list of publications: # # Practical Bayesian Optimization of Machine Learning Algorithms # NAME, NAME and NAME Neural Information Processing Systems, 2012 # # Multi-Task Bayesian Optimization # NAME, NAME and NAME Advances in Neural Information Processing Systems, 2013 # # Input Warping for Bayesian Optimization of Non-stationary Functions # NAME, NAME, NAME and NAME Preprint, arXiv:1402.0929, http://arxiv.org/abs/1402.0929, 2013 # # Bayesian Optimization and Semiparametric Models with Applications to # Assistive Technology NAME, PhD Thesis, University of # Toronto, 2013 # # 8. NO WARRANTIES. THE SOFTWARE IS PROVIDED "AS IS." TO THE FULLEST # EXTENT PERMITTED BY LAW, HARVARD, TORONTO AND SHERBROOKE AND SOCPRA # HEREBY DISCLAIM ALL WARRANTIES OF ANY KIND (EXPRESS, IMPLIED OR # OTHERWISE) REGARDING THE SOFTWARE, INCLUDING BUT NOT LIMITED TO ANY # IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR # PURPOSE, OWNERSHIP, AND NON-INFRINGEMENT. HARVARD, TORONTO AND # SHERBROOKE AND SOCPRA MAKE NO WARRANTY ABOUT THE ACCURACY, # RELIABILITY, COMPLETENESS, TIMELINESS, SUFFICIENCY OR QUALITY OF THE # SOFTWARE. HARVARD, TORONTO AND SHERBROOKE AND SOCPRA DO NOT WARRANT # THAT THE SOFTWARE WILL OPERATE WITHOUT ERROR OR INTERRUPTION. # # 9. LIMITATIONS OF LIABILITY AND REMEDIES. USE OF THE SOFTWARE IS AT # END USER’S OWN RISK. IF END USER IS DISSATISFIED WITH THE SOFTWARE, # ITS EXCLUSIVE REMEDY IS TO STOP USING IT. 
IN NO EVENT SHALL # HARVARD, TORONTO OR SHERBROOKE OR SOCPRA BE LIABLE TO END USER OR # ITS INSTITUTION, IN CONTRACT, TORT OR OTHERWISE, FOR ANY DIRECT, # INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR OTHER # DAMAGES OF ANY KIND WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH # THE SOFTWARE, EVEN IF HARVARD, TORONTO OR SHERBROOKE OR SOCPRA IS # NEGLIGENT OR OTHERWISE AT FAULT, AND REGARDLESS OF WHETHER HARVARD, # TORONTO OR SHERBROOKE OR SOCPRA IS ADVISED OF THE POSSIBILITY OF # SUCH DAMAGES. # # 10. INDEMNIFICATION. To the extent permitted by law, End User shall # indemnify, defend and hold harmless Harvard, Toronto and Sherbrooke # and Socpra, their corporate affiliates, current or future directors, # trustees, officers, faculty, medical and professional staff, # employees, students and agents and their respective successors, # heirs and assigns (the "Indemnitees"), against any liability, # damage, loss or expense (including reasonable attorney's fees and # expenses of litigation) incurred by or imposed upon the Indemnitees # or any one of them in connection with any claims, suits, actions, # demands or judgments arising from End User’s breach of this # Agreement or its Institution’s use of the Software except to the # extent caused by the gross negligence or willful misconduct of # Harvard, Toronto or Sherbrooke or Socpra. This indemnification # provision shall survive expiration or termination of this Agreement. # # 11. GOVERNING LAW. This Agreement shall be construed and governed by # the laws of the Commonwealth of Massachusetts regardless of # otherwise applicable choice of law standards. # # 12. NON-USE OF NAME. Nothing in this License and Terms of Use shall # be construed as granting End Users or their Institutions any rights # or licenses to use any trademarks, service marks or logos associated # with the Software. You may not use the terms “Harvard” or # “University of Toronto” or “Université de Sherbrooke” or “Socpra # Sciences et Génie S.E.C.” (or a substantially similar term) in any # way that is inconsistent with the permitted uses described # herein. You agree not to use any name or emblem of Harvard, Toronto # or Sherbrooke, or any of their subdivisions for any purpose, or to # falsely suggest any relationship between End User (or its # Institution) and Harvard, Toronto and/or Sherbrooke, or in any # manner that would infringe or violate any of their rights. # # 13. End User represents and warrants that it has the legal authority # to enter into this License and Terms of Use on behalf of itself and # its Institution.
""" ============== Array indexing ============== Array indexing refers to any use of the square brackets ([]) to index array values. There are many options to indexing, which give numpy indexing great power, but with power comes some complexity and the potential for confusion. This section is just an overview of the various options and issues related to indexing. Aside from single element indexing, the details on most of these options are to be found in related sections. Assignment vs referencing ========================= Most of the following examples show the use of indexing when referencing data in an array. The examples work just as well when assigning to an array. See the section at the end for specific examples and explanations on how assignments work. Single element indexing ======================= Single element indexing for a 1-D array is what one expects. It work exactly like that for other standard Python sequences. It is 0-based, and accepts negative indices for indexing from the end of the array. :: >>> x = np.arange(10) >>> x[2] 2 >>> x[-2] 8 Unlike lists and tuples, numpy arrays support multidimensional indexing for multidimensional arrays. That means that it is not necessary to separate each dimension's index into its own set of square brackets. :: >>> x.shape = (2,5) # now x is 2-dimensional >>> x[1,3] 8 >>> x[1,-1] 9 Note that if one indexes a multidimensional array with fewer indices than dimensions, one gets a subdimensional array. For example: :: >>> x[0] array([0, 1, 2, 3, 4]) That is, each index specified selects the array corresponding to the rest of the dimensions selected. In the above example, choosing 0 means that remaining dimension of lenth 5 is being left unspecified, and that what is returned is an array of that dimensionality and size. It must be noted that the returned array is not a copy of the original, but points to the same values in memory as does the original array. In this case, the 1-D array at the first position (0) is returned. So using a single index on the returned array, results in a single element being returned. That is: :: >>> x[0][2] 2 So note that ``x[0,2] = x[0][2]`` though the second case is more inefficient a new temporary array is created after the first index that is subsequently indexed by 2. Note to those used to IDL or Fortran memory order as it relates to indexing. Numpy uses C-order indexing. That means that the last index usually represents the most rapidly changing memory location, unlike Fortran or IDL, where the first index represents the most rapidly changing location in memory. This difference represents a great potential for confusion. Other indexing options ====================== It is possible to slice and stride arrays to extract arrays of the same number of dimensions, but of different sizes than the original. The slicing and striding works exactly the same way it does for lists and tuples except that they can be applied to multiple dimensions as well. A few examples illustrates best: :: >>> x = np.arange(10) >>> x[2:5] array([2, 3, 4]) >>> x[:-7] array([0, 1, 2]) >>> x[1:7:2] array([1, 3, 5]) >>> y = np.arange(35).reshape(5,7) >>> y[1:5:2,::3] array([[ 7, 10, 13], [21, 24, 27]]) Note that slices of arrays do not copy the internal array data but also produce new views of the original data. It is possible to index arrays with other arrays for the purposes of selecting lists of values out of arrays into new arrays. There are two different ways of accomplishing this. One uses one or more arrays of index values. 
The other involves giving a boolean array of the proper shape to indicate the values to be selected. Index arrays are a very powerful tool that allow one to avoid looping over individual elements in arrays and thus greatly improve performance. It is possible to use special features to effectively increase the number of dimensions in an array through indexing so the resulting array acquires the shape needed for use in an expression or with a specific function.

Index arrays
============

Numpy arrays may be indexed with other arrays (or any other sequence-like object that can be converted to an array, such as lists, with the exception of tuples; see the end of this document for why this is). The use of index arrays ranges from simple, straightforward cases to complex, hard-to-understand cases. For all cases of index arrays, what is returned is a copy of the original data, not a view as one gets for slices.

Index arrays must be of integer type. Each value in the index array indicates which value in the indexed array to use in place of the index. To illustrate: ::

    >>> x = np.arange(10,1,-1)
    >>> x
    array([10,  9,  8,  7,  6,  5,  4,  3,  2])
    >>> x[np.array([3, 3, 1, 8])]
    array([7, 7, 9, 2])

The index array consisting of the values 3, 3, 1 and 8 correspondingly creates an array of length 4 (same as the index array) where each index is replaced by the value the index array has in the array being indexed.

Negative values are permitted and work as they do with single indices or slices: ::

    >>> x[np.array([3,3,-3,8])]
    array([7, 7, 4, 2])

It is an error to have index values out of bounds: ::

    >>> x[np.array([3, 3, 20, 8])]
    <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9

Generally speaking, what is returned when index arrays are used is an array with the same shape as the index array, but with the type and values of the array being indexed. As an example, we can use a multidimensional index array instead: ::

    >>> x[np.array([[1,1],[2,3]])]
    array([[9, 9],
           [8, 7]])

Indexing Multi-dimensional arrays
=================================

Things become more complex when multidimensional arrays are indexed, particularly with multidimensional index arrays. These tend to be more unusual uses, but they are permitted, and they are useful for some problems. We'll start with the simplest multidimensional case (using the array y from the previous examples): ::

    >>> y[np.array([0,2,4]), np.array([0,1,2])]
    array([ 0, 15, 30])

In this case, if the index arrays have a matching shape, and there is an index array for each dimension of the array being indexed, the resultant array has the same shape as the index arrays, and the values correspond to the index set for each position in the index arrays. In this example, the first index value is 0 for both index arrays, and thus the first value of the resultant array is y[0,0]. The next value is y[2,1], and the last is y[4,2].

If the index arrays do not have the same shape, there is an attempt to broadcast them to the same shape. If they cannot be broadcast to the same shape, an exception is raised: ::

    >>> y[np.array([0,2,4]), np.array([0,1])]
    <type 'exceptions.ValueError'>: shape mismatch: objects cannot be
    broadcast to a single shape

The broadcasting mechanism permits index arrays to be combined with scalars for other indices. The effect is that the scalar value is used for all the corresponding values of the index arrays: ::

    >>> y[np.array([0,2,4]), 1]
    array([ 1, 15, 29])

Jumping to the next level of complexity, it is possible to only partially index an array with index arrays.
It takes a bit of thought to understand what happens in such cases. For example, if we just use one index array with y: ::

    >>> y[np.array([0,2,4])]
    array([[ 0,  1,  2,  3,  4,  5,  6],
           [14, 15, 16, 17, 18, 19, 20],
           [28, 29, 30, 31, 32, 33, 34]])

What results is the construction of a new array where each value of the index array selects one row from the array being indexed and the resultant array has the resulting shape (number of index elements, size of row).

An example of where this may be useful is for a color lookup table where we want to map the values of an image into RGB triples for display. The lookup table could have a shape (nlookup, 3). Indexing such an array with an image with shape (ny, nx) with dtype=np.uint8 (or any integer type so long as values are within the bounds of the lookup table) will result in an array of shape (ny, nx, 3) where a triple of RGB values is associated with each pixel location.

In general, the shape of the resultant array will be the concatenation of the shape of the index array (or the shape that all the index arrays were broadcast to) with the shape of any unused dimensions (those not indexed) in the array being indexed.

Boolean or "mask" index arrays
==============================

Boolean arrays used as indices are treated in a different manner entirely than index arrays. Boolean arrays must be of the same shape as the initial dimensions of the array being indexed. In the most straightforward case, the boolean array has the same shape: ::

    >>> b = y>20
    >>> y[b]
    array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])

The result is a 1-D array containing all the elements in the indexed array corresponding to all the true elements in the boolean array. As with index arrays, what is returned is a copy of the data, not a view as one gets with slices.

The result will be multidimensional if y has more dimensions than b. For example: ::

    >>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
    array([False, False, False,  True,  True], dtype=bool)
    >>> y[b[:,5]]
    array([[21, 22, 23, 24, 25, 26, 27],
           [28, 29, 30, 31, 32, 33, 34]])

Here the 4th and 5th rows are selected from the indexed array and combined to make a 2-D array.

In general, when the boolean array has fewer dimensions than the array being indexed, this is equivalent to y[b, ...], which means y is indexed by b followed by as many : as are needed to fill out the rank of y. Thus the shape of the result is one dimension containing the number of True elements of the boolean array, followed by the remaining dimensions of the array being indexed.

For example, using a 2-D boolean array of shape (2,3) with four True elements to select rows from a 3-D array of shape (2,3,5) results in a 2-D result of shape (4,5): ::

    >>> x = np.arange(30).reshape(2,3,5)
    >>> x
    array([[[ 0,  1,  2,  3,  4],
            [ 5,  6,  7,  8,  9],
            [10, 11, 12, 13, 14]],
           [[15, 16, 17, 18, 19],
            [20, 21, 22, 23, 24],
            [25, 26, 27, 28, 29]]])
    >>> b = np.array([[True, True, False], [False, True, True]])
    >>> x[b]
    array([[ 0,  1,  2,  3,  4],
           [ 5,  6,  7,  8,  9],
           [20, 21, 22, 23, 24],
           [25, 26, 27, 28, 29]])

For further details, consult the numpy reference documentation on array indexing.

Combining index arrays with slices
==================================

Index arrays may be combined with slices. For example: ::

    >>> y[np.array([0,2,4]),1:3]
    array([[ 1,  2],
           [15, 16],
           [29, 30]])

In effect, the slice is converted to an index array np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array to produce a resultant array of shape (3,2).
Likewise, slicing can be combined with broadcast boolean indices: ::

    >>> y[b[:,5],1:3]
    array([[22, 23],
           [29, 30]])

Structural indexing tools
=========================

To facilitate easy matching of array shapes with expressions and in assignments, the np.newaxis object can be used within array indices to add new dimensions with a size of 1. For example: ::

    >>> y.shape
    (5, 7)
    >>> y[:,np.newaxis,:].shape
    (5, 1, 7)

Note that there are no new elements in the array, just that the dimensionality is increased. This can be handy to combine two arrays in a way that otherwise would require explicit reshaping operations. For example: ::

    >>> x = np.arange(5)
    >>> x[:,np.newaxis] + x[np.newaxis,:]
    array([[0, 1, 2, 3, 4],
           [1, 2, 3, 4, 5],
           [2, 3, 4, 5, 6],
           [3, 4, 5, 6, 7],
           [4, 5, 6, 7, 8]])

The ellipsis syntax may be used to indicate selecting in full any remaining unspecified dimensions. For example: ::

    >>> z = np.arange(81).reshape(3,3,3,3)
    >>> z[1,...,2]
    array([[29, 32, 35],
           [38, 41, 44],
           [47, 50, 53]])

This is equivalent to: ::

    >>> z[1,:,:,2]
    array([[29, 32, 35],
           [38, 41, 44],
           [47, 50, 53]])

Assigning values to indexed arrays
==================================

As mentioned, one can select a subset of an array to assign to using a single index, slices, and index and mask arrays. The value being assigned to the indexed array must be shape consistent (the same shape or broadcastable to the shape the index produces). For example, it is permitted to assign a constant to a slice: ::

    >>> x = np.arange(10)
    >>> x[2:7] = 1

or an array of the right size: ::

    >>> x[2:7] = np.arange(5)

Note that assignments may result in changes if assigning higher types to lower types (like floats to ints) or even exceptions (assigning complex to floats or ints): ::

    >>> x[1] = 1.2
    >>> x[1]
    1
    >>> x[1] = 1.2j
    <type 'exceptions.TypeError'>: can't convert complex to long; use
    long(abs(z))

Unlike some of the references (such as array and mask indices) assignments are always made to the original data in the array (indeed, nothing else would make sense!). Note though, that some actions may not work as one may naively expect. This particular example is often surprising to people: ::

    >>> x = np.arange(0, 50, 10)
    >>> x
    array([ 0, 10, 20, 30, 40])
    >>> x[np.array([1, 1, 3, 1])] += 1
    >>> x
    array([ 0, 11, 20, 31, 40])

Where people expect that the 1st location will be incremented by 3, in fact it will only be incremented by 1. The reason is that a new array is extracted from the original (as a temporary) containing the values at 1, 1, 3, 1, then the value 1 is added to the temporary, and then the temporary is assigned back to the original array. Thus the value of the array at x[1]+1 is assigned to x[1] three times, rather than being incremented 3 times.

Dealing with variable numbers of indices within programs
=========================================================

The index syntax is very powerful but limiting when dealing with a variable number of indices. For example, if you want to write a function that can handle arguments with various numbers of dimensions without having to write special case code for each number of possible dimensions, how can that be done? If one supplies to the index a tuple, the tuple will be interpreted as a list of indices. For example (using the previous definition for the array z): ::

    >>> indices = (1,1,1,1)
    >>> z[indices]
    40

So one can use code to construct tuples of any number of indices and then use these within an index. Slices can be specified within programs by using the slice() function in Python.
For example: ::

    >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
    >>> z[indices]
    array([39, 40])

Likewise, ellipsis can be specified by code by using the Ellipsis object: ::

    >>> indices = (1, Ellipsis, 1) # same as [1,...,1]
    >>> z[indices]
    array([[28, 31, 34],
           [37, 40, 43],
           [46, 49, 52]])

For this reason it is possible to use the output from the np.where() function directly as an index since it always returns a tuple of index arrays.

Because of the special treatment of tuples, they are not automatically converted to an array as a list would be. As an example: ::

    >>> z[[1,1,1,1]] # produces a large array
    array([[[[27, 28, 29],
             [30, 31, 32], ...
    >>> z[(1,1,1,1)] # returns a single value
    40
"""
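A quick doctest-style sketch of the view-versus-copy distinction made above (our illustration, using only behavior stated in the text): writing through a slice changes the parent array, while writing into the result of index-array indexing does not. ::

    >>> x = np.arange(10)
    >>> v = x[2:5]                  # slice: a view on the same data buffer
    >>> v[0] = 99
    >>> x[2]                        # the parent array sees the change
    99
    >>> c = x[np.array([2, 3, 4])]  # index array: a copy of the data
    >>> c[0] = -1
    >>> x[2]                        # the parent array is unchanged
    99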
""" WARNING: THIS EXTENSION IS EXPERIMENTAL Experimental extension may (and probably will) change at any time. Please do not rely on these features until they are more fully vetted. The Reload Config Framework Extension enables applications Built on Cement (tm) to easily reload configuration settings any time configuration files are modified without stopping/restarting the process. Requirements ------------ * Python 2.6+, Python 3+ * Python Modules: pyinotify * Linux (Kernel 2.6.13+) Features -------- * Application configuration files (``CementApp.Meta.config_files``) are reloaded if modified. * Application plugin configuration files (Anything found in (``CementApp.Meta.plugin_config_dirs``) are reloaded if modified. * The framework calls ``CementApp.config.parse_file()`` on any watched files once the kernel has signaled a modification. * New configurations settings are accessible via ``CementApp.config`` nearly immediately once the kernel (inotify) picks up the change. * Provides a ``pre_reload_config`` and ``post_reload_config`` hook so that applications can tie into the event and perform operations any time a configuration file is modified. * Asynchronously monitors configuration files for changes via inotify. Long running processes are not blocked by the operations performed when files are detected to be modified. Limitations ----------- * Currently this extension only re-parses configuration files into the config handler. Some applications may need further work in-order to truly honor those changes. For example, if a configuration settings toggles something on or off, or triggers something else to happen (like making an API connection, etc)... this extension does not currently handle that, and it is left up to the application developer to tie into the events via the provided hooks. * Only available on Linux based systems. Configuration ------------- This extension does not currently honor any configuration settings. Hooks ----- This extension defines the following hooks: pre_reload_config ^^^^^^^^^^^^^^^^^ Run right before any framework actions are performed once modifications to any of the watched files are detected. Expects a single argument, which is the ``app`` object, and does not expect anything in return. .. code-block:: python def my_pre_reload_config_hook(app): # do something with app? pass post_reload_config ^^^^^^^^^^^^^^^^^^ Run right after any framework actions are performed once modifications to any of the watched files are detected. Expects a single argument, which is the ``app`` object, and does not expect anything in return. .. code-block:: python def my_post_reload_config_hook(app): # do something with app? pass Usage ----- The following example shows how to add the reload_config extension, as well as perform an arbitrary action any time configuration changes are detected. .. 
.. code-block:: python

    from time import sleep
    from cement.core.exc import CaughtSignal
    from cement.core.foundation import CementApp
    from cement.core.controller import CementBaseController, expose


    def print_foo(app):
        print("Foo => %s" % app.config.get('myapp', 'foo'))


    class Base(CementBaseController):
        class Meta:
            label = 'base'

        @expose(hide=True)
        def default(self):
            print('Inside Base.default()')

            # simulate a long running process
            while True:
                sleep(30)


    class MyApp(CementApp):
        class Meta:
            label = 'myapp'
            base_controller = Base
            extensions = ['reload_config']


    with MyApp() as app:
        # run this anytime the configuration has changed
        app.hook.register('post_reload_config', print_foo)

        try:
            app.run()
        except CaughtSignal as e:
            # maybe do something... but catch it regardless so app.close() is
            # called when exiting `with` cleanly.
            print(e)

This would output something like the following when ``~/.myapp.conf`` or any other configuration file is modified (spaces added for clarity):

.. code-block:: console

    Inside Base.default()

    2015-05-05 03:00:32,023 (DEBUG) cement.ext.ext_reload_config : config path modified: mask=IN_CLOSE_WRITE, path=/home/vagrant/.myapp.conf
    2015-05-05 03:00:32,023 (DEBUG) cement.core.config : config file '/home/vagrant/.myapp.conf' exists, loading settings...
    2015-05-05 03:00:32,023 (DEBUG) cement.core.hook : running hook 'post_reload_config' (<function print_foo at 0x7f1b52a5ab70>) from __main__
    Foo => bar

    2015-05-05 03:00:32,023 (DEBUG) cement.ext.ext_reload_config : config path modified: mask=IN_CLOSE_WRITE, path=/home/vagrant/.myapp.conf
    2015-05-05 03:00:32,023 (DEBUG) cement.core.config : config file '/home/vagrant/.myapp.conf' exists, loading settings...
    2015-05-05 03:00:32,023 (DEBUG) cement.core.hook : running hook 'post_reload_config' (<function print_foo at 0x7f1b52a5ab70>) from __main__
    Foo => bar2

    2015-05-05 03:00:32,023 (DEBUG) cement.ext.ext_reload_config : config path modified: mask=IN_CLOSE_WRITE, path=/home/vagrant/.myapp.conf
    2015-05-05 03:00:32,023 (DEBUG) cement.core.config : config file '/home/vagrant/.myapp.conf' exists, loading settings...
    2015-05-05 03:00:32,023 (DEBUG) cement.core.hook : running hook 'post_reload_config' (<function print_foo at 0x7f1b52a5ab70>) from __main__
    Foo => bar3
"""
"""CPStats, a package for collecting and reporting on program statistics. Overview ======== Statistics about program operation are an invaluable monitoring and debugging tool. Unfortunately, the gathering and reporting of these critical values is usually ad-hoc. This package aims to add a centralized place for gathering statistical performance data, a structure for recording that data which provides for extrapolation of that data into more useful information, and a method of serving that data to both human investigators and monitoring software. Let's examine each of those in more detail. Data Gathering -------------- Just as Python's `logging` module provides a common importable for gathering and sending messages, performance statistics would benefit from a similar common mechanism, and one that does *not* require each package which wishes to collect stats to import a third-party module. Therefore, we choose to re-use the `logging` module by adding a `statistics` object to it. That `logging.statistics` object is a nested dict. It is not a custom class, because that would: 1. require libraries and applications to import a third-party module in order to participate 2. inhibit innovation in extrapolation approaches and in reporting tools, and 3. be slow. There are, however, some specifications regarding the structure of the dict.:: { +----"SQLAlchemy": { | "Inserts": 4389745, | "Inserts per Second": | lambda s: s["Inserts"] / (time() - s["Start"]), | C +---"Table Statistics": { | o | "widgets": {-----------+ N | l | "Rows": 1.3M, | Record a | l | "Inserts": 400, | m | e | },---------------------+ e | c | "froobles": { s | t | "Rows": 7845, p | i | "Inserts": 0, a | o | }, c | n +---}, e | "Slow Queries": | [{"Query": "SELECT * FROM widgets;", | "Processing Time": 47.840923343, | }, | ], +----}, } The `logging.statistics` dict has four levels. The topmost level is nothing more than a set of names to introduce modularity, usually along the lines of package names. If the SQLAlchemy project wanted to participate, for example, it might populate the item `logging.statistics['SQLAlchemy']`, whose value would be a second-layer dict we call a "namespace". Namespaces help multiple packages to avoid collisions over key names, and make reports easier to read, to boot. The maintainers of SQLAlchemy should feel free to use more than one namespace if needed (such as 'SQLAlchemy ORM'). Note that there are no case or other syntax constraints on the namespace names; they should be chosen to be maximally readable by humans (neither too short nor too long). Each namespace, then, is a dict of named statistical values, such as 'Requests/sec' or 'Uptime'. You should choose names which will look good on a report: spaces and capitalization are just fine. In addition to scalars, values in a namespace MAY be a (third-layer) dict, or a list, called a "collection". For example, the CherryPy :class:`StatsTool` keeps track of what each request is doing (or has most recently done) in a 'Requests' collection, where each key is a thread ID; each value in the subdict MUST be a fourth dict (whew!) of statistical data about each thread. We call each subdict in the collection a "record". Similarly, the :class:`StatsTool` also keeps a list of slow queries, where each record contains data about each slow query, in order. Values in a namespace or record may also be functions, which brings us to: Extrapolation ------------- The collection of statistical data needs to be fast, as close to unnoticeable as possible to the host program. 
That requires us to minimize I/O, for example, but in Python it also means we need to minimize function calls. So when you are designing your namespace and record values, try to insert the most basic scalar values you already have on hand.

When it comes time to report on the gathered data, however, we usually have much more freedom in what we can calculate. Therefore, whenever reporting tools (like the provided :class:`StatsPage` CherryPy class) fetch the contents of `logging.statistics` for reporting, they first call `extrapolate_statistics` (passing the whole `statistics` dict as the only argument). This makes a deep copy of the statistics dict so that the reporting tool can both iterate over it and even change it without harming the original. But it also expands any functions in the dict by calling them. For example, you might have a 'Current Time' entry in the namespace with the value "lambda scope: time.time()". The "scope" parameter is the current namespace dict (or record, if we're currently expanding one of those instead), allowing you access to existing static entries. If you're truly evil, you can even modify more than one entry at a time.

However, don't try to calculate an entry and then use its value in further extrapolations; the order in which the functions are called is not guaranteed. This can lead to a certain amount of duplicated work (or a redesign of your schema), but that's better than complicating the spec.

After the whole thing has been extrapolated, it's time for:

Reporting
---------

The :class:`StatsPage` class grabs the `logging.statistics` dict, extrapolates it all, and then transforms it to HTML for easy viewing. Each namespace gets its own header and attribute table, plus an extra table for each collection. This is NOT part of the statistics specification; other tools can format how they like.

You can control which columns are output and how they are formatted by updating StatsPage.formatting, which is a dict that mirrors the keys and nesting of `logging.statistics`. The difference is that, instead of data values, it has formatting values. Use None for a given key to indicate to the StatsPage that a given column should not be output. Use a string with formatting (such as '%.3f') to interpolate the value(s), or use a callable (such as lambda v: v.isoformat()) for more advanced formatting. Any entry which is not mentioned in the formatting dict is output unchanged.

Monitoring
----------

Although the HTML output takes pains to assign unique id's to each <td> with statistical data, you're probably better off fetching /cpstats/data, which outputs the whole (extrapolated) `logging.statistics` dict in JSON format. That is probably easier to parse, and doesn't have any formatting controls, so you get the "original" data in a consistently-serialized format. Note: there's no treatment yet for datetime objects. Try time.time() instead for now if you can. Nagios will probably thank you.

Turning Collection Off
----------------------

It is recommended each namespace have an "Enabled" item which, if False, stops collection (but not reporting) of statistical data. Applications SHOULD provide controls to pause and resume collection by setting these entries to False or True, if present.
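For illustration, the extrapolation pass described under Extrapolation above can be sketched in a few lines. This is a simplified sketch of the documented behavior, not the exact code shipped in this package: ::

    def extrapolate(scope):
        # Return an expanded copy of a statistics namespace/record:
        # nested dicts and lists are recursed into, and callables are
        # replaced by their return value, receiving the enclosing
        # scope so they can read sibling (static) entries.
        out = {}
        for key, value in list(scope.items()):
            if isinstance(value, dict):
                value = extrapolate(value)
            elif isinstance(value, (list, tuple)):
                value = [extrapolate(record) for record in value]
            elif callable(value):
                value = value(scope)
            out[key] = value
        return out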
Usage
=====

To collect statistics on CherryPy applications::

    from cherrypy.lib import cpstats
    appconfig['/']['tools.cpstats.on'] = True

To collect statistics on your own code::

    import logging
    import time

    # Initialize the repository
    if not hasattr(logging, 'statistics'):
        logging.statistics = {}

    # Initialize my namespace
    mystats = logging.statistics.setdefault('My Stuff', {})

    # Initialize my namespace's scalars and collections
    mystats.update({
        'Enabled': True,
        'Start Time': time.time(),
        'Important Events': 0,
        'Events/Second': lambda s: (
            (s['Important Events'] / (time.time() - s['Start Time']))),
        })

    ...

    for event in events:
        ...
        # Collect stats
        if mystats.get('Enabled', False):
            mystats['Important Events'] += 1

To report statistics::

    root.cpstats = cpstats.StatsPage()

To format statistics reports::

    See 'Reporting', above.
"""
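The 'To format statistics reports' stub above can be filled in with the StatsPage.formatting mechanism described under Reporting. A small sketch assuming the 'My Stuff' namespace from the usage example (the particular overrides shown are illustrative only)::

    import time
    from cherrypy.lib import cpstats

    # Mirror the nesting of logging.statistics with formatting values.
    cpstats.StatsPage.formatting['My Stuff'] = {
        'Enabled': None,           # None: suppress this column entirely
        'Events/Second': '%.3f',   # format string: interpolate the value
        'Start Time': lambda v: time.strftime(
            '%Y-%m-%d %H:%M:%S', time.localtime(v)),
    }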
# #!/usr/bin/env python3 # # __author__ = "Ashwin NAME # GUI viewer to view JSON data as tree. # # Ubuntu packages needed: # # python3-pyqt5 # # # Std # import argparse # import collections # import json # import sys # # # External # from PyQt5 import QtCore # from PyQt5 import QtGui # from PyQt5 import QtWidgets # # class TextToTreeItem: # # def __init__(self): # self.text_list = [] # self.titem_list = [] # # def append(self, text_list, titem): # for text in text_list: # self.text_list.append(text) # self.titem_list.append(titem) # # # Return model indices that match string # def find(self, find_str): # # titem_list = [] # for i, s in enumerate(self.text_list): # if find_str in s: # titem_list.append(self.titem_list[i]) # # return titem_list # # # class JsonView(QtWidgets.QWidget): # # def __init__(self, D): # super(JsonView, self).__init__() # # self.find_box = None # self.tree_widget = None # self.text_to_titem = TextToTreeItem() # self.find_str = "" # self.found_titem_list = [] # self.found_idx = 0 # # # jfile = open(fpath) # # jdata = json.load(jfile, object_pairs_hook=collections.OrderedDict) # # jdata = collections.OrderedDict(D) # # # Find UI # # find_layout = self.make_find_ui() # # # Tree # # self.tree_widget = QtWidgets.QTreeWidget() # self.tree_widget.setHeaderLabels(["Key", "Value"]) # self.tree_widget.header().setSectionResizeMode(QtWidgets.QHeaderView.Stretch) # # root_item = QtWidgets.QTreeWidgetItem(["Root"]) # self.recurse_jdata(jdata, root_item) # self.tree_widget.addTopLevelItem(root_item) # # # Add table to layout # # layout = QtWidgets.QHBoxLayout() # layout.addWidget(self.tree_widget) # # # Group box # # gbox = QtWidgets.QGroupBox() # gbox.setLayout(layout) # # layout2 = QtWidgets.QVBoxLayout() # layout2.addLayout(find_layout) # layout2.addWidget(gbox) # # self.setLayout(layout2) # # def make_find_ui(self): # # # Text box # self.find_box = QtWidgets.QLineEdit() # self.find_box.returnPressed.connect(self.find_button_clicked) # # # Find Button # find_button = QtWidgets.QPushButton("Find") # find_button.clicked.connect(self.find_button_clicked) # # layout = QtWidgets.QHBoxLayout() # layout.addWidget(self.find_box) # layout.addWidget(find_button) # # return layout # # def find_button_clicked(self): # # find_str = self.find_box.text() # # # Very common for use to click Find on empty string # if find_str == "": # return # # # New search string # if find_str != self.find_str: # self.find_str = find_str # self.found_titem_list = self.text_to_titem.find(self.find_str) # self.found_idx = 0 # else: # item_num = len(self.found_titem_list) # self.found_idx = (self.found_idx + 1) % item_num # # self.tree_widget.setCurrentItem(self.found_titem_list[self.found_idx]) # # # def recurse_jdata(self, jdata, tree_widget): # # if isinstance(jdata, dict): # for key, val in jdata.items(): # self.tree_add_row(key, val, tree_widget) # elif isinstance(jdata, list): # for i, val in enumerate(jdata): # key = str(i) # self.tree_add_row(key, val, tree_widget) # else: # print("This should never be reached!") # # def tree_add_row(self, key, val, tree_widget): # # text_list = [] # # if isinstance(val, dict) or isinstance(val, list): # text_list.append(key) # row_item = QtWidgets.QTreeWidgetItem([key]) # self.recurse_jdata(val, row_item) # else: # text_list.append(key) # text_list.append(str(val)) # row_item = QtWidgets.QTreeWidgetItem([key, str(val)]) # # tree_widget.addChild(row_item) # self.text_to_titem.append(text_list, row_item) # # # class JsonViewer(QtWidgets.QMainWindow): # # def 
__init__(self, D): # super(JsonViewer, self).__init__() # # # fpath = sys.argv[1] # json_view = JsonView(D) # # self.setCentralWidget(json_view) # self.setWindowTitle("JSON Viewer") # self.show() # # def keyPressEvent(self, e): # if e.key() == QtCore.Qt.Key_Escape: # self.close() # # # def viewLipd(D): # print("Opening the viewer. This may take a few moments...") # qt_app = QtWidgets.QApplication(sys.argv) # json_viewer = JsonViewer(D) # qt_app.exec_() # # # # # if "__main__" == __name__: # # main()
# -*- coding: utf-8 -*- # Copyright 2007-2011 The HyperSpy developers # # This file is part of HyperSpy. # # HyperSpy is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # HyperSpy is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with HyperSpy. If not, see <http://www.gnu.org/licenses/>. #~ import traits.api as t #~ import traitsui.api as tui #~ from traitsui.menu import OKButton, CancelButton, Action, MenuBar, Menu #~ #~ from hyperspy import io #~ from egerton_quantification import EgertonPanel #~ #~ import messages #~ import tools #~ from hyperspy.misc.interactive_ns import interactive_ns #~ from hyperspy import Release #~ #~ # File ################################################### #~ #~ #~ #~ class LoadSpectrum(t.HasTraits): #~ sp_file = t.File() #~ traits_view = tui.View( #~ tui.Item('sp_file', label = 'File'), #~ buttons = [OKButton, CancelButton], #~ kind = 'modal') #~ #~ file_view = tui.View( #~ tui.Group( #~ tui.Group( #~ tui.Item('f.hl_file'), #~ ), #~ tui.Group( #~ tui.Item('micro.name', ), #~ tui.Item('micro.alpha'), #~ tui. Item('micro.beta'), #~ tui. Item('micro.E0'), #~ ), #~ tui.Group( #~ tui. Item('micro.name', ), #~ tui. Item('micro.alpha'), #~ tui. Item('micro.beta'), #~ tui. Item('micro.E0'),)) #~ ) #~ #~ # ~ # Actions #~ #~ menu_file_open = Action(name = "Open...", #~ action = "open_file", #~ toolip = "Open an SI file",) #~ #~ menu_file_save = Action(name = "Save...", #~ action = "save_file", #~ toolip = "Save an SI file",) #~ #~ menu_help_about = Action(name = "About", #~ action = "notification_about", #~ toolip = "",) #~ #~ menu_tools_calibrate = Action(name = "Calibrate", #~ action = "calibrate", #~ toolip = "",) #~ #~ menu_tools_egerton_quantification = Action(name = "Egerton Quantification", #~ action = "egerton_quantification", #~ toolip = "",) #~ #~ menu_tools_savitzky_golay = Action(name = "Savitzky-Golay Smoothing", #~ action = "savitzky_golay", #~ toolip = "",) #~ #~ menu_tools_lowess = Action(name = "Lowess Smoothing", #~ action = "lowess", #~ toolip = "",) #~ #~ menu_edit_acquisition_parameters = Action(name = "Acquisition parameters", #~ action = "edit_acquisition_parameters", #~ toolip = "",) #~ # ~ # Menu #~ menubar = MenuBar() #~ # ~ # File #~ Menu(menu_file_open, menu_file_save, name = 'File') #~ # ~ # Edit #~ # ~ # Open #~ #~ # Main Window ################################################## #~ #~ #~ class MainWindowHandler(tui.Handler): #~ #~ def open_file(self, *args, **kw): #~ S = LoadSpectrum() #~ S.edit_traits() #~ if S.sp_file is not t.Undefined: #~ s = io.load(S.sp_file) #~ s.plot() #~ interactive_ns['s'] = s #~ #~ def save_file(self, *args, **kw): #~ pass #~ #~ #~ def egerton_quantification(self, *args, **kw): #~ if interactive_ns.has_key('s'): #~ ep = EgertonPanel(interactive_ns['s']) #~ ep.edit_traits() #~ #~ def savitzky_golay(self, *args, **kw): #~ sg = tools.SavitzkyGolay() #~ sg.edit_traits() #~ #~ def lowess(self, *args, **kw): #~ lw = tools.Lowess() #~ lw.edit_traits() #~ #~ def notification_about(self,*args, **kw): #~ messages.information(Release.info) #~ #~ def calibrate(self,*args, **kw): #~ w = 
tools.Calibration() #~ w.edit_traits() #~ #~ class MainWindow(t.HasTraits): #~ # ~ traits_view = tui.View( #view contents, # ~ # ..., #~ handler = MainWindowHandler(), #~ title = 'HyperSpy', #~ width = 500, #~ menubar = MenuBar( #~ Menu( #~ menu_file_open, #~ menu_file_save, #~ name = 'File'), #~ Menu( #~ menu_edit_acquisition_parameters, #~ name = 'Edit'), #~ Menu( #~ menu_tools_calibrate, #~ menu_tools_egerton_quantification, #~ menu_tools_savitzky_golay, #~ menu_tools_lowess, #~ name = 'Tools'), #~ Menu( #~ menu_help_about, #~ name = 'Help'), #~ )) #~ #~ #~ if __name__ == '__main__': #~ window = MainWindow() #~ window.configure_traits()
""" =============== Array Internals =============== Internal organization of numpy arrays ===================================== It helps to understand a bit about how numpy arrays are handled under the covers to help understand numpy better. This section will not go into great detail. Those wishing to understand the full details are referred to Travis Oliphant's book "Guide to Numpy". Numpy arrays consist of two major components, the raw array data (from now on, referred to as the data buffer), and the information about the raw array data. The data buffer is typically what people think of as arrays in C or Fortran, a contiguous (and fixed) block of memory containing fixed sized data items. Numpy also contains a significant set of data that describes how to interpret the data in the data buffer. This extra information contains (among other things): 1) The basic data element's size in bytes 2) The start of the data within the data buffer (an offset relative to the beginning of the data buffer). 3) The number of dimensions and the size of each dimension 4) The separation between elements for each dimension (the 'stride'). This does not have to be a multiple of the element size 5) The byte order of the data (which may not be the native byte order) 6) Whether the buffer is read-only 7) Information (via the dtype object) about the interpretation of the basic data element. The basic data element may be as simple as a int or a float, or it may be a compound object (e.g., struct-like), a fixed character field, or Python object pointers. 8) Whether the array is to interpreted as C-order or Fortran-order. This arrangement allow for very flexible use of arrays. One thing that it allows is simple changes of the metadata to change the interpretation of the array buffer. Changing the byteorder of the array is a simple change involving no rearrangement of the data. The shape of the array can be changed very easily without changing anything in the data buffer or any data copying at all Among other things that are made possible is one can create a new array metadata object that uses the same data buffer to create a new view of that data buffer that has a different interpretation of the buffer (e.g., different shape, offset, byte order, strides, etc) but shares the same data bytes. Many operations in numpy do just this such as slices. Other operations, such as transpose, don't move data elements around in the array, but rather change the information about the shape and strides so that the indexing of the array changes, but the data in the doesn't move. Typically these new versions of the array metadata but the same data buffer are new 'views' into the data buffer. There is a different ndarray object, but it uses the same data buffer. This is why it is necessary to force copies through use of the .copy() method if one really wants to make a new and independent copy of the data buffer. New views into arrays mean the the object reference counts for the data buffer increase. Simply doing away with the original array object will not remove the data buffer if other views of it still exist. Multidimensional Array Indexing Order Issues ============================================ What is the right way to index multi-dimensional arrays? Before you jump to conclusions about the one and true way to index multi-dimensional arrays, it pays to understand why this is a confusing issue. 
This section will try to explain in detail how numpy indexing works and why we adopt the convention we do for images, and when it may be appropriate to adopt other conventions.

The first thing to understand is that there are two conflicting conventions for indexing 2-dimensional arrays. Matrix notation uses the first index to indicate which row is being selected and the second index to indicate which column is selected. This is opposite the geometrically oriented convention for images, where people generally think the first index represents x position (i.e., column) and the second represents y position (i.e., row). This alone is the source of much confusion; matrix-oriented users and image-oriented users expect two different things with regard to indexing.

The second issue to understand is how indices correspond to the order the array is stored in memory. In Fortran the first index is the most rapidly varying index when moving through the elements of a two dimensional array as it is stored in memory. If you adopt the matrix convention for indexing, then this means the matrix is stored one column at a time (since the first index moves to the next row as it changes). Thus Fortran is considered a Column-major language. C has just the opposite convention. In C, the last index changes most rapidly as one moves through the array as stored in memory. Thus C is a Row-major language. The matrix is stored by rows. Note that in both cases it presumes that the matrix convention for indexing is being used, i.e., for both Fortran and C, the first index is the row. Note this convention implies that the indexing convention is invariant and that the data order changes to keep that so.

But that's not the only way to look at it. Suppose one has large two-dimensional arrays (images or matrices) stored in data files. Suppose the data are stored by rows rather than by columns. If we are to preserve our index convention (whether matrix or image) that means that depending on the language we use, we may be forced to reorder the data if it is read into memory to preserve our indexing convention. For example if we read row-ordered data into memory without reordering, it will match the matrix indexing convention for C, but not for Fortran. Conversely, it will match the image indexing convention for Fortran, but not for C. For C, if one is using data stored in row order, and one wants to preserve the image index convention, the data must be reordered when reading into memory.

In the end, which you do for Fortran or C depends on which is more important, not reordering data or preserving the indexing convention. For large images, reordering data is potentially expensive, and often the indexing convention is inverted to avoid that.

The situation with numpy makes this issue yet more complicated. The internal machinery of numpy arrays is flexible enough to accept any ordering of indices. One can simply reorder indices by manipulating the internal stride information for arrays without reordering the data at all. Numpy will know how to map the new index order to the data without moving the data.

So if this is true, why not choose the index order that matches what you most expect? In particular, why not define row-ordered images to use the image convention? (This is sometimes referred to as the Fortran convention vs the C convention, thus the 'C' and 'FORTRAN' order options for array ordering in numpy.) The drawback of doing this is potential performance penalties.
It's common to access the data sequentially, either implicitly in array operations or explicitly by looping over rows of an image. When that is done, then the data will be accessed in non-optimal order. As the first index is incremented, what is actually happening is that elements spaced far apart in memory are being sequentially accessed, with usually poor memory access speeds. For example, consider a two dimensional image 'im' defined so that im[0, 10] represents the value at x=0, y=10. To be consistent with usual Python behavior, im[0] would then represent a column at x=0. Yet that data would be spread over the whole array since the data are stored in row order. Despite the flexibility of numpy's indexing, it can't really paper over the fact that basic operations are rendered inefficient because of data order, or that getting contiguous subarrays is still awkward (e.g., im[:,0] for the first row, vs im[0]). Thus one can't use an idiom such as "for row in im"; "for col in im" does work, but doesn't yield contiguous column data.

As it turns out, numpy is smart enough when dealing with ufuncs to determine which index is the most rapidly varying one in memory and uses that for the innermost loop. Thus for ufuncs there is no large intrinsic advantage to either approach in most cases. On the other hand, use of .flat with a FORTRAN-ordered array will lead to non-optimal memory access as adjacent elements in the flattened array (iterator, actually) are not contiguous in memory.

Indeed, the fact is that Python indexing on lists and other sequences naturally leads to an outside-to-inside ordering (the first index gets the largest grouping, the next the next largest, and the last gets the smallest element). Since image data are normally stored by rows, this corresponds to position within rows being the last item indexed.

If you do want to use Fortran ordering, realize that there are two approaches to consider: 1) accept that the first index is just not the most rapidly changing in memory and have all your I/O routines reorder your data when going from memory to disk or vice versa, or 2) use numpy's mechanism for mapping the first index to the most rapidly varying data. We recommend the former if possible. The disadvantage of the latter is that many of numpy's functions will yield arrays without Fortran ordering unless you are careful to use the 'order' keyword. Doing this would be highly inconvenient. Otherwise we recommend simply learning to reverse the usual order of indices when accessing elements of an array. Granted, it goes against the grain, but it is more in line with Python semantics and the natural order of the data.
"""
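The stride machinery described above can be observed directly from Python. A small doctest-style sketch (our illustration): transposing swaps the strides and shares the data buffer instead of moving any data. ::

    >>> import numpy as np
    >>> a = np.arange(6, dtype=np.int32).reshape(2, 3)
    >>> a.strides      # C order: 12 bytes to the next row, 4 to the next column
    (12, 4)
    >>> b = a.T        # transpose: same data buffer, swapped strides
    >>> b.strides
    (4, 12)
    >>> b.base is a    # b is a view; no data was copied
    True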
"""ctypes-based OpenGL wrapper for Python This is the PyOpenGL 3.x tree, it attempts to provide a largely compatible API for code written with the PyOpenGL 2.x series using the ctypes foreign function interface system. Configuration Variables: There are a few configuration variables in this top-level module. Applications should be the only code that tweaks these variables, mid-level libraries should not take it upon themselves to disable/enable features at this level. The implication there is that your library code should be able to work with any of the valid configurations available with these sets of flags. Further, once any entry point has been loaded, the variables can no longer be updated. The OpenGL._confligflags module imports the variables from this location, and once that import occurs the flags should no longer be changed. ERROR_CHECKING -- if set to a False value before importing any OpenGL.* libraries will completely disable error-checking. This can dramatically improve performance, but makes debugging far harder. This is intended to be turned off *only* in a production environment where you *know* that your code is entirely free of situations where you use exception-handling to handle error conditions, i.e. where you are explicitly checking for errors everywhere they can occur in your code. Default: True ERROR_LOGGING -- If True, then wrap array-handler functions with error-logging operations so that all exceptions will be reported to log objects in OpenGL.logs, note that this means you will get lots of error logging whenever you have code that tests by trying something and catching an error, this is intended to be turned on only during development so that you can see why something is failing. Errors are normally logged to the OpenGL.errors logger. Only triggers if ERROR_CHECKING is True Default: False ERROR_ON_COPY -- if set to a True value before importing the numpy/lists support modules, will cause array operations to raise OpenGL.error.CopyError if the operation would cause a data-copy in order to make the passed data-type match the target data-type. This effectively disables all list/tuple array support, as they are inherently copy-based. This feature allows for optimisation of your application. It should only be enabled during testing stages to prevent raising errors on recoverable conditions at run-time. Default: False CONTEXT_CHECKING -- if set to True, PyOpenGL will wrap *every* GL and GLU call with a check to see if there is a valid context. If there is no valid context then will throw OpenGL.errors.NoContext. This is an *extremely* slow check and is not enabled by default, intended to be enabled in order to track down (wrong) code that uses GL/GLU entry points before the context has been initialized (something later Linux GLs are very picky about). Default: False STORE_POINTERS -- if set to True, PyOpenGL array operations will attempt to store references to pointers which are being passed in order to prevent memory-access failures if the pointed-to-object goes out of scope. This behaviour is primarily intended to allow temporary arrays to be created without causing memory errors, thus it is trading off performance for safety. To use this flag effectively, you will want to first set ERROR_ON_COPY to True and eliminate all cases where you are copying arrays. Copied arrays *will* segfault your application deep within the GL if you disable this feature! 
    Once you have eliminated all copying of arrays in your application, you will further need to be sure that all arrays which are passed to the GL are stored for at least the time period for which they are active in the GL. That is, you must be sure that your array objects live at least until they are no longer bound in the GL. This is something you need to confirm by thinking about your application's structure.

    When you are sure your arrays won't cause seg-faults, you can set STORE_POINTERS=False in your application and enjoy a (slight) speed up.

    Note: this flag is *only* observed when ERROR_ON_COPY == True, as a safety measure to prevent pointless segfaults.

    Default: True

WARN_ON_FORMAT_UNAVAILABLE -- if True, generates logging-module warn-level events when a FormatHandler plugin is not loadable (with traceback).

    Default: False

FULL_LOGGING -- if True, then wrap functions with logging operations which report each call along with its arguments to the OpenGL.calltrace logger at the INFO level. This is *extremely* slow. You should *not* enable this in production code!

    You will need to have a logging configuration (e.g. logging.basicConfig()) call in your top-level script to see the results of the logging.

    Default: False

ALLOW_NUMPY_SCALARS -- if True, we will wrap all GLint/GLfloat call conversions with wrappers that allow for passing numpy scalar values.

    Note that this is experimental, *not* reliable, and very slow!

    Note that byte/char types are not wrapped.

    Default: False

UNSIGNED_BYTE_IMAGES_AS_STRING -- if True, we will return GL_UNSIGNED_BYTE image-data as strings, instead of arrays, for glReadPixels and glGetTexImage.

    Default: True

FORWARD_COMPATIBLE_ONLY -- only include OpenGL 3.1 compatible entry points. Note that this will generally break most PyOpenGL code that hasn't been explicitly made "legacy free" via a significant rewrite.

    Default: False

SIZE_1_ARRAY_UNPACK -- if True, unpack size-1 arrays to be scalar values, as done in PyOpenGL 1.5 -> 3.0.0; that is, if a glGenLists( 1 ) is done, return a uint rather than an array of uints.

    Default: True

USE_ACCELERATE -- if True, attempt to use the OpenGL_accelerate package to provide Cython-coded accelerators for core wrapping operations.

    Default: True

MODULE_ANNOTATIONS -- if True, attempt to annotate alternates() and constants to track in which module they are defined (only useful for the documentation-generation passes, really).

    Default: False
"""
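As a usage sketch based on the flag descriptions above (the flag names are the ones documented here; the particular production tuning is only an example), configuration must happen on the top-level module before any OpenGL.* import::

    import OpenGL
    # Set flags before any OpenGL.* submodule is imported; once an entry
    # point has been loaded, the flags can no longer be changed.
    OpenGL.ERROR_CHECKING = False   # production only: disables error checks
    OpenGL.ERROR_LOGGING = False
    OpenGL.ERROR_ON_COPY = True     # surface implicit data copies as errors
    from OpenGL.GL import *         # entry points now honor the flags above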
""" Define a simple format for saving numpy arrays to disk with the full information about them. The ``.npy`` format is the standard binary file format in NumPy for persisting a *single* arbitrary NumPy array on disk. The format stores all of the shape and dtype information necessary to reconstruct the array correctly even on another machine with a different architecture. The format is designed to be as simple as possible while achieving its limited goals. The ``.npz`` format is the standard format for persisting *multiple* NumPy arrays on disk. A ``.npz`` file is a zip file containing multiple ``.npy`` files, one for each array. Capabilities ------------ - Can represent all NumPy arrays including nested record arrays and object arrays. - Represents the data in its native binary form. - Supports Fortran-contiguous arrays directly. - Stores all of the necessary information to reconstruct the array including shape and dtype on a machine of a different architecture. Both little-endian and big-endian arrays are supported, and a file with little-endian numbers will yield a little-endian array on any machine reading the file. The types are described in terms of their actual sizes. For example, if a machine with a 64-bit C "long int" writes out an array with "long ints", a reading machine with 32-bit C "long ints" will yield an array with 64-bit integers. - Is straightforward to reverse engineer. Datasets often live longer than the programs that created them. A competent developer should be able create a solution in his preferred programming language to read most ``.npy`` files that he has been given without much documentation. - Allows memory-mapping of the data. See `open_memmep`. - Can be read from a filelike stream object instead of an actual file. - Stores object arrays, i.e. arrays containing elements that are arbitrary Python objects. Files with object arrays are not to be mmapable, but can be read and written to disk. Limitations ----------- - Arbitrary subclasses of numpy.ndarray are not completely preserved. Subclasses will be accepted for writing, but only the array data will be written out. A regular numpy.ndarray object will be created upon reading the file. .. warning:: Due to limitations in the interpretation of structured dtypes, dtypes with fields with empty names will have the names replaced by 'f0', 'f1', etc. Such arrays will not round-trip through the format entirely accurately. The data is intact; only the field names will differ. We are working on a fix for this. This fix will not require a change in the file format. The arrays with such structures can still be saved and restored, and the correct dtype may be restored by using the ``loadedarray.view(correct_dtype)`` method. File extensions --------------- We recommend using the ``.npy`` and ``.npz`` extensions for files saved in this format. This is by no means a requirement; applications may wish to use these file formats but use an extension specific to the application. In the absence of an obvious alternative, however, we suggest using ``.npy`` and ``.npz``. Version numbering ----------------- The version numbering of these formats is independent of NumPy version numbering. If the format is upgraded, the code in `numpy.io` will still be able to read and write Version 1.0 files. Format Version 1.0 ------------------ The first 6 bytes are a magic string: exactly ``\\x93NUMPY``. The next 1 byte is an unsigned byte: the major version number of the file format, e.g. ``\\x01``. 
The next 1 byte is an unsigned byte: the minor version number of the file format, e.g. ``\\x00``. Note: the version of the file format is not tied to the version of the numpy package. The next 2 bytes form a little-endian unsigned short int: the length of the header data HEADER_LEN. The next HEADER_LEN bytes form the header data describing the array's format. It is an ASCII string which contains a Python literal expression of a dictionary. It is terminated by a newline (``\\n``) and padded with spaces (``\\x20``) to make the total length of ``magic string + 4 + HEADER_LEN`` be evenly divisible by 16 for alignment purposes. The dictionary contains three keys: "descr" : dtype.descr An object that can be passed as an argument to the `numpy.dtype` constructor to create the array's dtype. "fortran_order" : bool Whether the array data is Fortran-contiguous or not. Since Fortran-contiguous arrays are a common form of non-C-contiguity, we allow them to be written directly to disk for efficiency. "shape" : tuple of int The shape of the array. For repeatability and readability, the dictionary keys are sorted in alphabetic order. This is for convenience only. A writer SHOULD implement this if possible. A reader MUST NOT depend on this. Following the header comes the array data. If the dtype contains Python objects (i.e. ``dtype.hasobject is True``), then the data is a Python pickle of the array. Otherwise the data is the contiguous (either C- or Fortran-, depending on ``fortran_order``) bytes of the array. Consumers can figure out the number of bytes by multiplying the number of elements given by the shape (noting that ``shape=()`` means there is 1 element) by ``dtype.itemsize``. Notes ----- The ``.npy`` format, including reasons for creating it and a comparison of alternatives, is described fully in the "npy-format" NEP. """
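As an illustration of the byte layout above, here is a minimal sketch of a hand-rolled Version 1.0 header reader. The reference implementation lives in ``numpy.lib.format``; this sketch is only a direct reading of the spec:

    import ast
    import struct

    def read_npy_header_v1(f):
        # 6-byte magic string, then major/minor version bytes
        assert f.read(6) == b'\x93NUMPY'
        major, minor = struct.unpack('BB', f.read(2))
        assert (major, minor) == (1, 0)
        # little-endian unsigned short: length of the header data
        (header_len,) = struct.unpack('<H', f.read(2))
        # The header is an ASCII Python-literal dict, newline-terminated and
        # space-padded so that 6 + 4 + header_len is a multiple of 16.
        header = ast.literal_eval(f.read(header_len).decode('ascii'))
        return header['descr'], header['fortran_order'], header['shape']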
""" Basic functions used by several sub-packages and useful to have in the main name-space. Type Handling ------------- ================ =================== iscomplexobj Test for complex object, scalar result isrealobj Test for real object, scalar result iscomplex Test for complex elements, array result isreal Test for real elements, array result imag Imaginary part real Real part real_if_close Turns complex number with tiny imaginary part to real isneginf Tests for negative infinity, array result isposinf Tests for positive infinity, array result isnan Tests for nans, array result isinf Tests for infinity, array result isfinite Tests for finite numbers, array result isscalar True if argument is a scalar nan_to_num Replaces NaN's with 0 and infinities with large numbers cast Dictionary of functions to force cast to each type common_type Determine the minimum common type code for a group of arrays mintypecode Return minimal allowed common typecode. ================ =================== Index Tricks ------------ ================ =================== mgrid Method which allows easy construction of N-d 'mesh-grids' ``r_`` Append and construct arrays: turns slice objects into ranges and concatenates them, for 2d arrays appends rows. index_exp Konrad Hinsen's index_expression class instance which can be useful for building complicated slicing syntax. ================ =================== Useful Functions ---------------- ================ =================== select Extension of where to multiple conditions and choices extract Extract 1d array from flattened array according to mask insert Insert 1d array of values into Nd array according to mask linspace Evenly spaced samples in linear space logspace Evenly spaced samples in logarithmic space fix Round x to nearest integer towards zero mod Modulo mod(x,y) = x % y except keeps sign of y amax Array maximum along axis amin Array minimum along axis ptp Array max-min along axis cumsum Cumulative sum along axis prod Product of elements along axis cumprod Cumluative product along axis diff Discrete differences along axis angle Returns angle of complex argument unwrap Unwrap phase along given axis (1-d algorithm) sort_complex Sort a complex-array (based on real, then imaginary) trim_zeros Trim the leading and trailing zeros from 1D array. vectorize A class that wraps a Python function taking scalar arguments into a generalized function which can handle arrays of arguments using the broadcast rules of numerix Python. ================ =================== Shape Manipulation ------------------ ================ =================== squeeze Return a with length-one dimensions removed. atleast_1d Force arrays to be > 1D atleast_2d Force arrays to be > 2D atleast_3d Force arrays to be > 3D vstack Stack arrays vertically (row on row) hstack Stack arrays horizontally (column on column) column_stack Stack 1D arrays as columns into 2D array dstack Stack arrays depthwise (along third dimension) split Divide array into a list of sub-arrays hsplit Split into columns vsplit Split into rows dsplit Split along third dimension ================ =================== Matrix (2D Array) Manipulations ------------------------------- ================ =================== fliplr 2D array with columns flipped flipud 2D array with rows flipped rot90 Rotate a 2D array a multiple of 90 degrees eye Return a 2D array with ones down a given diagonal diag Construct a 2D array from a vector, or return a given diagonal from a 2D array. 
mat Construct a Matrix bmat Build a Matrix from blocks ================ =================== Polynomials ----------- ================ =================== poly1d A one-dimensional polynomial class poly Return polynomial coefficients from roots roots Find roots of polynomial given coefficients polyint Integrate polynomial polyder Differentiate polynomial polyadd Add polynomials polysub Substract polynomials polymul Multiply polynomials polydiv Divide polynomials polyval Evaluate polynomial at given argument ================ =================== Import Tricks ------------- ================ =================== ppimport Postpone module import until trying to use it ppimport_attr Postpone module import until trying to use its attribute ppresolve Import postponed module and return it. ================ =================== Machine Arithmetics ------------------- ================ =================== machar_single Single precision floating point arithmetic parameters machar_double Double precision floating point arithmetic parameters ================ =================== Threading Tricks ---------------- ================ =================== ParallelExec Execute commands in parallel thread. ================ =================== 1D Array Set Operations ----------------------- Set operations for 1D numeric arrays based on sort() function. ================ =================== ediff1d Array difference (auxiliary function). unique Unique elements of an array. intersect1d Intersection of 1D arrays with unique elements. setxor1d Set exclusive-or of 1D arrays with unique elements. in1d Test whether elements in a 1D array are also present in another array. union1d Union of 1D arrays with unique elements. setdiff1d Set difference of 1D arrays with unique elements. ================ =================== """
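A quick usage sketch (an illustration, not part of the original listing) exercising a few of the names above:

    >>> import numpy as np
    >>> p = np.poly1d([1, -3, 2])           # the polynomial x**2 - 3*x + 2
    >>> np.roots(p)
    array([ 2.,  1.])
    >>> np.union1d([1, 2, 3], [2, 3, 4])    # 1D array set operation
    array([1, 2, 3, 4])
    >>> np.trim_zeros(np.array([0, 0, 1, 2, 0]))
    array([1, 2])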
#!/usr/bin/env python # ***** BEGIN LICENSE BLOCK ***** # Version: MPL 1.1/GPL 2.0/LGPL 2.1 # # The contents of this file are subject to the Mozilla Public License Version # 1.1 (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # http://www.mozilla.org/MPL/ # # Software distributed under the License is distributed on an "AS IS" basis, # WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License # for the specific language governing rights and limitations under the # License. # # The Original Code is font utility code. # # The Initial Developer of the Original Code is Mozilla Corporation. # Portions created by the Initial Developer are Copyright (C) 2009 # the Initial Developer. All Rights Reserved. # # Contributor(s): # NAME <EMAIL> # # Alternatively, the contents of this file may be used under the terms of # either the GNU General Public License Version 2 or later (the "GPL"), or # the GNU Lesser General Public License Version 2.1 or later (the "LGPL"), # in which case the provisions of the GPL or the LGPL are applicable instead # of those above. If you wish to allow use of your version of this file only # under the terms of either the GPL or the LGPL, and not to allow others to # use your version of this file under the terms of the MPL, indicate your # decision by deleting the provisions above and replace them with the notice # and other provisions required by the GPL or the LGPL. If you do not delete # the provisions above, a recipient may use your version of this file under # the terms of any one of the MPL, the GPL or the LGPL. # # ***** END LICENSE BLOCK ***** # eotlitetool.py - create EOT version of OpenType font for use with IE # # Usage: eotlitetool.py [-o output-filename] font1 [font2 ...] # # OpenType file structure # http://www.microsoft.com/typography/otspec/otff.htm # # Types: # # BYTE 8-bit unsigned integer. # CHAR 8-bit signed integer. # USHORT 16-bit unsigned integer. # SHORT 16-bit signed integer. # ULONG 32-bit unsigned integer. # Fixed 32-bit signed fixed-point number (16.16) # LONGDATETIME Date represented in number of seconds since 12:00 midnight, January 1, 1904. The value is represented as a signed 64-bit integer. # # SFNT Header # # Fixed sfnt version // 0x00010000 for version 1.0. # USHORT numTables // Number of tables. # USHORT searchRange // (Maximum power of 2 <= numTables) x 16. # USHORT entrySelector // Log2(maximum power of 2 <= numTables). # USHORT rangeShift // NumTables x 16-searchRange. # # Table Directory # # ULONG tag // 4-byte identifier. # ULONG checkSum // CheckSum for this table. # ULONG offset // Offset from beginning of TrueType font file. # ULONG length // Length of this table.
# # OS/2 Table (Version 4) # # USHORT version // 0x0004 # SHORT xAvgCharWidth # USHORT usWeightClass # USHORT usWidthClass # USHORT fsType # SHORT ySubscriptXSize # SHORT ySubscriptYSize # SHORT ySubscriptXOffset # SHORT ySubscriptYOffset # SHORT ySuperscriptXSize # SHORT ySuperscriptYSize # SHORT ySuperscriptXOffset # SHORT ySuperscriptYOffset # SHORT yStrikeoutSize # SHORT yStrikeoutPosition # SHORT sFamilyClass # BYTE panose[10] # ULONG ulUnicodeRange1 // Bits 0-31 # ULONG ulUnicodeRange2 // Bits 32-63 # ULONG ulUnicodeRange3 // Bits 64-95 # ULONG ulUnicodeRange4 // Bits 96-127 # CHAR achVendID[4] # USHORT fsSelection # USHORT usFirstCharIndex # USHORT usLastCharIndex # SHORT sTypoAscender # SHORT sTypoDescender # SHORT sTypoLineGap # USHORT usWinAscent # USHORT usWinDescent # ULONG ulCodePageRange1 // Bits 0-31 # ULONG ulCodePageRange2 // Bits 32-63 # SHORT sxHeight # SHORT sCapHeight # USHORT usDefaultChar # USHORT usBreakChar # USHORT usMaxContext # # # The Naming Table is organized as follows: # # [name table header] # [name records] # [string data] # # Name Table Header # # USHORT format // Format selector (=0). # USHORT count // Number of name records. # USHORT stringOffset // Offset to start of string storage (from start of table). # # Name Record # # USHORT platformID // Platform ID. # USHORT encodingID // Platform-specific encoding ID. # USHORT languageID // Language ID. # USHORT nameID // Name ID. # USHORT length // String length (in bytes). # USHORT offset // String offset from start of storage area (in bytes). # # head Table # # Fixed tableVersion // Table version number 0x00010000 for version 1.0. # Fixed fontRevision // Set by font manufacturer. # ULONG checkSumAdjustment // To compute: set it to 0, sum the entire font as ULONG, then store 0xB1B0AFBA - sum. # ULONG magicNumber // Set to 0x5F0F3CF5. # USHORT flags # USHORT unitsPerEm // Valid range is from 16 to 16384. This value should be a power of 2 for fonts that have TrueType outlines. # LONGDATETIME created // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer # LONGDATETIME modified // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer # SHORT xMin // For all glyph bounding boxes. # SHORT yMin # SHORT xMax # SHORT yMax # USHORT macStyle # USHORT lowestRecPPEM // Smallest readable size in pixels. # SHORT fontDirectionHint # SHORT indexToLocFormat // 0 for short offsets, 1 for long. # SHORT glyphDataFormat // 0 for current format. # # # # Embedded OpenType (EOT) file format # http://www.w3.org/Submission/EOT/ # # EOT version 0x00020001 # # An EOT font consists of a header with the original OpenType font # appended at the end. Most of the data in the EOT header is simply a # copy of data from specific tables within the font data. The exceptions # are the 'Flags' field and the root string name field. The root string # is a set of names indicating domains for which the font data can be # used. A null root string implies the font data can be used anywhere. # The EOT header is in little-endian byte order but the font data remains # in big-endian order as specified by the OpenType spec. 
# # Overall structure: # # [EOT header] # [EOT name records] # [font data] # # EOT header # # ULONG eotSize // Total structure length in bytes (including string and font data) # ULONG fontDataSize // Length of the OpenType font (FontData) in bytes # ULONG version // Version number of this format - 0x00020001 # ULONG flags // Processing Flags (0 == no special processing) # BYTE fontPANOSE[10] // OS/2 Table panose # BYTE charset // DEFAULT_CHARSET (0x01) # BYTE italic // 0x01 if ITALIC in OS/2 Table fsSelection is set, 0 otherwise # ULONG weight // OS/2 Table usWeightClass # USHORT fsType // OS/2 Table fsType (specifies embedding permission flags) # USHORT magicNumber // Magic number for EOT file - 0x504C. # ULONG unicodeRange1 // OS/2 Table ulUnicodeRange1 # ULONG unicodeRange2 // OS/2 Table ulUnicodeRange2 # ULONG unicodeRange3 // OS/2 Table ulUnicodeRange3 # ULONG unicodeRange4 // OS/2 Table ulUnicodeRange4 # ULONG codePageRange1 // OS/2 Table ulCodePageRange1 # ULONG codePageRange2 // OS/2 Table ulCodePageRange2 # ULONG checkSumAdjustment // head Table CheckSumAdjustment # ULONG reserved[4] // Reserved - must be 0 # USHORT padding1 // Padding - must be 0 # # EOT name records # # USHORT FamilyNameSize // Font family name size in bytes # BYTE FamilyName[FamilyNameSize] // Font family name (name ID = 1), little-endian UTF-16 # USHORT Padding2 // Padding - must be 0 # # USHORT StyleNameSize // Style name size in bytes # BYTE StyleName[StyleNameSize] // Style name (name ID = 2), little-endian UTF-16 # USHORT Padding3 // Padding - must be 0 # # USHORT VersionNameSize // Version name size in bytes # BYTE VersionName[VersionNameSize] // Version name (name ID = 5), little-endian UTF-16 # USHORT Padding4 // Padding - must be 0 # # USHORT FullNameSize // Full name size in bytes # BYTE FullName[FullNameSize] // Full name (name ID = 4), little-endian UTF-16 # USHORT Padding5 // Padding - must be 0 # # USHORT RootStringSize // Root string size in bytes # BYTE RootString[RootStringSize] // Root string, little-endian UTF-16
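# As a concrete illustration of the layouts above, a sketch (not part of
# eotlitetool.py itself) of reading the SFNT header and table directory with
# the struct module; '>' selects the big-endian byte order that OpenType
# mandates for font data:
#
# import struct
#
# def read_table_directory(data):
#     # SFNT header: Fixed sfnt version plus four USHORT fields (12 bytes)
#     sfnt_version, num_tables, search_range, entry_selector, range_shift = \
#         struct.unpack('>IHHHH', data[:12])
#     tables = {}
#     for i in range(num_tables):
#         # Table directory entry: tag, checkSum, offset, length (16 bytes)
#         tag, checksum, offset, length = \
#             struct.unpack_from('>4sIII', data, 12 + i * 16)
#         tables[tag] = (checksum, offset, length)
#     return tables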
""" Module for running iterative sweeps of sensitivity analyses on surrogate models over a range of sample sizes to test for confidence interval convergence. OVERVIEW ========== This module provides functions to run sample size convergence tests on surrogate model sensitivity analysis. So what does all that BS actually mean? The module GenSAData.py is used to run sensitivity analysis on surrogate models generated and trained by surrogate_modeler.py. In order to test the dependence on certain input parameters or combinations of input parameters, we have to generate an array of input parameter samples across ranges of each input parameter to the model (see the text file DB10_ParamRanges.txt for an example of how parameter ranges are specified). The outputs of GenSAData is a sensitivity of the model to each input parameter and, (if second-order sensitivities are being calculated), the sensitivities of the model to every combination of input parameters. With each sensitivity, there is an associated confidence interval, or error margin. Generally, the lower the confidence interval, the higher the certainty of the sensitivity value is, which is desirable for finding with parameter or combination of parameters the model is most sensitive to. One of the important hyperparameters we can tweak when running sensitivity analysis is the number of samples generated. Theoretically, the more samples we generate to run SA on, the more certain we will be of the sensitivities. The purpose of this model, then, is to run lots and lots of iterations of GenSAData (well, as many as you specify with n_iter), collect the sensitivity confidences for each iteration, and plot them as a function of n_iter. We should expect to see confidence intervals approaching zero as n_iter -> infinity, which would indicate the error margin of our sensitivities is decreasing with increasing sample size. USAGE ========== The main function of this module is run_convergence_tests(). As the name suggests, this function does the core work of sweeping over a range of sample sizes and calls run_sensitivity_analysis() from GenSAData for each sample size, and writes the first-order (S1) and total (ST) sensitivity confidences for each parameter to a text csv. The sample range tested on is specified by min_nsamples and max_nsamples. The number of samples tested in that range is set by n_iter, and the sample list Nsample_list is generated using a simple linear interpolation across the specified range. The next most important function in this module is plot_convergence which, as the name once again suggests, plots the results of the convergence test. This function plots an array of confidence values across the sample space they were generated with. It expects an MxN array, where the rows are the confidence values calculated with a certain number of samples. M is the number of iterations run_convergence_tests() did, so basically n_iter. N is twice the number of input parameters to the model, plus one for the first column which contains the number of samples for that iteration. The rest of the columns contains the confidence intervals. If there are P input parameters, column 0 is the sample number, columns 1:P are the first order sensitivity confidences, and columns P+1:2*P+1 are the total confidences. In the future, if second order sensitivity confidences are added, then there will be 2*P + P^2 +1 columns. 
Columns 1:P will still be first order confidences, columns P+1 : P^2 will be second-order confidences, and columns P^2+1:P^2+P+1 will be the total confidences. In order to run the convergence tests and plot the results, you can setup a main() function to first run the convergence test by calling run_convergence_test(), then reading the confidences csv that is generated into an array that is then passed to plot_convergence(). TIPS AND TRICKS ========== Some tricky aspects of using these functions: - The surrogate model run_convergence_test() actuall tests is NOT set in this module. Whichever model GenSAData is set to generate sensitivities for is what model run_convergence_tests() will run on. You can specify which file to load the surrogate model from in the project root config file. - The name of the file run_test_convergence() saves its results to is prefixed with the string set with the variable "outfile" and postfixed with """
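# A minimal driver sketch for the workflow described above.  The function
# names follow the docstring (run_convergence_tests, plot_convergence); the
# argument values and the 'confidences.csv' filename are illustrative
# assumptions, not this module's actual interface.
#
# import numpy as np
#
# def main():
#     # Sweep sample sizes and write the S1/ST confidences to a CSV
#     run_convergence_tests(min_nsamples=100, max_nsamples=10000, n_iter=20)
#     # Read the MxN confidences array back (column 0 is the sample count)
#     confidences = np.loadtxt('confidences.csv', delimiter=',')
#     plot_convergence(confidences)
#
# if __name__ == '__main__':
#     main()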
# Configuration file for jupyter-notebook. #------------------------------------------------------------------------------ # Configurable configuration #------------------------------------------------------------------------------ #------------------------------------------------------------------------------ # SingletonConfigurable configuration #------------------------------------------------------------------------------ # A configurable that only allows one instance. # # This class is for classes that should only have one instance of itself or # *any* subclass. To create and retrieve such a class use the # :meth:`SingletonConfigurable.instance` method. #------------------------------------------------------------------------------ # Application configuration #------------------------------------------------------------------------------ # This is an application. # The date format used by logging formatters for %(asctime)s # c.Application.log_datefmt = '%Y-%m-%d %H:%M:%S' # The Logging format template # c.Application.log_format = '[%(name)s]%(highlevel)s %(message)s' # Set the log level by value or name. # c.Application.log_level = 30 #------------------------------------------------------------------------------ # JupyterApp configuration #------------------------------------------------------------------------------ # Base class for Jupyter applications # Answer yes to any prompts. # c.JupyterApp.answer_yes = False # Full path of a config file. # c.JupyterApp.config_file = u'' # Generate default config file. # c.JupyterApp.generate_config = False # Specify a config file to load. # c.JupyterApp.config_file_name = u'' #------------------------------------------------------------------------------ # NotebookApp configuration #------------------------------------------------------------------------------ # The number of additional ports to try if the specified port is not available. # c.NotebookApp.port_retries = 50 # Extra variables to supply to jinja templates when rendering. # c.NotebookApp.jinja_template_vars = traitlets.Undefined # The url for MathJax.js. # c.NotebookApp.mathjax_url = '' # Supply extra arguments that will be passed to Jinja environment. # c.NotebookApp.jinja_environment_options = traitlets.Undefined # The IP address the notebook server will listen on. # c.NotebookApp.ip = 'localhost' # DEPRECATED use base_url # c.NotebookApp.base_project_url = '/' # Python modules to load as notebook server extensions. This is an experimental # API, and may change in future releases. # c.NotebookApp.server_extensions = traitlets.Undefined # The random bytes used to secure cookies. By default this is a new random # number every time you start the Notebook. Set it to a value in a config file # to enable logins to persist across server sessions. # # Note: Cookie secrets should be kept private, do not share config files with # cookie_secret stored in plaintext (you can read the value from a file). # c.NotebookApp.cookie_secret = '' # The default URL to redirect to from `/` # c.NotebookApp.default_url = '/tree' # The port the notebook server will listen on. # c.NotebookApp.port = 8888 # The kernel spec manager class to use. Should be a subclass of # `jupyter_client.kernelspec.KernelSpecManager`. # # The API of KernelSpecManager is provisional and might change without warning # between this version of IPython and the next stable one.
# c.NotebookApp.kernel_spec_manager_class = <class 'jupyter_client.kernelspec.KernelSpecManager'> # Set the Access-Control-Allow-Origin header # # Use '*' to allow any origin to access your server. # # Takes precedence over allow_origin_pat. # c.NotebookApp.allow_origin = '' # The notebook manager class to use. # c.NotebookApp.contents_manager_class = <class 'notebook.services.contents.filemanager.FileContentsManager'> # Use a regular expression for the Access-Control-Allow-Origin header # # Requests from an origin matching the expression will get replies with: # # Access-Control-Allow-Origin: origin # # where `origin` is the origin of the request. # # Ignored if allow_origin is set. # c.NotebookApp.allow_origin_pat = '' # The full path to an SSL/TLS certificate file. # c.NotebookApp.certfile = u'' # The logout handler class to use. # c.NotebookApp.logout_handler_class = <class 'notebook.auth.logout.LogoutHandler'> # The base URL for the notebook server. # # Leading and trailing slashes can be omitted, and will automatically be added. # c.NotebookApp.base_url = '/' # The session manager class to use. # c.NotebookApp.session_manager_class = <class 'notebook.services.sessions.sessionmanager.SessionManager'> # Supply overrides for the tornado.web.Application that the IPython notebook # uses. # c.NotebookApp.tornado_settings = traitlets.Undefined # The directory to use for notebooks and kernels. # c.NotebookApp.notebook_dir = u'' # The kernel manager class to use. # c.NotebookApp.kernel_manager_class = <class 'notebook.services.kernels.kernelmanager.MappingKernelManager'> # The file where the cookie secret is stored. # c.NotebookApp.cookie_secret_file = u'' # Supply SSL options for the tornado HTTPServer. See the tornado docs for # details. # c.NotebookApp.ssl_options = traitlets.Undefined # # c.NotebookApp.file_to_run = '' # DISABLED: use %pylab or %matplotlib in the notebook to enable matplotlib. # c.NotebookApp.pylab = 'disabled' # Whether to enable MathJax for typesetting math/TeX # # MathJax is the javascript library IPython uses to render math/LaTeX. It is # very large, so you may want to disable it if you have a slow internet # connection, or for offline use of the notebook. # # When disabled, equations etc. will appear as their untransformed TeX source. # c.NotebookApp.enable_mathjax = True # Reraise exceptions encountered loading server extensions? # c.NotebookApp.reraise_server_extension_failures = False # The base URL for websockets, if it differs from the HTTP server (hint: it # almost certainly doesn't). # # Should be in the form of an HTTP origin: ws[s]://hostname[:port] # c.NotebookApp.websocket_url = '' # Whether to open in a browser after starting. The specific browser used is # platform dependent and determined by the python standard library `webbrowser` # module, unless it is overridden using the --browser (NotebookApp.browser) # configuration option.
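# For example, a minimal set of overrides for a server reachable from other
# machines might look like the following (values are illustrative only; `c`
# is the config object provided to this file; uncomment a line to apply it):
#
# c.NotebookApp.ip = '0.0.0.0'
# c.NotebookApp.port = 9999
# c.NotebookApp.open_browser = False
# c.NotebookApp.base_url = '/notebook/'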
""" ============================= Subclassing ndarray in python ============================= Credits ------- This page is based with thanks on the wiki page on subclassing by NAME - http://www.scipy.org/Subclasses. Introduction ------------ Subclassing ndarray is relatively simple, but it has some complications compared to other Python objects. On this page we explain the machinery that allows you to subclass ndarray, and the implications for implementing a subclass. ndarrays and object creation ============================ Subclassing ndarray is complicated by the fact that new instances of ndarray classes can come about in three different ways. These are: #. Explicit constructor call - as in ``MySubClass(params)``. This is the usual route to Python instance creation. #. View casting - casting an existing ndarray as a given subclass #. New from template - creating a new instance from a template instance. Examples include returning slices from a subclassed array, creating return types from ufuncs, and copying arrays. See :ref:`new-from-template` for more details The last two are characteristics of ndarrays - in order to support things like array slicing. The complications of subclassing ndarray are due to the mechanisms numpy has to support these latter two routes of instance creation. .. _view-casting: View casting ------------ *View casting* is the standard ndarray mechanism by which you take an ndarray of any subclass, and return a view of the array as another (specified) subclass: >>> import numpy as np >>> # create a completely useless ndarray subclass >>> class C(np.ndarray): pass >>> # create a standard ndarray >>> arr = np.zeros((3,)) >>> # take a view of it, as our useless subclass >>> c_arr = arr.view(C) >>> type(c_arr) <class 'C'> .. _new-from-template: Creating new from template -------------------------- New instances of an ndarray subclass can also come about by a very similar mechanism to :ref:`view-casting`, when numpy finds it needs to create a new instance from a template instance. The most obvious place this has to happen is when you are taking slices of subclassed arrays. For example: >>> v = c_arr[1:] >>> type(v) # the view is of type 'C' <class 'C'> >>> v is c_arr # but it's a new instance False The slice is a *view* onto the original ``c_arr`` data. So, when we take a view from the ndarray, we return a new ndarray, of the same class, that points to the data in the original. There are other points in the use of ndarrays where we need such views, such as copying arrays (``c_arr.copy()``), creating ufunc output arrays (see also :ref:`array-wrap`), and reducing methods (like ``c_arr.mean()``. Relationship of view casting and new-from-template -------------------------------------------------- These paths both use the same machinery. We make the distinction here, because they result in different input to your methods. Specifically, :ref:`view-casting` means you have created a new instance of your array type from any potential subclass of ndarray. :ref:`new-from-template` means you have created a new instance of your class from a pre-existing instance, allowing you - for example - to copy across attributes that are particular to your subclass. Implications for subclassing ---------------------------- If we subclass ndarray, we need to deal not only with explicit construction of our array type, but also :ref:`view-casting` or :ref:`new-from-template`. Numpy has the machinery to do this, and this machinery that makes subclassing slightly non-standard. 
There are two aspects to the machinery that ndarray uses to support views and new-from-template in subclasses.

The first is the use of the ``ndarray.__new__`` method for the main work of object initialization, rather than the more usual ``__init__`` method. The second is the use of the ``__array_finalize__`` method to allow subclasses to clean up after the creation of views and new instances from templates.

A brief Python primer on ``__new__`` and ``__init__``
=====================================================

``__new__`` is a standard Python method, and, if present, is called before ``__init__`` when we create a class instance. See the `python __new__ documentation <http://docs.python.org/reference/datamodel.html#object.__new__>`_ for more detail.

For example, consider the following Python code:

.. testcode::

  class C(object):
      def __new__(cls, *args):
          print 'Cls in __new__:', cls
          print 'Args in __new__:', args
          return object.__new__(cls, *args)

      def __init__(self, *args):
          print 'type(self) in __init__:', type(self)
          print 'Args in __init__:', args

meaning that we get:

>>> c = C('hello')
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
type(self) in __init__: <class 'C'>
Args in __init__: ('hello',)

When we call ``C('hello')``, the ``__new__`` method gets its own class as first argument, and the passed argument, which is the string ``'hello'``. After python calls ``__new__``, it usually (see below) calls our ``__init__`` method, with the output of ``__new__`` as the first argument (now a class instance), and the passed arguments following.

As you can see, the object can be initialized in the ``__new__`` method or the ``__init__`` method, or both, and in fact ndarray does not have an ``__init__`` method, because all the initialization is done in the ``__new__`` method.

Why use ``__new__`` rather than just the usual ``__init__``? Because in some cases, as for ndarray, we want to be able to return an object of some other class. Consider the following:

.. testcode::

  class D(C):
      def __new__(cls, *args):
          print 'D cls is:', cls
          print 'D args in __new__:', args
          return C.__new__(C, *args)

      def __init__(self, *args):
          # we never get here
          print 'In D __init__'

meaning that:

>>> obj = D('hello')
D cls is: <class 'D'>
D args in __new__: ('hello',)
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
>>> type(obj)
<class 'C'>

The definition of ``C`` is the same as before, but for ``D``, the ``__new__`` method returns an instance of class ``C`` rather than ``D``. Note that the ``__init__`` method of ``D`` does not get called. In general, when the ``__new__`` method returns an object of class other than the class in which it is defined, the ``__init__`` method of that class is not called.

This is how subclasses of the ndarray class are able to return views that preserve the class type. When taking a view, the standard ndarray machinery creates the new ndarray object with something like::

  obj = ndarray.__new__(subtype, shape, ...

where ``subtype`` is the subclass. Thus the returned view is of the same class as the subclass, rather than being of class ``ndarray``.

That solves the problem of returning views of the same type, but now we have a new problem. The machinery of ndarray can set the class this way, in its standard methods for taking views, but the ndarray ``__new__`` method knows nothing of what we have done in our own ``__new__`` method in order to set attributes, and so on. (Aside - why not call ``obj = subtype.__new__(...`` then?
Because we may not have a ``__new__`` method with the same call signature).

The role of ``__array_finalize__``
==================================

``__array_finalize__`` is the mechanism that numpy provides to allow subclasses to handle the various ways that new instances get created. Remember that subclass instances can come about in these three ways:

#. explicit constructor call (``obj = MySubClass(params)``). This will call the usual sequence of ``MySubClass.__new__`` then (if it exists) ``MySubClass.__init__``.
#. :ref:`view-casting`
#. :ref:`new-from-template`

Our ``MySubClass.__new__`` method only gets called in the case of the explicit constructor call, so we can't rely on ``MySubClass.__new__`` or ``MySubClass.__init__`` to deal with the view casting and new-from-template. It turns out that ``MySubClass.__array_finalize__`` *does* get called for all three methods of object creation, so this is where our object creation housekeeping usually goes.

* For the explicit constructor call, our subclass will need to create a new ndarray instance of its own class. In practice this means that we, the authors of the code, will need to make a call to ``ndarray.__new__(MySubClass,...)``, or do view casting of an existing array (see below)
* For view casting and new-from-template, the equivalent of ``ndarray.__new__(MySubClass,...`` is called, at the C level.

The arguments that ``__array_finalize__`` receives differ for the three methods of instance creation above. The following code allows us to look at the call sequences and arguments:

.. testcode::

  import numpy as np

  class C(np.ndarray):
      def __new__(cls, *args, **kwargs):
          print 'In __new__ with class %s' % cls
          return np.ndarray.__new__(cls, *args, **kwargs)

      def __init__(self, *args, **kwargs):
          # in practice you probably will not need or want an __init__
          # method for your subclass
          print 'In __init__ with class %s' % self.__class__

      def __array_finalize__(self, obj):
          print 'In array_finalize:'
          print '   self type is %s' % type(self)
          print '   obj type is %s' % type(obj)

Now:

>>> # Explicit constructor
>>> c = C((10,))
In __new__ with class <class 'C'>
In array_finalize:
   self type is <class 'C'>
   obj type is <type 'NoneType'>
In __init__ with class <class 'C'>
>>> # View casting
>>> a = np.arange(10)
>>> cast_a = a.view(C)
In array_finalize:
   self type is <class 'C'>
   obj type is <type 'numpy.ndarray'>
>>> # Slicing (example of new-from-template)
>>> cv = c[:1]
In array_finalize:
   self type is <class 'C'>
   obj type is <class 'C'>

The signature of ``__array_finalize__`` is::

  def __array_finalize__(self, obj):

``ndarray.__new__`` passes ``__array_finalize__`` the new object, of our own class (``self``) as well as the object from which the view has been taken (``obj``). As you can see from the output above, the ``self`` is always a newly created instance of our subclass, and the type of ``obj`` differs for the three instance creation methods:

* When called from the explicit constructor, ``obj`` is ``None``
* When called from view casting, ``obj`` can be an instance of any subclass of ndarray, including our own.
* When called in new-from-template, ``obj`` is another instance of our own subclass, that we might use to update the new ``self`` instance.

Because ``__array_finalize__`` is the only method that always sees new instances being created, it is the sensible place to fill in instance defaults for new object attributes, among other tasks.

This may be clearer with an example.
Simple example - adding an extra attribute to ndarray
-----------------------------------------------------

.. testcode::

  import numpy as np

  class InfoArray(np.ndarray):

      def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
                  strides=None, order=None, info=None):
          # Create the ndarray instance of our type, given the usual
          # ndarray input arguments.  This will call the standard
          # ndarray constructor, but return an object of our type.
          # It also triggers a call to InfoArray.__array_finalize__
          obj = np.ndarray.__new__(subtype, shape, dtype, buffer, offset,
                                   strides, order)
          # set the new 'info' attribute to the value passed
          obj.info = info
          # Finally, we must return the newly created object:
          return obj

      def __array_finalize__(self, obj):
          # ``self`` is a new object resulting from
          # ndarray.__new__(InfoArray, ...), therefore it only has
          # attributes that the ndarray.__new__ constructor gave it -
          # i.e. those of a standard ndarray.
          #
          # We could have got to the ndarray.__new__ call in 3 ways:
          # From an explicit constructor - e.g. InfoArray():
          #    obj is None
          #    (we're in the middle of the InfoArray.__new__
          #    constructor, and self.info will be set when we return to
          #    InfoArray.__new__)
          if obj is None: return
          # From view casting - e.g. arr.view(InfoArray):
          #    obj is arr
          #    (type(obj) can be InfoArray)
          # From new-from-template - e.g. infoarr[:3]
          #    type(obj) is InfoArray
          #
          # Note that it is here, rather than in the __new__ method,
          # that we set the default value for 'info', because this
          # method sees all creation of default objects - with the
          # InfoArray.__new__ constructor, but also with
          # arr.view(InfoArray).
          self.info = getattr(obj, 'info', None)
          # We do not need to return anything

Using the object looks like this:

>>> obj = InfoArray(shape=(3,)) # explicit constructor
>>> type(obj)
<class 'InfoArray'>
>>> obj.info is None
True
>>> obj = InfoArray(shape=(3,), info='information')
>>> obj.info
'information'
>>> v = obj[1:] # new-from-template - here - slicing
>>> type(v)
<class 'InfoArray'>
>>> v.info
'information'
>>> arr = np.arange(10)
>>> cast_arr = arr.view(InfoArray) # view casting
>>> type(cast_arr)
<class 'InfoArray'>
>>> cast_arr.info is None
True

This class isn't very useful, because it has the same constructor as the bare ndarray object, including passing in buffers and shapes and so on. We would probably prefer the constructor to be able to take an already formed ndarray from the usual numpy calls to ``np.array`` and return an object.

Slightly more realistic example - attribute added to existing array
-------------------------------------------------------------------

Here is a class that takes a standard ndarray that already exists, casts as our type, and adds an extra attribute.

.. testcode::

  import numpy as np

  class RealisticInfoArray(np.ndarray):

      def __new__(cls, input_array, info=None):
          # Input array is an already formed ndarray instance
          # We first cast to be our class type
          obj = np.asarray(input_array).view(cls)
          # add the new attribute to the created instance
          obj.info = info
          # Finally, we must return the newly created object:
          return obj

      def __array_finalize__(self, obj):
          # see InfoArray.__array_finalize__ for comments
          if obj is None: return
          self.info = getattr(obj, 'info', None)

So:

>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'

.. _array-wrap:

``__array_wrap__`` for ufuncs
-----------------------------

``__array_wrap__`` gets called at the end of numpy ufuncs and other numpy functions, to allow a subclass to set the type of the return value and update attributes and metadata. Let's show how this works with an example. First we make the same subclass as above, but with a different name and some print statements:

.. testcode::

  import numpy as np

  class MySubClass(np.ndarray):

      def __new__(cls, input_array, info=None):
          obj = np.asarray(input_array).view(cls)
          obj.info = info
          return obj

      def __array_finalize__(self, obj):
          print 'In __array_finalize__:'
          print '   self is %s' % repr(self)
          print '   obj is %s' % repr(obj)
          if obj is None: return
          self.info = getattr(obj, 'info', None)

      def __array_wrap__(self, out_arr, context=None):
          print 'In __array_wrap__:'
          print '   self is %s' % repr(self)
          print '   arr is %s' % repr(out_arr)
          # then just call the parent
          return np.ndarray.__array_wrap__(self, out_arr, context)

We run a ufunc on an instance of our new array:

>>> obj = MySubClass(np.arange(5), info='spam')
In __array_finalize__:
   self is MySubClass([0, 1, 2, 3, 4])
   obj is array([0, 1, 2, 3, 4])
>>> arr2 = np.arange(5)+1
>>> ret = np.add(arr2, obj)
In __array_wrap__:
   self is MySubClass([0, 1, 2, 3, 4])
   arr is array([1, 3, 5, 7, 9])
In __array_finalize__:
   self is MySubClass([1, 3, 5, 7, 9])
   obj is MySubClass([0, 1, 2, 3, 4])
>>> ret
MySubClass([1, 3, 5, 7, 9])
>>> ret.info
'spam'

Note that the ufunc (``np.add``) has called the ``__array_wrap__`` method of the input with the highest ``__array_priority__`` value, in this case ``MySubClass.__array_wrap__``, with arguments ``self`` as ``obj``, and ``out_arr`` as the (ndarray) result of the addition. In turn, the default ``__array_wrap__`` (``ndarray.__array_wrap__``) has cast the result to class ``MySubClass``, and called ``__array_finalize__`` - hence the copying of the ``info`` attribute. This has all happened at the C level.

But, we could do anything we wanted:

.. testcode::

  class SillySubClass(np.ndarray):

      def __array_wrap__(self, arr, context=None):
          return 'I lost your data'

>>> arr1 = np.arange(5)
>>> obj = arr1.view(SillySubClass)
>>> arr2 = np.arange(5)
>>> ret = np.multiply(obj, arr2)
>>> ret
'I lost your data'

So, by defining a specific ``__array_wrap__`` method for our subclass, we can tweak the output from ufuncs. The ``__array_wrap__`` method requires ``self``, then an argument - which is the result of the ufunc - and an optional parameter *context*. This parameter is returned by some ufuncs as a 3-element tuple: (name of the ufunc, arguments of the ufunc, domain of the ufunc). ``__array_wrap__`` should return an instance of its containing class. See the masked array subclass for an implementation.

In addition to ``__array_wrap__``, which is called on the way out of the ufunc, there is also an ``__array_prepare__`` method which is called on the way into the ufunc, after the output arrays are created but before any computation has been performed. The default implementation does nothing but pass through the array. ``__array_prepare__`` should not attempt to access the array data or resize the array; it is intended for setting the output array type, updating attributes and metadata, and performing any checks based on the input that may be desired before computation begins. Like ``__array_wrap__``, ``__array_prepare__`` must return an ndarray or subclass thereof or raise an error.
Extra gotchas - custom ``__del__`` methods and ndarray.base
-----------------------------------------------------------

One of the problems that ndarray solves is keeping track of memory ownership of ndarrays and their views. Consider the case where we have created an ndarray, ``arr`` and have taken a slice with ``v = arr[1:]``. The two objects are looking at the same memory. Numpy keeps track of where the data came from for a particular array or view, with the ``base`` attribute:

>>> # A normal ndarray, that owns its own data
>>> arr = np.zeros((4,))
>>> # In this case, base is None
>>> arr.base is None
True
>>> # We take a view
>>> v1 = arr[1:]
>>> # base now points to the array that it derived from
>>> v1.base is arr
True
>>> # Take a view of a view
>>> v2 = v1[1:]
>>> # base points to the view it derived from
>>> v2.base is v1
True

In general, if the array owns its own memory, as for ``arr`` in this case, then ``arr.base`` will be None - there are some exceptions to this - see the numpy book for more details. The ``base`` attribute is useful in being able to tell whether we have a view or the original array. This in turn can be useful if we need to know whether or not to do some specific cleanup when the subclassed array is deleted. For example, we may only want to do the cleanup if the original array is deleted, but not the views. For an example of how this can work, have a look at the ``memmap`` class in ``numpy.core``. """
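To make the cleanup pattern above concrete, here is a small sketch (an illustration, not from the original page) of a subclass whose ``__del__`` does its cleanup only for the memory-owning array, not for views:

.. testcode::

  import numpy as np

  class CleanupArray(np.ndarray):

      def __del__(self):
          # Views carry a non-None ``base``; only the owning array
          # should release whatever external resources it holds.
          if self.base is None:
              print 'releasing resources of the owning array'

>>> arr = CleanupArray((4,))   # owns its memory: arr.base is None
>>> v = arr[1:]                # a view: v.base is arr
>>> del v                      # no message - it was only a view
>>> del arr
releasing resources of the owning array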
#! /usr/bin/python3 # ################################################################# ## Copyright (c) 2013 NAME <EMAIL> # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # ################################################################# # Ver:0.1.8 / Date 07.02.2016 new module based on: # ht3_dispatch.py Ver:IP_ADDRESS/ Date 04.03.2015 # and # ht3_decode.py Ver:IP_ADDRESS/ Date 04.03.2015 # additional modifications are: # 'new telegram 88 00 2a 00' added (TR-10F) # 'new telegram 90 08 1a 00' added (FW/FR1x0) # 'new telegram 88 10 16 00' added (heater-details) # message:88001800 with length:=31 or 33 # message:9000ff00 with length:=29 added # 'IPM_LastschaltmodulMsg' buscoding assigned to a0...af # message:9000ff00 with length:=29 and 32 Byte added # HeizkreisMsg_ID677_max33byte added for CWxyz handling # Heating-circuit assignment corrected (6F...72). # Ver:IP_ADDRESS/ Date 10.02.2016 Method: HeizkreisMsg_ID677_max33byte() # 'Tsoll_HK' assigned to Byte12 # 'Vbetriebs_art' assigned to Byte27 # Ver:IP_ADDRESS/ Date 22.02.2016 'IPM_LastschaltmodulWWModeMsg()'. # fix for wrong msg-detection: 'a1 00 34 00' # 'IPM_LastschaltmodulMsg()' fixed wrong HK-circuit assignment # Ver:0.1.9 / Date 27.04.2016 'HeizkreisMsg_ID677_max33byte()' corrected. # 'msg_88002a00()' operating-mode assignment deleted # 'HeizkreisMsg_FW100_200_xybyte()' dropped. # Ver:0.1.10 / Date 22.08.2016 Redesign of module for MsgID message handling. # 'SolarMsg()' modified for msgID866 and # for msgID910 # Display- and cause-code added for error-reports # 'SolarMsg()' corrected # message-ID handling added # "Vbetriebs_art" replaced with "Voperation_status" # Heater active time changed from minutes to hours. # Ver:0.2 / Date 29.08.2016 Function doc added, minor debug-text changes. # Ver:0.2.2 / Date 14.10.2016 CW100:=157 added on MsgID 2. # no decoding of 'T_Soll' anymore at msgID:=53. # 'msgID_259_Solar()' modified, 3 bytes for solar-pump runtime. # 'msgID_51_DomesticHotWater()' added. # 'msgID_296_ErrorMsg()' added. # 'msgID_188_Hybrid()' corrected. # Ver:0.2.3 / Date 17.01.2017 'msgID_52_DomesticHotWater()' corrected. # MsgID:24 for device:(10)hex added, see: # https://www.mikrocontroller.net/topic/324673#4864801 # Ver:0.3 / Date 19.06.2017 now debug-output in _search_4_transceiver_message(). # controller- and bus-type handling added.
# Ver:0.3.1 / Date 18.01.2019 modified 'deviceadr_2msgid_blacklist[]' # modified msgID_22_Heaterdevice() # msgID_26_HeatingCircuit_HK1() updated and renamed # msgID_30_T_HydraulischeWeiche() THydr.Device added # msgID_35_HydraulischeWeiche() updated with hc_pump # msgID_51_DomesticHotWater() saving data on target:= 0 only # msgID_596_HeatingCircuit() added # msgID_597_HeatingCircuit() added # msgID_697_704_HeatingCircuit() Tsoll_HK # msgID_727_734_HeatingCircuit() added # msgID_737_744_HeatingCircuit() added # msgID_747_754_HeatingCircuit() Frostdanger added # _IsRequestCall() and _RequestCall() added # msgID_569_HeatingCircuit() added # msgID_52_DomesticHotWater() modified for temp-sensor failure detection # msgID_30_T_HydraulischeWeiche() modified for 'ch_Thdrylic_switch' # using logitem HG:V_spare2. # msgID_25_Heaterdevice() 'ch_Thdrylic_switch' (V_spare2) added. # update of display errorcode-handling. # modified 'msgID_260_Solar()', SO:V_spare_1 & SO:V_spare_2 used now. # SO:'V_ertrag_sum_calc' used now for SO:'Ertrag Summe' # msgID_52_DomesticHotWater() storing data only if valid. # Ver:0.3.2 / Date 08.09.2020 modified 'msgID_51_DomesticHotWater' for storing 'T-Soll max' # no decoding if length is <= 8 for EMS2 heating-circuit messages. # 'msgID_52_DomesticHotWater()' modified handling for not-available sensor values. #################################################################
## ## ## Apache License ## Version 2.0, January 2004 ## http://www.apache.org/licenses/ ## ## TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION ## ## 1. Definitions. ## ## "License" shall mean the terms and conditions for use, reproduction, ## and distribution as defined by Sections 1 through 9 of this document. ## ## "Licensor" shall mean the copyright owner or entity authorized by ## the copyright owner that is granting the License. ## ## "Legal Entity" shall mean the union of the acting entity and all ## other entities that control, are controlled by, or are under common ## control with that entity. For the purposes of this definition, ## "control" means (i) the power, direct or indirect, to cause the ## direction or management of such entity, whether by contract or ## otherwise, or (ii) ownership of fifty percent (50%) or more of the ## outstanding shares, or (iii) beneficial ownership of such entity. ## ## "You" (or "Your") shall mean an individual or Legal Entity ## exercising permissions granted by this License. ## ## "Source" form shall mean the preferred form for making modifications, ## including but not limited to software source code, documentation ## source, and configuration files. ## ## "Object" form shall mean any form resulting from mechanical ## transformation or translation of a Source form, including but ## not limited to compiled object code, generated documentation, ## and conversions to other media types. ## ## "Work" shall mean the work of authorship, whether in Source or ## Object form, made available under the License, as indicated by a ## copyright notice that is included in or attached to the work ## (an example is provided in the Appendix below). ## ## "Derivative Works" shall mean any work, whether in Source or Object ## form, that is based on (or derived from) the Work and for which the ## editorial revisions, annotations, elaborations, or other modifications ## represent, as a whole, an original work of authorship. For the purposes ## of this License, Derivative Works shall not include works that remain ## separable from, or merely link (or bind by name) to the interfaces of, ## the Work and Derivative Works thereof. ## ## "Contribution" shall mean any work of authorship, including ## the original version of the Work and any modifications or additions ## to that Work or Derivative Works thereof, that is intentionally ## submitted to Licensor for inclusion in the Work by the copyright owner ## or by an individual or Legal Entity authorized to submit on behalf of ## the copyright owner. For the purposes of this definition, "submitted" ## means any form of electronic, verbal, or written communication sent ## to the Licensor or its representatives, including but not limited to ## communication on electronic mailing lists, source code control systems, ## and issue tracking systems that are managed by, or on behalf of, the ## Licensor for the purpose of discussing and improving the Work, but ## excluding communication that is conspicuously marked or otherwise ## designated in writing by the copyright owner as "Not a Contribution." ## ## "Contributor" shall mean Licensor and any individual or Legal Entity ## on behalf of whom a Contribution has been received by Licensor and ## subsequently incorporated within the Work. ## ## 2. Grant of Copyright License. 
Subject to the terms and conditions of ## this License, each Contributor hereby grants to You a perpetual, ## worldwide, non-exclusive, no-charge, royalty-free, irrevocable ## copyright license to reproduce, prepare Derivative Works of, ## publicly display, publicly perform, sublicense, and distribute the ## Work and such Derivative Works in Source or Object form. ## ## 3. Grant of Patent License. Subject to the terms and conditions of ## this License, each Contributor hereby grants to You a perpetual, ## worldwide, non-exclusive, no-charge, royalty-free, irrevocable ## (except as stated in this section) patent license to make, have made, ## use, offer to sell, sell, import, and otherwise transfer the Work, ## where such license applies only to those patent claims licensable ## by such Contributor that are necessarily infringed by their ## Contribution(s) alone or by combination of their Contribution(s) ## with the Work to which such Contribution(s) was submitted. If You ## institute patent litigation against any entity (including a ## cross-claim or counterclaim in a lawsuit) alleging that the Work ## or a Contribution incorporated within the Work constitutes direct ## or contributory patent infringement, then any patent licenses ## granted to You under this License for that Work shall terminate ## as of the date such litigation is filed. ## ## 4. Redistribution. You may reproduce and distribute copies of the ## Work or Derivative Works thereof in any medium, with or without ## modifications, and in Source or Object form, provided that You ## meet the following conditions: ## ## (a) You must give any other recipients of the Work or ## Derivative Works a copy of this License; and ## ## (b) You must cause any modified files to carry prominent notices ## stating that You changed the files; and ## ## (c) You must retain, in the Source form of any Derivative Works ## that You distribute, all copyright, patent, trademark, and ## attribution notices from the Source form of the Work, ## excluding those notices that do not pertain to any part of ## the Derivative Works; and ## ## (d) If the Work includes a "NOTICE" text file as part of its ## distribution, then any Derivative Works that You distribute must ## include a readable copy of the attribution notices contained ## within such NOTICE file, excluding those notices that do not ## pertain to any part of the Derivative Works, in at least one ## of the following places: within a NOTICE text file distributed ## as part of the Derivative Works; within the Source form or ## documentation, if provided along with the Derivative Works; or, ## within a display generated by the Derivative Works, if and ## wherever such third-party notices normally appear. The contents ## of the NOTICE file are for informational purposes only and ## do not modify the License. You may add Your own attribution ## notices within Derivative Works that You distribute, alongside ## or as an addendum to the NOTICE text from the Work, provided ## that such additional attribution notices cannot be construed ## as modifying the License. ## ## You may add Your own copyright statement to Your modifications and ## may provide additional or different license terms and conditions ## for use, reproduction, or distribution of Your modifications, or ## for any such Derivative Works as a whole, provided Your use, ## reproduction, and distribution of the Work otherwise complies with ## the conditions stated in this License. ## ## 5. Submission of Contributions. 
Unless You explicitly state otherwise, ## any Contribution intentionally submitted for inclusion in the Work ## by You to the Licensor shall be under the terms and conditions of ## this License, without any additional terms or conditions. ## Notwithstanding the above, nothing herein shall supersede or modify ## the terms of any separate license agreement you may have executed ## with Licensor regarding such Contributions. ## ## 6. Trademarks. This License does not grant permission to use the trade ## names, trademarks, service marks, or product names of the Licensor, ## except as required for reasonable and customary use in describing the ## origin of the Work and reproducing the content of the NOTICE file. ## ## 7. Disclaimer of Warranty. Unless required by applicable law or ## agreed to in writing, Licensor provides the Work (and each ## Contributor provides its Contributions) on an "AS IS" BASIS, ## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or ## implied, including, without limitation, any warranties or conditions ## of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A ## PARTICULAR PURPOSE. You are solely responsible for determining the ## appropriateness of using or redistributing the Work and assume any ## risks associated with Your exercise of permissions under this License. ## ## 8. Limitation of Liability. In no event and under no legal theory, ## whether in tort (including negligence), contract, or otherwise, ## unless required by applicable law (such as deliberate and grossly ## negligent acts) or agreed to in writing, shall any Contributor be ## liable to You for damages, including any direct, indirect, special, ## incidental, or consequential damages of any character arising as a ## result of this License or out of the use or inability to use the ## Work (including but not limited to damages for loss of goodwill, ## work stoppage, computer failure or malfunction, or any and all ## other commercial damages or losses), even if such Contributor ## has been advised of the possibility of such damages. ## ## 9. Accepting Warranty or Additional Liability. While redistributing ## the Work or Derivative Works thereof, You may choose to offer, ## and charge a fee for, acceptance of support, warranty, indemnity, ## or other liability obligations and/or rights consistent with this ## License. However, in accepting such obligations, You may act only ## on Your own behalf and on Your sole responsibility, not on behalf ## of any other Contributor, and only if You agree to indemnify, ## defend, and hold each Contributor harmless for any liability ## incurred by, or claims asserted against, such Contributor by reason ## of your accepting any such warranty or additional liability. ## ## END OF TERMS AND CONDITIONS ## ## APPENDIX: How to apply the Apache License to your work. ## ## To apply the Apache License to your work, attach the following ## boilerplate notice, with the fields enclosed by brackets "[]" ## replaced with your own identifying information. (Don't include ## the brackets!) The text should be enclosed in the appropriate ## comment syntax for the file format. We also recommend that a ## file or class name and description of purpose be included on the ## same "printed page" as the copyright notice for easier ## identification within third-party archives. ## ## Copyright [yyyy] [name of copyright owner] ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. 
## You may obtain a copy of the License at
##
##     http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
##
#-------------------------------------------------------------------------------
# Name:        mask.py
# Purpose:     This file supports quality assurance for sensor frames.
#
# Author:      NAME
#
# Created:     18.09.2014
# Copyright:   (c) GrafR 2014
# Licence:     Apache 2.0
#-------------------------------------------------------------------------------
#!/usr/bin/env python
"""============== Array indexing ============== Array indexing refers to any use of the square brackets ([]) to index array values. There are many options to indexing, which give numpy indexing great power, but with power comes some complexity and the potential for confusion. This section is just an overview of the various options and issues related to indexing. Aside from single element indexing, the details on most of these options are to be found in related sections. Assignment vs referencing ========================= Most of the following examples show the use of indexing when referencing data in an array. The examples work just as well when assigning to an array. See the section at the end for specific examples and explanations on how assignments work. Single element indexing ======================= Single element indexing for a 1-D array is what one expects. It work exactly like that for other standard Python sequences. It is 0-based, and accepts negative indices for indexing from the end of the array. :: >>> x = np.arange(10) >>> x[2] 2 >>> x[-2] 8 Unlike lists and tuples, numpy arrays support multidimensional indexing for multidimensional arrays. That means that it is not necessary to separate each dimension's index into its own set of square brackets. :: >>> x.shape = (2,5) # now x is 2-dimensional >>> x[1,3] 8 >>> x[1,-1] 9 Note that if one indexes a multidimensional array with fewer indices than dimensions, one gets a subdimensional array. For example: :: >>> x[0] array([0, 1, 2, 3, 4]) That is, each index specified selects the array corresponding to the rest of the dimensions selected. In the above example, choosing 0 means that the remaining dimension of length 5 is being left unspecified, and that what is returned is an array of that dimensionality and size. It must be noted that the returned array is not a copy of the original, but points to the same values in memory as does the original array. In this case, the 1-D array at the first position (0) is returned. So using a single index on the returned array, results in a single element being returned. That is: :: >>> x[0][2] 2 So note that ``x[0,2] = x[0][2]`` though the second case is more inefficient as a new temporary array is created after the first index that is subsequently indexed by 2. Note to those used to IDL or Fortran memory order as it relates to indexing. NumPy uses C-order indexing. That means that the last index usually represents the most rapidly changing memory location, unlike Fortran or IDL, where the first index represents the most rapidly changing location in memory. This difference represents a great potential for confusion. Other indexing options ====================== It is possible to slice and stride arrays to extract arrays of the same number of dimensions, but of different sizes than the original. The slicing and striding works exactly the same way it does for lists and tuples except that they can be applied to multiple dimensions as well. A few examples illustrates best: :: >>> x = np.arange(10) >>> x[2:5] array([2, 3, 4]) >>> x[:-7] array([0, 1, 2]) >>> x[1:7:2] array([1, 3, 5]) >>> y = np.arange(35).reshape(5,7) >>> y[1:5:2,::3] array([[ 7, 10, 13], [21, 24, 27]]) Note that slices of arrays do not copy the internal array data but also produce new views of the original data. It is possible to index arrays with other arrays for the purposes of selecting lists of values out of arrays into new arrays. There are two different ways of accomplishing this. One uses one or more arrays of index values. 
It is possible to index arrays with other arrays for the purposes of
selecting lists of values out of arrays into new arrays. There are two
different ways of accomplishing this. One uses one or more arrays of
index values. The other involves giving a boolean array of the proper
shape to indicate the values to be selected. Index arrays are a very
powerful tool that allows one to avoid looping over individual elements
in arrays and thus greatly improve performance.

It is possible to use special features to effectively increase the
number of dimensions in an array through indexing so the resulting
array acquires the shape needed for use in an expression or with a
specific function.

Index arrays
============

NumPy arrays may be indexed with other arrays (or any other sequence-
like object that can be converted to an array, such as lists, with the
exception of tuples; see the end of this document for why this is). The
use of index arrays ranges from simple, straightforward cases to
complex, hard-to-understand cases. For all cases of index arrays, what
is returned is a copy of the original data, not a view as one gets for
slices.

Index arrays must be of integer type. Each value in the array indicates
which value in the array to use in place of the index. To illustrate: ::

  >>> x = np.arange(10,1,-1)
  >>> x
  array([10,  9,  8,  7,  6,  5,  4,  3,  2])
  >>> x[np.array([3, 3, 1, 8])]
  array([7, 7, 9, 2])

The index array consisting of the values 3, 3, 1 and 8 correspondingly
creates an array of length 4 (the same as the index array) where each
index is replaced by the value the index array has in the array being
indexed.

Negative values are permitted and work as they do with single indices
or slices: ::

  >>> x[np.array([3,3,-3,8])]
  array([7, 7, 4, 2])

It is an error to have index values out of bounds: ::

  >>> x[np.array([3, 3, 20, 8])]
  <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9

Generally speaking, what is returned when index arrays are used is an
array with the same shape as the index array, but with the type and
values of the array being indexed. As an example, we can use a
multidimensional index array instead: ::

  >>> x[np.array([[1,1],[2,3]])]
  array([[9, 9],
         [8, 7]])

Indexing Multi-dimensional arrays
=================================

Things become more complex when multidimensional arrays are indexed,
particularly with multidimensional index arrays. These tend to be more
unusual uses, but they are permitted, and they are useful for some
problems. We'll start with the simplest multidimensional case (using
the array y from the previous examples): ::

  >>> y[np.array([0,2,4]), np.array([0,1,2])]
  array([ 0, 15, 30])

In this case, if the index arrays have a matching shape, and there is
an index array for each dimension of the array being indexed, the
resultant array has the same shape as the index arrays, and the values
correspond to the index set for each position in the index arrays. In
this example, the first index value is 0 for both index arrays, and
thus the first value of the resultant array is y[0,0]. The next value
is y[2,1], and the last is y[4,2].

If the index arrays do not have the same shape, there is an attempt to
broadcast them to the same shape. If they cannot be broadcast to the
same shape, an exception is raised: ::

  >>> y[np.array([0,2,4]), np.array([0,1])]
  <type 'exceptions.ValueError'>: shape mismatch: objects cannot be
  broadcast to a single shape

The broadcasting mechanism permits index arrays to be combined with
scalars for other indices. The effect is that the scalar value is used
for all the corresponding values of the index arrays: ::

  >>> y[np.array([0,2,4]), 1]
  array([ 1, 15, 29])
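Broadcasting of index arrays can also be exploited deliberately. As a
supplementary example (ours, not from the original text, using the same
y as above), a column of row indices broadcast against a 1-D array of
column indices selects a 2-D block of values: ::

  >>> rows = np.array([[0], [4]])   # shape (2,1)
  >>> cols = np.array([0, 3, 6])    # shape (3,)
  >>> y[rows, cols]                 # indices broadcast to shape (2,3)
  array([[ 0,  3,  6],
         [28, 31, 34]])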
Jumping to the next level of complexity, it is possible to only
partially index an array with index arrays. It takes a bit of thought
to understand what happens in such cases. For example, if we just use
one index array with y: ::

  >>> y[np.array([0,2,4])]
  array([[ 0,  1,  2,  3,  4,  5,  6],
         [14, 15, 16, 17, 18, 19, 20],
         [28, 29, 30, 31, 32, 33, 34]])

What results is the construction of a new array where each value of
the index array selects one row from the array being indexed and the
resultant array has the resulting shape (number of index elements,
size of row).

An example of where this may be useful is for a color lookup table
where we want to map the values of an image into RGB triples for
display. The lookup table could have a shape (nlookup, 3). Indexing
such an array with an image with shape (ny, nx) with dtype=np.uint8
(or any integer type so long as values are within the bounds of the
lookup table) will result in an array of shape (ny, nx, 3) where a
triple of RGB values is associated with each pixel location.

In general, the shape of the resultant array will be the concatenation
of the shape of the index array (or the shape that all the index arrays
were broadcast to) with the shape of any unused dimensions (those not
indexed) in the array being indexed.

Boolean or "mask" index arrays
==============================

Boolean arrays used as indices are treated in a different manner
entirely than index arrays. Boolean arrays must be of the same shape
as the initial dimensions of the array being indexed. In the most
straightforward case, the boolean array has the same shape: ::

  >>> b = y>20
  >>> y[b]
  array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34])

Unlike in the case of integer index arrays, in the boolean case, the
result is a 1-D array containing all the elements in the indexed array
corresponding to all the true elements in the boolean array. The
elements in the indexed array are always iterated and returned in
:term:`row-major` (C-style) order. The result is also identical to
``y[np.nonzero(b)]``. As with index arrays, what is returned is a copy
of the data, not a view as one gets with slices.

The result will be multidimensional if y has more dimensions than b.
For example: ::

  >>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y
  array([False, False, False,  True,  True])
  >>> y[b[:,5]]
  array([[21, 22, 23, 24, 25, 26, 27],
         [28, 29, 30, 31, 32, 33, 34]])

Here the 4th and 5th rows are selected from the indexed array and
combined to make a 2-D array.

In general, when the boolean array has fewer dimensions than the array
being indexed, this is equivalent to y[b, ...], which means y is
indexed by b followed by as many : as are needed to fill out the rank
of y. Thus the shape of the result is one dimension containing the
number of True elements of the boolean array, followed by the remaining
dimensions of the array being indexed.

For example, using a 2-D boolean array of shape (2,3) with four True
elements to select rows from a 3-D array of shape (2,3,5) results in a
2-D result of shape (4,5): ::

  >>> x = np.arange(30).reshape(2,3,5)
  >>> x
  array([[[ 0,  1,  2,  3,  4],
          [ 5,  6,  7,  8,  9],
          [10, 11, 12, 13, 14]],
         [[15, 16, 17, 18, 19],
          [20, 21, 22, 23, 24],
          [25, 26, 27, 28, 29]]])
  >>> b = np.array([[True, True, False], [False, True, True]])
  >>> x[b]
  array([[ 0,  1,  2,  3,  4],
         [ 5,  6,  7,  8,  9],
         [20, 21, 22, 23, 24],
         [25, 26, 27, 28, 29]])

For further details, consult the numpy reference documentation on array
indexing.
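The color lookup table described earlier can be made concrete with a
small worked example (ours; the table values are invented purely for
illustration): ::

  >>> lut = np.array([[  0,   0,   0],   # index 0 -> black
  ...                 [255,   0,   0],   # index 1 -> red
  ...                 [  0, 255,   0]])  # index 2 -> green
  >>> image = np.array([[0, 1], [2, 1]], dtype=np.uint8)
  >>> rgb = lut[image]                   # shape (ny, nx, 3) = (2, 2, 3)
  >>> rgb.shape
  (2, 2, 3)
  >>> rgb[0, 1]                          # the pixel with value 1 is red
  array([255,   0,   0])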
Combining index arrays with slices
==================================

Index arrays may be combined with slices. For example: ::

  >>> y[np.array([0,2,4]),1:3]
  array([[ 1,  2],
         [15, 16],
         [29, 30]])

In effect, the slice is converted to an index array np.array([[1,2]])
(shape (1,2)) that is broadcast with the index array to produce a
resultant array of shape (3,2).

Likewise, slicing can be combined with broadcasted boolean indices: ::

  >>> y[b[:,5],1:3]
  array([[22, 23],
         [29, 30]])

Structural indexing tools
=========================

To facilitate easy matching of array shapes with expressions and in
assignments, the np.newaxis object can be used within array indices to
add new dimensions with a size of 1. For example: ::

  >>> y.shape
  (5, 7)
  >>> y[:,np.newaxis,:].shape
  (5, 1, 7)

Note that there are no new elements in the array, just that the
dimensionality is increased. This can be handy to combine two arrays
in a way that otherwise would require explicit reshaping operations.
For example: ::

  >>> x = np.arange(5)
  >>> x[:,np.newaxis] + x[np.newaxis,:]
  array([[0, 1, 2, 3, 4],
         [1, 2, 3, 4, 5],
         [2, 3, 4, 5, 6],
         [3, 4, 5, 6, 7],
         [4, 5, 6, 7, 8]])

The ellipsis syntax may be used to indicate selecting in full any
remaining unspecified dimensions. For example: ::

  >>> z = np.arange(81).reshape(3,3,3,3)
  >>> z[1,...,2]
  array([[29, 32, 35],
         [38, 41, 44],
         [47, 50, 53]])

This is equivalent to: ::

  >>> z[1,:,:,2]
  array([[29, 32, 35],
         [38, 41, 44],
         [47, 50, 53]])

Assigning values to indexed arrays
==================================

As mentioned, one can select a subset of an array to assign to using a
single index, slices, and index and mask arrays. The value being
assigned to the indexed array must be shape consistent (the same shape
or broadcastable to the shape the index produces). For example, it is
permitted to assign a constant to a slice: ::

  >>> x = np.arange(10)
  >>> x[2:7] = 1

or an array of the right size: ::

  >>> x[2:7] = np.arange(5)

Note that assignments may result in changes if assigning higher types
to lower types (like floats to ints) or even exceptions (assigning
complex to floats or ints): ::

  >>> x[1] = 1.2
  >>> x[1]
  1
  >>> x[1] = 1.2j
  <type 'exceptions.TypeError'>: can't convert complex to long; use
  long(abs(z))

Unlike some of the references (such as array and mask indices),
assignments are always made to the original data in the array (indeed,
nothing else would make sense!). Note though, that some actions may
not work as one may naively expect. This particular example is often
surprising to people: ::

  >>> x = np.arange(0, 50, 10)
  >>> x
  array([ 0, 10, 20, 30, 40])
  >>> x[np.array([1, 1, 3, 1])] += 1
  >>> x
  array([ 0, 11, 20, 31, 40])

Where people expect that the 1st location will be incremented by 3, in
fact it will only be incremented by 1. The reason is that a new array
is extracted from the original (as a temporary) containing the values
at 1, 1, 3, 1, then the value 1 is added to the temporary, and then
the temporary is assigned back to the original array. Thus the value
of the array at x[1]+1 is assigned to x[1] three times, rather than
being incremented 3 times.
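If the accumulating behavior is what you actually want, newer versions
of numpy provide the unbuffered ``np.add.at`` ufunc method (this note
and example are supplementary, not part of the original text): ::

  >>> x = np.arange(0, 50, 10)
  >>> np.add.at(x, np.array([1, 1, 3, 1]), 1)  # repeated indices accumulate
  >>> x
  array([ 0, 13, 20, 31, 40])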
Dealing with variable numbers of indices within programs
========================================================

The index syntax is very powerful but limiting when dealing with a
variable number of indices. For example, if you want to write a
function that can handle arguments with various numbers of dimensions
without having to write special case code for each number of possible
dimensions, how can that be done? If one supplies to the index a
tuple, the tuple will be interpreted as a list of indices. For example
(using the previous definition for the array z): ::

  >>> indices = (1,1,1,1)
  >>> z[indices]
  40

So one can use code to construct tuples of any number of indices and
then use these within an index.

Slices can be specified within programs by using the slice() function
in Python. For example: ::

  >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2]
  >>> z[indices]
  array([39, 40])

Likewise, ellipsis can be specified by code by using the Ellipsis
object: ::

  >>> indices = (1, Ellipsis, 1) # same as [1,...,1]
  >>> z[indices]
  array([[28, 31, 34],
         [37, 40, 43],
         [46, 49, 52]])

For this reason it is possible to use the output from the np.nonzero()
function directly as an index since it always returns a tuple of index
arrays.

Because of the special treatment of tuples, they are not automatically
converted to an array as a list would be. As an example: ::

  >>> z[[1,1,1,1]] # produces a large array
  array([[[[27, 28, 29],
           [30, 31, 32], ...
  >>> z[(1,1,1,1)] # returns a single value
  40
"""
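A supplementary sketch (ours, not from the numpy text) of the
tuple-building technique described above, in program form; the helper
name take_index is our own invention:

import numpy as np

def take_index(a, axis, i):
    """Select index i along `axis`, keeping every other dimension in full.

    Builds the index tuple programmatically, so the same function works
    for arrays with any number of dimensions.
    """
    idx = [slice(None)] * a.ndim   # one ':' per dimension
    idx[axis] = i                  # replace one of them with the index
    return a[tuple(idx)]           # a tuple is treated as a multi-index

z = np.arange(81).reshape(3, 3, 3, 3)
assert take_index(z, 3, 2).shape == (3, 3, 3)  # same as z[..., 2]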
"""============== Array indexing ============== Array indexing refers to any use of the square brackets ([]) to index array values. There are many options to indexing, which give numpy indexing great power, but with power comes some complexity and the potential for confusion. This section is just an overview of the various options and issues related to indexing. Aside from single element indexing, the details on most of these options are to be found in related sections. Assignment vs referencing ========================= Most of the following examples show the use of indexing when referencing data in an array. The examples work just as well when assigning to an array. See the section at the end for specific examples and explanations on how assignments work. Single element indexing ======================= Single element indexing for a 1-D array is what one expects. It work exactly like that for other standard Python sequences. It is 0-based, and accepts negative indices for indexing from the end of the array. :: >>> x = np.arange(10) >>> x[2] 2 >>> x[-2] 8 Unlike lists and tuples, numpy arrays support multidimensional indexing for multidimensional arrays. That means that it is not necessary to separate each dimension's index into its own set of square brackets. :: >>> x.shape = (2,5) # now x is 2-dimensional >>> x[1,3] 8 >>> x[1,-1] 9 Note that if one indexes a multidimensional array with fewer indices than dimensions, one gets a subdimensional array. For example: :: >>> x[0] array([0, 1, 2, 3, 4]) That is, each index specified selects the array corresponding to the rest of the dimensions selected. In the above example, choosing 0 means that the remaining dimension of length 5 is being left unspecified, and that what is returned is an array of that dimensionality and size. It must be noted that the returned array is not a copy of the original, but points to the same values in memory as does the original array. In this case, the 1-D array at the first position (0) is returned. So using a single index on the returned array, results in a single element being returned. That is: :: >>> x[0][2] 2 So note that ``x[0,2] = x[0][2]`` though the second case is more inefficient as a new temporary array is created after the first index that is subsequently indexed by 2. Note to those used to IDL or Fortran memory order as it relates to indexing. NumPy uses C-order indexing. That means that the last index usually represents the most rapidly changing memory location, unlike Fortran or IDL, where the first index represents the most rapidly changing location in memory. This difference represents a great potential for confusion. Other indexing options ====================== It is possible to slice and stride arrays to extract arrays of the same number of dimensions, but of different sizes than the original. The slicing and striding works exactly the same way it does for lists and tuples except that they can be applied to multiple dimensions as well. A few examples illustrates best: :: >>> x = np.arange(10) >>> x[2:5] array([2, 3, 4]) >>> x[:-7] array([0, 1, 2]) >>> x[1:7:2] array([1, 3, 5]) >>> y = np.arange(35).reshape(5,7) >>> y[1:5:2,::3] array([[ 7, 10, 13], [21, 24, 27]]) Note that slices of arrays do not copy the internal array data but also produce new views of the original data. It is possible to index arrays with other arrays for the purposes of selecting lists of values out of arrays into new arrays. There are two different ways of accomplishing this. One uses one or more arrays of index values. 
The other involves giving a boolean array of the proper shape to indicate the values to be selected. Index arrays are a very powerful tool that allow one to avoid looping over individual elements in arrays and thus greatly improve performance. It is possible to use special features to effectively increase the number of dimensions in an array through indexing so the resulting array aquires the shape needed for use in an expression or with a specific function. Index arrays ============ NumPy arrays may be indexed with other arrays (or any other sequence- like object that can be converted to an array, such as lists, with the exception of tuples; see the end of this document for why this is). The use of index arrays ranges from simple, straightforward cases to complex, hard-to-understand cases. For all cases of index arrays, what is returned is a copy of the original data, not a view as one gets for slices. Index arrays must be of integer type. Each value in the array indicates which value in the array to use in place of the index. To illustrate: :: >>> x = np.arange(10,1,-1) >>> x array([10, 9, 8, 7, 6, 5, 4, 3, 2]) >>> x[np.array([3, 3, 1, 8])] array([7, 7, 9, 2]) The index array consisting of the values 3, 3, 1 and 8 correspondingly create an array of length 4 (same as the index array) where each index is replaced by the value the index array has in the array being indexed. Negative values are permitted and work as they do with single indices or slices: :: >>> x[np.array([3,3,-3,8])] array([7, 7, 4, 2]) It is an error to have index values out of bounds: :: >>> x[np.array([3, 3, 20, 8])] <type 'exceptions.IndexError'>: index 20 out of bounds 0<=index<9 Generally speaking, what is returned when index arrays are used is an array with the same shape as the index array, but with the type and values of the array being indexed. As an example, we can use a multidimensional index array instead: :: >>> x[np.array([[1,1],[2,3]])] array([[9, 9], [8, 7]]) Indexing Multi-dimensional arrays ================================= Things become more complex when multidimensional arrays are indexed, particularly with multidimensional index arrays. These tend to be more unusual uses, but they are permitted, and they are useful for some problems. We'll start with the simplest multidimensional case (using the array y from the previous examples): :: >>> y[np.array([0,2,4]), np.array([0,1,2])] array([ 0, 15, 30]) In this case, if the index arrays have a matching shape, and there is an index array for each dimension of the array being indexed, the resultant array has the same shape as the index arrays, and the values correspond to the index set for each position in the index arrays. In this example, the first index value is 0 for both index arrays, and thus the first value of the resultant array is y[0,0]. The next value is y[2,1], and the last is y[4,2]. If the index arrays do not have the same shape, there is an attempt to broadcast them to the same shape. If they cannot be broadcast to the same shape, an exception is raised: :: >>> y[np.array([0,2,4]), np.array([0,1])] <type 'exceptions.ValueError'>: shape mismatch: objects cannot be broadcast to a single shape The broadcasting mechanism permits index arrays to be combined with scalars for other indices. The effect is that the scalar value is used for all the corresponding values of the index arrays: :: >>> y[np.array([0,2,4]), 1] array([ 1, 15, 29]) Jumping to the next level of complexity, it is possible to only partially index an array with index arrays. 
It takes a bit of thought to understand what happens in such cases. For example if we just use one index array with y: :: >>> y[np.array([0,2,4])] array([[ 0, 1, 2, 3, 4, 5, 6], [14, 15, 16, 17, 18, 19, 20], [28, 29, 30, 31, 32, 33, 34]]) What results is the construction of a new array where each value of the index array selects one row from the array being indexed and the resultant array has the resulting shape (size of row, number index elements). An example of where this may be useful is for a color lookup table where we want to map the values of an image into RGB triples for display. The lookup table could have a shape (nlookup, 3). Indexing such an array with an image with shape (ny, nx) with dtype=np.uint8 (or any integer type so long as values are with the bounds of the lookup table) will result in an array of shape (ny, nx, 3) where a triple of RGB values is associated with each pixel location. In general, the shape of the resultant array will be the concatenation of the shape of the index array (or the shape that all the index arrays were broadcast to) with the shape of any unused dimensions (those not indexed) in the array being indexed. Boolean or "mask" index arrays ============================== Boolean arrays used as indices are treated in a different manner entirely than index arrays. Boolean arrays must be of the same shape as the initial dimensions of the array being indexed. In the most straightforward case, the boolean array has the same shape: :: >>> b = y>20 >>> y[b] array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]) Unlike in the case of integer index arrays, in the boolean case, the result is a 1-D array containing all the elements in the indexed array corresponding to all the true elements in the boolean array. The elements in the indexed array are always iterated and returned in :term:`row-major` (C-style) order. The result is also identical to ``y[np.nonzero(b)]``. As with index arrays, what is returned is a copy of the data, not a view as one gets with slices. The result will be multidimensional if y has more dimensions than b. For example: :: >>> b[:,5] # use a 1-D boolean whose first dim agrees with the first dim of y array([False, False, False, True, True], dtype=bool) >>> y[b[:,5]] array([[21, 22, 23, 24, 25, 26, 27], [28, 29, 30, 31, 32, 33, 34]]) Here the 4th and 5th rows are selected from the indexed array and combined to make a 2-D array. In general, when the boolean array has fewer dimensions than the array being indexed, this is equivalent to y[b, ...], which means y is indexed by b followed by as many : as are needed to fill out the rank of y. Thus the shape of the result is one dimension containing the number of True elements of the boolean array, followed by the remaining dimensions of the array being indexed. For example, using a 2-D boolean array of shape (2,3) with four True elements to select rows from a 3-D array of shape (2,3,5) results in a 2-D result of shape (4,5): :: >>> x = np.arange(30).reshape(2,3,5) >>> x array([[[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]], [[15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]]) >>> b = np.array([[True, True, False], [False, True, True]]) >>> x[b] array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]) For further details, consult the numpy reference documentation on array indexing. Combining index arrays with slices ================================== Index arrays may be combined with slices. 
For example: :: >>> y[np.array([0,2,4]),1:3] array([[ 1, 2], [15, 16], [29, 30]]) In effect, the slice is converted to an index array np.array([[1,2]]) (shape (1,2)) that is broadcast with the index array to produce a resultant array of shape (3,2). Likewise, slicing can be combined with broadcasted boolean indices: :: >>> y[b[:,5],1:3] array([[22, 23], [29, 30]]) Structural indexing tools ========================= To facilitate easy matching of array shapes with expressions and in assignments, the np.newaxis object can be used within array indices to add new dimensions with a size of 1. For example: :: >>> y.shape (5, 7) >>> y[:,np.newaxis,:].shape (5, 1, 7) Note that there are no new elements in the array, just that the dimensionality is increased. This can be handy to combine two arrays in a way that otherwise would require explicitly reshaping operations. For example: :: >>> x = np.arange(5) >>> x[:,np.newaxis] + x[np.newaxis,:] array([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7], [4, 5, 6, 7, 8]]) The ellipsis syntax maybe used to indicate selecting in full any remaining unspecified dimensions. For example: :: >>> z = np.arange(81).reshape(3,3,3,3) >>> z[1,...,2] array([[29, 32, 35], [38, 41, 44], [47, 50, 53]]) This is equivalent to: :: >>> z[1,:,:,2] array([[29, 32, 35], [38, 41, 44], [47, 50, 53]]) Assigning values to indexed arrays ================================== As mentioned, one can select a subset of an array to assign to using a single index, slices, and index and mask arrays. The value being assigned to the indexed array must be shape consistent (the same shape or broadcastable to the shape the index produces). For example, it is permitted to assign a constant to a slice: :: >>> x = np.arange(10) >>> x[2:7] = 1 or an array of the right size: :: >>> x[2:7] = np.arange(5) Note that assignments may result in changes if assigning higher types to lower types (like floats to ints) or even exceptions (assigning complex to floats or ints): :: >>> x[1] = 1.2 >>> x[1] 1 >>> x[1] = 1.2j <type 'exceptions.TypeError'>: can't convert complex to long; use long(abs(z)) Unlike some of the references (such as array and mask indices) assignments are always made to the original data in the array (indeed, nothing else would make sense!). Note though, that some actions may not work as one may naively expect. This particular example is often surprising to people: :: >>> x = np.arange(0, 50, 10) >>> x array([ 0, 10, 20, 30, 40]) >>> x[np.array([1, 1, 3, 1])] += 1 >>> x array([ 0, 11, 20, 31, 40]) Where people expect that the 1st location will be incremented by 3. In fact, it will only be incremented by 1. The reason is because a new array is extracted from the original (as a temporary) containing the values at 1, 1, 3, 1, then the value 1 is added to the temporary, and then the temporary is assigned back to the original array. Thus the value of the array at x[1]+1 is assigned to x[1] three times, rather than being incremented 3 times. Dealing with variable numbers of indices within programs ======================================================== The index syntax is very powerful but limiting when dealing with a variable number of indices. For example, if you want to write a function that can handle arguments with various numbers of dimensions without having to write special case code for each number of possible dimensions, how can that be done? If one supplies to the index a tuple, the tuple will be interpreted as a list of indices. 
For example (using the previous definition for the array z): :: >>> indices = (1,1,1,1) >>> z[indices] 40 So one can use code to construct tuples of any number of indices and then use these within an index. Slices can be specified within programs by using the slice() function in Python. For example: :: >>> indices = (1,1,1,slice(0,2)) # same as [1,1,1,0:2] >>> z[indices] array([39, 40]) Likewise, ellipsis can be specified by code by using the Ellipsis object: :: >>> indices = (1, Ellipsis, 1) # same as [1,...,1] >>> z[indices] array([[28, 31, 34], [37, 40, 43], [46, 49, 52]]) For this reason it is possible to use the output from the np.where() function directly as an index since it always returns a tuple of index arrays. Because the special treatment of tuples, they are not automatically converted to an array as a list would be. As an example: :: >>> z[[1,1,1,1]] # produces a large array array([[[[27, 28, 29], [30, 31, 32], ... >>> z[(1,1,1,1)] # returns a single value 40 """
""" Purpose: Linear Algebra Parser Based on: SimpleCalc.py example (author NAME in pyparsing-1.3.3 Author: NAME Ellis & Grant, Inc. 2005 License: You may freely use, modify, and distribute this software. Warranty: THIS SOFTWARE HAS NO WARRANTY WHATSOEVER. USE AT YOUR OWN RISK. Notes: Parses infix linear algebra (LA) notation for vectors, matrices, and scalars. Output is C code function calls. The parser can be run as an interactive interpreter or included as module to use for in-place substitution into C files containing LA equations. Supported operations are: OPERATION: INPUT OUTPUT Scalar addition: "a = b+c" "a=(b+c)" Scalar subtraction: "a = b-c" "a=(b-c)" Scalar multiplication: "a = b*c" "a=b*c" Scalar division: "a = b/c" "a=b/c" Scalar exponentiation: "a = b^c" "a=pow(b,c)" Vector scaling: "V3_a = V3_b * c" "vCopy(a,vScale(b,c))" Vector addition: "V3_a = V3_b + V3_c" "vCopy(a,vAdd(b,c))" Vector subtraction: "V3_a = V3_b - V3_c" "vCopy(a,vSubtract(b,c))" Vector dot product: "a = V3_b * V3_c" "a=vDot(b,c)" Vector outer product: "M3_a = V3_b @ V3_c" "a=vOuterProduct(b,c)" Vector magn. squared: "a = V3_b^Mag2" "a=vMagnitude2(b)" Vector magnitude: "a = V3_b^Mag" "a=sqrt(vMagnitude2(b))" Matrix scaling: "M3_a = M3_b * c" "mCopy(a,mScale(b,c))" Matrix addition: "M3_a = M3_b + M3_c" "mCopy(a,mAdd(b,c))" Matrix subtraction: "M3_a = M3_b - M3_c" "mCopy(a,mSubtract(b,c))" Matrix multiplication: "M3_a = M3_b * M3_c" "mCopy(a,mMultiply(b,c))" Matrix by vector mult.: "V3_a = M3_b * V3_c" "vCopy(a,mvMultiply(b,c))" Matrix inversion: "M3_a = M3_b^-1" "mCopy(a,mInverse(b))" Matrix transpose: "M3_a = M3_b^T" "mCopy(a,mTranspose(b))" Matrix determinant: "a = M3_b^Det" "a=mDeterminant(b)" The parser requires the expression to be an equation. Each non-scalar variable must be prefixed with a type tag, 'M3_' for 3x3 matrices and 'V3_' for 3-vectors. For proper compilation of the C code, the variables need to be declared without the prefix as float[3] for vectors and float[3][3] for matrices. The operations do not modify any variables on the right-hand side of the equation. Equations may include nested expressions within parentheses. The allowed binary operators are '+-*/^' for scalars, and '+-*^@' for vectors and matrices with the meanings defined in the table above. Specifying an improper combination of operands, e.g. adding a vector to a matrix, is detected by the parser and results in a Python TypeError Exception. The usual cause of this is omitting one or more tag prefixes. The parser knows nothing about a a variable's C declaration and relies entirely on the type tags. Errors in C declarations are not caught until compile time. Usage: To process LA equations embedded in source files, import this module and pass input and output file objects to the fprocess() function. You can can also invoke the parser from the command line, e.g. 'python LAparser.py', to run a small test suite and enter an interactive loop where you can enter LA equations and see the resulting C code. """
""" (GPL) GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. 
"This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. 
For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. 
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. 
If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. 
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. 
Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". 
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. 
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: <program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. 
But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>. """
""" The core class bits of TileStache. Two important classes can be found here. Layer represents a set of tiles in TileStache. It keeps references to providers, projections, a Configuration instance, and other details required for to the storage and rendering of a tile set. Layers are represented in the configuration file as a dictionary: { "cache": ..., "layers": { "example-name": { "provider": { ... }, "metatile": { ... }, "preview": { ... }, "projection": ..., "stale lock timeout": ..., "cache lifespan": ..., "write cache": ..., "bounds": { ... }, "allowed origin": ..., "maximum cache age": ..., "redirects": ..., "tile height": ..., "jpeg options": ..., "png options": ... } } } - "provider" refers to a Provider, explained in detail in TileStache.Providers. - "metatile" optionally makes it possible for multiple individual tiles to be rendered at one time, for greater speed and efficiency. This is commonly used for the Mapnik provider. See below for more information on metatiles. - "preview" optionally overrides the starting point for the built-in per-layer slippy map preview, useful for image-based layers where appropriate. See below for more information on the preview. - "projection" names a geographic projection, explained in TileStache.Geography. If omitted, defaults to spherical mercator. - "stale lock timeout" is an optional number of seconds to wait before forcing a lock that might be stuck. This is defined on a per-layer basis, rather than for an entire cache at one time, because you may have different expectations for the rendering speeds of different layer configurations. Defaults to 15. - "cache lifespan" is an optional number of seconds that cached tiles should be stored. This is defined on a per-layer basis. Defaults to forever if None, 0 or omitted. - "write cache" is an optional boolean value to allow skipping cache write altogether. This is defined on a per-layer basis. Defaults to true if omitted. - "bounds" is an optional dictionary of six tile boundaries to limit the rendered area: low (lowest zoom level), high (highest zoom level), north, west, south, and east (all in degrees). - "allowed origin" is an optional string that shows up in the response HTTP header Access-Control-Allow-Origin, useful for when you need to provide javascript direct access to response data such as GeoJSON or pixel values. The header is part of a W3C working draft (http://www.w3.org/TR/cors/). - "maximum cache age" is an optional number of seconds used to control behavior of downstream caches. Causes TileStache responses to include Cache-Control and Expires HTTP response headers. Useful when TileStache is itself hosted behind an HTTP cache such as Squid, Cloudfront, or Akamai. - "redirects" is an optional dictionary of per-extension HTTP redirects, treated as lowercase. Useful in cases where your tile provider can support many formats but you want to enforce limits to save on cache usage. If a request is made for a tile with an extension in the dictionary keys, a response can be generated that redirects the client to the same tile with another extension. - "tile height" gives the height of the image tile in pixels. You almost always want to leave this at the default value of 256, but you can use a value of 512 to create double-size, double-resolution tiles for high-density phone screens. - "jpeg options" is an optional dictionary of JPEG creation options, passed through to PIL: http://effbot.org/imagingbook/format-jpeg.htm. 
- "png options" is an optional dictionary of PNG creation options, passed through to PIL: http://effbot.org/imagingbook/format-png.htm. - "pixel effect" is an optional dictionary that defines an effect to be applied for all tiles of this layer. Pixel effect can be any of these: blackwhite, greyscale, desaturate, pixelate, halftone, or blur. The public-facing URL of a single tile for this layer might look like this: http://example.com/tilestache.cgi/example-name/0/0/0.png Sample JPEG creation options: { "quality": 90, "progressive": true, "optimize": true } Sample PNG creation options: { "optimize": true, "palette": "filename.act" } Sample pixel effect: { "name": "desaturate", "factor": 0.85 } Sample bounds: { "low": 9, "high": 15, "south": 37.749, "west": -122.358, "north": 37.860, "east": -122.113 } Metatile represents a larger area to be rendered at one time. Metatiles are represented in the configuration file as a dictionary: { "rows": 4, "columns": 4, "buffer": 64 } - "rows" and "columns" are the height and width of the metatile measured in tiles. This example metatile is four rows tall and four columns wide, so it will render sixteen tiles simultaneously. - "buffer" is a buffer area around the metatile, measured in pixels. This is useful for providers with labels or icons, where it's necessary to draw a bit extra around the edges to ensure that text is not cut off. This example metatile has a buffer of 64 pixels, so the resulting metatile will be 1152 pixels square: 4 rows x 256 pixels + 2 x 64 pixel buffer. The preview can be accessed through a URL like /<layer name>/preview.html: { "lat": 33.9901, "lon": -116.1637, "zoom": 16, "ext": "jpg" } - "lat" and "lon" are the starting latitude and longitude in degrees. - "zoom" is the starting zoom level. - "ext" is the filename extension, e.g. "png". """
""" Implementation of the trigsimp algorithm by Fu et al. The idea behind the ``fu`` algorithm is to use a sequence of rules, applied in what is heuristically known to be a smart order, to select a simpler expression that is equivalent to the input. There are transform rules in which a single rule is applied to the expression tree. The following are just mnemonic in nature; see the docstrings for examples. TR0 - simplify expression TR1 - sec-csc to cos-sin TR2 - tan-cot to sin-cos ratio TR2i - sin-cos ratio to tan TR3 - angle canonicalization TR4 - functions at special angles TR5 - powers of sin to powers of cos TR6 - powers of cos to powers of sin TR7 - reduce cos power (increase angle) TR8 - expand products of sin-cos to sums TR9 - contract sums of sin-cos to products TR10 - separate sin-cos arguments TR10i - collect sin-cos arguments TR11 - reduce double angles TR12 - separate tan arguments TR12i - collect tan arguments TR13 - expand product of tan-cot TRmorrie - prod(cos(x*2**i), (i, 0, k - 1)) -> sin(2**k*x)/(2**k*sin(x)) TR14 - factored powers of sin or cos to cos or sin power TR15 - negative powers of sin to cot power TR16 - negative powers of cos to tan power TR22 - tan-cot powers to negative powers of sec-csc functions TR111 - negative sin-cos-tan powers to csc-sec-cot There are 4 combination transforms (CTR1 - CTR4) in which a sequence of transformations are applied and the simplest expression is selected from a few options. Finally, there are the 2 rule lists (RL1 and RL2), which apply a sequence of transformations and combined transformations, and the ``fu`` algorithm itself, which applies rules and rule lists and selects the best expressions. There is also a function ``L`` which counts the number of trigonometric funcions that appear in the expression. Other than TR0, re-writing of expressions is not done by the transformations. e.g. TR10i finds pairs of terms in a sum that are in the form like ``cos(x)*cos(y) + sin(x)*sin(y)``. Such expression are targeted in a bottom-up traversal of the expression, but no manipulation to make them appear is attempted. For example, Set-up for examples below: >>> from sympy.simplify.fu import fu, L, TR9, TR10i, TR11 >>> from sympy import factor, sin, cos, powsimp >>> from sympy.abc import x, y, z, a >>> from time import time >>> eq = cos(x + y)/cos(x) >>> TR10i(eq.expand(trig=True)) -sin(x)*sin(y)/cos(x) + cos(y) If the expression is put in "normal" form (with a common denominator) then the transformation is successful: >>> TR10i(_.normal()) cos(x + y)/cos(x) TR11's behavior is similar. It rewrites double angles as smaller angles but doesn't do any simplification of the result. >>> TR11(sin(2)**a*cos(1)**(-a), 1) (2*sin(1)*cos(1))**a*cos(1)**(-a) >>> powsimp(_) (2*sin(1))**a The temptation is to try make these TR rules "smarter" but that should really be done at a higher level; the TR rules should try maintain the "do one thing well" principle. There is one exception, however. In TR10i and TR9 terms are recognized even when they are each multiplied by a common factor: >>> fu(a*cos(x)*cos(y) + a*sin(x)*sin(y)) a*cos(x - y) Factoring with ``factor_terms`` is used but it it "JIT"-like, being delayed until it is deemed necessary. 
Furthermore, if the factoring does not help with the simplification, it is not retained, so ``a*cos(x)*cos(y) + a*sin(x)*sin(z)`` does not become the factored (but unsimplified in the trigonometric sense) expression:

>>> fu(a*cos(x)*cos(y) + a*sin(x)*sin(z))
a*sin(x)*sin(z) + a*cos(x)*cos(y)

In some cases factoring might be a good idea, but the user is left to make that decision. For example:

>>> expr=((15*sin(2*x) + 19*sin(x + y) + 17*sin(x + z) + 19*cos(x - z) +
... 25)*(20*sin(2*x) + 15*sin(x + y) + sin(y + z) + 14*cos(x - z) +
... 14*cos(y - z))*(9*sin(2*y) + 12*sin(y + z) + 10*cos(x - y) + 2*cos(y -
... z) + 18)).expand(trig=True).expand()

In the expanded state, there are nearly 1000 trig functions:

>>> L(expr)
932

If the expression were factored first, this would take time but the resulting expression would be transformed very quickly:

>>> def clock(f, n=2):
...     t=time(); f(); return round(time()-t, n)
...
>>> clock(lambda: factor(expr))  # doctest: +SKIP
0.86
>>> clock(lambda: TR10i(expr), 3)  # doctest: +SKIP
0.016

If the unexpanded expression is used, the transformation takes longer but not as long as it took to factor it and then transform it:

>>> clock(lambda: TR10i(expr), 2)  # doctest: +SKIP
0.28

So neither expansion nor factoring is used in ``TR10i``: if the expression is already factored (or partially factored) then expansion with ``trig=True`` would destroy what is already known and take longer; if the expression is expanded, factoring may take longer than simply applying the transformation itself.

Although the algorithms should be canonical, always giving the same result, they may not yield the best result. This, in general, is the nature of simplification where searching all possible transformation paths is very expensive. Here is a simple example. There are 6 terms in the following sum:

>>> expr = (sin(x)**2*cos(y)*cos(z) + sin(x)*sin(y)*cos(x)*cos(z) +
... sin(x)*sin(z)*cos(x)*cos(y) + sin(y)*sin(z)*cos(x)**2 + sin(y)*sin(z) +
... cos(y)*cos(z))
>>> args = expr.args

Serendipitously, fu gives the best result:

>>> fu(expr)
3*cos(y - z)/2 - cos(2*x + y + z)/2

But if different terms were combined, a less-optimal result might be obtained, requiring some additional work to get better simplification, but still less than optimal. The following shows an alternative form of ``expr`` that resists optimal simplification once a given step is taken since it leads to a dead end:

>>> TR9(-cos(x)**2*cos(y + z) + 3*cos(y - z)/2 +
... cos(y + z)/2 + cos(-2*x + y + z)/4 - cos(2*x + y + z)/4)
sin(2*x)*sin(y + z)/2 - cos(x)**2*cos(y + z) + 3*cos(y - z)/2 + cos(y + z)/2

Here is a smaller expression that exhibits the same behavior:

>>> a = sin(x)*sin(z)*cos(x)*cos(y) + sin(x)*sin(y)*cos(x)*cos(z)
>>> TR10i(a)
sin(x)*sin(y + z)*cos(x)
>>> newa = _
>>> TR10i(expr - a)  # this combines two more of the remaining terms
sin(x)**2*cos(y)*cos(z) + sin(y)*sin(z)*cos(x)**2 + cos(y - z)
>>> TR10i(_ + newa) == _ + newa  # but now there is no more simplification
True

Without getting lucky or trying all possible pairings of arguments, the final result may be less than optimal and impossible to find without better heuristics or brute force trial of all possibilities.

Notes
=====

This work was started by NAME at the Technological School "Electronic systems" (30.11.2011).

References
==========

http://rfdz.ph-noe.ac.at/fileadmin/Mathematik_Uploads/ACDCA/DESTIME2006/DES_contribs/Fu/simplification.pdf

http://www.sosmath.com/trig/Trig5/trig5/pdf/pdf.html gives a formula sheet.
"""
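The rule-then-select idea at the heart of ``fu`` can be sketched with public sympy functions. The ``simplest`` helper below is an illustration written for this note, not the algorithm's actual internals::

    from sympy import count_ops, sin, cos
    from sympy.abc import x, y
    from sympy.simplify.fu import TR8, TR10i

    def simplest(expr, rules):
        # apply each rule and keep whichever candidate has the fewest operations
        candidates = [expr] + [rule(expr) for rule in rules]
        return min(candidates, key=count_ops)

    print(simplest(cos(x)*cos(y) + sin(x)*sin(y), [TR8, TR10i]))  # cos(x - y)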
""" This is a procedural interface to the matplotlib object-oriented plotting library. The following plotting commands are provided; the majority have MATLAB |reg| [*]_ analogs and similar arguments. .. |reg| unicode:: 0xAE _Plotting commands acorr - plot the autocorrelation function annotate - annotate something in the figure arrow - add an arrow to the axes axes - Create a new axes axhline - draw a horizontal line across axes axvline - draw a vertical line across axes axhspan - draw a horizontal bar across axes axvspan - draw a vertical bar across axes axis - Set or return the current axis limits autoscale - turn axis autoscaling on or off, and apply it bar - make a bar chart barh - a horizontal bar chart broken_barh - a set of horizontal bars with gaps box - set the axes frame on/off state boxplot - make a box and whisker plot violinplot - make a violin plot cla - clear current axes clabel - label a contour plot clf - clear a figure window clim - adjust the color limits of the current image close - close a figure window colorbar - add a colorbar to the current figure cohere - make a plot of coherence contour - make a contour plot contourf - make a filled contour plot csd - make a plot of cross spectral density delaxes - delete an axes from the current figure draw - Force a redraw of the current figure errorbar - make an errorbar graph figlegend - make legend on the figure rather than the axes figimage - make a figure image figtext - add text in figure coords figure - create or change active figure fill - make filled polygons findobj - recursively find all objects matching some criteria gca - return the current axes gcf - return the current figure gci - get the current image, or None getp - get a graphics property grid - set whether gridding is on hist - make a histogram hold - set the axes hold state ioff - turn interaction mode off ion - turn interaction mode on isinteractive - return True if interaction mode is on imread - load image file into array imsave - save array as an image file imshow - plot image data ishold - return the hold state of the current axes legend - make an axes legend locator_params - adjust parameters used in locating axis ticks loglog - a log log plot matshow - display a matrix in a new figure preserving aspect margins - set margins used in autoscaling pause - pause for a specified interval pcolor - make a pseudocolor plot pcolormesh - make a pseudocolor plot using a quadrilateral mesh pie - make a pie chart plot - make a line plot plot_date - plot dates plotfile - plot column data from an ASCII tab/space/comma delimited file pie - pie charts polar - make a polar plot on a PolarAxes psd - make a plot of power spectral density quiver - make a direction field (arrows) plot rc - control the default params rgrids - customize the radial grids and labels for polar savefig - save the current figure scatter - make a scatter plot setp - set a graphics property semilogx - log x axis semilogy - log y axis show - show the figures specgram - a spectrogram plot spy - plot sparsity pattern using markers or image stem - make a stem plot subplot - make one subplot (numrows, numcols, axesnum) subplots - make a figure with a set of (numrows, numcols) subplots subplots_adjust - change the params controlling the subplot positions of current figure subplot_tool - launch the subplot configuration tool suptitle - add a figure title table - add a table to the plot text - add some text at location x,y to the current axes thetagrids - customize the radial theta grids and labels for polar 
  tick_params - control the appearance of ticks and tick labels
  ticklabel_format - control the format of tick labels
  title - add a title to the current axes
  tricontour - make a contour plot on a triangular grid
  tricontourf - make a filled contour plot on a triangular grid
  tripcolor - make a pseudocolor plot on a triangular grid
  triplot - plot a triangular grid
  xcorr - plot the autocorrelation function of x and y
  xlim - set/get the xlimits
  ylim - set/get the ylimits
  xticks - set/get the xticks
  yticks - set/get the yticks
  xlabel - add an xlabel to the current axes
  ylabel - add a ylabel to the current axes

  autumn - set the default colormap to autumn
  bone - set the default colormap to bone
  cool - set the default colormap to cool
  copper - set the default colormap to copper
  flag - set the default colormap to flag
  gray - set the default colormap to gray
  hot - set the default colormap to hot
  hsv - set the default colormap to hsv
  jet - set the default colormap to jet
  pink - set the default colormap to pink
  prism - set the default colormap to prism
  spring - set the default colormap to spring
  summer - set the default colormap to summer
  winter - set the default colormap to winter
  spectral - set the default colormap to spectral

_Event handling

  connect - register an event handler
  disconnect - remove a connected event handler

_Matrix commands

  cumprod - the cumulative product along a dimension
  cumsum - the cumulative sum along a dimension
  detrend - remove the mean or best-fit line from an array
  diag - the k-th diagonal of matrix
  diff - the n-th difference of an array
  eig - the eigenvalues and eigenvectors of v
  eye - a matrix where the k-th diagonal is ones, else zero
  find - return the indices where a condition is nonzero
  fliplr - flip the columns of a matrix left/right
  flipud - flip the rows of a matrix up/down
  linspace - a linear spaced vector of N values from min to max inclusive
  logspace - a log spaced vector of N values from min to max inclusive
  meshgrid - repeat x and y to make regular matrices
  ones - an array of ones
  rand - an array from the uniform distribution [0,1]
  randn - an array from the normal distribution
  rot90 - rotate matrix k*90 degrees counterclockwise
  squeeze - squeeze an array removing any dimensions of length 1
  tri - a triangular matrix
  tril - a lower triangular matrix
  triu - an upper triangular matrix
  vander - the Vandermonde matrix of vector x
  svd - singular value decomposition
  zeros - a matrix of zeros

_Probability

  normpdf - The Gaussian probability density function
  rand - random numbers from the uniform distribution
  randn - random numbers from the normal distribution

_Statistics

  amax - the maximum along dimension m
  amin - the minimum along dimension m
  corrcoef - correlation coefficient
  cov - covariance matrix
  mean - the mean along dimension m
  median - the median along dimension m
  norm - the norm of vector x
  prod - the product along dimension m
  ptp - the max-min along dimension m
  std - the standard deviation along dimension m
  asum - the sum along dimension m
  ksdensity - the kernel density estimate

_Time series analysis

  bartlett - M-point Bartlett window
  blackman - M-point Blackman window
  cohere - the coherence using average periodogram
  csd - the cross spectral density using average periodogram
  fft - the fast Fourier transform of vector x
  hamming - M-point Hamming window
  hanning - M-point Hanning window
  hist - compute the histogram of x
  kaiser - M-point Kaiser window
  psd - the power spectral density using average periodogram
  sinc - the sinc function of array x

_Dates
  date2num - convert python datetimes to numeric representation
  drange - create an array of numbers for date plots
  num2date - convert numeric type (float days since 0001) to datetime

_Other

  angle - the angle of a complex array
  griddata - interpolate irregularly distributed data to a regular grid
  load - Deprecated--please use loadtxt.
  loadtxt - load ASCII data into array.
  polyfit - fit x, y to an n-th order polynomial
  polyval - evaluate an n-th order polynomial
  roots - the roots of the polynomial coefficients in p
  save - Deprecated--please use savetxt.
  savetxt - save an array to an ASCII file.
  trapz - trapezoidal integration

__end

.. [*] MATLAB is a registered trademark of The MathWorks, Inc.
"""
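As a minimal illustration of the procedural style catalogued above, a handful of the listed commands are enough to produce and save a labeled plot. This sketch assumes matplotlib is installed; the pylab namespace also re-exports numpy helpers such as linspace::

    from pylab import linspace, plot, xlabel, ylabel, title, grid, legend, savefig

    x = linspace(0.0, 2.0, 100)        # 100 evenly spaced sample points
    plot(x, x**2, label="x squared")   # make a line plot on the current axes
    xlabel("x")
    ylabel("y")
    title("A minimal pylab example")
    grid(True)                         # set whether gridding is on
    legend()                           # make an axes legend
    savefig("example.png")             # save the current figure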
# from .. import feature_extraction
# import collections
# import numpy as np
# import scipy.stats
# import ml.utils as utils
# import ml.system as system
# import ml.opensmile
# import ml.parsing.arff
#
#
# class PitchExtractor(feature_extraction.FeatureExtractor):
#     def __init__(self, config):
#         self.params = utils.read_config(config, require=["pitch_config"])
#         self.temp_folder = config["temp_folder"] if "temp_folder" in config else "/tmp/opensmile_arffs/"
#         system.mkdir_p(self.temp_folder)
#
#     def extract(self, instance):
#         # last_seconds_values = self.params["extract_on_last_seconds"]
#         windows = self.params["extract_on_windows"]
#
#         times_pitch, pitch = self.pitch_track(instance)
#
#         all_values = {}
#
#         all_values["pitch_slope"] = collections.defaultdict(lambda: np.nan)
#         all_values["mean_pitch"] = collections.defaultdict(lambda: np.nan)
#
#         times_pitch = times_pitch - times_pitch.max()  # Align to 0.
#
#         for (w_from, w_to) in windows:
#             window = (w_from, w_to)
#             if not w_from:
#                 indices = np.array([True] * len(times_pitch))
#             else:
#                 indices = (times_pitch >= w_from) & (times_pitch < w_to)
#
#             all_values["mean_pitch"][window] = np.mean(pitch[indices])
#
#             if sum(indices) > 2:  # enough values to compute a slope
#                 all_values["pitch_slope"][window] = scipy.stats.linregress(times_pitch[indices], pitch[indices])[0]  # [0] => slope (m)
#
#         feat = {}
#         for (w_from, w_to) in windows:
#             window = (w_from, w_to)
#             if not w_from:
#                 in_ms = "all_ipu"
#             else:
#                 in_ms = "({},{})".format(int(w_from * 1000), int(w_to * 1000))
#
#             for feat_name in ["pitch_slope", "mean_pitch"]:
#                 feat["{}_{}".format(feat_name, in_ms)] = all_values[feat_name][window]
#
#         if self.params["extended_features"]:
#             feat["pitch"] = pitch
#             feat["times_pitch"] = times_pitch
#
#         return feat
#
#     def pitch_track(self, instance):
#         data = ml.opensmile.call_script(self.params["smile_extract_path"], self.temp_folder, self.params["pitch_config"], instance.filename)
#
#         times = ml.parsing.arff.get_column(data, "frameTime")
#         pitch = ml.parsing.arff.get_column(data, "F0final_sma")
#
#         times = times[pitch > 0]
#         pitch = pitch[pitch > 0]
#
#         return times, pitch
#
#         # fs = instance.audio.frame_rate
#         # ms_10_in_frames = math.ceil(fs / 100)  # Extract every 10 ms.
#         # min_pitch = 50 if instance.speaker_male() else 75
#         # max_pitch = 300 if instance.speaker_male() else 500
#         # x = np.array(instance.audio.get_array_of_samples())
#         # pitch_swipe = pysptk.swipe(x.astype(np.float64), fs=fs, hopsize=ms_10_in_frames, min=min_pitch, max=max_pitch, otype="pitch", threshold=0.27)
#         #
#         # times = np.arange(0, len(pitch_swipe)) / 100.0
#         # filtered_times = times[pitch_swipe > 0]
#         # filtered_pitch_swipe = pitch_swipe[pitch_swipe > 0]
#         #
#         # return filtered_times, filtered_pitch_swipe
#
#     def batch_extract(self, instances):
#         return [self.extract(instance) for instance in instances]
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, option, raw=False, vars=None, fallback=_UNSET)
    Like get(), but convert value to a float.

getboolean(section, option, raw=False, vars=None, fallback=_UNSET)
    Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True.

items(section=_UNSET, raw=False, vars=None)
    If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT.

remove_section(section)
    Remove the given file section and all its options.

remove_option(section, option)
    Remove the given option from the given section.

set(section, option, value)
    Set the given option.

write(fp, space_around_delimiters=True)
    Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces.
"""
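A short, self-contained round trip through the API described above (standard library only; the section and option names are made up for the example)::

    from configparser import ConfigParser

    parser = ConfigParser()
    parser.read_string("[server]\nhost = example.com\nport = 8080\ndebug = yes\n")

    print(parser.sections())                     # ['server']
    print(parser.get("server", "host"))          # example.com
    print(parser.getint("server", "port"))       # 8080
    print(parser.getboolean("server", "debug"))  # True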
""" Algorithms and classes to support enumerative combinatorics. Currently just multiset partitions, but more could be added. Terminology (following Knuth, algorithm IP_ADDRESSM TAOCP) *multiset* aaabbcccc has a *partition* aaabc | bccc The submultisets, aaabc and bccc of the partition are called *parts*, or sometimes *vectors*. (Knuth notes that multiset partitions can be thought of as partitions of vectors of integers, where the ith element of the vector gives the multiplicity of element i.) The values a, b and c are *components* of the multiset. These correspond to elements of a set, but in a multiset can be present with a multiplicity greater than 1. The algorithm deserves some explanation. Think of the part aaabc from the multiset above. If we impose an ordering on the components of the multiset, we can represent a part with a vector, in which the value of the first element of the vector corresponds to the multiplicity of the first component in that part. Thus, aaabc can be represented by the vector [3, 1, 1]. We can also define an ordering on parts, based on the lexicographic ordering of the vector (leftmost vector element, i.e., the element with the smallest component number, is the most significant), so that [3, 1, 1] > [3, 1, 0] and [3, 1, 1] > [2, 1, 4]. The ordering on parts can be extended to an ordering on partitions: First, sort the parts in each partition, left-to-right in decreasing order. Then partition A is greater than partition B if A's leftmost/greatest part is greater than B's leftmost part. If the leftmost parts are equal, compare the second parts, and so on. In this ordering, the greatest partition of a given multiset has only one part. The least partition is the one in which the components are spread out, one per part. The enumeration algorithms in this file yield the partitions of the argument multiset in decreasing order. The main data structure is a stack of parts, corresponding to the current partition. An important invariant is that the parts on the stack are themselves in decreasing order. This data structure is decremented to find the next smaller partition. Most often, decrementing the partition will only involve adjustments to the smallest parts at the top of the stack, much as adjacent integers *usually* differ only in their last few digits. Knuth's algorithm uses two main operations on parts: Decrement - change the part so that it is smaller in the (vector) lexicographic order, but reduced by the smallest amount possible. For example, if the multiset has vector [5, 3, 1], and the bottom/greatest part is [4, 2, 1], this part would decrement to [4, 2, 0], while [4, 0, 0] would decrement to [3, 3, 1]. A singleton part is never decremented -- [1, 0, 0] is not decremented to [0, 3, 1]. Instead, the decrement operator needs to fail for this case. In Knuth's psuedocode, the decrement operator is step m5. Spread unallocated multiplicity - Once a part has been decremented, it cannot be the rightmost part in the partition. There is some multiplicity that has not been allocated, and new parts must be created above it in the stack to use up this multiplicity. To maintain the invariant that the parts on the stack are in decreasing order, these new parts must be less than or equal to the decremented part. For example, if the multiset is [5, 3, 1], and its most significant part has just been decremented to [5, 3, 0], the spread operation will add a new part so that the stack becomes [[5, 3, 0], [0, 0, 1]]. 
If the most significant part (for the same multiset) has been decremented to [2, 0, 0] the stack becomes [[2, 0, 0], [2, 0, 0], [1, 3, 1]]. In the pseudocode, the spread operation for one part is step m2. The complete spread operation is a loop of steps m2 and m3.

In order to facilitate the spread operation, Knuth stores, for each component of each part, not just the multiplicity of that component in the part, but also the total multiplicity available for this component in this part or any lesser part above it on the stack.

One added twist is that Knuth does not represent the part vectors as arrays. Instead, he uses a sparse representation, in which a component of a part is represented as a component number (c), plus the multiplicity of the component in that part (v) as well as the total multiplicity available for that component (u). This saves time that would be spent skipping over zeros.
"""
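The vector representation of parts is easy to reproduce with plain Python, and sympy exposes the enumeration itself through ``multiset_partitions``. This is an illustration, not the stack-based algorithm described above::

    from collections import Counter
    from sympy.utilities.iterables import multiset_partitions

    def as_vector(part, components):
        # multiplicity of each component in the part, e.g. 'aaabc' -> [3, 1, 1]
        counts = Counter(part)
        return [counts[c] for c in components]

    print(as_vector("aaabc", "abc"))  # [3, 1, 1]
    print(as_vector("bccc", "abc"))   # [0, 1, 3]

    # all partitions of the multiset {a, a, b}
    for partition in multiset_partitions(["a", "a", "b"]):
        print(partition)  # e.g. [['a', 'a', 'b']], then [['a', 'a'], ['b']], ...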
""" ============== Array Creation ============== Introduction ============ There are 5 general mechanisms for creating arrays: 1) Conversion from other Python structures (e.g., lists, tuples) 2) Intrinsic numpy array array creation objects (e.g., arange, ones, zeros, etc.) 3) Reading arrays from disk, either from standard or custom formats 4) Creating arrays from raw bytes through the use of strings or buffers 5) Use of special library functions (e.g., random) This section will not cover means of replicating, joining, or otherwise expanding or mutating existing arrays. Nor will it cover creating object arrays or record arrays. Both of those are covered in their own sections. Converting Python array_like Objects to Numpy Arrays ==================================================== In general, numerical data arranged in an array-like structure in Python can be converted to arrays through the use of the array() function. The most obvious examples are lists and tuples. See the documentation for array() for details for its use. Some objects may support the array-protocol and allow conversion to arrays this way. A simple way to find out if the object can be converted to a numpy array using array() is simply to try it interactively and see if it works! (The Python Way). Examples: :: >>> x = np.array([2,3,1,0]) >>> x = np.array([2, 3, 1, 0]) >>> x = np.array([[1,2.0],[0,0],(1+1j,3.)]) # note mix of tuple and lists, and types >>> x = np.array([[ 1.+0.j, 2.+0.j], [ 0.+0.j, 0.+0.j], [ 1.+1.j, 3.+0.j]]) Intrinsic Numpy Array Creation ============================== Numpy has built-in functions for creating arrays from scratch: zeros(shape) will create an array filled with 0 values with the specified shape. The default dtype is float64. ``>>> np.zeros((2, 3)) array([[ 0., 0., 0.], [ 0., 0., 0.]])`` ones(shape) will create an array filled with 1 values. It is identical to zeros in all other respects. arange() will create arrays with regularly incrementing values. Check the docstring for complete information on the various ways it can be used. A few examples will be given here: :: >>> np.arange(10) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.arange(2, 10, dtype=np.float) array([ 2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.arange(2, 3, 0.1) array([ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]) Note that there are some subtleties regarding the last usage that the user should be aware of that are described in the arange docstring. linspace() will create arrays with a specified number of elements, and spaced equally between the specified beginning and end values. For example: :: >>> np.linspace(1., 4., 6) array([ 1. , 1.6, 2.2, 2.8, 3.4, 4. ]) The advantage of this creation function is that one can guarantee the number of elements and the starting and end point, which arange() generally will not do for arbitrary start, stop, and step values. indices() will create a set of arrays (stacked as a one-higher dimensioned array), one per dimension with each representing variation in that dimension. An example illustrates much better than a verbal description: :: >>> np.indices((3,3)) array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]]) This is particularly useful for evaluating functions of multiple dimensions on a regular grid. Reading Arrays From Disk ======================== This is presumably the most common case of large array creation. 
The details, of course, depend greatly on the format of data on disk and so this section can only give general pointers on how to handle various formats.

Standard Binary Formats
-----------------------

Various fields have standard formats for array data. The following lists the ones with known python libraries to read them and return numpy arrays (there may be others for which it is possible to read and convert to numpy arrays so check the last section as well) ::

 HDF5: PyTables
 FITS: PyFITS

Examples of formats that cannot be read directly but for which conversion is not hard are those supported by libraries like PIL (able to read and write many image formats such as jpg, png, etc).

Common ASCII Formats
--------------------

Comma Separated Value files (CSV) are widely used (and an export and import option for programs like Excel). There are a number of ways of reading these files in Python. There are CSV functions in Python and functions in pylab (part of matplotlib). More generic ASCII files can be read using the io package in scipy.

Custom Binary Formats
---------------------

There are a variety of approaches one can use. If the file has a relatively simple format then one can write a simple I/O library and use the numpy fromfile() function and .tofile() method to read and write numpy arrays directly (mind your byteorder though!) If a good C or C++ library exists that reads the data, one can wrap that library with a variety of techniques, though that certainly is much more work and requires significantly more advanced knowledge to interface with C or C++.

Use of Special Libraries
------------------------

There are libraries that can be used to generate arrays for special purposes and it isn't possible to enumerate all of them. The most common uses are use of the many array generation functions in random that can generate arrays of random values, and some utility functions to generate special matrices (e.g. diagonal).
"""
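For instance, the delimited-text and raw-binary approaches above look like this in practice (the file name is made up for the example)::

    import io
    import numpy as np

    # delimited ASCII text: loadtxt parses it into an array
    text = io.StringIO("1,2,3\n4,5,6")
    a = np.loadtxt(text, delimiter=",")   # array([[1., 2., 3.], [4., 5., 6.]])

    # simple raw-binary round trip; dtype and byte order are your responsibility
    a.tofile("data.raw")
    b = np.fromfile("data.raw", dtype=a.dtype).reshape(a.shape)
    assert (a == b).all()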
""" Cython optimize zeros API ========================= The underlying C functions for the following root finders can be accessed directly using Cython: - `~scipy.optimize.bisect` - `~scipy.optimize.ridder` - `~scipy.optimize.brenth` - `~scipy.optimize.brentq` The Cython API for the zeros functions is similar except there is no ``disp`` argument. Import the zeros functions using ``cimport`` from `scipy.optimize.cython_optimize`. :: from scipy.optimize.cython_optimize cimport bisect, ridder, brentq, brenth Callback signature ------------------ The zeros functions in `~scipy.optimize.cython_optimize` expect a callback that takes a double for the scalar independent variable as the 1st argument and a user defined ``struct`` with any extra parameters as the 2nd argument. :: double (*callback_type)(double, void*) Examples -------- Usage of `~scipy.optimize.cython_optimize` requires Cython to write callbacks that are compiled into C. For more information on compiling Cython, see the `Cython Documentation <http://docs.cython.org/en/latest/index.html>`_. These are the basic steps: 1. Create a Cython ``.pyx`` file, for example: ``myexample.pyx``. 2. Import the desired root finder from `~scipy.optimize.cython_optimize`. 3. Write the callback function, and call the selected zeros function passing the callback, any extra arguments, and the other solver parameters. :: from scipy.optimize.cython_optimize cimport brentq # import math from Cython from libc cimport math myargs = {'C0': 1.0, 'C1': 0.7} # a dictionary of extra arguments XLO, XHI = 0.5, 1.0 # lower and upper search boundaries XTOL, RTOL, MITR = 1e-3, 1e-3, 10 # other solver parameters # user-defined struct for extra parameters ctypedef struct test_params: double C0 double C1 # user-defined callback cdef double f(double x, void *args): cdef test_params *myargs = <test_params *> args return myargs.C0 - math.exp(-(x - myargs.C1)) # Cython wrapper function cdef double brentq_wrapper_example(dict args, double xa, double xb, double xtol, double rtol, int mitr): # Cython automatically casts dictionary to struct cdef test_params myargs = args return brentq( f, xa, xb, <test_params *> &myargs, xtol, rtol, mitr, NULL) # Python function def brentq_example(args=myargs, xa=XLO, xb=XHI, xtol=XTOL, rtol=RTOL, mitr=MITR): '''Calls Cython wrapper from Python.''' return brentq_wrapper_example(args, xa, xb, xtol, rtol, mitr) 4. If you want to call your function from Python, create a Cython wrapper, and a Python function that calls the wrapper, or use ``cpdef``. Then, in Python, you can import and run the example. :: from myexample import brentq_example x = brentq_example() # 0.6999942848231314 5. Create a Cython ``.pxd`` file if you need to export any Cython functions. Full output ----------- The functions in `~scipy.optimize.cython_optimize` can also copy the full output from the solver to a C ``struct`` that is passed as its last argument. If you don't want the full output, just pass ``NULL``. The full output ``struct`` must be type ``zeros_full_output``, which is defined in `scipy.optimize.cython_optimize` with the following fields: - ``int funcalls``: number of function calls - ``int iterations``: number of iterations - ``int error_num``: error number - ``double root``: root of function The root is copied by `~scipy.optimize.cython_optimize` to the full output ``struct``. An error number of -1 means a sign error, -2 means a convergence error, and 0 means the solver converged. 
Continuing from the previous example::

    from scipy.optimize.cython_optimize cimport zeros_full_output

    # cython brentq solver with full output
    cdef zeros_full_output brentq_full_output_wrapper_example(
            dict args, double xa, double xb, double xtol, double rtol,
            int mitr):
        cdef test_params myargs = args
        cdef zeros_full_output my_full_output
        # use my_full_output instead of NULL
        brentq(f, xa, xb, &myargs, xtol, rtol, mitr, &my_full_output)
        return my_full_output

    # Python function
    def brent_full_output_example(args=myargs, xa=XLO, xb=XHI, xtol=XTOL,
                                  rtol=RTOL, mitr=MITR):
        '''Returns full output'''
        return brentq_full_output_wrapper_example(args, xa, xb, xtol, rtol, mitr)

    result = brent_full_output_example()
    # {'error_num': 0,
    #  'funcalls': 6,
    #  'iterations': 5,
    #  'root': 0.6999942848231314}
"""
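For comparison, the plain Python API exposes the same bookkeeping through ``full_output``; this mirrors, rather than uses, the Cython interface above::

    import math
    from scipy.optimize import brentq

    def f(x, C0, C1):
        return C0 - math.exp(-(x - C1))

    root, info = brentq(f, 0.5, 1.0, args=(1.0, 0.7), xtol=1e-3, rtol=1e-3,
                        maxiter=10, full_output=True)
    print(root)                                   # ~0.69999..., as above
    print(info.iterations, info.function_calls, info.converged)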
# Test 64-bit COMPARE LOGICAL IMMEDIATE AND BRANCH in cases where the sheer
# number of instructions causes some branches to be out of range.
# RUN: python %s | llc -mtriple=s390x-linux-gnu | FileCheck %s

# Construct:
#
# before0:
#   conditional branch to after0
#   ...
# beforeN:
#   conditional branch to after0
# main:
#   0xffb4 bytes, from MVIY instructions
#   conditional branch to main
# after0:
#   ...
#   conditional branch to main
# afterN:
#
# Each conditional branch sequence occupies 18 bytes if it uses a short
# branch and 24 if it uses a long one. The ones before "main:" have to
# take the branch length into account, which is 6 for short branches,
# so the final (0x4c - 6) / 18 == 3 blocks can use short branches.
# The ones after "main:" do not, so the first 0x4c / 18 == 4 blocks
# can use short branches. The conservative algorithm we use makes
# one of the forward branches unnecessarily long, as noted in the
# check output below.
#
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgfi [[REG]], 50
# CHECK: jgl [[LABEL:\.L[^ ]*]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgfi [[REG]], 51
# CHECK: jgl [[LABEL]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgfi [[REG]], 52
# CHECK: jgl [[LABEL]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgfi [[REG]], 53
# CHECK: jgl [[LABEL]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgfi [[REG]], 54
# CHECK: jgl [[LABEL]]
# ...as mentioned above, the next one could be a CLGIJL instead...
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgfi [[REG]], 55
# CHECK: jgl [[LABEL]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgijl [[REG]], 56, [[LABEL]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgijl [[REG]], 57, [[LABEL]]
# ...main goes here...
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgijl [[REG]], 100, [[LABEL:\.L[^ ]*]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgijl [[REG]], 101, [[LABEL]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgijl [[REG]], 102, [[LABEL]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgijl [[REG]], 103, [[LABEL]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgfi [[REG]], 104
# CHECK: jgl [[LABEL]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgfi [[REG]], 105
# CHECK: jgl [[LABEL]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgfi [[REG]], 106
# CHECK: jgl [[LABEL]]
# CHECK: lg [[REG:%r[0-5]]], 0(%r3)
# CHECK: sg [[REG]], 0(%r4)
# CHECK: clgfi [[REG]], 107
# CHECK: jgl [[LABEL]]
# # XML-RPC CLIENT LIBRARY # $Id$ # # an XML-RPC client interface for Python. # # the marshalling and response parser code can also be used to # implement XML-RPC servers. # # Notes: # this version is designed to work with Python 2.1 or newer. # # History: # 1999-01-14 fl Created # 1999-01-15 fl Changed dateTime to use localtime # 1999-01-16 fl Added Binary/base64 element, default to RPC2 service # 1999-01-19 fl Fixed array data element (from Skip Montanaro) # 1999-01-21 fl Fixed dateTime constructor, etc. # 1999-02-02 fl Added fault handling, handle empty sequences, etc. # 1999-02-10 fl Fixed problem with empty responses (from Skip Montanaro) # 1999-06-20 fl Speed improvements, pluggable parsers/transports (0.9.8) # 2000-11-28 fl Changed boolean to check the truth value of its argument # 2001-02-24 fl Added encoding/Unicode/SafeTransport patches # 2001-02-26 fl Added compare support to wrappers (0.9.9/1.0b1) # 2001-03-28 fl Make sure response tuple is a singleton # 2001-03-29 fl Don't require empty params element (from NAME 2001-06-10 fl Folded in _xmlrpclib accelerator support (1.0b2) # 2001-08-20 fl Base xmlrpclib.Error on built-in Exception (from NAME 2001-09-03 fl Allow Transport subclass to override getparser # 2001-09-10 fl Lazy import of urllib, cgi, xmllib (20x import speedup) # 2001-10-01 fl Remove containers from memo cache when done with them # 2001-10-01 fl Use faster escape method (80% dumps speedup) # 2001-10-02 fl More dumps microtuning # 2001-10-04 fl Make sure import expat gets a parser (from NAME 2001-10-10 sm Allow long ints to be passed as ints if they don't overflow # 2001-10-17 sm Test for int and long overflow (allows use on 64-bit systems) # 2001-11-12 fl Use repr() to marshal doubles (from NAME 2002-03-17 fl Avoid buffered read when possible (from NAME 2002-04-07 fl Added pythondoc comments # 2002-04-16 fl Added __str__ methods to datetime/binary wrappers # 2002-05-15 fl Added error constants (from NAME 2002-06-27 fl Merged with Python CVS version # 2002-10-22 fl Added basic authentication (based on code from NAME 2003-01-22 sm Add support for the bool type # 2003-02-27 gvr Remove apply calls # 2003-04-24 sm Use cStringIO if available # 2003-04-25 ak Add support for nil # 2003-06-15 gn Add support for time.struct_time # 2003-07-12 gp Correct marshalling of Faults # 2003-10-31 mvl Add multicall support # 2004-08-20 mvl Bump minimum supported Python version to 2.1 # # Copyright (c) 1999-2002 by Secret Labs AB. # Copyright (c) 1999-2002 by NAME EMAIL http://www.pythonware.com # # -------------------------------------------------------------------- # The XML-RPC client interface is # # Copyright (c) 1999-2002 by Secret Labs AB # Copyright (c) 1999-2002 by NAME By obtaining, using, and/or copying this software and/or its # associated documentation, you agree that you have read, understood, # and will comply with the following terms and conditions: # # Permission to use, copy, modify, and distribute this software and # its associated documentation for any purpose and without fee is # hereby granted, provided that the above copyright notice appears in # all copies, and that both that copyright notice and this permission # notice appear in supporting documentation, and that the name of # Secret Labs AB or the author not be used in advertising or publicity # pertaining to distribution of the software without specific, written # prior permission. 
# # SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD # TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- # ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR # BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY # DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, # WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS # ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE # OF THIS SOFTWARE. # --------------------------------------------------------------------
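# As a quick orientation for the library described above, here is a minimal
# Python 2-style client sketch; the endpoint URL and the add() method are
# hypothetical, and any XML-RPC server is addressed the same way.
import xmlrpclib

# Attribute access on the proxy builds the remote method name; calling it
# marshals the arguments into an XML-RPC <methodCall> request over HTTP.
server = xmlrpclib.ServerProxy('http://localhost:8000')  # hypothetical endpoint
result = server.add(2, 3)  # hypothetical remote method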
""" =================== Universal Functions =================== Ufuncs are, generally speaking, mathematical functions or operations that are applied element-by-element to the contents of an array. That is, the result in each output array element only depends on the value in the corresponding input array (or arrays) and on no other array elements. NumPy comes with a large suite of ufuncs, and scipy extends that suite substantially. The simplest example is the addition operator: :: >>> np.array([0,2,3,4]) + np.array([1,1,-1,2]) array([1, 3, 2, 6]) The unfunc module lists all the available ufuncs in numpy. Documentation on the specific ufuncs may be found in those modules. This documentation is intended to address the more general aspects of unfuncs common to most of them. All of the ufuncs that make use of Python operators (e.g., +, -, etc.) have equivalent functions defined (e.g. add() for +) Type coercion ============= What happens when a binary operator (e.g., +,-,\\*,/, etc) deals with arrays of two different types? What is the type of the result? Typically, the result is the higher of the two types. For example: :: float32 + float64 -> float64 int8 + int32 -> int32 int16 + float32 -> float32 float32 + complex64 -> complex64 There are some less obvious cases generally involving mixes of types (e.g. uints, ints and floats) where equal bit sizes for each are not capable of saving all the information in a different type of equivalent bit size. Some examples are int32 vs float32 or uint32 vs int32. Generally, the result is the higher type of larger size than both (if available). So: :: int32 + float32 -> float64 uint32 + int32 -> int64 Finally, the type coercion behavior when expressions involve Python scalars is different than that seen for arrays. Since Python has a limited number of types, combining a Python int with a dtype=np.int8 array does not coerce to the higher type but instead, the type of the array prevails. So the rules for Python scalars combined with arrays is that the result will be that of the array equivalent the Python scalar if the Python scalar is of a higher 'kind' than the array (e.g., float vs. int), otherwise the resultant type will be that of the array. For example: :: Python int + int8 -> int8 Python float + int8 -> float64 ufunc methods ============= Binary ufuncs support 4 methods. **.reduce(arr)** applies the binary operator to elements of the array in sequence. For example: :: >>> np.add.reduce(np.arange(10)) # adds all elements of array 45 For multidimensional arrays, the first dimension is reduced by default: :: >>> np.add.reduce(np.arange(10).reshape(2,5)) array([ 5, 7, 9, 11, 13]) The axis keyword can be used to specify different axes to reduce: :: >>> np.add.reduce(np.arange(10).reshape(2,5),axis=1) array([10, 35]) **.accumulate(arr)** applies the binary operator and generates an an equivalently shaped array that includes the accumulated amount for each element of the array. A couple examples: :: >>> np.add.accumulate(np.arange(10)) array([ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45]) >>> np.multiply.accumulate(np.arange(1,9)) array([ 1, 2, 6, 24, 120, 720, 5040, 40320]) The behavior for multidimensional arrays is the same as for .reduce(), as is the use of the axis keyword). **.reduceat(arr,indices)** allows one to apply reduce to selected parts of an array. It is a difficult method to understand. See the documentation at: **.outer(arr1,arr2)** generates an outer operation on the two arrays arr1 and arr2. 
It will work on multidimensional arrays (the shape of the result is the concatenation of the two input shapes): :: >>> np.multiply.outer(np.arange(3),np.arange(4)) array([[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 4, 6]]) Output arguments ================ All ufuncs accept an optional output array. The array must be of the expected output shape. Beware that if the type of the output array is of a different (and lower) type than the output result, the results may be silently truncated or otherwise corrupted in the downcast to the lower type. This usage is useful when one wants to avoid creating large temporary arrays and instead allows one to reuse the same array memory repeatedly (at the expense of not being able to use more convenient operator notation in expressions). Note that when the output argument is used, the ufunc still returns a reference to the result. >>> x = np.arange(2) >>> np.add(np.arange(2),np.arange(2.),x) array([0, 2]) >>> x array([0, 2]) and & or as ufuncs ================== Invariably people try to use the python 'and' and 'or' as logical operators (and quite understandably). But these operators do not behave as normal operators since Python treats these quite differently. They cannot be overloaded with array equivalents. Thus using 'and' or 'or' with an array results in an error. There are two alternatives: 1) use the ufunc functions logical_and() and logical_or(). 2) use the bitwise operators & and \\|. The drawback of these is that if the arguments to these operators are not boolean arrays, the result is likely incorrect. On the other hand, most usages of logical_and and logical_or are with boolean arrays. As long as one is careful, this is a convenient way to apply these operators. """
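A minimal doctest-style illustration of the two alternatives just described (assuming numpy is imported as np, and remembering that & is only trustworthy on boolean arrays): ::

    >>> a = np.array([True, True, False])
    >>> b = np.array([False, True, True])
    >>> np.logical_and(a, b)
    array([False, True, False], dtype=bool)
    >>> a & b   # equivalent here, since both arrays are boolean
    array([False, True, False], dtype=bool)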
#!/usr/bin/env python # Copyright 2016 MainNerve LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # # Apache License # Version 2.0, January 2004 # http://www.apache.org/licenses/ # # TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION # # 1. Definitions. # # "License" shall mean the terms and conditions for use, reproduction, # and distribution as defined by Sections 1 through 9 of this document. # # "Licensor" shall mean the copyright owner or entity authorized by # the copyright owner that is granting the License. # # "Legal Entity" shall mean the union of the acting entity and all # other entities that control, are controlled by, or are under common # control with that entity. For the purposes of this definition, # "control" means (i) the power, direct or indirect, to cause the # direction or management of such entity, whether by contract or # otherwise, or (ii) ownership of fifty percent (50%) or more of the # outstanding shares, or (iii) beneficial ownership of such entity. # # "You" (or "Your") shall mean an individual or Legal Entity # exercising permissions granted by this License. # # "Source" form shall mean the preferred form for making modifications, # including but not limited to software source code, documentation # source, and configuration files. # # "Object" form shall mean any form resulting from mechanical # transformation or translation of a Source form, including but # not limited to compiled object code, generated documentation, # and conversions to other media types. # # "Work" shall mean the work of authorship, whether in Source or # Object form, made available under the License, as indicated by a # copyright notice that is included in or attached to the work # (an example is provided in the Appendix below). # # "Derivative Works" shall mean any work, whether in Source or Object # form, that is based on (or derived from) the Work and for which the # editorial revisions, annotations, elaborations, or other modifications # represent, as a whole, an original work of authorship. For the purposes # of this License, Derivative Works shall not include works that remain # separable from, or merely link (or bind by name) to the interfaces of, # the Work and Derivative Works thereof. # # "Contribution" shall mean any work of authorship, including # the original version of the Work and any modifications or additions # to that Work or Derivative Works thereof, that is intentionally # submitted to Licensor for inclusion in the Work by the copyright owner # or by an individual or Legal Entity authorized to submit on behalf of # the copyright owner. 
For the purposes of this definition, "submitted" # means any form of electronic, verbal, or written communication sent # to the Licensor or its representatives, including but not limited to # communication on electronic mailing lists, source code control systems, # and issue tracking systems that are managed by, or on behalf of, the # Licensor for the purpose of discussing and improving the Work, but # excluding communication that is conspicuously marked or otherwise # designated in writing by the copyright owner as "Not a Contribution." # # "Contributor" shall mean Licensor and any individual or Legal Entity # on behalf of whom a Contribution has been received by Licensor and # subsequently incorporated within the Work. # # 2. Grant of Copyright License. Subject to the terms and conditions of # this License, each Contributor hereby grants to You a perpetual, # worldwide, non-exclusive, no-charge, royalty-free, irrevocable # copyright license to reproduce, prepare Derivative Works of, # publicly display, publicly perform, sublicense, and distribute the # Work and such Derivative Works in Source or Object form. # # 3. Grant of Patent License. Subject to the terms and conditions of # this License, each Contributor hereby grants to You a perpetual, # worldwide, non-exclusive, no-charge, royalty-free, irrevocable # (except as stated in this section) patent license to make, have made, # use, offer to sell, sell, import, and otherwise transfer the Work, # where such license applies only to those patent claims licensable # by such Contributor that are necessarily infringed by their # Contribution(s) alone or by combination of their Contribution(s) # with the Work to which such Contribution(s) was submitted. If You # institute patent litigation against any entity (including a # cross-claim or counterclaim in a lawsuit) alleging that the Work # or a Contribution incorporated within the Work constitutes direct # or contributory patent infringement, then any patent licenses # granted to You under this License for that Work shall terminate # as of the date such litigation is filed. # # 4. Redistribution. You may reproduce and distribute copies of the # Work or Derivative Works thereof in any medium, with or without # modifications, and in Source or Object form, provided that You # meet the following conditions: # # (a) You must give any other recipients of the Work or # Derivative Works a copy of this License; and # # (b) You must cause any modified files to carry prominent notices # stating that You changed the files; and # # (c) You must retain, in the Source form of any Derivative Works # that You distribute, all copyright, patent, trademark, and # attribution notices from the Source form of the Work, # excluding those notices that do not pertain to any part of # the Derivative Works; and # # (d) If the Work includes a "NOTICE" text file as part of its # distribution, then any Derivative Works that You distribute must # include a readable copy of the attribution notices contained # within such NOTICE file, excluding those notices that do not # pertain to any part of the Derivative Works, in at least one # of the following places: within a NOTICE text file distributed # as part of the Derivative Works; within the Source form or # documentation, if provided along with the Derivative Works; or, # within a display generated by the Derivative Works, if and # wherever such third-party notices normally appear. 
The contents # of the NOTICE file are for informational purposes only and # do not modify the License. You may add Your own attribution # notices within Derivative Works that You distribute, alongside # or as an addendum to the NOTICE text from the Work, provided # that such additional attribution notices cannot be construed # as modifying the License. # # You may add Your own copyright statement to Your modifications and # may provide additional or different license terms and conditions # for use, reproduction, or distribution of Your modifications, or # for any such Derivative Works as a whole, provided Your use, # reproduction, and distribution of the Work otherwise complies with # the conditions stated in this License. # # 5. Submission of Contributions. Unless You explicitly state otherwise, # any Contribution intentionally submitted for inclusion in the Work # by You to the Licensor shall be under the terms and conditions of # this License, without any additional terms or conditions. # Notwithstanding the above, nothing herein shall supersede or modify # the terms of any separate license agreement you may have executed # with Licensor regarding such Contributions. # # 6. Trademarks. This License does not grant permission to use the trade # names, trademarks, service marks, or product names of the Licensor, # except as required for reasonable and customary use in describing the # origin of the Work and reproducing the content of the NOTICE file. # # 7. Disclaimer of Warranty. Unless required by applicable law or # agreed to in writing, Licensor provides the Work (and each # Contributor provides its Contributions) on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied, including, without limitation, any warranties or conditions # of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A # PARTICULAR PURPOSE. You are solely responsible for determining the # appropriateness of using or redistributing the Work and assume any # risks associated with Your exercise of permissions under this License. # # 8. Limitation of Liability. In no event and under no legal theory, # whether in tort (including negligence), contract, or otherwise, # unless required by applicable law (such as deliberate and grossly # negligent acts) or agreed to in writing, shall any Contributor be # liable to You for damages, including any direct, indirect, special, # incidental, or consequential damages of any character arising as a # result of this License or out of the use or inability to use the # Work (including but not limited to damages for loss of goodwill, # work stoppage, computer failure or malfunction, or any and all # other commercial damages or losses), even if such Contributor # has been advised of the possibility of such damages. # # 9. Accepting Warranty or Additional Liability. While redistributing # the Work or Derivative Works thereof, You may choose to offer, # and charge a fee for, acceptance of support, warranty, indemnity, # or other liability obligations and/or rights consistent with this # License. However, in accepting such obligations, You may act only # on Your own behalf and on Your sole responsibility, not on behalf # of any other Contributor, and only if You agree to indemnify, # defend, and hold each Contributor harmless for any liability # incurred by, or claims asserted against, such Contributor by reason # of your accepting any such warranty or additional liability. 
# # END OF TERMS AND CONDITIONS # # APPENDIX: How to apply the Apache License to your work. # # To apply the Apache License to your work, attach the following # boilerplate notice, with the fields enclosed by brackets "[]" # replaced with your own identifying information. (Don't include # the brackets!) The text should be enclosed in the appropriate # comment syntax for the file format. We also recommend that a # file or class name and description of purpose be included on the # same "printed page" as the copyright notice for easier # identification within third-party archives. # # Copyright [yyyy] [name of copyright owner] # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #
""" ======== Glossary ======== along an axis Axes are defined for arrays with more than one dimension. A 2-dimensional array has two corresponding axes: the first running vertically downwards across rows (axis 0), and the second running horizontally across columns (axis 1). Many operation can take place along one of these axes. For example, we can sum each row of an array, in which case we operate along columns, or axis 1:: >>> x = np.arange(12).reshape((3,4)) >>> x array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.sum(axis=1) array([ 6, 22, 38]) array A homogeneous container of numerical elements. Each element in the array occupies a fixed amount of memory (hence homogeneous), and can be a numerical element of a single type (such as float, int or complex) or a combination (such as ``(float, int, float)``). Each array has an associated data-type (or ``dtype``), which describes the numerical type of its elements:: >>> x = np.array([1, 2, 3], float) >>> x array([ 1., 2., 3.]) >>> x.dtype # floating point number, 64 bits of memory per element dtype('float64') # More complicated data type: each array element is a combination of # and integer and a floating point number >>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)]) array([(1, 2.0), (3, 4.0)], dtype=[('x', '<i4'), ('y', '<f8')]) Fast element-wise operations, called `ufuncs`_, operate on arrays. array_like Any sequence that can be interpreted as an ndarray. This includes nested lists, tuples, scalars and existing arrays. attribute A property of an object that can be accessed using ``obj.attribute``, e.g., ``shape`` is an attribute of an array:: >>> x = np.array([1, 2, 3]) >>> x.shape (3,) BLAS `Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_ broadcast NumPy can do operations on arrays whose shapes are mismatched:: >>> x = np.array([1, 2]) >>> y = np.array([[3], [4]]) >>> x array([1, 2]) >>> y array([[3], [4]]) >>> x + y array([[4, 5], [5, 6]]) See `doc.broadcasting`_ for more information. C order See `row-major` column-major A way to represent items in a N-dimensional array in the 1-dimensional computer memory. In column-major order, the leftmost index "varies the fastest": for example the array:: [[1, 2, 3], [4, 5, 6]] is represented in the column-major order as:: [1, 4, 2, 5, 3, 6] Column-major order is also known as the Fortran order, as the Fortran programming language uses it. decorator An operator that transforms a function. For example, a ``log`` decorator may be defined to print debugging information upon function execution:: >>> def log(f): ... def new_logging_func(*args, **kwargs): ... print "Logging call with parameters:", args, kwargs ... return f(*args, **kwargs) ... ... return new_logging_func Now, when we define a function, we can "decorate" it using ``log``:: >>> @log ... def add(a, b): ... return a + b Calling ``add`` then yields: >>> add(1, 2) Logging call with parameters: (1, 2) {} 3 dictionary Resembling a language dictionary, which provides a mapping between words and descriptions thereof, a Python dictionary is a mapping between two objects:: >>> x = {1: 'one', 'two': [1, 2]} Here, `x` is a dictionary mapping keys to values, in this case the integer 1 to the string "one", and the string "two" to the list ``[1, 2]``. The values may be accessed using their corresponding keys:: >>> x[1] 'one' >>> x['two'] [1, 2] Note that dictionaries are not stored in any specific order. Also, most mutable (see *immutable* below) objects, such as lists, may not be used as keys. 
For more information on dictionaries, read the `Python tutorial <http://docs.python.org/tut>`_. Fortran order See `column-major` flattened Collapsed to a one-dimensional array. See `ndarray.flatten`_ for details. immutable An object that cannot be modified after execution is called immutable. Two common examples are strings and tuples. instance A class definition gives the blueprint for constructing an object:: >>> class House(object): ... wall_colour = 'white' Yet, we have to *build* a house before it exists:: >>> h = House() # build a house Now, ``h`` is called a ``House`` instance. An instance is therefore a specific realisation of a class. iterable A sequence that allows "walking" (iterating) over items, typically using a loop such as:: >>> x = [1, 2, 3] >>> [item**2 for item in x] [1, 4, 9] It is often used in combination with ``enumerate``:: >>> keys = ['a','b','c'] >>> for n, k in enumerate(keys): ... print "Key %d: %s" % (n, k) ... Key 0: a Key 1: b Key 2: c list A Python container that can hold any number of objects or items. The items do not have to be of the same type, and can even be lists themselves:: >>> x = [2, 2.0, "two", [2, 2.0]] The list `x` contains 4 items, each of which can be accessed individually:: >>> x[2] # the string 'two' 'two' >>> x[3] # a list, containing an integer 2 and a float 2.0 [2, 2.0] It is also possible to select more than one item at a time, using *slicing*:: >>> x[0:2] # or, equivalently, x[:2] [2, 2.0] In code, arrays are often conveniently expressed as nested lists:: >>> np.array([[1, 2], [3, 4]]) array([[1, 2], [3, 4]]) For more information, read the section on lists in the `Python tutorial <http://docs.python.org/tut>`_. For a mapping type (key-value), see *dictionary*. mask A boolean array, used to select only certain elements for an operation:: >>> x = np.arange(5) >>> x array([0, 1, 2, 3, 4]) >>> mask = (x > 2) >>> mask array([False, False, False, True, True], dtype=bool) >>> x[mask] = -1 >>> x array([ 0, 1, 2, -1, -1]) masked array Array that suppresses values indicated by a mask:: >>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True]) >>> x masked_array(data = [-- 2.0 --], mask = [ True False True], fill_value = 1e+20) <BLANKLINE> >>> x + [1, 2, 3] masked_array(data = [-- 4.0 --], mask = [ True False True], fill_value = 1e+20) <BLANKLINE> Masked arrays are often used when operating on arrays containing missing or invalid entries. matrix A 2-dimensional ndarray that preserves its two-dimensional nature throughout operations. It has certain special operations, such as ``*`` (matrix multiplication) and ``**`` (matrix power), defined:: >>> x = np.mat([[1, 2], [3, 4]]) >>> x matrix([[1, 2], [3, 4]]) >>> x**2 matrix([[ 7, 10], [15, 22]]) method A function associated with an object. For example, each ndarray has a method called ``repeat``:: >>> x = np.array([1, 2, 3]) >>> x.repeat(2) array([1, 1, 2, 2, 3, 3]) ndarray See *array*. reference If ``a`` is a reference to ``b``, then ``(a is b) == True``. Therefore, ``a`` and ``b`` are different names for the same Python object. row-major A way to represent items in an N-dimensional array in the 1-dimensional computer memory. In row-major order, the rightmost index "varies the fastest": for example the array:: [[1, 2, 3], [4, 5, 6]] is represented in the row-major order as:: [1, 2, 3, 4, 5, 6] Row-major order is also known as the C order, as the C programming language uses it. New Numpy arrays are by default in row-major order. 
self Often seen in method signatures, ``self`` refers to the instance of the associated class. For example: >>> class Paintbrush(object): ... color = 'blue' ... ... def paint(self): ... print "Painting the city %s!" % self.color ... >>> p = Paintbrush() >>> p.color = 'red' >>> p.paint() # self refers to 'p' Painting the city red! slice Used to select only certain elements from a sequence:: >>> x = range(5) >>> x [0, 1, 2, 3, 4] >>> x[1:3] # slice from 1 to 3 (excluding 3 itself) [1, 2] >>> x[1:5:2] # slice from 1 to 5, but skipping every second element [1, 3] >>> x[::-1] # slice a sequence in reverse [4, 3, 2, 1, 0] Arrays may have more than one dimension, each of which can be sliced individually:: >>> x = np.array([[1, 2], [3, 4]]) >>> x array([[1, 2], [3, 4]]) >>> x[:, 1] array([2, 4]) tuple A sequence that may contain a variable number of items of any kind. A tuple is immutable, i.e., once constructed it cannot be changed. Similar to a list, it can be indexed and sliced:: >>> x = (1, 'one', [1, 2]) >>> x (1, 'one', [1, 2]) >>> x[0] 1 >>> x[:2] (1, 'one') A useful concept is "tuple unpacking", which allows variables to be assigned to the contents of a tuple:: >>> x, y = (1, 2) >>> x, y = 1, 2 This is often used when a function returns multiple values: >>> def return_many(): ... return 1, 'alpha', None >>> a, b, c = return_many() >>> a, b, c (1, 'alpha', None) >>> a 1 >>> b 'alpha' ufunc Universal function. A fast element-wise array operation. Examples include ``add``, ``sin`` and ``logical_or``. view An array that does not own its data, but refers to another array's data instead. For example, we may create a view that only shows every second element of another array:: >>> x = np.arange(5) >>> x array([0, 1, 2, 3, 4]) >>> y = x[::2] >>> y array([0, 2, 4]) >>> x[0] = 3 # changing x changes y as well, since y is a view on x >>> y array([3, 2, 4]) wrapper Python is a high-level (highly abstracted, or English-like) language. This abstraction comes at a price in execution speed, and sometimes it becomes necessary to use lower level languages to do fast computations. A wrapper is code that provides a bridge between high- and low-level languages, allowing, e.g., Python to execute code written in C or Fortran. Examples include ctypes, SWIG and Cython (which wraps C and C++) and f2py (which wraps Fortran). """
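A small doctest-style sketch tying the *row-major* and *column-major* glossary entries together (assuming numpy is imported as np; ``ravel`` returns the elements in the requested memory order): ::

    >>> x = np.array([[1, 2, 3], [4, 5, 6]])
    >>> x.ravel(order='C')   # row-major (C) order
    array([1, 2, 3, 4, 5, 6])
    >>> x.ravel(order='F')   # column-major (Fortran) order
    array([1, 4, 2, 5, 3, 6])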
""" TestCmd.py: a testing framework for commands and scripts. The TestCmd module provides a framework for portable automated testing of executable commands and scripts (in any language, not just Python), especially commands and scripts that require file system interaction. In addition to running tests and evaluating conditions, the TestCmd module manages and cleans up one or more temporary workspace directories, and provides methods for creating files and directories in those workspace directories from in-line data, here-documents), allowing tests to be completely self-contained. A TestCmd environment object is created via the usual invocation: import TestCmd test = TestCmd.TestCmd() There are a bunch of keyword arguments available at instantiation: test = TestCmd.TestCmd(description = 'string', program = 'program_or_script_to_test', interpreter = 'script_interpreter', workdir = 'prefix', subdir = 'subdir', verbose = Boolean, match = default_match_function, diff = default_diff_function, combine = Boolean) There are a bunch of methods that let you do different things: test.verbose_set(1) test.description_set('string') test.program_set('program_or_script_to_test') test.interpreter_set('script_interpreter') test.interpreter_set(['script_interpreter', 'arg']) test.workdir_set('prefix') test.workdir_set('') test.workpath('file') test.workpath('subdir', 'file') test.subdir('subdir', ...) test.rmdir('subdir', ...) test.write('file', "contents\n") test.write(['subdir', 'file'], "contents\n") test.read('file') test.read(['subdir', 'file']) test.read('file', mode) test.read(['subdir', 'file'], mode) test.writable('dir', 1) test.writable('dir', None) test.preserve(condition, ...) test.cleanup(condition) test.command_args(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program') test.run(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program', chdir = 'directory_to_chdir_to', stdin = 'input to feed to the program\n') universal_newlines = True) p = test.start(program = 'program_or_script_to_run', interpreter = 'script_interpreter', arguments = 'arguments to pass to program', universal_newlines = None) test.finish(self, p) test.pass_test() test.pass_test(condition) test.pass_test(condition, function) test.fail_test() test.fail_test(condition) test.fail_test(condition, function) test.fail_test(condition, function, skip) test.no_result() test.no_result(condition) test.no_result(condition, function) test.no_result(condition, function, skip) test.stdout() test.stdout(run) test.stderr() test.stderr(run) test.symlink(target, link) test.banner(string) test.banner(string, width) test.diff(actual, expected) test.match(actual, expected) test.match_exact("actual 1\nactual 2\n", "expected 1\nexpected 2\n") test.match_exact(["actual 1\n", "actual 2\n"], ["expected 1\n", "expected 2\n"]) test.match_re("actual 1\nactual 2\n", regex_string) test.match_re(["actual 1\n", "actual 2\n"], list_of_regexes) test.match_re_dotall("actual 1\nactual 2\n", regex_string) test.match_re_dotall(["actual 1\n", "actual 2\n"], list_of_regexes) test.tempdir() test.tempdir('temporary-directory') test.sleep() test.sleep(seconds) test.where_is('foo') test.where_is('foo', 'PATH1:PATH2') test.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4') test.unlink('file') test.unlink('subdir', 'file') The TestCmd module provides pass_test(), fail_test(), and no_result() unbound functions that report test results for use with the 
Aegis change management system. These methods terminate the test immediately, reporting PASSED, FAILED, or NO RESULT, and exiting with status 0 (success), 1, or 2, respectively. This allows for a distinction between an actual failed test and a test that could not be properly evaluated because of an external condition (such as a full file system or incorrect permissions). import TestCmd TestCmd.pass_test() TestCmd.pass_test(condition) TestCmd.pass_test(condition, function) TestCmd.fail_test() TestCmd.fail_test(condition) TestCmd.fail_test(condition, function) TestCmd.fail_test(condition, function, skip) TestCmd.no_result() TestCmd.no_result(condition) TestCmd.no_result(condition, function) TestCmd.no_result(condition, function, skip) The TestCmd module also provides unbound functions that handle matching in the same way as the match_*() methods described above. import TestCmd test = TestCmd.TestCmd(match = TestCmd.match_exact) test = TestCmd.TestCmd(match = TestCmd.match_re) test = TestCmd.TestCmd(match = TestCmd.match_re_dotall) The TestCmd module provides unbound functions that can be used for the "diff" argument to TestCmd.TestCmd instantiation: import TestCmd test = TestCmd.TestCmd(match = TestCmd.match_re, diff = TestCmd.diff_re) test = TestCmd.TestCmd(diff = TestCmd.simple_diff) The "diff" argument can also be used with standard difflib functions: import difflib test = TestCmd.TestCmd(diff = difflib.context_diff) test = TestCmd.TestCmd(diff = difflib.unified_diff) Lastly, the where_is() method also exists in an unbound function version. import TestCmd TestCmd.where_is('foo') TestCmd.where_is('foo', 'PATH1:PATH2') TestCmd.where_is('foo', 'PATH1;PATH2', '.suffix3;.suffix4') """
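# Putting the pieces above together, a minimal self-contained test might look
# like this. The use of /bin/echo as the program under test is illustrative
# only and assumes a Unix-like system.
import TestCmd

test = TestCmd.TestCmd(program='/bin/echo', workdir='')  # fresh temporary workdir
test.run(arguments='hello')                    # runs: /bin/echo hello
test.fail_test(test.stdout() != 'hello\n')     # FAILED if the output differs
test.pass_test()                               # otherwise PASSED, exit status 0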
"""Drag-and-drop support for Tkinter. This is very preliminary. I currently only support dnd *within* one application, between different windows (or within the same window). I an trying to make this as generic as possible -- not dependent on the use of a particular widget or icon type, etc. I also hope that this will work with Pmw. To enable an object to be dragged, you must create an event binding for it that starts the drag-and-drop process. Typically, you should bind <ButtonPress> to a callback function that you write. The function should call Tkdnd.dnd_start(source, event), where 'source' is the object to be dragged, and 'event' is the event that invoked the call (the argument to your callback function). Even though this is a class instantiation, the returned instance should not be stored -- it will be kept alive automatically for the duration of the drag-and-drop. When a drag-and-drop is already in process for the Tk interpreter, the call is *ignored*; this normally averts starting multiple simultaneous dnd processes, e.g. because different button callbacks all dnd_start(). The object is *not* necessarily a widget -- it can be any application-specific object that is meaningful to potential drag-and-drop targets. Potential drag-and-drop targets are discovered as follows. Whenever the mouse moves, and at the start and end of a drag-and-drop move, the Tk widget directly under the mouse is inspected. This is the target widget (not to be confused with the target object, yet to be determined). If there is no target widget, there is no dnd target object. If there is a target widget, and it has an attribute dnd_accept, this should be a function (or any callable object). The function is called as dnd_accept(source, event), where 'source' is the object being dragged (the object passed to dnd_start() above), and 'event' is the most recent event object (generally a <Motion> event; it can also be <ButtonPress> or <ButtonRelease>). If the dnd_accept() function returns something other than None, this is the new dnd target object. If dnd_accept() returns None, or if the target widget has no dnd_accept attribute, the target widget's parent is considered as the target widget, and the search for a target object is repeated from there. If necessary, the search is repeated all the way up to the root widget. If none of the target widgets can produce a target object, there is no target object (the target object is None). The target object thus produced, if any, is called the new target object. It is compared with the old target object (or None, if there was no old target widget). There are several cases ('source' is the source object, and 'event' is the most recent event object): - Both the old and new target objects are None. Nothing happens. - The old and new target objects are the same object. Its method dnd_motion(source, event) is called. - The old target object was None, and the new target object is not None. The new target object's method dnd_enter(source, event) is called. - The new target object is None, and the old target object is not None. The old target object's method dnd_leave(source, event) is called. - The old and new target objects differ and neither is None. The old target object's method dnd_leave(source, event), and then the new target object's method dnd_enter(source, event) is called. Once this is done, the new target object replaces the old one, and the Tk mainloop proceeds. 
The return value of the methods mentioned above is ignored; if they raise an exception, the normal exception handling mechanisms take over. A drag-and-drop process can end in two ways: a final target object is selected, or no final target object is selected. When a final target object is selected, it will always have been notified of the potential drop by a call to its dnd_enter() method, as described above, and possibly one or more calls to its dnd_motion() method; its dnd_leave() method has not been called since the last call to dnd_enter(). The target is notified of the drop by a call to its method dnd_commit(source, event). If no final target object is selected, and there was an old target object, its dnd_leave(source, event) method is called to complete the dnd sequence. Finally, the source object is notified that the drag-and-drop process is over, by a call to source.dnd_end(target, event), specifying either the selected target object, or None if no target object was selected. The source object can use this to implement the commit action; this is sometimes simpler than doing it in the target's dnd_commit(). The target's dnd_commit() method could then simply be aliased to dnd_leave(). At any time during a dnd sequence, the application can cancel the sequence by calling the cancel() method on the object returned by dnd_start(). This will call dnd_leave() if a target is currently active; it will never call dnd_commit(). """
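# To make the protocol concrete, here is a minimal sketch of a target object
# implementing the methods described above. The class name and the empty
# bodies are placeholders; a real target would typically give visual feedback.
class Target:
    def dnd_accept(self, source, event):
        # Returning a non-None value makes this object the dnd target object.
        return self

    def dnd_enter(self, source, event):
        pass   # e.g. highlight the widget

    def dnd_motion(self, source, event):
        pass   # e.g. move an insertion cursor with the mouse

    def dnd_leave(self, source, event):
        pass   # e.g. remove the highlight

    def dnd_commit(self, source, event):
        pass   # perform the actual drop

# The source side is symmetric: bind <ButtonPress> on the draggable widget to
# a callback that calls Tkdnd.dnd_start(source, event), and give the source
# object a dnd_end(target, event) method to be told how the sequence ended.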
#!/usr/bin/env python # python mapk/plot_mean.py 09/data/mapk3_1e-15_0.25_fixed_0_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.5_fixed_0_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_0_normal_ALL_tc.dat 09/data/mapk3_1e-15_2_fixed_0_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_0_normal_ALL_tc.dat # python mapk/plot_mean.py 09/data/mapk3_1e-15_0.25_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.5_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_2_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-2_normal_ALL_tc.dat # python mapk/plot_mean.py 09/data/mapk3_1e-15_0.25_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.5_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_2_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-6_normal_ALL_tc.dat # python mapk/plot_mean.py 09/data/mapk3_1e-15_4_fixed_0_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-5_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-4_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-3_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-1_normal_ALL_tc.dat # with lower D, ti=1e-6 # python plot_mean.py 09-3/data/mapk3_1e-15_0.03125_fixed_1e-6_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.0625_fixed_1e-6_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.125_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.25_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.5_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_2_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-6_normal_ALL_tc.dat # fewer data sets # python plot_mean.py 09-3/data/mapk3_1e-15_0.03125_fixed_1e-6_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.0625_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.25_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-6_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-6_normal_ALL_tc.dat # with lower D, ti=1e-2 # python plot_mean.py 09-3/data/mapk3_1e-15_0.03125_fixed_1e-2_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.0625_fixed_1e-2_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.125_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.25_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.5_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_2_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-2_normal_ALL_tc.dat # fewer data sets # python plot_mean.py 09-3/data/mapk3_1e-15_0.03125_fixed_1e-2_normal_ALL_tc.dat 09-3/data/mapk3_1e-15_0.0625_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_0.25_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_1_fixed_1e-2_normal_ALL_tc.dat 09/data/mapk3_1e-15_4_fixed_1e-2_normal_ALL_tc.dat
# -*- coding: utf-8 -*- # # Picard, the next-generation MusicBrainz tagger # # Copyright (C) 2004 NAME Copyright (C) 2006 NAME Copyright (C) 2014 Sophist-UK # Copyright (C) 2014, 2018, 2020 NAME Copyright (C) 2017 NAME Copyright (C) 2018-2019 NAME Copyright (C) 2020 NAME Copyright (C) 2020 Undearius # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # This module provides functionality for simplifying unicode strings. # The unicode character set (of over 1m codepoints and 24,000 characters) includes: # Normal ascii (latin) non-accented characters # Combined latin characters e.g. ae in normal usage # Compatibility combined latin characters (retained for compatibility with other character sets) # These can look very similar to normal characters and can be confusing for searches, sort orders etc. # Non-latin (e.g. japanese, greek, hebrew etc.) characters # Both latin and non-latin characters can be accented. Accents can be either: # Provided by separate nonspacing_mark characters which are visually overlaid (visually 1 character is actually 2); or # Integrated accented characters (i.e. non-accented characters combined with a nonspacing_mark into a single character) # Again these can be confusing for searches, sort orders etc. # Punctuation can also be confusing in unicode e.g. several types of single or double quote mark. # For latin script: # Combined characters, accents and punctuation can be visually similar but look different to search engines, # sort orders etc. and the number of ways to use similar looking characters can (does) result in inconsistent # usage inside Music metadata. # # Simplifying # the unicode character sets by many-to-one mappings can improve consistency and reduce confusion, # however sometimes the choice of specific characters can be a deliberate part of an album, song title or artist name # (and should not therefore be changed without careful thought) and occasionally the choice of characters can be # malicious (i.e. to defeat firewalls or spam filters or to appear to be something else). # # Finally, given the size of the unicode character set, fonts are unlikely to display all characters, # making simplification a necessity. # # Simplification may also be needed to make tags conform to ISO-8859-1 (extended ascii) or to make tags or filenames # into ascii, perhaps because the file system or player cannot support unicode. # # Non-latin scripts may also need to be converted to latin scripts through: # Translation (e.g. hebrew word for mother is translated to "mother"); or # Transliteration (e.g. the SOUND of the hebrew letter or word is spelt out in latin) # These are non-trivial, and the software to do these is far from comprehensive. # This module provides utility functions to enable simplification of latin and punctuation unicode: # 1. simplify compatibility characters; # 2. split combined characters; # 3. 
remove accents (entirely or if not in ISO-8859-1 as applicable); # 4. replace remaining non-ascii or non-ISO-8859-1 characters with a default character # This module also provides an extension infrastructure to allow translation and/or transliteration plugins to be added.
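# A minimal sketch of steps 1-4 above using only the standard library; the
# function name and the default replacement character are illustrative, not
# Picard's actual API.
import unicodedata

def simplify_to_ascii(text, default='_'):
    # Steps 1 and 2: NFKD normalization replaces compatibility characters and
    # splits integrated accented characters into base + nonspacing mark.
    decomposed = unicodedata.normalize('NFKD', text)
    # Step 3: drop the nonspacing (combining) marks, i.e. the accents.
    stripped = ''.join(c for c in decomposed if not unicodedata.combining(c))
    # Step 4: replace whatever still falls outside ascii with a default.
    return ''.join(c if ord(c) < 128 else default for c in stripped)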
""" [2015-12-16] Challenge #245 [Intermediate] Ggggggg gggg Ggggg-ggggg! https://www.reddit.com/r/dailyprogrammer/comments/3x3hqa/20151216_challenge_245_intermediate_ggggggg_gggg/ We have discovered a new species of aliens! They look like [this](https://www.redditstatic.com/about/assets/reddit-alien.png) and are trying to communicate with us using the /r/ggggg subreddit! As you might have been able to tell, though, it is awfully hard to understand what they're saying since their super-advanced alphabet only makes use of two letters: "g" and "G". Fortunately, their numbers, spacing and punctuation are the same. We are going to write a program to translate to and from our alphabet to theirs, so we can be enlightened by their intelligence. Feel free to code either the encoding program, the decoding program, or both. ^Also, ^please ^do ^not ^actually ^harass ^the ^residents ^of ^/r/ggggg. # Part 1: Decoding First, we need to be able to understand what the Ggggg aliens are saying. Fortunately, they are cooperative in this matter, and they helpfully include a "key" to translate between their g-based letters and our Latin letters. Your decoder program needs to read this key from the first line of the input, then use it to translate the rest of the input. ## Sample decoder input 1 H GgG d gGg e ggG l GGg o gGG r Ggg w ggg GgGggGGGgGGggGG, ggggGGGggGGggGg! ## Sample decoder output 1 Hello, world! **Explanation:** Reading the input, the key is: * H = GgG * d = gGg * e = ggG * l = GGg * o = gGG * r = Ggg * w = ggg When we read the message from left to right, we can divide it into letters like so (alternating letters bolded): &gt; **GgG**ggG**GGg**GGg**gGG**, **ggg**gGG**Ggg**GGg**gGg**! Take those letter groups and turn them into letters using the key, and you get "Hello, world!" ## Sample decoder input 2 a GgG d GggGg e GggGG g GGGgg h GGGgG i GGGGg l GGGGG m ggg o GGg p Gggg r gG y ggG GGGgGGGgGGggGGgGggG /gG/GggGgGgGGGGGgGGGGGggGGggggGGGgGGGgggGGgGggggggGggGGgG! Note that the letters are *not* guaranteed to be of equal length. ## Sample decoder output 2 hooray /r/dailyprogrammer! # Part 2: Encoding Next, we will go in the other direction. Come up with a key based on the letters "g" and "G" that maps all the letters in a given message to Ggggg equivalents, use it to translate the message, then output both the key and the translated message. You can double-check your work using the decoding script from part 1. ## Sample input Hello, world! ## Sample output H GgG d gGg e ggG l GGg o gGG r Ggg w ggg GgGggGGGgGGggGG, ggggGGGggGGggGg! Your key (and thus message) may end up being completely different than the one provided here. That's fine, as long as it can be translated back. # Part 2.1 (Bonus points): Compression Just as it annoys us to see someone typing "llliiiiikkkeeee ttttthhhiiiisssss", the Ggggg aliens don't actually enjoy unnecessary verbosity. Modify your encoding script to create a key that results in the *shortest possible Ggggg message*. You should be able to decode the output using the same decoder used in part 1 (the second sample input/output in part 1 is actually compressed). Here's a [hint](https://en.wikipedia.org/wiki/Variable-length_code). ## Sample input: Here's the thing. You said a "jackdaw is a crow." Is it in the same family? Yes. No one's arguing that. As someone who is a scientist who studies crows, I am telling you, specifically, in science, no one calls jackdaws crows. If you want to be "specific" like you said, then you shouldn't either. They're not the same thing. 
If you're saying "crow family" you're referring to the taxonomic grouping of Corvidae, which includes things from nutcrackers to blue jays to ravens. So your reasoning for calling a jackdaw a crow is because random people "call the black ones crows?" Let's get grackles and blackbirds in there, then, too. Also, calling someone a human or an ape? It's not one or the other, that's not how taxonomy works. They're both. A jackdaw is a jackdaw and a member of the crow family. But that's not what you said. You said a jackdaw is a crow, which is not true unless you're okay with calling all members of the crow family crows, which means you'd call blue jays, ravens, and other birds crows, too. Which you said you don't. It's okay to just admit you're wrong, you know? ## Sample output: Found here (a bit too big to paste in the challenge itself): http://www.hastebin.com/raw/inihibehux.txt Remember you can test your decoder on this message, too! -------- C GgggGgg H GgggGgG T GgggGGg a gGg c GGggG d GggG e GgG g ggGgG h GGgGg i gGGg j GgggGGG l gGGG m ggGGg n GGgGG o ggg p ggGGG r GGGg s GGGG t GGgggG u ggGgg v Ggggg w GGggggg y GGggggG GgggGGgGGgGggGGgGGGG GGggGGGgGggGggGGGgGGGGgGGGgGGggGgGGgG GGggggggGgGGGG ggGGGGGGggggggGGGgggGGGGGgGGggG gGgGGgGGGggG GggGgGGgGGGGGGggGggGggGGGGGGGGGgGGggG gggGggggGgGGGGg gGgGGgggG /GGGg/GggGgGggGGggGGGGGggggGggGGGGGGggggggGgGGGGggGgggGGgggGGgGgGGGGg_gGGgGggGGgGgGgGGGG. GgggGgGgGgGggggGgG gGg GGggGgggggggGGG GGggGGGgGggGggGGGgGGGGgGGGgGGggGgGGgG gGGgGggGGgGgGg? GgggGgggggggGGgGgG GgggGGGggggGGgGGgGG ggGggGGGG gggGggggGgGGGGg GGgggGGGgGgGgGGGGgGgG! """
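A minimal decoder sketch for Part 1 (it assumes the key codes form a prefix-free set, which is what the variable-length-code hint implies, so the first match while scanning is always a complete letter):

    def decode(key_line, message):
        # key_line looks like "H GgG d gGg e ggG ...": letter, code, letter, code, ...
        parts = key_line.split()
        codes = {parts[i + 1]: parts[i] for i in range(0, len(parts), 2)}
        out, buf = [], ''
        for ch in message:
            if ch in 'gG':
                buf += ch
                if buf in codes:   # prefix-free, so the first hit is the letter
                    out.append(codes[buf])
                    buf = ''
            else:
                out.append(ch)     # numbers, spacing and punctuation pass through
        return ''.join(out)

Running it on the first sample's key line and message reproduces "Hello, world!".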
#import unittest
#
## Argument pairs the export_* methods must reject: anything that is
## not a pair of dictionaries.
#BAD_PAIRS = [
#    ( 1, 2 ), ( ( 1, ), 2 ), ( 1, ( 2, ) ), ( 1, { 2: 2 } ),
#    ( ( 1, ), ( 2, ) ), ( ( 1, ), { 2: 2 } ), ( [ 1 ], 2 ), ( 1, [ 2 ] ),
#    ( [ 1 ], [ 2 ] ), ( [ 1 ], { 2: 2 } ), ( [ 1 ], { '2': 2 } ),
#    ( ( 1, ), { '2': 2 } ),
#]
#
#class TestCase_Unit( unittest.TestCase ):
#
#  def test_export_insert_nok( self ):
#    for args in BAD_PAIRS:
#      res = self.handler.export_insert( *args )
#      self.assertEquals( res['OK'], False )
#
#  def test_export_insert_ok( self ):
#    res = self.handler.export_insert( { '1': 1 }, { '2': 2 } )
#    self.assertEquals( res['OK'], True )
#
################################################################################
#
#  def test_export_update_nok( self ):
#    for args in BAD_PAIRS:
#      res = self.handler.export_update( *args )
#      self.assertEquals( res['OK'], False )
#
#  def test_export_update_ok( self ):
#    res = self.handler.export_update( { '1': 1 }, { '2': 2 } )
#    self.assertEquals( res['OK'], True )
#
################################################################################
#
#  def test_export_get_nok( self ):
#    for args in BAD_PAIRS:
#      res = self.handler.export_get( *args )
#      self.assertEquals( res['OK'], False )
#
#  def test_export_get_ok( self ):
#    res = self.handler.export_get( { '1': 1 }, { '2': 2 } )
#    self.assertEquals( res['OK'], True )
#
################################################################################
#
#  def test_export_delete_nok( self ):
#    for args in BAD_PAIRS:
#      res = self.handler.export_delete( *args )
#      self.assertEquals( res['OK'], False )
#
#  def test_export_delete_ok( self ):
#    res = self.handler.export_delete( { '1': 1 }, { '2': 2 } )
#    self.assertEquals( res['OK'], True )
#
################################################################################
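The behaviour these checks pin down is simply "both arguments must be dicts". A minimal hypothetical handler (names are mine, not from the project) that would satisfy them:

    class Handler(object):
        """Hypothetical sketch; the real handler would also do the work."""

        def _args_ok(self, params, meta):
            # The tests above accept only dict/dict argument pairs.
            return isinstance(params, dict) and isinstance(meta, dict)

        def export_insert(self, params, meta):
            if not self._args_ok(params, meta):
                return {'OK': False}
            return {'OK': True}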
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
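The API above is terse; a short usage sketch, assuming Python 3's configparser module (which is what this docstring describes):

    import configparser

    cp = configparser.ConfigParser()
    cp.read_string("""
    [server]
    host = example.org
    port = 8080
    debug = yes
    """)
    cp.get('server', 'host')           # 'example.org'
    cp.getint('server', 'port')        # 8080
    cp.getboolean('server', 'debug')   # True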
"""Configuration file parser. A configuration file consists of sections, lead by a "[section]" header, and followed by "name: value" entries, with continuations and such in the style of RFC 822. Intrinsic defaults can be specified by passing them into the ConfigParser constructor as a dictionary. class: ConfigParser -- responsible for parsing a list of configuration files, and managing the parsed database. methods: __init__(defaults=None, dict_type=_default_dict, allow_no_value=False, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True): Create the parser. When `defaults' is given, it is initialized into the dictionary or intrinsic defaults. The keys must be strings, the values must be appropriate for %()s string interpolation. When `dict_type' is given, it will be used to create the dictionary objects for the list of sections, for the options within a section, and for the default values. When `delimiters' is given, it will be used as the set of substrings that divide keys from values. When `comment_prefixes' is given, it will be used as the set of substrings that prefix comments in empty lines. Comments can be indented. When `inline_comment_prefixes' is given, it will be used as the set of substrings that prefix comments in non-empty lines. When `strict` is True, the parser won't allow for any section or option duplicates while reading from a single source (file, string or dictionary). Default is True. When `empty_lines_in_values' is False (default: True), each empty line marks the end of an option. Otherwise, internal empty lines of a multiline option are kept as part of the value. When `allow_no_value' is True (default: False), options without values are accepted; the value presented for these is None. sections() Return all the configuration section names, sans DEFAULT. has_section(section) Return whether the given section exists. has_option(section, option) Return whether the given option exists in the given section. options(section) Return list of configuration options for the named section. read(filenames, encoding=None) Read and parse the list of named configuration files, given by name. A single filename is also allowed. Non-existing files are ignored. Return list of successfully read files. read_file(f, filename=None) Read and parse one configuration file, given as a file object. The filename defaults to f.name; it is only used in error messages (if f has no `name' attribute, the string `<???>' is used). read_string(string) Read configuration from a given string. read_dict(dictionary) Read configuration from a dictionary. Keys are section names, values are dictionaries with keys and values that should be present in the section. If the used dictionary type preserves order, sections and their keys will be added in order. Values are automatically converted to strings. get(section, option, raw=False, vars=None, fallback=_UNSET) Return a string value for the named option. All % interpolations are expanded in the return values, based on the defaults passed into the constructor and the DEFAULT section. Additional substitutions may be provided using the `vars' argument, which must be a dictionary whose contents override any pre-existing defaults. If `option' is a key in `vars', the value from `vars' is used. getint(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to an integer. 
getfloat(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a float. getboolean(section, options, raw=False, vars=None, fallback=_UNSET) Like get(), but convert value to a boolean (currently case insensitively defined as 0, false, no, off for False, and 1, true, yes, on for True). Returns False or True. items(section=_UNSET, raw=False, vars=None) If section is given, return a list of tuples with (name, value) for each option in the section. Otherwise, return a list of tuples with (section_name, section_proxy) for each section, including DEFAULTSECT. remove_section(section) Remove the given file section and all its options. remove_option(section, option) Remove the given option from the given section. set(section, option, value) Set the given option. write(fp, space_around_delimiters=True) Write the configuration state in .ini format. If `space_around_delimiters' is True (the default), delimiters between keys and values are surrounded by spaces. """
# This code is part of Ansible, but is an independent component. # This particular file snippet, and this file snippet only, is BSD licensed. # Modules you write using this snippet, which is embedded dynamically by Ansible # still belong to the author of the module, and may assign their own license # to the complete work. # # Copyright (c), NAME <EMAIL>, 2012-2013 # Copyright (c), NAME <EMAIL>, 2015 # All rights reserved. # # Redistribution and use in source and binary forms, with or without modification, # are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. # IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # # The match_hostname function and supporting code is under the terms and # conditions of the Python Software Foundation License. They were taken from # the Python3 standard library and adapted for use in Python2. See comments in the # source for which code precisely is under this License. PSF License text # follows: # # PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 # -------------------------------------------- # # 1. This LICENSE AGREEMENT is between the Python Software Foundation # ("PSF"), and the Individual or Organization ("Licensee") accessing and # otherwise using this software ("Python") in source or binary form and # its associated documentation. # # 2. Subject to the terms and conditions of this License Agreement, PSF hereby # grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, # analyze, test, perform and/or display publicly, prepare derivative works, # distribute, and otherwise use Python alone or in any derivative version, # provided, however, that PSF's License Agreement and PSF's notice of copyright, # i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are # retained in Python alone or in any derivative version prepared by Licensee. # # 3. In the event Licensee prepares a derivative work that is based on # or incorporates Python or any part thereof, and wants to make # the derivative work available to others as provided herein, then # Licensee hereby agrees to include in any such work a brief summary of # the changes made to Python. # # 4. PSF is making Python available to Licensee on an "AS IS" # basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR # IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND # DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS # FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT # INFRINGE ANY THIRD PARTY RIGHTS. # # 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON # FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS # A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, # OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. # # 6. This License Agreement will automatically terminate upon a material # breach of its terms and conditions. # # 7. Nothing in this License Agreement shall be deemed to create any # relationship of agency, partnership, or joint venture between PSF and # Licensee. This License Agreement does not grant permission to use PSF # trademarks or trade name in a trademark sense to endorse or promote # products or services of Licensee, or any third party. # # 8. By copying, installing or otherwise using Python, Licensee # agrees to be bound by the terms and conditions of this License # Agreement.
""" Wrappers to LAPACK library ========================== flapack -- wrappers for Fortran [*] LAPACK routines clapack -- wrappers for ATLAS LAPACK routines calc_lwork -- calculate optimal lwork parameters get_lapack_funcs -- query for wrapper functions. [*] If ATLAS libraries are available then Fortran routines actually use ATLAS routines and should perform equally well to ATLAS routines. Module flapack ++++++++++++++ In the following all function names are shown without type prefix (s,d,c,z). Optimal values for lwork can be computed using calc_lwork module. Linear Equations ---------------- Drivers:: lu,piv,x,info = gesv(a,b,overwrite_a=0,overwrite_b=0) lub,piv,x,info = gbsv(kl,ku,ab,b,overwrite_ab=0,overwrite_b=0) c,x,info = posv(a,b,lower=0,overwrite_a=0,overwrite_b=0) Computational routines:: lu,piv,info = getrf(a,overwrite_a=0) x,info = getrs(lu,piv,b,trans=0,overwrite_b=0) inv_a,info = getri(lu,piv,lwork=min_lwork,overwrite_lu=0) c,info = potrf(a,lower=0,clean=1,overwrite_a=0) x,info = potrs(c,b,lower=0,overwrite_b=0) inv_a,info = potri(c,lower=0,overwrite_c=0) inv_c,info = trtri(c,lower=0,unitdiag=0,overwrite_c=0) Linear Least Squares (LLS) Problems ----------------------------------- Drivers:: v,x,s,rank,info = gelss(a,b,cond=-1.0,lwork=min_lwork,overwrite_a=0,overwrite_b=0) Computational routines:: qr,tau,info = geqrf(a,lwork=min_lwork,overwrite_a=0) q,info = orgqr|ungqr(qr,tau,lwork=min_lwork,overwrite_qr=0,overwrite_tau=1) Generalized Linear Least Squares (LSE and GLM) Problems ------------------------------------------------------- Standard Eigenvalue and Singular Value Problems ----------------------------------------------- Drivers:: w,v,info = syev|heev(a,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0) w,v,info = syevd|heevd(a,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0) w,v,info = syevr|heevr(a,compute_v=1,lower=0,vrange=,irange=,atol=-1.0,lwork=min_lwork,overwrite_a=0) t,sdim,(wr,wi|w),vs,info = gees(select,a,compute_v=1,sort_t=0,lwork=min_lwork,select_extra_args=(),overwrite_a=0) wr,(wi,vl|w),vr,info = geev(a,compute_vl=1,compute_vr=1,lwork=min_lwork,overwrite_a=0) u,s,vt,info = gesdd(a,compute_uv=1,lwork=min_lwork,overwrite_a=0) Computational routines:: ht,tau,info = gehrd(a,lo=0,hi=n-1,lwork=min_lwork,overwrite_a=0) ba,lo,hi,pivscale,info = gebal(a,scale=0,permute=0,overwrite_a=0) Generalized Eigenvalue and Singular Value Problems -------------------------------------------------- Drivers:: w,v,info = sygv|hegv(a,b,itype=1,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0,overwrite_b=0) w,v,info = sygvd|hegvd(a,b,itype=1,compute_v=1,lower=0,lwork=min_lwork,overwrite_a=0,overwrite_b=0) (alphar,alphai|alpha),beta,vl,vr,info = ggev(a,b,compute_vl=1,compute_vr=1,lwork=min_lwork,overwrite_a=0,overwrite_b=0) Auxiliary routines ------------------ a,info = lauum(c,lower=0,overwrite_c=0) a = laswp(a,piv,k1=0,k2=len(piv)-1,off=0,inc=1,overwrite_a=0) Module clapack ++++++++++++++ Linear Equations ---------------- Drivers:: lu,piv,x,info = gesv(a,b,rowmajor=1,overwrite_a=0,overwrite_b=0) c,x,info = posv(a,b,lower=0,rowmajor=1,overwrite_a=0,overwrite_b=0) Computational routines:: lu,piv,info = getrf(a,rowmajor=1,overwrite_a=0) x,info = getrs(lu,piv,b,trans=0,rowmajor=1,overwrite_b=0) inv_a,info = getri(lu,piv,rowmajor=1,overwrite_lu=0) c,info = potrf(a,lower=0,clean=1,rowmajor=1,overwrite_a=0) x,info = potrs(c,b,lower=0,rowmajor=1,overwrite_b=0) inv_a,info = potri(c,lower=0,rowmajor=1,overwrite_c=0) inv_c,info = trtri(c,lower=0,unitdiag=0,rowmajor=1,overwrite_c=0) Auxiliary 
routines ------------------ a,info = lauum(c,lower=0,rowmajor=1,overwrite_c=0) Module calc_lwork +++++++++++++++++ Optimal lwork is maxwrk. Default is minwrk. minwrk,maxwrk = gehrd(prefix,n,lo=0,hi=n-1) minwrk,maxwrk = gesdd(prefix,m,n,compute_uv=1) minwrk,maxwrk = gelss(prefix,m,n,nrhs) minwrk,maxwrk = getri(prefix,n) minwrk,maxwrk = geev(prefix,n,compute_vl=1,compute_vr=1) minwrk,maxwrk = heev(prefix,n,lower=0) minwrk,maxwrk = syev(prefix,n,lower=0) minwrk,maxwrk = gees(prefix,n,compute_v=1) minwrk,maxwrk = geqrf(prefix,m,n) minwrk,maxwrk = gqr(prefix,m,n) """
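As a quick illustration of the query interface named above, a sketch (assuming a SciPy where get_lapack_funcs is exposed from scipy.linalg) that solves a small system through the gesv wrapper:

    import numpy as np
    from scipy.linalg import get_lapack_funcs

    a = np.array([[3., 1.], [1., 2.]])
    b = np.array([9., 8.])

    # get_lapack_funcs picks the type prefix (here dgesv) from the arrays.
    gesv, = get_lapack_funcs(('gesv',), (a, b))
    lu, piv, x, info = gesv(a, b)  # lu/piv: LU factors, x: solution
    assert info == 0               # info != 0 signals a LAPACK error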
# functions and views for supporting AJAX requests and responses
#
# this is a high-level package that exports most symbols from
# sub-modules; see those packages for implementation details
#
# AJAX is pretty simple. Requests look like ordinary HTTP requests
# (and must be validated the same way) but the response needs the
# correct MIME type, and the body is just JSON data, XML, or HTML.
#
# On the browser side, though, there are a couple of wrinkles.
# If the server returns a redirect response, it won't reload the
# host page in the browser, it will just cause the browser to
# re-issue the AJAX request to the redirected URL. Also, we would
# like to be able to return error messages that could be presented
# in modal dialogs, rather than forcing the host page to be
# redirected to an error page (thus potentially losing state).
#
# We therefore define some standards around how WE will be doing
# AJAX:
#
# 1. All AJAX requests are POST requests.
#
#    AJAX allows GET but the browser will cache responses. This
#    is only appropriate if we are absolutely, positively sure
#    the request will never, ever fail. And since we're not really
#    going to be that sure, we should always preserve our ability
#    to process errors in an intelligent fashion.
#
# 2. All AJAX responses will be JSON, not HTML or XML.
#
#    If we return raw HTML, we have to try to guess when errors
#    happen and how to present them to the user. Also, the server
#    would have to know the context of the request in order to
#    format such an error in a way that makes sense. Skip it;
#    return HTML responses in a JSON wrapper (see below) so that
#    errors can be handled the same way for all AJAX calls.
#
# 3. All AJAX responses will adhere to a common format.
#
#       {
#           # mark this as a properly-formatted response
#           'sculpt': 'ajax',
#
#           # mutually-exclusive error-ish responses
#           'location': <url>,
#           'exception': {
#               'code': <error_code>,
#               'title': <error_title>,    # optional
#               'message': <error_message>
#           },
#           'error': {
#               'code': <error_code>,
#               'title': <error_title>,    # optional
#               'message': <error_message>
#           },
#           'form_error': [
#               [ <field_name>, <field_label>, [ <error_message>, ... ] ],
#               ...
#           ],
#
#           # non-exclusive success-ish responses
#           'results': <app-specific data>,
#           'modal': {
#               'code': <error_code>,
#               'title': <error_title>,    # optional
#               'message': <error_message>
#           },
#           'toast': [
#               {
#                   'duration': <time_in_milliseconds>,
#                   'class_name': <css_class>,
#                   'message': <toast_message>,
#                   'delay': <delay>       # optional
#               },
#               ...
#           ],
#           'html': [
#               {
#                   'id': <DOM_node_id>,
#                   'html': <replacement_html>,
#                   'class_add': <classes_to_add_to_DOM_node>,        # optional
#                   'class_remove': <classes_to_remove_from_DOM_node> # optional
#               },
#               ...
#           ],
#       }
#
#    For error-ish responses, only ONE of the top-level keys
#    (location, exception, error, form_error) will be present.
#    For success-ish responses, at least one (but possibly more)
#    of the top-level keys is required. For details, see the
#    client-side code in sculpt_ajax.js.
#
#    NOTE: ALL formatted responses are returned with HTTP status
#    200 (OK), including errors. Other errors will be interpreted
#    by the JavaScript AJAX handler as hard server errors and
#    generate a canned response.
#
#    NOTE: we extract exceptions as a different type because they
#    indicate a problem with the server code, and they're handled
#    differently on the client side (styled).
#
# 4. Forms will always be submitted via AJAX.
#
#    The normal pattern is for a GET request to a form page to
#    return an empty (or perhaps pre-populated) form, and the POST
#    request to be the submission of that form data; successful
#    POST returns a redirect to the next page, and a failed POST
#    returns HTML with form errors embedded.
#
#    Instead, we want the POST form submission to happen via AJAX
#    so that we can return a lighter-weight JSON blob of errors
#    that can then be used to do some dynamic error highlighting.
#    We could certainly submit the data in a non-AJAX form, but
#    that makes the resulting display of errors take longer because
#    we're not just sending the error data, but the whole page
#    around it. That's not a great user experience, so we submit
#    via AJAX and return lighter-weight data.
#
#    This means that for form handlers, GET returns the regular
#    HTML and is not an AJAX request, but POST returns JSON data
#    because it IS an AJAX request.
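A minimal sketch (the helper name is mine, not part of the package) of building the error envelope from point 3:

    import json

    def ajax_error(code, message, title=None):
        # Build the common error envelope; exactly one error-ish key.
        error = {'code': code, 'message': message}
        if title is not None:
            error['title'] = title
        return json.dumps({'sculpt': 'ajax', 'error': error})

    # per the conventions above, this body would be returned with
    # Content-Type: application/json and HTTP status 200:
    body = ajax_error('auth.expired', 'Your session has expired.')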
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to read all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method. Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor fork (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use select() to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used). Future work: - Standard classes for Sun RPC (which uses either UDP or TCP) - Standard mix-in classes to implement various authentication and encryption schemes - Standard framework for select-based multiplexing XXX Open problems: - What to do with out-of-band data? BaseServer: - split generic "request" functionality out into BaseServer class. Copyright (C) 2000 NAME <EMAIL> example: read entries from a SQL database (requires overriding get_request() to return a table entry from the database). entry is processed by a RequestHandlerClass. """
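A minimal sketch of the recipe above: derive a handler, put the mix-in first, combine. The module name here is the Python 2 SocketServer this docstring ships with; Python 3 renamed it socketserver.

    import SocketServer  # Python 3: import socketserver

    class EchoHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            # Read one line from the client and echo it back.
            line = self.rfile.readline()
            self.wfile.write(line)

    class ThreadingTCPServer(SocketServer.ThreadingMixIn,
                             SocketServer.TCPServer):
        pass  # mix-in first, as noted above

    if __name__ == '__main__':
        server = ThreadingTCPServer(('localhost', 9999), EchoHandler)
        server.serve_forever()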
"""This module contains the tools needed for satellite detection within an ACS/WFC image as published in `ACS ISR 2016-01 <http://www.stsci.edu/hst/acs/documents/isrs/isr1601.pdf>`_. .. note:: Only tested for ACS/WFC FLT and FLC images, but it should theoretically work for any instrument. Requires skimage version 0.11.x or later. :func:`skimage.transform.probabilistic_hough_line` gives slightly different results from run to run, but this should not matter since :func:`detsat` only provides crude approximation of the actual trail(s). Performance is faster when ``plot=False``, where applicable. Examples -------- >>> from acstools.satdet import detsat, make_mask, update_dq Find trail segments for a single image and extension without multiprocessing, and display plots (not shown) and verbose information: >>> results, errors = detsat( ... 'jc8m10syq_flc.fits', chips=[4], n_processes=1, ... plot=True, verbose=True) 1 file(s) found... Processing jc8m10syq_flc.fits[4]... Rescale intensity percentiles: 110.161376953, 173.693756409 Length of PHT result: 42 min(x0)= 1, min(x1)= 269, min(y0)= 827, min(y1)= 780 max(x0)=3852, max(x1)=4094, max(y0)=1611, max(y1)=1545 buf=200 topx=3896, topy=1848 ... Trail Direction: Right to Left 42 trail segment(s) detected ... End point list: 1. (1256, 1345), (2770, 1037) 2. ( 11, 1598), ( 269, 1545) ... Total run time: 22.4326269627 s >>> results[('jc8m10syq_flc.fits', 4)] array([[[1242, 1348], [2840, 1023]], [[1272, 1341], [2688, 1053]], ... [[2697, 1055], [2967, 1000]]]) >>> errors {} Find trail segments for multiple images and all ACS/WFC science extensions with multiprocessing: >>> results, errors = detsat( ... '*_flc.fits', chips=[1, 4], n_processes=12, verbose=True) 6 file(s) found... Using 12 processes Number of trail segment(s) found: abell2744-hffpar_acs-wfc_f814w_13495_11_01_jc8n11q9q_flc.fits[1]: 0 abell2744-hffpar_acs-wfc_f814w_13495_11_01_jc8n11q9q_flc.fits[4]: 4 abell2744_acs-wfc_f814w_13495_51_04_jc8n51hoq_flc.fits[1]: 2 abell2744_acs-wfc_f814w_13495_51_04_jc8n51hoq_flc.fits[4]: 34 abell2744_acs-wfc_f814w_13495_93_02_jc8n93a7q_flc.fits[1]: 20 abell2744_acs-wfc_f814w_13495_93_02_jc8n93a7q_flc.fits[4]: 20 j8oc01sxq_flc.fits[1]: 0 j8oc01sxq_flc.fits[4]: 0 jc8m10syq_flc.fits[1]: 0 jc8m10syq_flc.fits[4]: 38 jc8m32j5q_flc.fits[1]: 42 jc8m32j5q_flc.fits[4]: 12 Total run time: 34.6021330357 s >>> results[('jc8m10syq_flc.fits', 4)] array([[[1242, 1348], [2840, 1023]], [[1272, 1341], [2688, 1053]], ... [[2697, 1055], [2967, 1000]]]) >>> errors {} For a given image and extension, create a DQ mask for a satellite trail using the first segment (other segments should give similar masks) based on the results from above (plots not shown): >>> trail_coords = results[('jc8m10syq_flc.fits', 4)] >>> trail_segment = trail_coords[0] >>> trail_segment array([[1199, 1357], [2841, 1023]]) >>> mask = make_mask('jc8m10syq_flc.fits', 4, trail_segment, ... plot=True, verbose=True) Rotation: -11.4976988695 Hit image edge at counter=26 Hit rotate edge at counter=38 Run time: 19.476323843 s Update the corresponding DQ array using the mask from above: >>> update_dq('jc8m10syq_flc.fits', 6, mask, verbose=True) DQ flag value is 16384 Input... flagged NPIX=156362 Existing flagged NPIX=0 Newly... flagged NPIX=156362 jc8m10syq_flc.fits[6] updated """
""" ===================================== Sparse matrices (:mod:`scipy.sparse`) ===================================== .. currentmodule:: scipy.sparse SciPy 2-D sparse matrix package for numeric data. Contents ======== Sparse matrix classes --------------------- .. autosummary:: :toctree: generated/ bsr_matrix - Block Sparse Row matrix coo_matrix - A sparse matrix in COOrdinate format csc_matrix - Compressed Sparse Column matrix csr_matrix - Compressed Sparse Row matrix dia_matrix - Sparse matrix with DIAgonal storage dok_matrix - Dictionary Of Keys based sparse matrix lil_matrix - Row-based list of lists sparse matrix spmatrix - Sparse matrix base class Functions --------- Building sparse matrices: .. autosummary:: :toctree: generated/ eye - Sparse MxN matrix whose k-th diagonal is all ones identity - Identity matrix in sparse format kron - kronecker product of two sparse matrices kronsum - kronecker sum of sparse matrices diags - Return a sparse matrix from diagonals spdiags - Return a sparse matrix from diagonals block_diag - Build a block diagonal sparse matrix tril - Lower triangular portion of a matrix in sparse format triu - Upper triangular portion of a matrix in sparse format bmat - Build a sparse matrix from sparse sub-blocks hstack - Stack sparse matrices horizontally (column wise) vstack - Stack sparse matrices vertically (row wise) rand - Random values in a given shape random - Random values in a given shape Save and load sparse matrices: .. autosummary:: :toctree: generated/ save_npz - Save a sparse matrix to a file using ``.npz`` format. load_npz - Load a sparse matrix from a file using ``.npz`` format. Sparse matrix tools: .. autosummary:: :toctree: generated/ find Identifying sparse matrices: .. autosummary:: :toctree: generated/ issparse isspmatrix isspmatrix_csc isspmatrix_csr isspmatrix_bsr isspmatrix_lil isspmatrix_dok isspmatrix_coo isspmatrix_dia Submodules ---------- .. autosummary:: csgraph - Compressed sparse graph routines linalg - sparse linear algebra routines Exceptions ---------- .. autosummary:: :toctree: generated/ SparseEfficiencyWarning SparseWarning Usage information ================= There are seven available sparse matrix types: 1. csc_matrix: Compressed Sparse Column format 2. csr_matrix: Compressed Sparse Row format 3. bsr_matrix: Block Sparse Row format 4. lil_matrix: List of Lists format 5. dok_matrix: Dictionary of Keys format 6. coo_matrix: COOrdinate format (aka IJV, triplet format) 7. dia_matrix: DIAgonal format To construct a matrix efficiently, use either dok_matrix or lil_matrix. The lil_matrix class supports basic slicing and fancy indexing with a similar syntax to NumPy arrays. As illustrated below, the COO format may also be used to efficiently construct matrices. Despite their similarity to NumPy arrays, it is **strongly discouraged** to use NumPy functions directly on these matrices because NumPy may not properly convert them for computations, leading to unexpected (and incorrect) results. If you do want to apply a NumPy function to these matrices, first check if SciPy has its own implementation for the given sparse matrix class, or **convert the sparse matrix to a NumPy array** (e.g., using the `toarray()` method of the class) first before applying the method. To perform manipulations such as multiplication or inversion, first convert the matrix to either CSC or CSR format. The lil_matrix format is row-based, so conversion to CSR is efficient, whereas conversion to CSC is less so. 
All conversions among the CSR, CSC, and COO formats are efficient,
linear-time operations.

Matrix vector product
---------------------

To do a vector product between a sparse matrix and a vector simply use
the matrix `dot` method, as described in its docstring:

>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>> A = csr_matrix([[1, 2, 0], [0, 0, 3], [4, 0, 5]])
>>> v = np.array([1, 0, -1])
>>> A.dot(v)
array([ 1, -3, -1], dtype=int64)

.. warning:: As of NumPy 1.7, `np.dot` is not aware of sparse matrices,
   therefore using it will result in unexpected results or errors.
   The corresponding dense array should be obtained first instead:

   >>> np.dot(A.toarray(), v)
   array([ 1, -3, -1], dtype=int64)

   but then all the performance advantages would be lost.

The CSR format is especially suitable for fast matrix vector products.

Example 1
---------

Construct a 1000x1000 lil_matrix and add some values to it:

>>> from scipy.sparse import lil_matrix
>>> from scipy.sparse.linalg import spsolve
>>> from numpy.linalg import solve, norm
>>> from numpy.random import rand

>>> A = lil_matrix((1000, 1000))
>>> A[0, :100] = rand(100)
>>> A[1, 100:200] = A[0, :100]
>>> A.setdiag(rand(1000))

Now convert it to CSR format and solve A x = b for x:

>>> A = A.tocsr()
>>> b = rand(1000)
>>> x = spsolve(A, b)

Convert it to a dense matrix and solve, and check that the result
is the same:

>>> x_ = solve(A.toarray(), b)

Now we can compute the norm of the error with:

>>> err = norm(x-x_)
>>> err < 1e-10
True

It should be small :)

Example 2
---------

Construct a matrix in COO format:

>>> from scipy import sparse
>>> from numpy import array
>>> I = array([0,3,1,0])
>>> J = array([0,3,1,2])
>>> V = array([4,5,7,9])
>>> A = sparse.coo_matrix((V,(I,J)),shape=(4,4))

Notice that the indices do not need to be sorted.

Duplicate (i,j) entries are summed when converting to CSR or CSC.

>>> I = array([0,0,1,3,1,0,0])
>>> J = array([0,2,1,3,1,0,0])
>>> V = array([1,1,1,1,1,1,1])
>>> B = sparse.coo_matrix((V,(I,J)),shape=(4,4)).tocsr()

This is useful for constructing finite-element stiffness and mass
matrices.

Further details
---------------

CSR column indices are not necessarily sorted.  Likewise for CSC row
indices.  Use the .sorted_indices() and .sort_indices() methods when
sorted indices are required (e.g., when passing data to other
libraries).

"""
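A tiny sketch of that last point; sort_indices() works in place, while sorted_indices() returns a sorted copy:

    from scipy.sparse import csr_matrix

    A = csr_matrix([[0, 2, 1], [3, 0, 0]])
    B = A.sorted_indices()   # new matrix with sorted column indices
    A.sort_indices()         # sorts A's column indices in place
    assert A.has_sorted_indices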
# import unittest # import os # import tempfile,shutil,copy # from pyramid import testing # from pyramid.paster import get_appsettings # from askomics.libaskomics.TripleStoreExplorer import TripleStoreExplorer # import json # from interface_tps import InterfaceTPS # class tripleStoreExplorerTests(unittest.TestCase): # def setUp( self ): # self.settings = get_appsettings('configs/development.virtuoso.ini', name='main') # self.request = testing.DummyRequest() # self.request.session['upload_directory'] = os.path.join( os.path.dirname( __file__ ), "..", "test-data") # self.request.session['username'] = 'jdoe' # self.request.session['group'] = 'base' # self.temp_directory = tempfile.mkdtemp() # self.it = InterfaceTPS(self.settings,self.request) # self.it.empty() # self.it.load_test1() # def tearDown( self ): # shutil.rmtree( self.temp_directory ) # self.it.empty() # def test_start_points(self): # tse = TripleStoreExplorer(self.settings, self.request.session) # tse.get_start_points() # def test_build_sparql_query_from_json(self): # variates = ['?Personne1', '?label1', '?Age1', '?ID1', '?Sexe1'] # constraintesRelations = [['?URIPersonne1 rdf:type <http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#Personne>', # '?URIPersonne1 rdfs:label ?Personne1', # '?URIPersonne1 <http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#label> ?label1', # '<http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#label> rdfs:domain <http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#Personne>', # '<http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#label> rdfs:range <http://www.w3.org/2001/XMLSchema#string>', # '<http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#label> rdf:type owl:DatatypeProperty', # '?URIPersonne1 <http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#Age> ?Age1', # '<http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#Age> rdfs:domain <http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#Personne>', # '<http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#Age> rdfs:range <http://www.w3.org/2001/XMLSchema#decimal>', # '<http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#Age> rdf:type owl:DatatypeProperty', # '?URIPersonne1 <http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#ID> ?ID1', # '<http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#ID> rdfs:domain <http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#Personne>', # '<http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#ID> rdfs:range <http://www.w3.org/2001/XMLSchema#string>', # '<http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#ID> rdf:type owl:DatatypeProperty', # '?URIPersonne1 <http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#Sexe> ?Sexe1'],''] # limit = 10 # tse = TripleStoreExplorer(self.settings, self.request.session) # results,query = tse.build_sparql_query_from_json(variates,constraintesRelations,limit,True) # a = {'Age1': '23', # 'ID1': 'AZERTY', # 'Personne1': 'A', # 'Sexe1': 'http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#F', # 'label1': 'Alice'} # b = {'Age1': '25', # 'ID1': 'QSDFG', # 'Personne1': 'B', # 'Sexe1': 'http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#M', # 'label1': 'Bob'} # c = {'Age1': '34', # 'ID1': 'WXCVB', # 'Personne1': 'C', # 'Sexe1': 'http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#M', # 'label1': 'Charles'} # d = 
{'Age1': '45', # 'ID1': 'RTYU', # 'Personne1': 'D', # 'Sexe1': 'http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#M', # 'label1': 'Denis'} # e = {'Age1': '55', # 'ID1': 'RTYUIO', # 'Personne1': 'E', # 'Sexe1': 'http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#F','label1': 'Elise'} # f = {'Age1': '77', # 'ID1': 'DFGH', # 'Personne1': 'F', # 'Sexe1': 'http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#M', # 'label1': 'Fred'} # g = {'Age1': '99', # 'ID1': 'BNHJK', # 'Personne1': 'G', # 'Sexe1': 'http://www.semanticweb.org/irisa/ontologies/2016/1/igepp-ontology#M', # 'label1': 'Georges'} # for elt in results: # self.assertTrue(elt in [a,b,c,d,e,f,g]) # assert len(results) == 7 # constraintesR1 = copy.deepcopy(constraintesRelations) # constraintesR1[0].append('VALUES ?Sexe1 { :F }') # #results,query = tse.build_sparql_query_from_json(variates,constraintesR1,limit,True) # #for elt in results: # # print(elt) # # self.assertTrue(elt in [a,e]) # #assert len(results) == 2 # #constraintesR2 = copy.deepcopy(constraintesRelations) # #constraintesR2[0].append('FILTER ( ?Age1 < 25)') # #results,query = tse.build_sparql_query_from_json(variates,constraintesR2,limit,True) # #assert results == [a]
""" LsDisk - Command ``ls -lanR /dev/disk/by-*`` ============================================ The ``ls -lanR /dev/disk/by-*`` command provides information for the listing of the directories under ``/dev/disk/`` . Sample input is shown in the Examples. See ``FileListing`` class for additional information. Examples: >>> LS_DISK = ''' ... /dev/disk/by-id: ... total 0 ... drwxr-xr-x. 2 0 0 360 Sep 20 09:36 . ... drwxr-xr-x. 5 0 0 100 Sep 20 09:36 .. ... lrwxrwxrwx. 1 0 0 9 Sep 20 09:36 ata-VBOX_CD-ROM_VB2-01700376 -> ../../sr0 ... lrwxrwxrwx. 1 0 0 9 Sep 20 09:36 ata-VBOX_HARDDISK_VB4c56cb04-26932e6a -> ../../sdb ... lrwxrwxrwx. 1 0 0 10 Sep 20 09:36 ata-VBOX_HARDDISK_VB4c56cb04-26932e6a-part1 -> ../../sdb1 ... lrwxrwxrwx. 1 0 0 10 Sep 20 09:36 scsi-SATA_VBOX_HARDDISK_VB4c56cb04-26932e6a-part1 -> ../../sdb1 ... ... /dev/disk/by-path: ... total 0 ... drwxr-xr-x. 2 0 0 160 Sep 20 09:36 . ... drwxr-xr-x. 5 0 0 100 Sep 20 09:36 .. ... lrwxrwxrwx. 1 0 0 9 Sep 20 09:36 pci-0000:00:0d.0-scsi-1:0:0:0 -> ../../sdb ... lrwxrwxrwx. 1 0 0 10 Sep 20 09:36 pci-0000:00:0d.0-scsi-1:0:0:0-part1 -> ../../sdb1 ... ... /dev/disk/by-uuid: ... total 0 ... drwxr-xr-x. 2 0 0 100 Sep 20 09:36 . ... drwxr-xr-x. 5 0 0 100 Sep 20 09:36 .. ... lrwxrwxrwx. 1 0 0 10 Sep 20 09:36 3ab50b34-d0b9-4518-9f21-05307d895f81 -> ../../dm-1 ... lrwxrwxrwx. 1 0 0 10 Sep 20 09:36 51c5cf12-a577-441e-89da-bc93a73a1ba3 -> ../../sda1 ... lrwxrwxrwx. 1 0 0 10 Sep 20 09:36 7b0068d4-1399-4ce7-a54a-3e2fc1232299 -> ../../dm-0 ... ''' >>> from insights.tests import context_wrap >>> ls_disk = LsDisk(context_wrap(LS_DISK)) <__main__.LsDisk object at 0x7f674914c690> >>> "/dev/disk/by-path" in ls_disk True >>> ls_disk.files_of("/dev/disk/by-path") ['pci-0000:00:0d.0-scsi-1:0:0:0', 'pci-0000:00:0d.0-scsi-1:0:0:0-part1'] >>> ls_disk.dirs_of("/dev/disk/by-path") ['.', '..'] >>> ls_disk.specials_of("/dev/disk/by-path") [] >>> ls_disk.listing_of("/dev/disk/by-path").keys() ['pci-0000:00:0d.0-scsi-1:0:0:0-part1', 'pci-0000:00:0d.0-scsi-1:0:0:0', '..', '.'] >>> ls_disk.dir_entry("/dev/disk/by-path", "pci-0000:00:0d.0-scsi-1:0:0:0") {'group': '0', 'name': 'pci-0000:00:0d.0-scsi-1:0:0:0', 'links': 1, 'perms': 'rwxrwxrwx.', 'raw_entry': 'lrwxrwxrwx. 1 0 0 9 Sep 20 09:36 pci-0000:00:0d.0-scsi-1:0:0:0 -> ../../sdb', 'owner': '0', 'link': '../../sdb', 'date': 'Sep 20 09:36', 'type': 'l', 'size': 9} >>> ls_disk.listing_of('/dev/disk/by-path')['.']['type'] == 'd' True >>> ls_disk.listing_of('/dev/disk/by-path')['pci-0000:00:0d.0-scsi-1:0:0:0']['link'] '../../sdb' """
"""Doctest for method/function calls. We're going the use these types for extra testing >>> from UserList import UserList >>> from UserDict import UserDict We're defining four helper functions >>> def e(a,b): ... print a, b >>> def f(*a, **k): ... print a, test_support.sortdict(k) >>> def g(x, *y, **z): ... print x, y, test_support.sortdict(z) >>> def h(j=1, a=2, h=3): ... print j, a, h Argument list examples >>> f() () {} >>> f(1) (1,) {} >>> f(1, 2) (1, 2) {} >>> f(1, 2, 3) (1, 2, 3) {} >>> f(1, 2, 3, *(4, 5)) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *[4, 5]) (1, 2, 3, 4, 5) {} >>> f(1, 2, 3, *UserList([4, 5])) (1, 2, 3, 4, 5) {} Here we add keyword arguments >>> f(1, 2, 3, **{'a':4, 'b':5}) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *[4, 5], **{'a':6, 'b':7}) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **{'a':8, 'b': 9}) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} >>> f(1, 2, 3, **UserDict(a=4, b=5)) (1, 2, 3) {'a': 4, 'b': 5} >>> f(1, 2, 3, *(4, 5), **UserDict(a=6, b=7)) (1, 2, 3, 4, 5) {'a': 6, 'b': 7} >>> f(1, 2, 3, x=4, y=5, *(6, 7), **UserDict(a=8, b=9)) (1, 2, 3, 6, 7) {'a': 8, 'b': 9, 'x': 4, 'y': 5} Examples with invalid arguments (TypeErrors). We're also testing the function names in the exception messages. Verify clearing of SF bug #733667 >>> e(c=4) Traceback (most recent call last): ... TypeError: e() got an unexpected keyword argument 'c' >>> g() Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(*()) Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(*(), **{}) Traceback (most recent call last): ... TypeError: g() takes at least 1 argument (0 given) >>> g(1) 1 () {} >>> g(1, 2) 1 (2,) {} >>> g(1, 2, 3) 1 (2, 3) {} >>> g(1, 2, 3, *(4, 5)) 1 (2, 3, 4, 5) {} >>> class Nothing: pass ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence, not instance >>> class Nothing: ... def __len__(self): return 5 ... >>> g(*Nothing()) Traceback (most recent call last): ... TypeError: g() argument after * must be a sequence, not instance >>> class Nothing(): ... def __len__(self): return 5 ... def __getitem__(self, i): ... if i<3: return i ... else: raise IndexError(i) ... >>> g(*Nothing()) 0 (1, 2) {} >>> class Nothing: ... def __init__(self): self.c = 0 ... def __iter__(self): return self ... def next(self): ... if self.c == 4: ... raise StopIteration ... c = self.c ... self.c += 1 ... return c ... >>> g(*Nothing()) 0 (1, 2, 3) {} Make sure that the function doesn't stomp the dictionary >>> d = {'a': 1, 'b': 2, 'c': 3} >>> d2 = d.copy() >>> g(1, d=4, **d) 1 () {'a': 1, 'b': 2, 'c': 3, 'd': 4} >>> d == d2 True What about willful misconduct? >>> def saboteur(**kw): ... kw['x'] = 'm' ... return kw >>> d = {} >>> kw = saboteur(a=1, **d) >>> d {} >>> g(1, 2, 3, **{'x': 4, 'y': 5}) Traceback (most recent call last): ... TypeError: g() got multiple values for keyword argument 'x' >>> f(**{1:2}) Traceback (most recent call last): ... TypeError: f() keywords must be strings >>> h(**{'e': 2}) Traceback (most recent call last): ... TypeError: h() got an unexpected keyword argument 'e' >>> h(*h) Traceback (most recent call last): ... TypeError: h() argument after * must be a sequence, not function >>> dir(*h) Traceback (most recent call last): ... TypeError: dir() argument after * must be a sequence, not function >>> None(*h) Traceback (most recent call last): ... 
TypeError: NoneType object argument after * must be a sequence, \ not function >>> h(**h) Traceback (most recent call last): ... TypeError: h() argument after ** must be a mapping, not function >>> dir(**h) Traceback (most recent call last): ... TypeError: dir() argument after ** must be a mapping, not function >>> None(**h) Traceback (most recent call last): ... TypeError: NoneType object argument after ** must be a mapping, \ not function >>> dir(b=1, **{'b': 1}) Traceback (most recent call last): ... TypeError: dir() got multiple values for keyword argument 'b' Another helper function >>> def f2(*a, **b): ... return a, b >>> d = {} >>> for i in xrange(512): ... key = 'k%d' % i ... d[key] = i >>> a, b = f2(1, *(2,3), **d) >>> len(a), len(b), b == d (3, 512, True) >>> class Foo: ... def method(self, arg1, arg2): ... return arg1+arg2 >>> x = Foo() >>> Foo.method(*(x, 1, 2)) 3 >>> Foo.method(x, *(1, 2)) 3 >>> Foo.method(*(1, 2, 3)) Traceback (most recent call last): ... TypeError: unbound method method() must be called with Foo instance as \ first argument (got int instance instead) >>> Foo.method(1, *[2, 3]) Traceback (most recent call last): ... TypeError: unbound method method() must be called with Foo instance as \ first argument (got int instance instead) A PyCFunction that takes only positional parameters shoud allow an empty keyword dictionary to pass without a complaint, but raise a TypeError if te dictionary is not empty >>> try: ... silence = id(1, *{}) ... True ... except: ... False True >>> id(1, **{'foo': 1}) Traceback (most recent call last): ... TypeError: id() takes no keyword arguments """
"""Generic socket server classes. This module tries to capture the various aspects of defining a server: For socket-based servers: - address family: - AF_INET{,6}: IP (Internet Protocol) sockets (default) - AF_UNIX: Unix domain sockets - others, e.g. AF_DECNET are conceivable (see <socket.h> - socket type: - SOCK_STREAM (reliable stream, e.g. TCP) - SOCK_DGRAM (datagrams, e.g. UDP) For request-based servers (including socket-based): - client address verification before further looking at the request (This is actually a hook for any processing that needs to look at the request before anything else, e.g. logging) - how to handle multiple requests: - synchronous (one request is handled at a time) - forking (each request is handled by a new process) - threading (each request is handled by a new thread) The classes in this module favor the server type that is simplest to write: a synchronous TCP/IP server. This is bad class design, but save some typing. (There's also the issue that a deep class hierarchy slows down method lookups.) There are five classes in an inheritance diagram, four of which represent synchronous servers of four types: +------------+ | BaseServer | +------------+ | v +-----------+ +------------------+ | TCPServer |------->| UnixStreamServer | +-----------+ +------------------+ | v +-----------+ +--------------------+ | UDPServer |------->| UnixDatagramServer | +-----------+ +--------------------+ Note that UnixDatagramServer derives from UDPServer, not from UnixStreamServer -- the only difference between an IP and a Unix stream server is the address family, which is simply repeated in both unix server classes. Forking and threading versions of each type of server can be created using the ForkingMixIn and ThreadingMixIn mix-in classes. For instance, a threading UDP server class is created as follows: class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass The Mix-in class must come first, since it overrides a method defined in UDPServer! Setting the various member variables also changes the behavior of the underlying server mechanism. To implement a service, you must derive a class from BaseRequestHandler and redefine its handle() method. You can then run various versions of the service by combining one of the server classes with your request handler class. The request handler class must be different for datagram or stream services. This can be hidden by using the request handler subclasses StreamRequestHandler or DatagramRequestHandler. Of course, you still have to use your head! For instance, it makes no sense to use a forking server if the service contains state in memory that can be modified by requests (since the modifications in the child process would never reach the initial state kept in the parent process and passed to each child). In this case, you can use a threading server, but you will probably have to use locks to avoid two requests that come in nearly simultaneous to apply conflicting changes to the server state. On the other hand, if you are building e.g. an HTTP server, where all data is stored externally (e.g. in the file system), a synchronous class will essentially render the service "deaf" while one request is being handled -- which may be for a very long time if a client is slow to read all the data it has requested. Here a threading or forking server is appropriate. In some cases, it may be appropriate to process part of a request synchronously, but to finish processing in a forked child depending on the request data. 
This can be implemented by using a synchronous server and doing an explicit fork in the request handler class handle() method. Another approach to handling multiple simultaneous requests in an environment that supports neither threads nor fork (or where these are too expensive or inappropriate for the service) is to maintain an explicit table of partially finished requests and to use select() to decide which request to work on next (or whether to handle a new incoming request). This is particularly important for stream services where each client can potentially be connected for a long time (if threads or subprocesses cannot be used). Future work: - Standard classes for Sun RPC (which uses either UDP or TCP) - Standard mix-in classes to implement various authentication and encryption schemes - Standard framework for select-based multiplexing XXX Open problems: - What to do with out-of-band data? BaseServer: - split generic "request" functionality out into BaseServer class. Copyright (C) 2000 NAME <EMAIL> example: read entries from a SQL database (requires overriding get_request() to return a table entry from the database). entry is processed by a RequestHandlerClass. """
""" ============= Miscellaneous ============= IEEE 754 Floating Point Special Values -------------------------------------- Special values defined in numpy: nan, inf, NaNs can be used as a poor-man's mask (if you don't care what the original value was) Note: cannot use equality to test NaNs. E.g.: :: >>> myarr = np.array([1., 0., np.nan, 3.]) >>> np.where(myarr == np.nan) >>> np.nan == np.nan # is always False! Use special numpy functions instead. False >>> myarr[myarr == np.nan] = 0. # doesn't work >>> myarr array([ 1., 0., NaN, 3.]) >>> myarr[np.isnan(myarr)] = 0. # use this instead find >>> myarr array([ 1., 0., 0., 3.]) Other related special value functions: :: isinf(): True if value is inf isfinite(): True if not nan or inf nan_to_num(): Map nan to 0, inf to max float, -inf to min float The following corresponds to the usual functions except that nans are excluded from the results: :: nansum() nanmax() nanmin() nanargmax() nanargmin() >>> x = np.arange(10.) >>> x[3] = np.nan >>> x.sum() nan >>> np.nansum(x) 42.0 How numpy handles numerical exceptions -------------------------------------- The default is to ``'warn'`` for ``invalid``, ``divide``, and ``overflow`` and ``'ignore'`` for ``underflow``. But this can be changed, and it can be set individually for different kinds of exceptions. The different behaviors are: - 'ignore' : Take no action when the exception occurs. - 'warn' : Print a `RuntimeWarning` (via the Python `warnings` module). - 'raise' : Raise a `FloatingPointError`. - 'call' : Call a function specified using the `seterrcall` function. - 'print' : Print a warning directly to ``stdout``. - 'log' : Record error in a Log object specified by `seterrcall`. These behaviors can be set for all kinds of errors or specific ones: - all : apply to all numeric exceptions - invalid : when NaNs are generated - divide : divide by zero (for integers as well!) - overflow : floating point overflows - underflow : floating point underflows Note that integer divide-by-zero is handled by the same machinery. These behaviors are set on a per-thread basis. Examples -------- :: >>> oldsettings = np.seterr(all='warn') >>> np.zeros(5,dtype=np.float32)/0. invalid value encountered in divide >>> j = np.seterr(under='ignore') >>> np.array([1.e-100])**10 >>> j = np.seterr(invalid='raise') >>> np.sqrt(np.array([-1.])) FloatingPointError: invalid value encountered in sqrt >>> def errorhandler(errstr, errflag): ... print "saw stupid error!" >>> np.seterrcall(errorhandler) <function err_handler at 0x...> >>> j = np.seterr(all='call') >>> np.zeros(5, dtype=np.int32)/0 FloatingPointError: invalid value encountered in divide saw stupid error! >>> j = np.seterr(**oldsettings) # restore previous ... # error-handling settings Interfacing to C ---------------- Only a survey of the choices. Little detail on how each works. 1) Bare metal, wrap your own C-code manually. - Plusses: - Efficient - No dependencies on other tools - Minuses: - Lots of learning overhead: - need to learn basics of Python C API - need to learn basics of numpy C API - need to learn how to handle reference counting and love it. - Reference counting often difficult to get right. - getting it wrong leads to memory leaks, and worse, segfaults - API will change for Python 3.0! 
2) Cython

 - Plusses:

   - avoid learning C API's
   - no dealing with reference counting
   - can code in pseudo python and generate C code
   - can also interface to existing C code
   - should shield you from changes to Python C api
   - has become the de-facto standard within the scientific Python community
   - fast indexing support for arrays

 - Minuses:

   - Can write code in non-standard form which may become obsolete
   - Not as flexible as manual wrapping

3) ctypes

 - Plusses:

   - part of Python standard library
   - good for interfacing to existing sharable libraries, particularly Windows DLLs
   - avoids API/reference counting issues
   - good numpy support: arrays have all these in their ctypes attribute
     (a usage sketch follows this survey): ::

       a.ctypes.data              a.ctypes.get_strides
       a.ctypes.data_as           a.ctypes.shape
       a.ctypes.get_as_parameter  a.ctypes.shape_as
       a.ctypes.get_data          a.ctypes.strides
       a.ctypes.get_shape         a.ctypes.strides_as

 - Minuses:

   - can't use for writing code to be turned into C extensions, only a wrapper tool.

4) SWIG (automatic wrapper generator)

 - Plusses:

   - around a long time
   - multiple scripting language support
   - C++ support
   - Good for wrapping large (many functions) existing C libraries

 - Minuses:

   - generates lots of code between Python and the C code
   - can cause performance problems that are nearly impossible to optimize out
   - interface files can be hard to write
   - doesn't necessarily avoid reference counting issues or needing to know API's

5) scipy.weave

 - Plusses:

   - can turn many numpy expressions into C code
   - dynamic compiling and loading of generated C code
   - can embed pure C code in Python module and have weave extract, generate
     interfaces and compile, etc.

 - Minuses:

   - Future very uncertain: it's the only part of Scipy not ported to Python 3
     and is effectively deprecated in favor of Cython.

6) Psyco

 - Plusses:

   - Turns pure python into efficient machine code through jit-like optimizations
   - very fast when it optimizes well

 - Minuses:

   - Only on Intel (Windows?)
   - Doesn't do much for numpy?

Interfacing to Fortran:
-----------------------
The clear choice to wrap Fortran code is `f2py <http://docs.scipy.org/doc/numpy-dev/f2py/>`_.

Pyfort is an older alternative, but not supported any longer. Fwrap is a newer project that looked promising but isn't being developed any longer.

Interfacing to C++:
-------------------

1) Cython
2) CXX
3) Boost.python
4) SWIG
5) SIP (used mainly in PyQT)
"""
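To make the ctypes option above concrete, here is a minimal sketch; the shared library ``libdouble.so`` and its ``double_array`` function are hypothetical stand-ins for whatever C code you are wrapping: ::

    import ctypes
    import numpy as np

    # Hypothetical C function: void double_array(double *data, size_t n);
    lib = ctypes.CDLL("./libdouble.so")
    lib.double_array.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
    lib.double_array.restype = None

    a = np.arange(5, dtype=np.float64)
    # data_as() exposes the array's buffer as a typed pointer for the C call,
    # so no data is copied on the way in or out.
    lib.double_array(a.ctypes.data_as(ctypes.POINTER(ctypes.c_double)), a.size)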
""" Work out the first ten digits of the sum of the following one-hundred 50-digit numbers. 37107287533902102798797998220837590246510135740250 46376937677490009712648124896970078050417018260538 74324986199524741059474233309513058123726617309629 91942213363574161572522430563301811072406154908250 23067588207539346171171980310421047513778063246676 89261670696623633820136378418383684178734361726757 28112879812849979408065481931592621691275889832738 44274228917432520321923589422876796487670272189318 47451445736001306439091167216856844588711603153276 70386486105843025439939619828917593665686757934951 62176457141856560629502157223196586755079324193331 64906352462741904929101432445813822663347944758178 92575867718337217661963751590579239728245598838407 58203565325359399008402633568948830189458628227828 80181199384826282014278194139940567587151170094390 35398664372827112653829987240784473053190104293586 86515506006295864861532075273371959191420517255829 71693888707715466499115593487603532921714970056938 54370070576826684624621495650076471787294438377604 53282654108756828443191190634694037855217779295145 36123272525000296071075082563815656710885258350721 45876576172410976447339110607218265236877223636045 17423706905851860660448207621209813287860733969412 81142660418086830619328460811191061556940512689692 51934325451728388641918047049293215058642563049483 62467221648435076201727918039944693004732956340691 15732444386908125794514089057706229429197107928209 55037687525678773091862540744969844508330393682126 18336384825330154686196124348767681297534375946515 80386287592878490201521685554828717201219257766954 78182833757993103614740356856449095527097864797581 16726320100436897842553539920931837441497806860984 48403098129077791799088218795327364475675590848030 87086987551392711854517078544161852424320693150332 59959406895756536782107074926966537676326235447210 69793950679652694742597709739166693763042633987085 41052684708299085211399427365734116182760315001271 65378607361501080857009149939512557028198746004375 35829035317434717326932123578154982629742552737307 94953759765105305946966067683156574377167401875275 88902802571733229619176668713819931811048770190271 25267680276078003013678680992525463401061632866526 36270218540497705585629946580636237993140746255962 24074486908231174977792365466257246923322810917141 91430288197103288597806669760892938638285025333403 34413065578016127815921815005561868836468420090470 23053081172816430487623791969842487255036638784583 11487696932154902810424020138335124462181441773470 63783299490636259666498587618221225225512486764533 67720186971698544312419572409913959008952310058822 95548255300263520781532296796249481641953868218774 76085327132285723110424803456124867697064507995236 37774242535411291684276865538926205024910326572967 23701913275725675285653248258265463092207058596522 29798860272258331913126375147341994889534765745501 18495701454879288984856827726077713721403798879715 38298203783031473527721580348144513491373226651381 34829543829199918180278916522431027392251122869539 40957953066405232632538044100059654939159879593635 29746152185502371307642255121183693803580388584903 41698116222072977186158236678424689157993532961922 62467957194401269043877107275048102390895523597457 23189706772547915061505504953922979530901129967519 86188088225875314529584099251203829009407770775672 11306739708304724483816533873502340845647058077308 82959174767140363198008187129011875491310547126581 97623331044818386269515456334926366572897563400500 
42846280183517070527831839425882145521227251250327 55121603546981200581762165212827652751691296897789 32238195734329339946437501907836945765883352399886 75506164965184775180738168837861091527357929701337 62177842752192623401942399639168044983993173312731 32924185707147349566916674687634660915035914677504 99518671430235219628894890102423325116913619626622 73267460800591547471830798392868535206946944540724 76841822524674417161514036427982273348055556214818 97142617910342598647204516893989422179826088076852 87783646182799346313767754307809363333018982642090 10848802521674670883215120185883543223812876952786 71329612474782464538636993009049310363619763878039 62184073572399794223406235393808339651327408011116 66627891981488087797941876876144230030984490851411 60661826293682836764744779239180335110989069790714 85786944089552990653640447425576083659976645795096 66024396409905389607120198219976047599490197230297 64913982680032973156037120041377903785566085089252 16730939319872750275468906903707539413042652315011 94809377245048795150954100921645863754710598436791 78639167021187492431995700641917969777599028300699 15368713711936614952811305876380278410754449733078 40789923115535562561142322423255033685442488917353 44889911501440648020369068063960672322193204149535 41503128880339536053299340368006977710650566631954 81234880673210146739058568557934581403627822703280 82616570773948327592232845941706525094512325230608 22918802058777319719839450180888072429661980811197 77158542502016545090413245809786882778948721859617 72107838435069186155435662884062257473692284509516 20849603980134001723930671666823555245252804609722 53503534226472524250874054075591789781264330331690 """
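Because Python integers have arbitrary precision, the computation is direct. A minimal sketch, assuming the hundred numbers above are pasted into the string ``NUMBERS``, one per line: ::

    NUMBERS = """
    37107287533902102798797998220837590246510135740250
    46376937677490009712648124896970078050417018260538
    """  # ... paste the remaining 50-digit numbers here

    total = sum(int(line) for line in NUMBERS.split())
    print(str(total)[:10])  # the first ten digits of the sum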
""" Simple config ============= Although CherryPy uses the :mod:`Python logging module <logging>`, it does so behind the scenes so that simple logging is simple, but complicated logging is still possible. "Simple" logging means that you can log to the screen (i.e. console/stdout) or to a file, and that you can easily have separate error and access log files. Here are the simplified logging settings. You use these by adding lines to your config file or dict. You should set these at either the global level or per application (see next), but generally not both. * ``log.screen``: Set this to True to have both "error" and "access" messages printed to stdout. * ``log.access_file``: Set this to an absolute filename where you want "access" messages written. * ``log.error_file``: Set this to an absolute filename where you want "error" messages written. Many events are automatically logged; to log your own application events, call :func:`cherrypy.log`. Architecture ============ Separate scopes --------------- CherryPy provides log managers at both the global and application layers. This means you can have one set of logging rules for your entire site, and another set of rules specific to each application. The global log manager is found at :func:`cherrypy.log`, and the log manager for each application is found at :attr:`app.log<cherrypy._cptree.Application.log>`. If you're inside a request, the latter is reachable from ``cherrypy.request.app.log``; if you're outside a request, you'll have to obtain a reference to the ``app``: either the return value of :func:`tree.mount()<cherrypy._cptree.Tree.mount>` or, if you used :func:`quickstart()<cherrypy.quickstart>` instead, via ``cherrypy.tree.apps['/']``. By default, the global logs are named "cherrypy.error" and "cherrypy.access", and the application logs are named "cherrypy.error.2378745" and "cherrypy.access.2378745" (the number is the id of the Application object). This means that the application logs "bubble up" to the site logs, so if your application has no log handlers, the site-level handlers will still log the messages. Errors vs. Access ----------------- Each log manager handles both "access" messages (one per HTTP request) and "error" messages (everything else). Note that the "error" log is not just for errors! The format of access messages is highly formalized, but the error log isn't--it receives messages from a variety of sources (including full error tracebacks, if enabled). If you are logging the access log and error log to the same source, then there is a possibility that a specially crafted error message may replicate an access log message as described in CWE-117. In this case it is the application developer's responsibility to manually escape data before using CherryPy's log() functionality, or they may create an application that is vulnerable to CWE-117. This would be achieved by using a custom handler escape any special characters, and attached as described below. Custom Handlers =============== The simple settings above work by manipulating Python's standard :mod:`logging` module. So when you need something more complex, the full power of the standard module is yours to exploit. You can borrow or create custom handlers, formats, filters, and much more. Here's an example that skips the standard FileHandler and uses a RotatingFileHandler instead: :: #python log = app.log # Remove the default FileHandlers if present. 
    log.error_file = ""
    log.access_file = ""

    maxBytes = getattr(log, "rot_maxBytes", 10000000)
    backupCount = getattr(log, "rot_backupCount", 1000)

    # Make a new RotatingFileHandler for the error log.
    fname = getattr(log, "rot_error_file", "error.log")
    h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
    h.setLevel(DEBUG)
    h.setFormatter(_cplogging.logfmt)
    log.error_log.addHandler(h)

    # Make a new RotatingFileHandler for the access log.
    fname = getattr(log, "rot_access_file", "access.log")
    h = handlers.RotatingFileHandler(fname, 'a', maxBytes, backupCount)
    h.setLevel(DEBUG)
    h.setFormatter(_cplogging.logfmt)
    log.access_log.addHandler(h)

The ``rot_*`` attributes are pulled straight from the application log object. Since "log.*" config entries simply set attributes on the log object, you can add custom attributes to your heart's content. Note that these handlers are used *instead* of the default, simple handlers outlined above (so don't set the "log.error_file" config entry, for example).
"""
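For contrast, the simple settings described at the top need no handler code at all. A minimal sketch (the file paths are placeholders): ::

    import cherrypy

    cherrypy.config.update({
        'log.screen': False,                            # nothing to stdout
        'log.access_file': '/var/log/myapp/access.log',
        'log.error_file': '/var/log/myapp/error.log',
    })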
#!/usr/bin/env python
# ***** BEGIN LICENSE BLOCK *****
# Version: MPL 1.1/GPL 2.0/LGPL 2.1
#
# The contents of this file are subject to the Mozilla Public License Version
# 1.1 (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
# http://www.mozilla.org/MPL/
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
# for the specific language governing rights and limitations under the
# License.
#
# The Original Code is font utility code.
#
# The Initial Developer of the Original Code is Mozilla Corporation.
# Portions created by the Initial Developer are Copyright (C) 2009
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
#   NAME <EMAIL>
#
# Alternatively, the contents of this file may be used under the terms of
# either the GNU General Public License Version 2 or later (the "GPL"), or
# the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
# in which case the provisions of the GPL or the LGPL are applicable instead
# of those above. If you wish to allow use of your version of this file only
# under the terms of either the GPL or the LGPL, and not to allow others to
# use your version of this file under the terms of the MPL, indicate your
# decision by deleting the provisions above and replace them with the notice
# and other provisions required by the GPL or the LGPL. If you do not delete
# the provisions above, a recipient may use your version of this file under
# the terms of any one of the MPL, the GPL or the LGPL.
#
# ***** END LICENSE BLOCK *****

# eotlitetool.py - create EOT version of OpenType font for use with IE
#
# Usage: eotlitetool.py [-o output-filename] font1 [font2 ...]
#
# OpenType file structure
# http://www.microsoft.com/typography/otspec/otff.htm
#
# Types:
#
# BYTE          8-bit unsigned integer.
# CHAR          8-bit signed integer.
# USHORT        16-bit unsigned integer.
# SHORT         16-bit signed integer.
# ULONG         32-bit unsigned integer.
# Fixed         32-bit signed fixed-point number (16.16)
# LONGDATETIME  Date represented in number of seconds since 12:00 midnight,
#               January 1, 1904. The value is represented as a signed 64-bit integer.
#
# SFNT Header
#
# Fixed   sfnt version   // 0x00010000 for version 1.0.
# USHORT  numTables      // Number of tables.
# USHORT  searchRange    // (Maximum power of 2 <= numTables) x 16.
# USHORT  entrySelector  // Log2(maximum power of 2 <= numTables).
# USHORT  rangeShift     // NumTables x 16-searchRange.
#
# Table Directory
#
# ULONG  tag       // 4-byte identifier.
# ULONG  checkSum  // CheckSum for this table.
# ULONG  offset    // Offset from beginning of TrueType font file.
# ULONG  length    // Length of this table.
#
# OS/2 Table (Version 4)
#
# USHORT version // 0x0004
# SHORT  xAvgCharWidth
# USHORT usWeightClass
# USHORT usWidthClass
# USHORT fsType
# SHORT  ySubscriptXSize
# SHORT  ySubscriptYSize
# SHORT  ySubscriptXOffset
# SHORT  ySubscriptYOffset
# SHORT  ySuperscriptXSize
# SHORT  ySuperscriptYSize
# SHORT  ySuperscriptXOffset
# SHORT  ySuperscriptYOffset
# SHORT  yStrikeoutSize
# SHORT  yStrikeoutPosition
# SHORT  sFamilyClass
# BYTE   panose[10]
# ULONG  ulUnicodeRange1 // Bits 0-31
# ULONG  ulUnicodeRange2 // Bits 32-63
# ULONG  ulUnicodeRange3 // Bits 64-95
# ULONG  ulUnicodeRange4 // Bits 96-127
# CHAR   achVendID[4]
# USHORT fsSelection
# USHORT usFirstCharIndex
# USHORT usLastCharIndex
# SHORT  sTypoAscender
# SHORT  sTypoDescender
# SHORT  sTypoLineGap
# USHORT usWinAscent
# USHORT usWinDescent
# ULONG  ulCodePageRange1 // Bits 0-31
# ULONG  ulCodePageRange2 // Bits 32-63
# SHORT  sxHeight
# SHORT  sCapHeight
# USHORT usDefaultChar
# USHORT usBreakChar
# USHORT usMaxContext
#
#
# The Naming Table is organized as follows:
#
# [name table header]
# [name records]
# [string data]
#
# Name Table Header
#
# USHORT format       // Format selector (=0).
# USHORT count        // Number of name records.
# USHORT stringOffset // Offset to start of string storage (from start of table).
#
# Name Record
#
# USHORT platformID // Platform ID.
# USHORT encodingID // Platform-specific encoding ID.
# USHORT languageID // Language ID.
# USHORT nameID     // Name ID.
# USHORT length     // String length (in bytes).
# USHORT offset     // String offset from start of storage area (in bytes).
#
# head Table
#
# Fixed  tableVersion       // Table version number 0x00010000 for version 1.0.
# Fixed  fontRevision       // Set by font manufacturer.
# ULONG  checkSumAdjustment // To compute: set it to 0, sum the entire font as ULONG,
#                           // then store 0xB1B0AFBA - sum.
# ULONG  magicNumber        // Set to 0x5F0F3CF5.
# USHORT flags
# USHORT unitsPerEm         // Valid range is from 16 to 16384. This value should be a
#                           // power of 2 for fonts that have TrueType outlines.
# LONGDATETIME created      // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# LONGDATETIME modified     // Number of seconds since 12:00 midnight, January 1, 1904. 64-bit integer
# SHORT  xMin               // For all glyph bounding boxes.
# SHORT  yMin
# SHORT  xMax
# SHORT  yMax
# USHORT macStyle
# USHORT lowestRecPPEM      // Smallest readable size in pixels.
# SHORT  fontDirectionHint
# SHORT  indexToLocFormat   // 0 for short offsets, 1 for long.
# SHORT  glyphDataFormat    // 0 for current format.
#
#
#
# Embedded OpenType (EOT) file format
# http://www.w3.org/Submission/EOT/
#
# EOT version 0x00020001
#
# An EOT font consists of a header with the original OpenType font
# appended at the end. Most of the data in the EOT header is simply a
# copy of data from specific tables within the font data. The exceptions
# are the 'Flags' field and the root string name field. The root string
# is a set of names indicating domains for which the font data can be
# used. A null root string implies the font data can be used anywhere.
# The EOT header is in little-endian byte order but the font data remains
# in big-endian order as specified by the OpenType spec.
#
# Overall structure:
#
# [EOT header]
# [EOT name records]
# [font data]
#
# EOT header
#
# ULONG   eotSize            // Total structure length in bytes (including string and font data)
# ULONG   fontDataSize       // Length of the OpenType font (FontData) in bytes
# ULONG   version            // Version number of this format - 0x00020001
# ULONG   flags              // Processing Flags (0 == no special processing)
# BYTE    fontPANOSE[10]     // OS/2 Table panose
# BYTE    charset            // DEFAULT_CHARSET (0x01)
# BYTE    italic             // 0x01 if ITALIC in OS/2 Table fsSelection is set, 0 otherwise
# ULONG   weight             // OS/2 Table usWeightClass
# USHORT  fsType             // OS/2 Table fsType (specifies embedding permission flags)
# USHORT  magicNumber        // Magic number for EOT file - 0x504C.
# ULONG   unicodeRange1      // OS/2 Table ulUnicodeRange1
# ULONG   unicodeRange2      // OS/2 Table ulUnicodeRange2
# ULONG   unicodeRange3      // OS/2 Table ulUnicodeRange3
# ULONG   unicodeRange4      // OS/2 Table ulUnicodeRange4
# ULONG   codePageRange1     // OS/2 Table ulCodePageRange1
# ULONG   codePageRange2     // OS/2 Table ulCodePageRange2
# ULONG   checkSumAdjustment // head Table CheckSumAdjustment
# ULONG   reserved[4]        // Reserved - must be 0
# USHORT  padding1           // Padding - must be 0
#
# EOT name records
#
# USHORT  FamilyNameSize              // Font family name size in bytes
# BYTE    FamilyName[FamilyNameSize]  // Font family name (name ID = 1), little-endian UTF-16
# USHORT  Padding2                    // Padding - must be 0
#
# USHORT  StyleNameSize               // Style name size in bytes
# BYTE    StyleName[StyleNameSize]    // Style name (name ID = 2), little-endian UTF-16
# USHORT  Padding3                    // Padding - must be 0
#
# USHORT  VersionNameSize             // Version name size in bytes
# BYTE    VersionName[VersionNameSize] // Version name (name ID = 5), little-endian UTF-16
# USHORT  Padding4                    // Padding - must be 0
#
# USHORT  FullNameSize                // Full name size in bytes
# BYTE    FullName[FullNameSize]      // Full name (name ID = 4), little-endian UTF-16
# USHORT  Padding5                    // Padding - must be 0
#
# USHORT  RootStringSize              // Root string size in bytes
# BYTE    RootString[RootStringSize]  // Root string, little-endian UTF-16
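# The SFNT header and table directory described above map directly onto
# Python's struct module. A minimal sketch (big-endian, as the OpenType spec
# requires; the font path is a placeholder, not part of this tool):

import struct

def read_table_directory(path):
    """Return {tag: (checkSum, offset, length)} for an OpenType font."""
    with open(path, "rb") as f:
        data = f.read()
    # SFNT header: Fixed version followed by four USHORTs (12 bytes total).
    version, numTables, searchRange, entrySelector, rangeShift = \
        struct.unpack_from(">I4H", data, 0)
    tables = {}
    for i in range(numTables):
        # Table directory entry: ULONG tag, checkSum, offset, length (16 bytes).
        tag, checkSum, offset, length = struct.unpack_from(">4s3I", data, 12 + 16 * i)
        tables[tag] = (checkSum, offset, length)
    return tables

# e.g. read_table_directory("font.ttf") might yield keys b'OS/2', b'head', b'name'.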
#!/usr/bin/env python3

# Compatible with Python 2.7 and 3.2+, can be used either as a module
# or a standalone executable.
#
# Copyright 2017, 2018 Institute of Formal and Applied Linguistics (UFAL),
# Faculty of Mathematics and Physics, Charles University, Czech Republic.
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#
# Authors: NAME NAME <EMAIL>
#
# Changelog:
# - [12 Apr 2018] Version 0.9: Initial release.
# - [19 Apr 2018] Version 1.0: Fix bug in MLAS (duplicate entries in functional_children).
#                              Add --counts option.
# - [02 May 2018] Version 1.1: When removing spaces to match gold and system characters,
#                              consider all Unicode characters of category Zs instead of
#                              just ASCII space.
# - [25 Jun 2018] Version 1.2: Use python3 in the she-bang (instead of python).
#                              In Python2, make the whole computation use `unicode` strings.

# Command line usage
# ------------------
# conll18_ud_eval.py [-v] gold_conllu_file system_conllu_file
#
# - if no -v is given, only the official CoNLL18 UD Shared Task evaluation metrics
#   are printed
# - if -v is given, more metrics are printed (as precision, recall, F1 score,
#   and in case the metric is computed on aligned words also accuracy on these):
#   - Tokens: how well do the gold tokens match system tokens
#   - Sentences: how well do the gold sentences match system sentences
#   - Words: how well can the gold words be aligned to system words
#   - UPOS: using aligned words, how well does UPOS match
#   - XPOS: using aligned words, how well does XPOS match
#   - UFeats: using aligned words, how well does universal FEATS match
#   - AllTags: using aligned words, how well does UPOS+XPOS+FEATS match
#   - Lemmas: using aligned words, how well does LEMMA match
#   - UAS: using aligned words, how well does HEAD match
#   - LAS: using aligned words, how well does HEAD+DEPREL(ignoring subtypes) match
#   - CLAS: using aligned words with content DEPREL, how well does
#     HEAD+DEPREL(ignoring subtypes) match
#   - MLAS: using aligned words with content DEPREL, how well does
#     HEAD+DEPREL(ignoring subtypes)+UPOS+UFEATS+FunctionalChildren(DEPREL+UPOS+UFEATS) match
#   - BLEX: using aligned words with content DEPREL, how well does
#     HEAD+DEPREL(ignoring subtypes)+LEMMAS match
# - if -c is given, raw counts of correct/gold_total/system_total/aligned words are printed
#   instead of precision/recall/F1/AlignedAccuracy for all metrics.

# API usage
# ---------
# - load_conllu(file)
#   - loads CoNLL-U file from given file object to an internal representation
#   - the file object should return str in both Python 2 and Python 3
#   - raises UDError exception if the given file cannot be loaded
# - evaluate(gold_ud, system_ud)
#   - evaluate the given gold and system CoNLL-U files (loaded with load_conllu)
#   - raises UDError if the concatenated tokens of gold and system file do not match
#   - returns a dictionary with the metrics described above, each metric having
#     three fields: precision, recall and f1

# Description of token matching
# -----------------------------
# In order to match tokens of gold file and system file, we consider the text
# resulting from concatenation of gold tokens and text resulting from
# concatenation of system tokens. These texts should match -- if they do not,
# the evaluation fails.
#
# If the texts do match, every token is represented as a range in this original
# text, and tokens are equal only if their range is the same.
# Description of word matching
# ----------------------------
# When matching words of gold file and system file, we first match the tokens.
# The words which are also tokens are matched as tokens, but words in multi-word
# tokens have to be handled differently.
#
# To handle multi-word tokens, we start by finding "multi-word spans".
# A multi-word span is a span in the original text such that
# - it contains at least one multi-word token
# - all multi-word tokens in the span (considering both gold and system ones)
#   are completely inside the span (i.e., they do not "stick out")
# - the multi-word span is as small as possible
#
# For every multi-word span, we align the gold and system words completely
# inside this span using LCS on their FORMs. The words not intersecting
# (even partially) any multi-word span are then aligned as tokens.
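# A sketch of the API usage documented above (the file names are placeholders,
# and it assumes this script is importable as conll18_ud_eval):

from conll18_ud_eval import load_conllu, evaluate

with open("gold.conllu", mode="r", encoding="utf-8") as gold_file:
    gold_ud = load_conllu(gold_file)
with open("system.conllu", mode="r", encoding="utf-8") as system_file:
    system_ud = load_conllu(system_file)

scores = evaluate(gold_ud, system_ud)
# Each metric carries precision, recall and f1 fields, as described above.
print("LAS F1 = {:.2f}".format(100 * scores["LAS"].f1))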
""" ========================================== Statistical functions (:mod:`scipy.stats`) ========================================== .. module:: scipy.stats This module contains a large number of probability distributions as well as a growing library of statistical functions. Each univariate distribution is an instance of a subclass of `rv_continuous` (`rv_discrete` for discrete distributions): .. autosummary:: :toctree: generated/ rv_continuous rv_discrete Continuous distributions ======================== .. autosummary:: :toctree: generated/ alpha -- Alpha anglit -- Anglit arcsine -- Arcsine beta -- Beta betaprime -- Beta Prime bradford -- Bradford burr -- Burr (Type III) burr12 -- Burr (Type XII) cauchy -- Cauchy chi -- Chi chi2 -- Chi-squared cosine -- Cosine dgamma -- Double Gamma dweibull -- Double Weibull erlang -- Erlang expon -- Exponential exponnorm -- Exponentially Modified Normal exponweib -- Exponentiated Weibull exponpow -- Exponential Power f -- F (Snecdor F) fatiguelife -- Fatigue Life (Birnbaum-Saunders) fisk -- Fisk foldcauchy -- Folded Cauchy foldnorm -- Folded Normal frechet_r -- Frechet Right Sided, Extreme Value Type II (Extreme LB) or weibull_min frechet_l -- Frechet Left Sided, Weibull_max genlogistic -- Generalized Logistic gennorm -- Generalized normal genpareto -- Generalized Pareto genexpon -- Generalized Exponential genextreme -- Generalized Extreme Value gausshyper -- Gauss Hypergeometric gamma -- Gamma gengamma -- Generalized gamma genhalflogistic -- Generalized Half Logistic gilbrat -- Gilbrat gompertz -- Gompertz (Truncated Gumbel) gumbel_r -- Right Sided Gumbel, Log-Weibull, Fisher-Tippett, Extreme Value Type I gumbel_l -- Left Sided Gumbel, etc. halfcauchy -- Half Cauchy halflogistic -- Half Logistic halfnorm -- Half Normal halfgennorm -- Generalized Half Normal hypsecant -- Hyperbolic Secant invgamma -- Inverse Gamma invgauss -- Inverse Gaussian invweibull -- Inverse Weibull johnsonsb -- NAME johnsonsu -- NAME kappa4 -- Kappa 4 parameter kappa3 -- Kappa 3 parameter ksone -- Kolmogorov-Smirnov one-sided (no stats) kstwobign -- Kolmogorov-Smirnov two-sided test for Large N (no stats) laplace -- Laplace levy -- Levy levy_l levy_stable logistic -- Logistic loggamma -- Log-Gamma loglaplace -- Log-Laplace (Log Double Exponential) lognorm -- Log-Normal lomax -- Lomax (Pareto of the second kind) maxwell -- Maxwell mielke -- Mielke's Beta-Kappa nakagami -- Nakagami ncx2 -- Non-central chi-squared ncf -- Non-central F nct -- Non-central Student's T norm -- Normal (Gaussian) pareto -- Pareto pearson3 -- Pearson type III powerlaw -- Power-function powerlognorm -- Power log normal powernorm -- Power normal rdist -- R-distribution reciprocal -- Reciprocal rayleigh -- Rayleigh rice -- Rice recipinvgauss -- Reciprocal Inverse Gaussian semicircular -- Semicircular skewnorm -- Skew normal t -- Student's T trapz -- Trapezoidal triang -- Triangular truncexpon -- Truncated Exponential truncnorm -- Truncated Normal tukeylambda -- Tukey-Lambda uniform -- Uniform vonmises -- Von-Mises (Circular) vonmises_line -- Von-Mises (Line) wald -- Wald weibull_min -- Minimum Weibull (see Frechet) weibull_max -- Maximum Weibull (see Frechet) wrapcauchy -- Wrapped Cauchy Multivariate distributions ========================== .. 
.. autosummary::
   :toctree: generated/

   multivariate_normal -- Multivariate normal distribution
   matrix_normal -- Matrix normal distribution
   dirichlet -- Dirichlet
   wishart -- Wishart
   invwishart -- Inverse Wishart
   multinomial -- Multinomial distribution
   special_ortho_group -- SO(N) group
   ortho_group -- O(N) group
   random_correlation -- random correlation matrices

Discrete distributions
======================

.. autosummary::
   :toctree: generated/

   bernoulli -- Bernoulli
   binom -- Binomial
   boltzmann -- Boltzmann (Truncated Discrete Exponential)
   dlaplace -- Discrete Laplacian
   geom -- Geometric
   hypergeom -- Hypergeometric
   logser -- Logarithmic (Log-Series, Series)
   nbinom -- Negative Binomial
   planck -- Planck (Discrete Exponential)
   poisson -- Poisson
   randint -- Discrete Uniform
   skellam -- Skellam
   zipf -- Zipf

Statistical functions
=====================

Several of these functions have a similar version in scipy.stats.mstats which works for masked arrays.

.. autosummary::
   :toctree: generated/

   describe -- Descriptive statistics
   gmean -- Geometric mean
   hmean -- Harmonic mean
   kurtosis -- Fisher or Pearson kurtosis
   kurtosistest --
   mode -- Modal value
   moment -- Central moment
   normaltest --
   skew -- Skewness
   skewtest --
   kstat --
   kstatvar --
   tmean -- Truncated arithmetic mean
   tvar -- Truncated variance
   tmin --
   tmax --
   tstd --
   tsem --
   variation -- Coefficient of variation
   find_repeats
   trim_mean

.. autosummary::
   :toctree: generated/

   cumfreq
   histogram2
   histogram
   itemfreq
   percentileofscore
   scoreatpercentile
   relfreq

.. autosummary::
   :toctree: generated/

   binned_statistic -- Compute a binned statistic for a set of data.
   binned_statistic_2d -- Compute a 2-D binned statistic for a set of data.
   binned_statistic_dd -- Compute a d-D binned statistic for a set of data.

.. autosummary::
   :toctree: generated/

   obrientransform
   signaltonoise
   bayes_mvs
   mvsdist
   sem
   zmap
   zscore
   iqr

.. autosummary::
   :toctree: generated/

   sigmaclip
   threshold
   trimboth
   trim1

.. autosummary::
   :toctree: generated/

   f_oneway
   pearsonr
   spearmanr
   pointbiserialr
   kendalltau
   linregress
   theilslopes
   f_value

.. autosummary::
   :toctree: generated/

   ttest_1samp
   ttest_ind
   ttest_ind_from_stats
   ttest_rel
   kstest
   chisquare
   power_divergence
   ks_2samp
   mannwhitneyu
   tiecorrect
   rankdata
   ranksums
   wilcoxon
   kruskal
   friedmanchisquare
   combine_pvalues
   ss
   square_of_sums
   jarque_bera

.. autosummary::
   :toctree: generated/

   ansari
   bartlett
   levene
   shapiro
   anderson
   anderson_ksamp
   binom_test
   fligner
   median_test
   mood

.. autosummary::
   :toctree: generated/

   boxcox
   boxcox_normmax
   boxcox_llf
   entropy

.. autosummary::
   :toctree: generated/

   chisqprob
   betai

Circular statistical functions
==============================

.. autosummary::
   :toctree: generated/

   circmean
   circvar
   circstd

Contingency table functions
===========================

.. autosummary::
   :toctree: generated/

   chi2_contingency
   contingency.expected_freq
   contingency.margins
   fisher_exact

Plot-tests
==========

.. autosummary::
   :toctree: generated/

   ppcc_max
   ppcc_plot
   probplot
   boxcox_normplot

Masked statistics functions
===========================

.. toctree::

   stats.mstats

Univariate and multivariate kernel density estimation (:mod:`scipy.stats.kde`)
===============================================================================

.. autosummary::
   :toctree: generated/

   gaussian_kde

For many more stat-related functions, install the software R and the interface package rpy.
"""
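The distribution classes above share a common interface; a brief sketch using `norm` as a representative (any listed distribution works the same way): ::

    import numpy as np
    from scipy import stats

    # All rv_continuous instances expose pdf/cdf/rvs/fit and friends.
    x = np.linspace(-3, 3, 7)
    density = stats.norm.pdf(x, loc=0, scale=1)  # evaluate the density
    print(stats.norm.cdf(0.0))                   # 0.5 by symmetry

    sample = stats.norm.rvs(size=1000, random_state=0)
    loc, scale = stats.norm.fit(sample)          # maximum-likelihood estimates
    print(loc, scale)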